\section{Introduction} \label{sec:introduction} Quantum computers are believed to be efficient for simulating quantum systems \cite{Simulating Physics with Computers,Quantum Simulation} and have been shown to have many other applications \cite{Quantum Computing and Quantum Information}. Protocols demonstrating the power of quantum computers include Shor's algorithm for prime factorisation \cite{Poly-Time Algorithms for Prime Factorisation and Discrete Logarithms on a Quantum Computer}, Grover's algorithm for unstructured search \cite{A fast quantum mechanical algorithm for database search}, and the BB84 protocol for public key exchange \cite{Quantum cryptography: Public key distribution and coin tossing}. That said, it may be some time before a large-scale universal quantum computer capable of demonstrating the computational power of these protocols is built. In the meantime, several intermediate, non-universal models of quantum computation, like the one clean qubit model \cite{Power of One Bit of Quantum Information,Hardness of Classically Simulating the One-Clean-Qubit Model} and the boson sampling model \cite{An Introduction to Boson-Sampling}, have been developed and may prove easier to implement. The \emph{Instantaneous Quantum Poly-time} (IQP) machine \cite{Temporally_Unstructured_Quantum_Computation} is another such non-universal model with significant practical advantages \cite{Fault-tolerant computing with biased-noise superconducting qubits: a case study, Architectures for quantum simulation showing quantum supremacy}. Although IQP uses only commuting gates (in contrast to the non-commuting gate set needed for universal computations), it is believed to remain hard to classically simulate \cite{Classical Simulation of Commuting Quantum Computations Implies Collapse of the Polynomial Hierarchy,Average-case complexity versus approximate simulation of commuting quantum computations} even in a noisy environment \cite{Achieving quantum supremacy with sparse and noisy commuting quantum computations}. Hence, providing evidence that a machine can perform hard IQP computations would be a proof of its quantum supremacy. In \cite{Temporally_Unstructured_Quantum_Computation}, the authors present a \emph{hypothesis test} that can be passed only by devices capable of efficiently simulating IQP machines, providing the aforementioned evidence of the capability to perform hard IQP computations. The client in that work is purely classical; however, computational assumptions (conjecturing the hardness of finding hidden sub-matroids) were required for the security of the test against a malicious server. In the present work, by providing a suitable implementation of the IQP machine in the setting of Measurement Based Quantum Computing (MBQC) \cite{A one-way quantum computer,Measurement-based quantum computation on cluster states}, we are able to use tools from quantum cryptography (e.g. blind quantum computing \cite{Universal Blind Quantum Computation,Unconditionally Verifiable Blind Quantum Computation}) to develop an information-theoretically secure hypothesis test. To do so, we need to empower the client with minimal quantum capabilities such as those required in standard Quantum Key Distribution schemes. The structure of this work is as follows.
In Section \ref{sec:Preliminaries}, we formally introduce the IQP machine and develop an implementation of it in MBQC that is more suitable for our blind delegated setting than previous ones \cite{Temporally_Unstructured_Quantum_Computation,Measurement-based classical computation}. In Section \ref{sec:Blind IQP} we derive a delegated protocol for IQP computations that keeps the details of the computation hidden from the device performing it, and prove information-theoretic security in a composable framework. Finally, in Section \ref{sec:hypothesis test} we develop our hypothesis test for quantum supremacy, which a limited quantum Client can run on an untrusted Server. \section{Preliminaries} \label{sec:Preliminaries} \subsection{X-programs} \label{subsec:Xprograms} The IQP machine, introduced in \cite{Temporally_Unstructured_Quantum_Computation}, is defined by its capacity to implement $X$-programs. \begin{definition} An \emph{$X$-program} consists of a Hamiltonian comprising a sum of products of $X$ operators on different qubits, and an angle $\theta\in[0,2\pi]$ describing the action for which it is applied. The $i$-th term of the sum has a corresponding vector $\mathbf{q}_{i}$, called a \emph{program element}, which defines on which of the $n_p$ input qubits the product of $X$ operators constituting that term acts. $\mathbf{q}_i$ has a 1 in the $j$-th position when $X$ is applied to the $j$-th qubit. As such, we can describe the $X$-program using $\theta$ and a poly-size list of $n_a$ vectors $ \mathbf{q}_i \in \left\{ 0 , 1 \right\} ^{n_{p}}$ or, if we consider the matrix $\mathbf{Q}$ which has as rows the program elements $\mathbf{q}_i,i=1,\dots,n_a$, simply by the pair $\left( \mathbf{Q} , \theta \right) \in \left\{ 0 , 1 \right\}^{n_{a} \times n_{p}} \times \left[ 0 , 2 \pi \right]$. \end{definition} Applying the $X$-program discussed above to the computational basis state $\ket{0^{n_p}}$ and measuring the result in the computational basis allows us to see an $X$-program as a quantum circuit with input $\ket{0^{n_p}}$, comprised of gates diagonal in the Pauli-X basis, and classical output. Using the random variable $X$ to represent the distribution of output samples, the probability distribution of outcomes $\widetilde{x} \in \{0,1\}^{n_{p}}$ is: \begin{equation} \label{equ:IQP probability distribution} \mathbb{P} \left( X = \widetilde{x} \right) = \left| \bra{\widetilde{x}} \exp \left( \sum_{i=1}^{n_a} i \theta \bigotimes_{j: \mathbf{Q}_{ij} = 1} X_{j}\right) \ket{0^{n_{p}}} \right|^2 \end{equation} Note that the $i$ not used as an index is the imaginary unit. \begin{definition} \label{def:IQP oracle} Given some X-program, an \emph{IQP machine} is any computational method capable of efficiently returning a sample $\widetilde{x} \in \{0,1\}^{n_{p}}$ from the probability distribution \eqref{equ:IQP probability distribution}. \end{definition} \subsection{IQP In MBQC} \label{subsec:IQP in MBQC} We present an implementation of a given $X$-program in MBQC that will be used later in our protocol design.
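As a concrete illustration of Definition \ref{def:IQP oracle}, the following minimal sketch (ours, for illustration only; it assumes NumPy and SciPy and runs in exponential time by construction, so it is no substitute for an IQP machine) evaluates the distribution of equation \eqref{equ:IQP probability distribution} directly for a small instance, namely the matrix $\mathbf{Q}$ of Figure \ref{fig:bipartite graph} with $\theta = \pi/8$.
\begin{verbatim}
# Illustration only: brute-force (exponential-time) evaluation of the
# X-program output distribution; assumes numpy and scipy are available.
import numpy as np
from scipy.linalg import expm

X = np.array([[0, 1], [1, 0]], dtype=complex)
I = np.eye(2, dtype=complex)

def x_program_distribution(Q, theta):
    n_a, n_p = Q.shape
    H = np.zeros((2**n_p, 2**n_p), dtype=complex)
    for row in Q:                 # one Hamiltonian term per program element
        term = np.array([[1.0]], dtype=complex)
        for bit in row:           # tensor X where the row has a 1, else identity
            term = np.kron(term, X if bit else I)
        H += term
    psi = expm(1j * theta * H)[:, 0]   # column 0 is exp(i*theta*H)|0...0>
    return np.abs(psi)**2              # P(X = x), x enumerated as integers

Q = np.array([[1, 0, 1], [0, 1, 0]])   # the example IQP graph matrix
print(x_program_distribution(Q, np.pi / 8))
\end{verbatim}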
First, notice that, using the equality \begin{equation*} \label{equ:X-prog unitary with Z-prog} \exp \left( \sum_{i=1}^{n_a} i \theta \bigotimes_{j: \mathbf{Q}_{ij} = 1} X_{j}\right) = H_{n_p} \left( \prod_{i=1}^{n_a}\exp \left( i \theta \bigotimes_{j: \mathbf{Q}_{ij} = 1} Z_{j}\right) \right) H_{n_p} \end{equation*} equation \eqref{equ:IQP probability distribution} can be rewritten as: \begin{equation} \label{equ:Z-gate X-prog distribution} \mathbb{P} \left( X = \widetilde{x} \right) = \left| \left( \bra{\widetilde{x}} H_{n_p} \right) \left( \prod_{i=1}^{n_a} \exp \left( i \theta \bigotimes_{j: \mathbf{Q}_{ij} = 1} Z_{j}\right) \right) \ket{\mathbf{+}^{n_p}} \right|^2 \end{equation} For any given $i$, we now show the following lemma. \begin{lemma} \label{lemma:X-prog circuit} The circuit of Figure \ref{fig:z-prog circuit} implements the unitary: \begin{equation} \label{equ:single Z-prog unitary term} \exp \left( i \theta \bigotimes_{j : \mathbf{Q}_{ij} = 1} Z_{j}\right) \end{equation} \end{lemma} \begin{figure} \centering \begin{tikzpicture}[scale = 0.7] \draw[very thick] (0,6) node[anchor=east] {$\tilde{p}_1$} -- (7,6); \node at (0,5) {$\vdots$}; \draw[very thick] (0,4) node[anchor=east] {$\tilde{p}_{\# i}$} -- (5,4); \draw[very thick] (0,3) node[anchor=east] {$\tilde{p}_{\# i+1}$} -- (9,3); \node at (0,2) {$\vdots$}; \draw[very thick] (0,1) node[anchor=east] {$\tilde{p}_{n_{p}}$} -- (9,1); \draw[very thick] (0,0) node[anchor=east] {$\ket{+}$} -- (4,0); \filldraw[very thick] (1,0) circle (3pt) -- (1,6) circle (3pt); \node at (2,5) {$\ddots$}; \filldraw[very thick] (3,0) circle (3pt) -- (3,4) circle (3pt); \draw[thick] (4,-0.4) rectangle (5,0.4); \draw (4.1,-0.1) .. controls (4.3,0.2) and (4.7,0.2) .. (4.9,-0.1); \draw[thick, ->] (4.5, -0.2) -- (4.8, 0.3); \draw[very thick] (5,-0.1) -- (7.6,-0.1) -- (7.6,5.5); \draw[very thick] (5,0.1) -- (5.4,0.1) -- (5.4,3.5); \draw[very thick] (5.6,3.5) -- (5.6,0.1) -- (7.4,0.1) -- (7.4,5.5); \draw[very thick] (5,3.5) rectangle (6,4.5) node[pos = 0.5] {$Z$}; \draw[very thick] (6,4) -- (9,4); \node at (6.5,5) {\reflectbox{$\ddots$}}; \draw[very thick] (7,5.5) rectangle (8,6.5) node[pos = 0.5] {$Z$}; \draw[very thick] (8,6) -- (9,6); \end{tikzpicture} \caption{The circuit implementing Expression \eqref{equ:single Z-prog unitary term}. The input qubits $\{p_j\}_{j=1}^{n_p}$ are rearranged so that if $\# i$ is the Hamming weight of row $i$ of matrix $\mathbf{Q}$, then for $k=1,\dots,\#i$ each $\tilde{p}_k$ corresponds to one $p_j$ such that $\mathbf{Q}_{ij}=1$ and for $k=\#i+1,\dots,n_p$ they correspond to the ones such that $\mathbf{Q}_{ij}=0$. The ancillary qubit measurement is in the basis $\left\{ \ket{0_{\theta}} , \ket{1_{\theta}} \right\}$ defined in expression \eqref{equ:IQP measuremnt basis}.} \label{fig:z-prog circuit} \end{figure} \begin{proof} To prove this statement, we show that the circuit of Figure \ref{fig:z-prog circuit} and the operator of expression \eqref{equ:single Z-prog unitary term} have the same effect on all inputs. Without loss of generality, we can consider only computational basis input states $\ket{p}=\ket{p_1}\dots\ket{p_{n_p}}$, $p_{j} \in \left\{ 0 , 1 \right\}$. Since the operation that we perform is linear, the result then follows for all inputs.
Notice that, representing the $n_{p}$-qubit identity operator by $\mathbb{I}_{n_{p}}$, we can rewrite Expression \eqref{equ:single Z-prog unitary term} as: \begin{equation} \label{equ:single Z-prog unitary term rewritten} \cos{\theta} \mathbb{I}_{n_{p}}+ i \sin{\theta} \bigotimes_{j : \mathbf{Q}_{ij} = 1} Z_{j} \end{equation} Acting on $\ket{p}$, the above operator has two possible effects: \begin{enumerate} \item If, among the $j \in \left\{ 1,\dots,n_p \right\}$ such that $\mathbf{Q}_{ij}=1$, the number of $\ket{p_j}=\ket{1}$ is even, then there will be a phase change of $\cos{\theta} + i \sin{\theta}$, as the $\bigotimes_{j : \mathbf{Q}_{ij} = 1} Z_{j}$ operator will produce an even number of $-1$ factors. \item If, among the same $j$, the number of $\ket{p_j}=\ket{1}$ is odd, then the phase change will be $\cos{\theta} - i \sin{\theta}$. \end{enumerate} Hence, depending on the parity of $\ket{p}$ in the positions where $\mathbf{Q}_{ij}=1$, the effect is to produce one of the two states: \begin{equation} \label{equ:single Z-prog unitary term operating on basis} \left( \cos{\theta} \pm i \sin{\theta} \right) \ket{p} = e^{ \pm i \theta }\ket{p} \end{equation} We now show that the effect of the circuit in Figure \ref{fig:z-prog circuit} is the same as that of the operator in expression \eqref{equ:single Z-prog unitary term rewritten}. For ease of readability, in Figure \ref{fig:z-prog circuit} we consider a permutation of the states $\ket{\tilde{p}_1},\dots,\ket{\tilde{p}_{\#i}},\dots,\ket{\tilde{p}_{n_p}}$ such that the first $\#i$ qubits are the ones for which the value in the corresponding position in the program element is 1. The action of the controlled-Z gates is to check the parity of $\ket{1}$'s in the input, as each appearance of a $\ket{1}$ will flip the bottom \emph{ancillary} qubit between the states $\ket{+}$ and $\ket{-}$. After the action of all controlled-Z operators, we have the state $\ket{p} \ket{+}$ if there is an even number of $\ket{\tilde{p}_k}=\ket{1}$ for $k=1,\dots,\#i$ and $\ket{p} \ket{-}$ if this number is odd. Making a measurement of the ancillary qubit in the basis: \begin{equation} \label{equ:IQP measuremnt basis} \left\{ \ket{0_{\theta}} , \ket{1_{\theta}} \right\}= \left\{ \frac{1}{\sqrt{2}} \left( e^{-i\theta}\ket{+} + e^{i\theta}\ket{-} \right) ,\frac{1}{\sqrt{2}} \left(e^{-i\theta} \ket{+} - e^{i\theta}\ket{-} \right) \right\} \end{equation} leaves us with one of the two states $\pm e^{ - i \theta}\ket{p}$ in the odd parity case and with the state $e^{ i \theta} \ket{p}$ in the even parity case. The negative sign preceding the exponential term in the odd parity case comes from measuring the state $\ket{1_{\theta}}$ (a measurement outcome of $1$) and the positive sign comes from measuring $\ket{0_{\theta}}$. In the case of a measurement outcome $1$, we then apply $Z$ operators to the unmeasured qubits $\tilde{p}_{1},\dots,\tilde{p}_{\#i}$ (those with $\mathbf{Q}_{ij}=1$, as shown in Figure \ref{fig:z-prog circuit}) to ensure that the resulting states are as in expression \eqref{equ:single Z-prog unitary term operating on basis} and with the same dependency of the sign on the parity of $\ket{p}$. \end{proof} We now consider generating the full distribution of equation \eqref{equ:Z-gate X-prog distribution} using measurement based quantum computing \cite{A one-way quantum computer,Measurement-based quantum computation on cluster states}.
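Before doing so, we note that Lemma \ref{lemma:X-prog circuit} lends itself to a direct numerical check. The sketch below (ours, for illustration; it assumes NumPy and SciPy) simulates the gadget of Figure \ref{fig:z-prog circuit} for a term acting on two qubits and confirms that both measurement branches, after the conditional $Z$ correction, agree with the unitary of expression \eqref{equ:single Z-prog unitary term}.
\begin{verbatim}
# Sanity check of the ancilla gadget: on a random 2-qubit input, both
# outcomes (after the conditional Z correction) match exp(i*theta*Z(x)Z).
import numpy as np
from scipy.linalg import expm

rng = np.random.default_rng(0)
theta = np.pi / 8
Z = np.diag([1.0, -1.0]).astype(complex)
target = expm(1j * theta * np.kron(Z, Z))    # the lemma's unitary, two qubits

psi = rng.normal(size=4) + 1j * rng.normal(size=4)
psi /= np.linalg.norm(psi)                   # random primary input
plus = np.array([1, 1], dtype=complex) / np.sqrt(2)
state = np.kron(psi, plus)                   # qubit order: p1, p2, ancilla

def cz(n, a, b):                             # controlled-Z as a diagonal matrix
    d = np.ones(2**n, dtype=complex)
    for x in range(2**n):
        if (x >> (n - 1 - a)) & 1 and (x >> (n - 1 - b)) & 1:
            d[x] = -1
    return np.diag(d)

state = cz(3, 0, 2) @ (cz(3, 1, 2) @ state)  # couple the ancilla to both qubits

# The measurement basis of the lemma on the ancilla:
k0 = (np.exp(-1j*theta)*np.array([1, 1]) + np.exp(1j*theta)*np.array([1, -1])) / 2
k1 = (np.exp(-1j*theta)*np.array([1, 1]) - np.exp(1j*theta)*np.array([1, -1])) / 2

for outcome, ket in enumerate([k0, k1]):
    branch = state.reshape(4, 2) @ ket.conj()   # project out the ancilla
    branch /= np.linalg.norm(branch)
    if outcome == 1:                            # conditional Z correction
        branch = np.kron(Z, Z) @ branch
    print(outcome, abs(np.vdot(target @ psi, branch)))   # both approximately 1
\end{verbatim}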
An MBQC computation consists of a graph describing the pattern of entanglement amongst the qubits in a state, a measurement pattern describing the order of measurements of qubits in that state, and a set of corrections on later measurements which can depend on the outcomes of previous ones. We now identify all of these components of an MBQC computation in the case of an IQP computation. \begin{lemma} \label{lem:IQP graph design} A graph and measurement pattern can always be designed to simulate an $X$-program efficiently. \end{lemma} \begin{proof} Producing the distribution in Eq. \eqref{equ:Z-gate X-prog distribution} can be achieved by inputting the state $\ket{+^{n_{p}}}$ into a circuit made from composing circuits like the one in Figure \ref{fig:z-prog circuit} (one for each term of the product in Eq. \eqref{equ:Z-gate X-prog distribution}) and measuring the result in the Hadamard basis. The $Z$ corrections commute with the controlled-$Z$ operations and therefore they can be moved to the end of the new, larger circuit. Because there is no dependency between the measurements, they can be performed in any order or even simultaneously. The $Z$ corrections, conditional on the measurement outcomes of the ancillary qubits, can then be implemented via bit flips, since a $Z$ operator applied before a Hadamard-basis measurement simply flips the measurement outcome. \end{proof} A formal description of the protocol described in these proofs can be found in Algorithm \ref{alg:x-prog in MBQC} of the Appendix. We introduce some further terminology which is used in that algorithm and in the remainder of this work. The reader will notice that the entanglement pattern used in Algorithm \ref{alg:x-prog in MBQC} and implicit in the proof of Lemma \ref{lem:IQP graph design} is that of an \emph{undirected bipartite graph}, which we will refer to as an IQP graph. \begin{definition} An \emph{undirected bipartite graph}, which we refer to as an \emph{IQP graph}, consists of a bipartition of vertices into two sets $P$ and $A$ of cardinality $n_p$ and $n_a$ respectively. We may represent such a graph by $\mathbf{Q} \in \left\{ 0 , 1 \right\}^{n_{a} \times n_{p}}$. An edge exists in the graph when $\mathbf{Q}_{ij}=1$, for $i=1,\dots,n_a$ and $j=1,\dots,n_p$. We call the set $P$ \emph{primary vertices} and the set $A$ \emph{ancillary vertices}. \end{definition} By referring to the bottom qubit of Figure \ref{fig:z-prog circuit} as the ancillary qubit and the others as primary qubits, we see why this type of graph is relevant and how the $X$-program matrix $\mathbf{Q}$, interpreted as a bipartite graph, exactly describes the entanglement pattern. Throughout this work, we refer to $\mathbf{Q}$ interchangeably as a matrix corresponding to an $X$-program and a graph and the reader may wish to direct their attention to Figure \ref{fig:bipartite graph} for an example. \begin{figure} \centering \begin{tikzpicture}[scale = 0.7] \filldraw (1.5,0) circle (3pt) node[anchor = north east] {$a_{1}$}; \filldraw (4.5,0) circle (3pt) node[anchor = north east] {$a_{2}$}; \filldraw (0,2) circle (3pt) node[anchor = south east] {$p_{1}$}; \draw[very thick] (0,2) -- (1.5,0); \filldraw (3,2) circle (3pt) node[anchor = south east] {$p_{2}$}; \draw[very thick] (3,2) -- (4.5,0); \filldraw (6,2) circle (3pt) node[anchor = south east] {$p_{3}$}; \draw[very thick] (6,2) -- (1.5,0); \node at (11,1) {$\mathbf{Q} = \left( \begin{array}{ccc} 1 & 0 & 1 \\ 0 & 1 & 0 \end{array} \right)$}; \end{tikzpicture} \caption{An example of an IQP graph described by matrix $\mathbf{Q}$.
Here, $n_{p} = 3$ and $n_{a} = 2$ while the partition used is $P = \left[ p_1 , p_2 , p_3 \right]$ and $A = \left[ a_1 , a_2 \right]$.} \label{fig:bipartite graph} \end{figure} \section{Blind Delegated IQP Computation} \label{sec:Blind IQP} The next step towards our method for verifying IQP machines is to build a method for blindly performing an IQP computation in a delegated setting. We consider a Client with limited quantum power delegating an IQP computation to a powerful Server. The novel method that we use in this work is to keep the $X$-program secret by not revealing the quantum state used. The intuition behind the method used to perform this hiding is that the Client will ask the Server to produce a quite general quantum state and then move from that one to the one that is required for the computation. If this is done in a blind way then the Server only has some knowledge of the general starting state from which any number of other quantum states may have been built. Hence, there are two key problems to be addressed in the following subsections: \begin{enumerate} \item \label{pt:BIQP problem 1} How to move from a general quantum state to a specific one representing an IQP computation. \item \label{pt:BIQP problem 2} How to do so secretly in a delegated setting. \end{enumerate} \subsection{Break and Bridge} \label{sec:Break, Bridge Operators} The break and bridge operations on a graph $\widetilde{G}=(\widetilde{V},\widetilde{E})$, with vertex set $\widetilde{V}$ and edge set $\widetilde{E}$, which were introduced in \cite{Unconditionally Verifiable Blind Quantum Computation, Multi-party entanglement in graph states}, are exactly those necessary to solve the `how to move' element of problem \ref{pt:BIQP problem 1}. \begin{definition} \label{def:bridge and break} The \emph{break} operator acts on a vertex $v \in \widetilde{V}$ of degree 2 in a graph $\widetilde{G}$. It removes $v$ from $\widetilde{V}$ and also removes any edges connected to $v$ from $\widetilde{E}$. The \emph{bridge} operator also acts on a vertex $v \in \widetilde{V}$ of degree 2 in a graph $\widetilde{G}$. It removes $v$ from $\widetilde{V}$, removes any edges connected to $v$ from $\widetilde{E}$ and adds a new edge between the neighbours of $v$. \end{definition} Figure \ref{fig:bridge and break chain} gives an example of multiple applications of the bridge and break operators. Once this is translated from a graph theoretic idea to an operation on quantum states, we will have addressed the `how to move' component of problem \ref{pt:BIQP problem 1}. The \emph{extended IQP graph}, which we define now, plays the role of the `general quantum state' also mentioned in problem \ref{pt:BIQP problem 1}.
\begin{figure} \centering \begin{tikzpicture}[scale = 0.7] \filldraw (0,0) circle (3pt); \filldraw (0,1) circle (3pt); \filldraw (0,2) circle (3pt); \filldraw (1,0) circle (3pt); \filldraw (1,1) circle (3pt); \filldraw (1,2) circle (3pt); \filldraw (2,0) circle (3pt); \filldraw (2,1) circle (3pt); \filldraw (2,2) circle (3pt); \draw[very thick] (0,0) -- (0,1); \draw[very thick] (0,1) -- (1,0); \draw[very thick] (2,0) -- (2,1); \draw[very thick] (2,1) -- (2,2); \draw[very thick] (0,2) -- (1,2); \draw[very thick] (1,0) -- (1,1); \draw[very thick] (1,1) -- (2,2); \draw[very thick] (1,2) -- (2,2); \draw[very thick , ->] (2.5,1) -- (3.5,1); \filldraw (4,0) circle (3pt); \filldraw (4,1) circle (3pt); \filldraw (4,2) circle (3pt); \filldraw (5,0) circle (3pt); \filldraw (5,1) circle (3pt); \filldraw (5,2) circle (3pt); \filldraw (6,0) circle (3pt); \filldraw (6,2) circle (3pt); \draw[very thick] (4,0) -- (4,1); \draw[very thick] (4,1) -- (5,0); \draw[very thick] (4,2) -- (5,2); \draw[very thick] (5,0) -- (5,1); \draw[very thick] (5,1) -- (6,2); \draw[very thick] (5,2) -- (6,2); \draw[very thick] (6,2) -- (6,0); \draw[very thick , ->] (6.5,1) -- (7.5,1); \filldraw (8,0) circle (3pt); \filldraw (8,1) circle (3pt); \filldraw (8,2) circle (3pt); \filldraw (9,0) circle (3pt); \filldraw (9,1) circle (3pt); \filldraw (10,2) circle (3pt); \filldraw (10,0) circle (3pt); \draw[very thick] (8,0) -- (8,1); \draw[very thick] (8,1) -- (9,0); \draw[very thick] (10,0) -- (10,2); \draw[very thick] (9,0) -- (9,1); \draw[very thick] (9,1) -- (10,2); \end{tikzpicture} \caption{An example of a sequence of one bridge and one break operation.} \label{fig:bridge and break chain} \end{figure} \begin{definition} An \emph{extended IQP graph} is represented by $\widetilde{\mathbf{Q}} \in \left\{ -1 , 0 , 1 \right\}^{n_{a} \times n_{p}}$. The vertex set contains $A = \left\{ a_{1} , ... , a_{n_{a}} \right\}$ and $P = \left\{ p_{1} , ... , p_{n_{p}} \right\}$ while $\widetilde{\mathbf{Q}}_{ij}=0$ and $\widetilde{\mathbf{Q}}_{ij}=1$ have the same implications, regarding the connections between these vertices, as in IQP graphs. We interpret $\widetilde{\mathbf{Q}}_{ij}=-1$ as the existence of an intermediary vertex $b_k$ between vertices $p_j$ and $a_i$, and denote by $n_b$ the number of $-1$ entries in $\widetilde{\mathbf{Q}}$. As such the vertex set also includes the \emph{bridge and break vertices} $B = \left\{ b_{1} , ... , b_{n_b} \right\}$ and the edge set includes edges between $b_{k}$ and $a_{i}$ as well as $b_{k}$ and $p_{j}$ when $\widetilde{\mathbf{Q}}_{ij}=-1$. To keep track of these connections we define the surjective function $g$ for which $g \left( i , j \right) = k $ where $b_{k}$ is the intermediate vertex connected to $a_{i}$ and $p_{j}$. \end{definition} An \emph{extended IQP graph} $\widetilde{\mathbf{Q}}$ can be built from an IQP graph $\mathbf{Q}$ by replacing any number of the entries of $\mathbf{Q}$ with $-1$. Throughout the remainder of this work we will use the tilde notation to represent an extended IQP graph $\widetilde{\mathbf{Q}}$ built from an IQP graph $\mathbf{Q}$ in this way. Figure \ref{fig:IQP extended graph} displays an example of an extended IQP graph. By applying a bridge operator to $b_{1}$ and a break operation to $b_{2}$ in $\widetilde{\mathbf{Q}}$ of Figure \ref{fig:IQP extended graph} we arrive at $\mathbf{Q}$ of Figure \ref{fig:bipartite graph}. It is in this sense that an extended IQP graph is `more general' than an IQP graph.
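To make these operations concrete at the level of the matrix encoding, the following sketch (ours, for illustration, assuming NumPy) treats each $-1$ entry of an extended IQP graph as its intermediate vertex $b_k$: bridging the vertex joins its two neighbours (the entry becomes $1$), while breaking it disconnects them (the entry becomes $0$). Applied to $\widetilde{\mathbf{Q}}$ of Figure \ref{fig:IQP extended graph}, it recovers $\mathbf{Q}$ of Figure \ref{fig:bipartite graph}.
\begin{verbatim}
# Bridge and break on the matrix encoding of an extended IQP graph.
import numpy as np

def bridge(Q_ext, i, j):
    assert Q_ext[i, j] == -1, "bridge acts on an intermediate vertex"
    Q = Q_ext.copy(); Q[i, j] = 1    # neighbours p_j and a_i become adjacent
    return Q

def break_(Q_ext, i, j):
    assert Q_ext[i, j] == -1, "break acts on an intermediate vertex"
    Q = Q_ext.copy(); Q[i, j] = 0    # p_j and a_i are left disconnected
    return Q

Q_ext = np.array([[-1, 0,  1],       # b_1 sits between p_1 and a_1
                  [ 0, 1, -1]])      # b_2 sits between p_3 and a_2
Q = break_(bridge(Q_ext, 0, 0), 1, 2)
print(Q)                             # [[1 0 1]
                                     #  [0 1 0]]
\end{verbatim}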
\begin{figure} \centering \begin{tikzpicture}[scale = 0.7] \filldraw (1.5,0) circle (3pt) node[anchor = north east] {$a_{1}$}; \filldraw (4.5,0) circle (3pt) node[anchor = north east] {$a_{2}$}; \filldraw (0,2) circle (3pt) node[anchor = south east] {$p_{1}$}; \draw[very thick] (0,2) -- (1.5,0); \filldraw (0.75,1) circle (3pt) node[anchor = south west] {$b_{1}$}; \filldraw (3,2) circle (3pt) node[anchor = south east] {$p_{2}$}; \draw[very thick] (3,2) -- (4.5,0); \filldraw (6,2) circle (3pt) node[anchor = south east] {$p_{3}$}; \draw[very thick] (6,2) -- (1.5,0); \draw[very thick] (6,2) -- (4.5,0); \filldraw (5.25,1) circle (3pt) node[anchor = north west] {$b_{2}$}; \node at (11,1) {$\widetilde{\mathbf{Q}} = \left( \begin{array}{ccc} -1 & 0 & 1 \\ 0 & 1 & -1 \end{array} \right)$}; \end{tikzpicture} \caption{An example of an extended IQP graph described by matrix $\widetilde{\mathbf{Q}}$ with $(n_a,n_p,n_b)=(2,3,2)$, $P = \left[ p_1 , p_2 , p_3 \right]$ and $A = \left[ a_1 , a_2 \right]$. Two vertices $b_1$ and $b_2$ are introduced and the function $g: \mathbb{Z}_{n_a\times n_p} \rightarrow \mathbb{Z}_{n_b}$ is defined as $g \left( 1 , 1 \right) = 1$ and $g \left( 2 , 3 \right) = 2$.} \label{fig:IQP extended graph} \end{figure} It is convenient to now introduce the following definition which allows us to use the graphs defined above to describe the entanglement pattern of quantum states. \begin{definition} Consider a matrix $\mathbf{G} \in \left\{-1,0,1\right\}^{n_a\times n_p}$ and use function $g \left( i , j \right) = k$ to define index $k=1,\dots,n_b$ for the elements $\mathbf{G}_{ij}=-1$. The circuit $E_{\mathbf{G}}$ on $(n_a + n_p +n_b)$ qubits applies controlled-$Z$ operations between qubits $p_{j}$ and $a_{i}$ if $\mathbf{G}_{ij} = 1$ and, when $\mathbf{G}_{ij} = -1$, between qubits $b_{g(i,j)}$ and $a_{i}$ as well as between $b_{g(i,j)}$ and $p_{j}$. \end{definition} Using the above notation, the state built in Lemma \ref{lem:IQP graph design} is $E_{\mathbf{Q}} \ket{+}^{n_{a} + n_{p}}$. We refer to such a state, or $Z$ rotations thereof, as an \emph{IQP state}. Similarly, we call states of the form $E_{\widetilde{\mathbf{Q}}} \ket{+}^{n_{a} + n_{p} + n_{b}}$ or, again, their $Z$ rotations, \emph{IQP extended states}. We can now state Lemma \ref{lem:state bridge and break correctness} which teaches us how to translate bridge and break operations from graph theoretical ideas into practical operations on quantum states. A similar lemma can be found in \cite{Unconditionally Verifiable Blind Quantum Computation}. \begin{lemma} \label{lem:state bridge and break correctness} Consider a quantum state $E_{\mathbf{Q}}\ket{\phi}$ where $\ket{\phi}$ is arbitrary. If $\widetilde{\mathbf{Q}}$ is an extended IQP graph built from $\mathbf{Q}$ then there exists a state $E_{\widetilde{\mathbf{Q}}}\ket{\psi}$, which can be transformed into the state $E_{\mathbf{Q}}\ket{\phi}$ through a sequence of Pauli-$Y$ basis measurements on some of its qubits and local rotations around the $Z$ axis on the unmeasured qubits through angles $\left\{ 0 , \frac{\pi}{2} , \pi , \frac{3 \pi}{2} \right\}$. \end{lemma} The detailed proof of Lemma \ref{lem:state bridge and break correctness}, which can be found in Appendix \ref{apen:Detailed proof of Lemma state bridge and break correctness}, shows us that we can create the following state.
\begin{equation} \label{equ:section final state} \prod_{k = 1}^{n_{b}} \left( S_{p_{j}}^{(-1)^{s^{b}_{k}+r^{b}_{k}}} \otimes S_{a_{i} }^{(-1)^{s^{b}_{k}+r^{b}_{k}}} \right)^{d^{b}_{k}} \left( Z_{p_{j}}^{r^{b}_{k}} \otimes Z_{a_{i}}^{r^{b}_{k}} \right)^{1-d^{b}_{k}} E_{\mathbf{Q}}\ket{\phi} \end{equation} where $p_j$ and $a_i$ are the primary and ancillary qubits connected to $b_k$ respectively. The operations performed to achieve this are measurements of the qubits corresponding to bridge and break vertices (which we call \emph{bridge and break qubits}) of $E_{\widetilde{\mathbf{Q}}}\ket{\psi}$ in the Pauli $Y$ basis. The quantity $s_k^{b}$ is the outcome of this measurement on qubit $b_{k}$ while the quantities $r^{b}_{k}$ and $d^{b}_{k}$ tell us that said qubit was initialised in the state $\ket{b_k} = Y^{r^{b}_{k}} \sqrt{Y}^{d^{b}_{k}} \ket{0}$. It is possible to perform an IQP computation using this method. Although the quantum state generated in this way would equal $E_{\mathbf{Q}} \bigotimes_{1}^{n_{a} + n_{p}} \ket{+}$ up to some $S$ corrections, these corrections may be accounted for by making corrections to the primary and ancillary measurement bases (see also the circuits in Figures \ref{fig:original graph generation circuit} and \ref{fig:bridge and break graph generation circuit} in Appendix \ref{sec:Pictorial Evolution of Algorithms in This Paper}). Algorithm \ref{alg:real IQP resource honest server distributed} of the Appendix uses the methods discussed to build an IQP state. \subsection{The Protocol} \label{sec:security proof} We can now address problem \ref{pt:BIQP problem 2} of the introduction to this section. To do so, we use the tools of the previous section to blindly create an IQP state at the Server side. We wish to construct the \emph{Ideal Resource} of Figure \ref{fig:ideal resource}, which takes as input from the Client an IQP computation, $\left( \mathbf{Q} , \theta \right)$, and in return gives a classical output $\widetilde{x}$. If the Server is honest, then $\widetilde{x}$ comes from the distribution corresponding to $\left( \mathbf{Q} , \theta \right)$. If the Server is dishonest, then they can input some quantum operation $\mathcal{E}$ and some quantum state $\rho_{B}$ and force the output to the Client into the classical state $\mathcal{E}\left( \mathbf{Q} , \theta , \rho_{B} \right)$. We would like the Server to receive only an extended IQP graph $\widetilde{\mathbf{Q}}$ which can be built from $\mathbf{Q}$, the distribution $\mathcal{Q}$ over the possible $\mathbf{Q}$ from which $\widetilde{\mathbf{Q}}$ could be built, and $\theta$. Let us assume that this is public knowledge.
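Before constructing this resource, it is worth making the role of $\widetilde{\mathbf{Q}}$ explicit. Every $-1$ entry of the public $\widetilde{\mathbf{Q}}$ may independently end up bridged or broken, so from the Server's point of view the hidden $\mathbf{Q}$ is one of $2^{n_b}$ candidates drawn from $\mathcal{Q}$. The sketch below (ours, an illustration of this counting intuition rather than of the security proof itself, assuming NumPy) enumerates the candidates for the extended graph of Figure \ref{fig:IQP extended graph}.
\begin{verbatim}
# Enumerate the IQP graphs consistent with a public extended graph.
import itertools
import numpy as np

def candidates(Q_ext):
    spots = list(zip(*np.where(Q_ext == -1)))   # positions of the b_k
    for choice in itertools.product([0, 1], repeat=len(spots)):
        Q = Q_ext.copy()
        for (i, j), v in zip(spots, choice):
            Q[i, j] = v                         # break (0) or bridge (1)
        yield Q

Q_ext = np.array([[-1, 0,  1],
                  [ 0, 1, -1]])
print(len(list(candidates(Q_ext))))             # 4 = 2**n_b candidates
\end{verbatim}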
\begin{figure} \centering \begin{tikzpicture}[scale = 0.7] \draw[very thick] (0,0.5) rectangle (8,2.5) node[align = center, pos = 0.5] {$\widetilde{x} = \begin{cases} x & \text{ if honest} \\ \mathcal{E} \left( \mathbf{Q} , \rho_{B} , \theta \right) & \text{if dishonest}\end{cases}$} node[anchor = south east] {$\mathcal{S}$}; \draw[very thick, ->] (-2,2) -- (0,2) node[anchor = south, pos = 0.5] {$\mathbf{Q}$}; \draw[very thick, <-] (-2,1) -- (0,1) node[anchor = south, pos = 0.5] {$\widetilde{x}$}; \draw[very thick, dotted, <-] (8,2) -- (10,2) node[anchor = south, pos = 0.5] {$\mathcal{E}$}; \draw[very thick, dotted, <-] (8,1) -- (10,1) node[anchor = south, pos = 0.5] {$\rho_{B}$}; \draw[very thick, <->] (-2,0) -- (10,0) node[anchor = north, pos = 0.5] {$\widetilde{\mathbf{Q}} , \mathcal{Q},\theta$}; \end{tikzpicture} \caption{The ideal blind delegated IQP computation resource.} \label{fig:ideal resource} \end{figure} The proposed real communication protocol is described in detail by Algorithm \ref{alg:real IQP resource honest server} and graphically shown in Figure \ref{fig:real IQP computation resource}. The element of blindness is added to the work of Section \ref{sec:Break, Bridge Operators} and Algorithm \ref{alg:real IQP resource honest server distributed} by introducing some random rotations on the primary and ancillary qubits. These rotations are such that they can be corrected by rotating, in the same way, the measurement bases of those qubits, thereby ensuring that the original IQP computation is being performed. \begin{algorithm} \caption{Blind distributed IQP computation} \label{alg:real IQP resource honest server} \textbf{Public:} $\widetilde{\mathbf{Q}} , \mathcal{Q}$, $\theta$ \textbf{Client input:} $\mathbf{Q}$ \textbf{Client output:} $\widetilde{x}$ \textbf{Protocol:} \begin{algorithmic}[1] \STATE The Client randomly generates $r^{p} , d^{p} \in \left\{ 0 , 1 \right\}^{n_{p}}$ and $r^{a} , d^{a} \in \left\{ 0 , 1 \right\}^{n_{a}}$ where $n_{p}$ and $n_{a}$ are the numbers of primary and ancillary qubits respectively. \label{alg line:real IQP resource honest server - primary and ancila random key generation} \STATE The Client generates the states $\ket{p_{j}} = Z^{r^{p}_{j}} S^{d^{p}_{j}} \ket{+} $ and $\ket{a_{i}} = Z^{r^{a}_{i}} S^{d^{a}_{i}} \ket{+}$ for $j \in \left\{ 1 , \dots, n_{p} \right\}$ and $i \in \left\{ 1 , \dots, n_{a} \right\}$ \label{alg line:real IQP resource honest server - primary and ancillary state generation} \STATE The Client creates $d^b \in \left\{0,1\right\}^{n_b}$ in the following way: for $i=1,\dots,n_a$ and $j=1,\dots,n_p$, if $\widetilde{\mathbf{Q}}_{ij}=-1$ and $\mathbf{Q}_{ij}=0$, then $d^b_k=0$; else if $\widetilde{\mathbf{Q}}_{ij}=-1$ and $\mathbf{Q}_{ij}=1$, then $d^b_k=1$. The Client keeps track of the relation between $k$ and $(i,j)$ via the surjective function $g: \mathbb{Z}_{n_a \times n_p} \rightarrow \mathbb{Z}_{n_b}$. \STATE The Client generates $r^{b} \in \left\{ 0 , 1 \right\}^{n_{b}}$ at random and produces the states $\ket{b_{k}} = Y^{r^{b}_{k}} \left( \sqrt{Y} \right)^{d^{b}_{k}} \ket{0} $ for $k \in \left\{ 1 , \dots, n_{b} \right\}$ \label{alg line:real IQP resource honest server - bridge and break state generation} \STATE The state $\rho$, comprising all of the Client's produced states, is sent to the Server. \STATE The Server implements $E_{\widetilde{\mathbf{Q}}}$.
\STATE The Server measures qubits $b_{1} , \dots, b_{n_{b}}$ in the $Y$-basis $\left\{ \ket{+^{Y}} , \ket{-^{Y}} \right\}$ and sends the outcome $s^b \in \left\{ 0 , 1 \right\}^{n_{b}}$ to the Client. \STATE The Client calculates $\Pi^{z} , \Pi^{s} \in \left\{ 0 , 1 \right\}^{n_{p}}$ and $A^{z} , A^{s} \in \left\{ 0 , 1 \right\}^{n_{a}}$ using equations \eqref{equ:primary Z correction term}, \eqref{equ:primary S correction term}, \eqref{equ:ancila Z correction term} and \eqref{equ:ancila S correction term}. \begin{align} \label{equ:primary Z correction term} \Pi^{z}_{j} &= \sum_{i,k:g(i,j)=k} r_k^b \left( 1 - d^{b}_k \right) - r^{p}_{j} \\ \label{equ:primary S correction term} \Pi^{s}_{j} &= \sum_{i,k:g(i,j)=k} (-1)^{s^{b}_k+r^{b}_k} d^{b}_k - d^{p}_{j}\\ \label{equ:ancila Z correction term} A^{z}_{i} &= \sum_{j,k:g(i,j)=k} r_k^{b} \left( 1 - d^{b}_k \right) - r^{a}_{i}\\ \label{equ:ancila S correction term} A^{s}_{i} &= \sum_{j,k:g(i,j)=k}(-1)^{s^{b}_k+r^{b}_k} d^{b}_k - d^{a}_i \end{align} \STATE The Client sends $A \in \left\{0,1,2,3\right\}^{n_a}$ and $\Pi \in \left\{0,1,2,3\right\}^{n_p}$ for the ancillary and primary qubits respectively, where $A_{i} = A^{s}_{i} + 2 A^{z}_{i} \pmod 4$ and $\Pi_{j} = \Pi^{s}_{j} + 2 \Pi^{z}_{j} \pmod 4$. \STATE The Server measures the ancillary and primary qubits in the respective bases below. \begin{equation} \label{equ:primary and ancillary measurement basis} S^{- A_{i} } \left\{ \ket{0_\theta} , \ket{1_\theta} \right\} \text{ and } S^{- \Pi_{j} } \left\{ \ket{+} , \ket{-} \right\} \end{equation} The measurement outcomes $s^{p} \in \left\{ 0 , 1 \right\}^{n_{p}}$ and $s^{a} \in \left\{ 0 , 1 \right\}^{n_{a}}$ are sent to the Client. \STATE The Client generates and outputs $\widetilde{x} \in \left\{ 0 , 1 \right\}^{n_{p}}$ as follows.
\begin{equation} \label{equ:IQP final outcome calculation} \widetilde{x}_{j} = s^{p}_{j} + \sum_{i:\mathbf{Q}_{ij} = 1} s^{a}_{i} \pmod 2 \end{equation} \end{algorithmic} \end{algorithm} \begin{figure} \centering \begin{tikzpicture}[scale = 0.7] \draw[very thick] (0,0.5) rectangle (1,4.5) node[anchor = south east] {$\pi_{A}$}; \draw[very thick, ->] (-2,4) -- (0,4) node[anchor = south, pos = 0.5] {$\mathbf{Q}$}; \draw[very thick, <-] (-2,1) -- (0,1) node[anchor = south, pos = 0.5] {$\widetilde{x}$}; \draw[very thick, ->] (1,4) -- (5,4) node[anchor = south, pos = 0.5] {$\rho$}; \draw[very thick, <-] (1,3) -- (5,3) node[anchor = south, pos = 0.5] {$s^b$}; \draw[very thick, ->] (1,2) -- (5,2) node[anchor = south, pos = 0.5] {$A,\Pi$}; \draw[very thick, <-] (1,1) -- (5,1) node[anchor = south, pos = 0.5] {$s^a,s^p$}; \draw[very thick] (5,0.5) rectangle (6,4.5) node[anchor = south east] {$\pi_{B}$}; \draw[very thick, dotted, <-] (6,4) -- (8,4) node[anchor = south, pos = 0.5] {$\mathcal{E}$}; \draw[very thick, dotted, <-] (6,3) -- (8,3) node[anchor = south, pos = 0.5] {$\rho_{B}$}; \draw[very thick, dotted, ->] (6,1) -- (8,1) node[anchor = south, pos = 0.5] {$\rho_{B}'$}; \draw[very thick, <->] (-2,0) -- (8,0) node[anchor = north, pos = 0.5] {$\widetilde{\mathbf{Q}} , \mathcal{Q}, \theta$}; \draw[very thick, dotted] (1.5, 0.5) rectangle (4.5,5) node[anchor = south east] {$\mathcal{R}$}; \end{tikzpicture} \caption{The real communication protocol of Algorithm \ref{alg:real IQP resource honest server}.} \label{fig:real IQP computation resource} \end{figure} During the execution of the protocol of Algorithm \ref{alg:real IQP resource honest server}, the Server sends two classical bit strings to the Client that correspond to the measurement outcomes of the sent qubits. If the Server wants to deviate from the protocol, he will again use some quantum map $\mathcal{E}$ on the information received so far together with the state $\rho_B$ he has in his own register. At the final step of the protocol the Server may output some quantum state $\rho_{B}'$. To prove composable security of the proposed protocol we replace the notion of a malicious Server with that of a global distinguisher that has a view of all inputs and outputs of the relevant resources. To recreate the view of a malicious Server, we develop a simulator $\sigma$ interfacing between the ideal resource $\mathcal{S}$ of Figure \ref{fig:ideal resource} and the distinguisher in such a way that the latter cannot tell the difference between an interaction with the ideal resource and the real protocol. We employ the Abstract Cryptography framework introduced in \cite{Abstract cryptography, Cryptographic security of quantum key distribution} and teleportation techniques inspired by \cite{Composable security of delegated quantum computation} to prove security in the case of a malicious Server. We will prove that: \begin{equation} \pi_A\mathcal{R}\equiv \mathcal{S}\sigma \end{equation} where $\mathcal{R}$ is the communication channel (quantum and classical) used by the Client and the Server in the protocol. \begin{theorem} \label{thm:security proof} The protocol described by Algorithm \ref{alg:real IQP resource honest server} is information-theoretically secure against a dishonest Server. \end{theorem} For the sake of brevity, we give only an intuitive proof here and leave a thorough proof to Appendix \ref{thm:security of real resource appendix}.
\begin{proof} The proof consists of a sequence of transformations of the real protocol of Algorithm \ref{alg:real IQP resource honest server} into the ideal resource plus simulator setting of Algorithm \ref{alg:real IQP resource honest server with simulator}. These transformations leave the computation unchanged, therefore ensuring the indistinguishability of the two settings and so the security of the protocol. As the computation itself is not changed by the transformations we also ensure that we are still sampling from the original IQP distribution, providing evidence for the correctness of Algorithm \ref{alg:real IQP resource honest server with simulator}. \begin{algorithm} \caption{Blind distributed IQP computation with simulator} \label{alg:real IQP resource honest server with simulator} \textbf{Public:} $\widetilde{\mathbf{Q}} , \mathcal{Q}$, $\theta$ \textbf{Client input:} $\mathbf{Q}$ \textbf{Client output:} $\widetilde{x}$ \textbf{The simulator} \begin{algorithmic}[1] \STATE Generates $n_{p}+n_a+n_b$ EPR pairs and sends half of each to the ideal resource and the other half to the distinguisher. \STATE Receives the bitstring $s^{b} \in \left\{ 0 , 1 \right\}^{n_{b}}$ from the distinguisher and forwards it to the ideal resource. \STATE Randomly generates $\Pi \in \left\{ 0 , 1 , 2 , 3 \right\}^{n_{p}}$ and $A \in \left\{ 0 , 1 , 2 , 3 \right\}^{n_{a}}$ and sends them to the ideal resource and distinguisher. \STATE Receives the bitstrings $s^{p} \in \left\{ 0 , 1 \right\}^{n_{p}}$ and $s^{a} \in \left\{ 0 , 1 \right\}^{n_{a}}$ from the distinguisher and forwards them to the ideal resource. \end{algorithmic}\vspace{0.1in} \textbf{The ideal resource} \begin{algorithmic}[1] \STATE Calculates $d^b \in \left\{0,1\right\}^{n_b}$ in the following way: for $i=1,\dots,n_a$ and $j=1,\dots,n_p$, if $\widetilde{\mathbf{Q}}_{ij}=-1$ and $\mathbf{Q}_{ij}=0$, then $d^b_k=0$; else if $\widetilde{\mathbf{Q}}_{ij}=-1$ and $\mathbf{Q}_{ij}=1$, then $d^b_k=1$. Keep track of the relation between $k$ and $(i,j)$ via the surjective function $g: \mathbb{Z}_{n_a \times n_p} \rightarrow \mathbb{Z}_{n_b}$. \STATE Measures the corresponding half EPR pairs in the bases $\sqrt{Y}^{d^{b}_{k}} \left\{ \ket{0} , \ket{1} \right\}$ getting outcomes $r^{b}_k$, for $k=1,\dots,n_b$. \STATE Calculates $d^{p} \in \left\{ 0 , 1 , 2 , 3 \right\}^{n_{p}}$ and $d^{a} \in \left\{ 0 , 1 , 2 , 3 \right\}^{n_{a}}$ using equations \eqref{equ:primary measurement term} and \eqref{equ:ancila measurement term} respectively. \begin{align} \label{equ:primary measurement term} d^{p}_{j} &= \sum_{i,k:g(i,j)=k} (-1)^{s^{b}_k+r^{b}_k}d^{b}_k + 2 \sum_{i,k:g(i,j)=k} r_k^{b} \left( 1 - d^{b}_k \right) - \Pi_{j} \\ \label{equ:ancila measurement term} d^{a}_{i} &= \sum_{j,k:g(i,j)=k} (-1)^{s^{b}_k+r^{b}_k}d^{b}_k + 2 \sum_{j,k:g(i,j)=k} r_k^{b} \left( 1 - d^{b}_k \right) - A_{i} \end{align} \STATE Measures the remaining half of the EPR pairs corresponding to the ancillary and primary qubits in the bases $S^{d^{a}_{i}} \left\{ \ket{+} , \ket{-} \right\}$ and $S^{d^{p}_{j}} \left\{ \ket{+} , \ket{-} \right\}$, getting outcomes $r^{a}_{i}$ and $r^{p}_{j}$ for $i=1,\dots,n_a$ and $j=1,\dots,n_p$ respectively. \STATE Generates and outputs $\widetilde{x} \in \left\{ 0 , 1 \right\}^{n_{p}}$ using equation \eqref{equ:IQP final outcome calculation with corrections}.
\begin{equation} \label{equ:IQP final outcome calculation with corrections} \widetilde{x}_{j} = \left( s^{p}_{j} + r^{p}_{j} \right) + \sum_{i:\mathbf{Q}_{ij} = 1} \left( s^{a}_{i} + r^{a}_{i} \right) \pmod 2 \end{equation} \end{algorithmic} \end{algorithm} \begin{figure} \centering \begin{tikzpicture}[scale = 0.7] \draw[very thick, ->] (-8,5.5) -- (-7,5.5) node[anchor = south, pos = 0.5] {$\mathbf{Q}$}; \draw[very thick] (-7,5) rectangle (-6,6) node[anchor = south east] {$\pi_{A}^{1}$}; \draw[very thick, ->] (-6,5.5) -- (-5,5.5) node[anchor = south, pos = 0.5] {$\rho$}; \draw[very thick, dotted, <-] (-5,4.5) -- (-8,4.5) node[anchor = south, pos = 0.25] {$\mathcal{E}$}; \draw[very thick, dotted, <-] (-5,3.5) -- (-8,3.5) node[anchor = south, pos = 0.25] {$\rho_{S}$}; \draw[very thick, dotted, ->] (-5,2.5) -- (-8,2.5) node[anchor = south, pos = 0.25] {$\rho_{S}'$}; \draw[very thick] (-5,2) rectangle (-4,6) node[anchor = south east] {$\pi_B$}; \draw[very thick, ->] (-4,5.5) -- (-2,5.5) node[anchor = south, pos = 0.5] {$s^b$}; \draw[very thick, <-] (-4,4.5) -- (-2,4.5) node[anchor = south, pos = 0.5] {$A, \Pi$}; \draw[very thick, ->] (-4,3.5) -- (-2,3.5) node[anchor = south, pos = 0.5] {$s^a,s^p$}; \draw[very thick] (-2,3) rectangle (-1,6) node[anchor = south east] {$\pi_{A}^{2}$}; \draw[very thick, ->] (-1,3.5) -- (0,3.5) node[anchor = south, pos = 0.5] {$\widetilde{x}$}; \draw[very thick, <->] (-8,1.5) -- (0,1.5) node[anchor = north, pos = 0.5] {$\widetilde{\mathbf{Q}} , \mathcal{Q},\theta$}; \draw[very thick] (1.5,0.5) rectangle (9,7.5) node[anchor = south east] {$\pi_{A}^{1}$}; \draw[very thick, ->] (1,6) node[anchor = east] {$\mathbf{Q}$} -- (3,6); \draw[very thick] (3,5.5) rectangle (4,6.5) node[align = center, pos = 0.5] {$f$}; \draw[very thick] (4,6.1) -- (6.6,6.1) -- (6.6,2.4); \draw[very thick] (4,5.9) -- (4.4,5.9) -- (4.4,5.4); \draw[very thick] (6.4,2.4) -- (6.4,5.9) -- (4.6,5.9) -- (4.6,5.4); \draw[very thick] (3,5) node[anchor = east] {$\ket{+}$} -- (4,5); \draw[very thick] (3,4) node[anchor = east] {$\ket{0}$} -- (9.5,4); \node[anchor = east] at (3,3) {$\vdots$}; \node[anchor = east] at (4,3) {$\vdots$}; \node[anchor = east] at (9.5,2.5) {$\vdots$}; \draw[very thick] (3,2) node[anchor = east] {$\ket{+}$} -- (6,2); \draw[very thick] (3,1) node[anchor = east] {$\ket{0}$} -- (9.5,1); \filldraw (3.5,5) circle (3pt); \draw[very thick] (3.5,4) circle (3pt); \draw[very thick] (3.5,5) -- (3.5,3.9); \filldraw (3.5,2) circle (3pt); \draw[very thick] (3.5,1) circle (3pt); \draw[very thick] (3.5,2) -- (3.5,0.9); \draw[thick] (4,4.6) rectangle (5,5.4); \draw (4.1,4.9) .. controls (4.3,5.2) and (4.7,5.2) .. (4.9,4.9); \draw[thick, ->] (4.5, 4.8) -- (4.8, 5.3); \node at (5.5,3) {$\ddots$}; \draw[thick] (6,1.6) rectangle (7,2.4); \draw (6.1,1.9) .. controls (6.3,2.2) and (6.7,2.2) ..
(6.9,1.9); \draw[thick, ->] (6.5, 1.8) -- (6.8, 2.3); \draw[decoration={brace,mirror,raise=5pt},decorate] (9.5,1) -- (9.5,4) node[right = 10pt, pos = 0.5] {$\rho$}; \draw[very thick] (5,5.1) -- (7.5,5.1); \draw[very thick] (5,4.9) -- (7.5,4.9); \node at (7.5,3.5) {$\vdots$}; \draw[very thick] (7,2.1) -- (7.5,2.1); \draw[very thick] (7,1.9) -- (7.5,1.9); \draw[decoration={brace,mirror,raise=5pt},decorate] (7.5,2) -- (7.5,5) node[right = 10pt, pos = 0.5] {$r$}; \draw[very thick, ->] (3,7) node[anchor = east] {$d^{p,a}$} -- (3.5,7) -- (3.5,6.5); \end{tikzpicture} \caption{The real protocol with its state generation phase $\pi_{A}^{1}$ isolated (left) and further analysed (right) using an equivalent protocol based on teleportation, where $f$ represents the measurement angle calculation on one half of the EPR pairs (see Algorithm \ref{alg:real IQP resource honest server with added teleportation} in Appendix for details).} \label{fig:real ideal resource rearranged} \end{figure}
Line \ref{alg line:real IQP resource honest server - primary and ancillary state generation} of Algorithm \ref{alg:real IQP resource honest server} generates at random one of the four states $\ket{+}$, $\ket{+^{Y}}$, $\ket{-}$ and $\ket{-^{Y}}$. The same effect is achieved by measuring one half of an EPR pair, with equal probability, in one of the bases $\left\{ \ket{+} , \ket{-} \right\}$ and $\left\{ \ket{+^{Y}} , \ket{-^{Y}} \right\}$. The application of the $\left( \sqrt{Y} \right)^{d^{b}_{k}}$ operation in line \ref{alg line:real IQP resource honest server - bridge and break state generation} of Algorithm \ref{alg:real IQP resource honest server} decides, according to the graph to be created, if the bridge and break qubit will be drawn from the set $\left\{ \ket{+} , \ket{-} \right\}$ or $\left\{ \ket{0} , \ket{1} \right\}$. Using the same information to choose between the measurement bases $\left\{ \ket{+} , \ket{-} \right\}$ and $\left\{ \ket{0} , \ket{1} \right\}$ on one half of an EPR pair has the same effect. The random rotation $Y^{r^{b}_{k}}$ then has the same effect as the randomness that is intrinsic to the EPR pair measurement. This may be visualised in Figure \ref{fig:real ideal resource rearranged}, which presents a simple rearrangement of the Real Resource of Figure \ref{fig:real IQP computation resource} in order to isolate the state generation phase $\pi_{A}^{1}$ and to examine an equivalent circuit based on teleportation. The next transformation is to delay the first measurement of the EPR pairs as implied in Figure \ref{fig:pre generated randomness resource}. Since information about the measurement outcome $r$ is not yet available to define $\Pi$ and $A$, the Client chooses random $\Pi$ and $A$, which will then be corrected for by using these values to compute the measurement bases for the Client's half of the primary and ancillary EPR pairs. Finally, Figure \ref{fig:rearranged into simulator setting} simply involves a rearrangement of the players in Figure \ref{fig:pre generated randomness resource} to match those in the simulator/distinguisher setting. The formal description of the protocol displayed by Figure \ref{fig:rearranged into simulator setting} is seen in Algorithm \ref{alg:real IQP resource honest server with simulator}. \begin{figure} \centering \begin{tikzpicture}[scale = 0.65] \draw[very thick, ->] (0,8) -- (8,8) node[anchor = south, pos = 0.5] {$\mathbf{Q}$}; \draw[very thick] (8,7.5) rectangle (9,8.5) node[align = center, pos = 0.5] {$\widehat{f}$}; \draw[very thick] (2,6) node[anchor = east] {$\ket{+}$} -- (9,6) node[pos = 0.75, anchor = south] {$\rho$}; \draw[very thick, ->] (2,4) node[anchor = east] {$\ket{0}$} -- (4,4); \filldraw (3,6) circle (3pt); \draw[very thick] (3,4) circle (3pt); \draw[very thick] (3,6) -- (3,3.9); \draw[very thick, <->] (5,3) -- (6,3) -- (6,7) -- (8.5,7) -- (8.5,7.5); \draw[very thick] (6,7) -- (5,7) node[anchor = east] {$\Pi, A$}; \draw[thick] (9,5.6) rectangle (10,6.4); \draw (9.1,5.9) .. controls (9.3,6.2) and (9.7,6.2) ..
(9.9,5.9); \draw[thick, ->] (9.5, 5.8) -- (9.8, 6.3); \draw[very thick] (10,6.1) -- (11,6.1) node[anchor = south, pos = 0.5] {$r$}; \draw[very thick] (10,5.9) -- (11,5.9); \draw[very thick] (9,8.1) -- (9.6,8.1) -- (9.6,6.4); \draw[very thick] (9,7.9) -- (9.4,7.9) -- (9.4,6.4); \draw[very thick, dotted, <-] (4,3) -- (0,3) node[anchor = east] {$\mathcal{E}$}; \draw[very thick, dotted, <-] (4,2) -- (0,2) node[anchor = east] {$\rho_{B}$}; \draw[very thick, dotted, ->] (4,1) -- (0,1) node[anchor = east] {$\rho_{B}'$}; \draw[very thick] (4,0.5) rectangle (5,4.5) node[anchor = south east] {$\pi_B$}; \draw[very thick, ->] (5,4) -- (8,4) node[anchor = south, pos = 0.5] {$s^{b}$}; \draw[very thick, ->] (5,2) -- (8,2) node[anchor = south, pos = 0.4] {$s^{a} , s^{p}$}; \draw[very thick] (8,1.5) rectangle (9,4.5) node[anchor = south east] {$\pi_{A}^{2}$}; \draw[very thick, ->] (9,2) -- (12,2) node[anchor = south, pos = 0.5] {$\widetilde{x}$}; \draw[very thick, <->] (0,0) -- (12,0) node[anchor = north, pos = 0.5] {$\widetilde{\mathbf{Q}} , \mathcal{Q}$, $\theta$}; \end{tikzpicture} \caption{The real protocol with only one input qubit for simplicity, where the Client sends random measurement instructions $A,\Pi$ to the Server and delays the teleportation measurement until after the Server has sent the measurement outcomes $s= \left\{ s^a,s^b,s^p \right\}$. $r = \left\{ r^{p} , r^{a} , r^{b} \right\}$. Here $\widehat{f}$ represents the process of calculating measurement angles to be performed on one half of the EPR pair from Eqs. \eqref{equ:primary measurement term} and \eqref{equ:ancila measurement term} (for details see Algorithm \ref{alg:real IQP resource honest server with added teleportation, rearrangement and pre-made randomness} in the Appendix). } \label{fig:pre generated randomness resource} \end{figure} \begin{figure} \centering \begin{tikzpicture}[scale = 0.65] \draw[very thick, ->] (1,7) -- (3,7) node[anchor = south, pos = 0.5] {$\mathbf{Q}$}; \draw[very thick] (3,6.5) rectangle (4,7.5) node[align = center, pos = 0.5] {$\widehat{f}$}; \draw[very thick, ->] (9,5) node[anchor = west] {$\ket{+}$} -- (5.5,5) -- (5.5,5.6); \draw[very thick, ->] (8,6) node[anchor = east] {$\ket{0}$} -- (11,6) node[anchor = south, pos = 0.825] {$\rho$}; \filldraw (8.5,6) circle (3pt); \draw[very thick] (8.5,5) circle (3pt); \draw[very thick] (8.5,6) -- (8.5,4.9); \draw[very thick, dotted] (7,1) rectangle (10,8) node[anchor = south east] {$\sigma$}; \draw[thick] (5,5.6) rectangle (6,6.4); \draw (5.1,5.9) .. controls (5.3,6.2) and (5.7,6.2) .. 
(5.9,5.9); \draw[thick, ->] (5.5, 5.8) -- (5.8,6.3); \draw[very thick] (4,7.1) -- (5.6,7.1) -- (5.6,6.4); \draw[very thick] (4,6.9) -- (5.4,6.9) -- (5.4,6.4); \draw[very thick, dotted, <-] (12,7) -- (13,7) node[anchor = south, pos = 0.5] {$\mathcal{E}$}; \draw[very thick, dotted, <-] (12,6) -- (13,6) node[anchor = south, pos = 0.5] {$\rho_{B}$}; \draw[very thick, dotted, ->] (12,2) -- (13,2) node[anchor = south, pos = 0.5] {$\rho_{B}'$}; \draw[very thick] (11,1) rectangle (12,8) node[anchor = south east] {$\pi_B$}; \draw[very thick, ->] (11,4) -- (4,4) node[anchor = south, pos = 0.35] {$s^{b}$}; \draw[very thick, <->] (11,3) -- (4.5,3) node[anchor = south, pos = 0.4] {$\Pi , A$} -- (4.5,8) -- (3.5,8) -- (3.5,7.5); \draw[very thick, ->] (11,2) -- (4,2) node[anchor = south, pos = 0.35] {$s^{a} , s^{p}$}; \draw[very thick] (3,1.5) rectangle (4,4.5) node[anchor = south east] {$\pi_{A}^{2}$}; \draw[very thick, ->] (3,2) -- (1,2) node[anchor = south, pos = 0.5] {$\widetilde{x}$}; \draw[very thick, dotted] (2.5,1) rectangle (6.5,8.5) node[anchor = south east] {$\mathcal{S}$}; \draw[very thick] (5,6.1) -- (4,6.1); \draw[very thick] (5,5.9) -- (4,5.9); \node[anchor = east] at (4,6) {$r$}; \draw[very thick, <->] (1,0) -- (13,0) node[anchor = north, pos = 0.5] {$\widetilde{\mathbf{Q}} , \mathcal{Q}$, $\theta$}; \end{tikzpicture} \caption{The ideal resource $\mathcal{S}$ and the simulator $\sigma$ for the malicious Server, shown with only one input qubit for simplicity. The simulator has no access to the private information $\mathbf{Q}$ at any time. A global distinguisher cannot tell the difference between this setting and the real protocol.} \label{fig:rearranged into simulator setting} \end{figure} \end{proof} We can now be sure that our communication protocol is indistinguishable from an ideal resource (defined in Figure \ref{fig:ideal resource}) which performs an IQP computation without communicating any information to the Server which is not already public. This means that the communication protocol does not reveal any information about the computation to the Server. Furthermore, this is proven in a composable framework \cite{Abstract cryptography,Cryptographic security of quantum key distribution,Composable security of delegated quantum computation} and so can be used as part of future protocols, as we do in Section \ref{sec:hypothesis test}. \section{The Hypothesis Test} \label{sec:hypothesis test} \subsection{Previous work} \label{sec:SB hypothesis test} We now have all the tools to form a test for a Server to run in order to prove to a Client that they are capable of solving classically non-simulatable problems. Specifically, we ask the Server to perform an IQP computation that we believe is classically hard, but whose solution can easily be checked by a classical Client. The general idea of our \emph{Hypothesis Test}, building on the work of \cite{Temporally_Unstructured_Quantum_Computation}, is that there is some hidden structure in the program elements, $\mathbf{q}_i$, of the $X$-program that results in some structure in the distribution of the outputs, known only to the Client. The Client can use this structure to check the Server's reply. A Server possessing an IQP machine can reproduce this structure by implementing the $X$-program. A Server not in possession of an IQP machine cannot generate outputs obeying the same rules. We summarise this discussion with three conditions that a hypothesis test using this method must meet.
\begin{enumerate}[label = \arabic{list}.\arabic*] \item \label{pt:hypothesis test IQP hard} The Client asks the Server to perform an IQP computation that is hard to classically simulate. \item \label{pt:hypothesis test structure} The Client can check the solution to this computation because they know some secret structure that makes this checking process efficient. \item \label{pt:hypothesis test hidden structure} The Server must be unable to uncover this structure in polynomial time. \end{enumerate} \stepcounter{list} \noindent A particular `known structure' of the output which is used in \cite{Temporally_Unstructured_Quantum_Computation} to satisfy condition \ref{pt:hypothesis test structure} is its \emph{bias}. \begin{definition} \label{def:bias} If $X$ is a random variable taking values in $\left\{ 0 , 1 \right\}^{n_p}$ and $\mathbf{s} \in \left\{ 0 , 1 \right\}^{n_p}$ then the bias of $X$ in the direction $\mathbf{s}$ is $\mathbb{P} \left( X \cdot \mathbf{s}^T = 0 \right)$ where the product is performed modulo 2. Hence, the bias of a distribution in the direction $\mathbf{s}$ is the probability of a sample from the distribution being orthogonal to $\mathbf{s}$. \end{definition} To calculate the bias of $X$ in direction $\mathbf{s} \in \left\{ 0 , 1 \right\}^{n_p}$, we form the linear code $\mathcal{C}_{\mathbf{s}}$ by selecting all rows $\mathbf{q}_{i}$ of the X-program $\left(\mathbf{Q}, \theta \right) \in \left\{ 0 , 1 \right\}^{n_{a} \times n_{p}} \times \left[ 0 , 2 \pi \right]$ such that $\mathbf{q}_{i} \cdot \mathbf{s}^{T} = 1$ and forming from them a new matrix $\mathbf{Q}_{\mathbf{s}}$, which is the generator matrix of $\mathcal{C}_{\mathbf{s}}$. Defining $n_{\mathbf{s}}$ to be the number of rows of $\mathbf{Q}_{\mathbf{s}}$ allows us to understand the following expression, where $\# \mathbf{c}$ denotes the Hamming weight of the codeword $\mathbf{c}$. The derivation can be found in \cite{Temporally_Unstructured_Quantum_Computation}. \begin{equation} \label{equ:bias prediction} \mathbb{P} \left( X \cdot \mathbf{s}^{T} = 0 \right) = \mathbb{E}_{ \mathbf{c} \sim \mathcal{C}_{\mathbf{s}}} \left[ \cos^{2} \left( \theta \left( n_{\mathbf{s}} - 2 \cdot \# \mathbf{c} \right) \right) \right] \end{equation} We find that the bias of an X-program in the direction $\mathbf{s}$ depends only on $\theta$ and the linear code defined by the generator matrix $\mathbf{Q}_{\mathbf{s}}$. One can now imagine a hypothesis test derived from these facts. Although the X-program that will be implemented needs to be made public, the direction $\mathbf{s}$ which will be used for checking will be kept secret. This gives a Client with the computational power to calculate the quantity of expression \eqref{equ:bias prediction} the necessary information to compute the bias, but does not afford the Server the same privilege. What we want to show is that the only way for the Server to produce an output with the correct bias is to use an IQP machine. If the Server could somehow uncover $\mathbf{s}$ then they could calculate the value of expression \eqref{equ:bias prediction} and return vectors to the Client which are orthogonal to $\mathbf{s}$ with the correct probability. We specialise the conditions mentioned at the beginning of this section to this particular method. \begin{enumerate}[label = \arabic{list}.\arabic*] \item \label{pt:bias hypothesis test IQP hard} The X-Program sent to a Server represents an IQP computation that is hard to classically simulate.
\item \label{pt:bias hypothesis test structure} It must be possible for a Client, having knowledge of a secret $\mathbf{s}$ and the X-program, to calculate the quantity of expression \eqref{equ:bias prediction}.
\item \label{pt:bias hypothesis test hidden structure} The Server's knowledge must be insufficient to learn the value of $\mathbf{s}$.
\end{enumerate}
\stepcounter{list}
In \cite{Temporally_Unstructured_Quantum_Computation} the authors develop a protocol for building an $X$-program and a vector $\mathbf{s}$ that perform this type of hypothesis test. The code $\mathcal{C}_{\mathbf{s}}$ used to build the $X$-program is a quadratic residue code with $\theta = \frac{\pi}{8}$. Condition \ref{pt:bias hypothesis test IQP hard} is conjectured, by \cite{Temporally_Unstructured_Quantum_Computation}, to be satisfied by these X-programs. This conjecture is supported by giving a classical simulation that is believed to be optimal and achieves a maximum bias value of $0.75$, different from that expected from an IQP machine. A hypothesis test with X-programs, such as the random circuits of \cite{Classical Simulation of Commuting Quantum Computations Implies Collapse of the Polynomial Hierarchy}, for which connections to an implausible collapse of the polynomial hierarchy have been made, remains an open problem. Condition \ref{pt:bias hypothesis test structure} is also satisfied by the construction in \cite{Temporally_Unstructured_Quantum_Computation}, by proving that the bias value, which is $\cos^{2}\left( \frac{\pi}{8} \right)$ for their choice of X-program and $\mathbf{s}$, can be calculated in polynomial time. The way in which condition \ref{pt:bias hypothesis test hidden structure} is addressed in \cite{Temporally_Unstructured_Quantum_Computation} relies on the fact that the right-hand side of Eq.~\eqref{equ:bias prediction} takes the same value for all generator matrices in a \emph{matroid} \cite{Matroid Theory}.
\begin{definition}
An $i$-point binary \emph{matroid} is an equivalence class of matrices with $i$ rows, defined over $\mathbb{F}_2$. Two matrices, $\mathbf{M}_1$ and $\mathbf{M}_2$, are said to be equivalent if, for some permutation matrix $\mathbf{R}$, the column echelon reduced form of $\mathbf{M}_1$ is the same as the column echelon reduced form of $\mathbf{R} \cdot \mathbf{M}_2$ (in the case where the column dimensions do not match, we define equivalence by deleting columns containing only $0$s after column echelon reduction).
\end{definition}
In order to move to a new matrix within the same matroid, consider right-multiplication of $\mathbf{Q}$ by an invertible matrix $\mathbf{A}$. Notice that $\mathbf{q}_i \mathbf{s}^{T} = \left( \mathbf{q}_i \mathbf{A} \right) \left( \mathbf{A}^{-1} \mathbf{s}^{T} \right)$. Rows which were originally non-orthogonal to $\mathbf{s}$ are now non-orthogonal to $\mathbf{A}^{-1} \mathbf{s}^{T}$, hence we can still locate $\mathbf{Q}_{\mathbf{s}}$ within the randomised matrix by using $\mathbf{A}^{-1} \mathbf{s}^{T}$. A way to hide $\mathbf{s}$ is therefore to randomise the $X$-program with such an operation $\mathbf{A}$, which leaves the value of the bias unchanged. To further hide $\mathbf{s}$, the matrix might also include additional rows orthogonal to $\mathbf{s}$, which do not affect the value of the bias.
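To make this concrete, the following minimal numerical sketch (an illustration of ours, not part of the protocol of \cite{Temporally_Unstructured_Quantum_Computation}; all function and variable names are ours) evaluates expression \eqref{equ:bias prediction} by brute force for a small random $X$-program, and checks that the bias is unchanged under right-multiplication by an elementary column-addition matrix $\mathbf{A}$, which is its own inverse over $\mathbb{F}_2$.
\begin{verbatim}
import itertools
import numpy as np

rng = np.random.default_rng(0)
n_a, n_p, theta = 6, 4, np.pi / 8
Q = rng.integers(0, 2, size=(n_a, n_p))   # program elements q_i as rows
s = np.zeros(n_p, dtype=int)
s[-1] = 1                                 # secret direction

def bias(Q, s, theta):
    """E_{c ~ C_s}[cos^2(theta (n_s - 2 #c))], by brute force over C_s."""
    Qs = Q[(Q @ s) % 2 == 1]              # rows non-orthogonal to s
    n_s = Qs.shape[0]
    vals = [np.cos(theta * (n_s - 2 * ((Qs @ np.array(x)) % 2).sum())) ** 2
            for x in itertools.product([0, 1], repeat=Q.shape[1])]
    return np.mean(vals)                  # each codeword weighted equally

s_hat = rng.integers(0, 2, size=n_p - 1)  # randomisation bits
A = np.eye(n_p, dtype=int)
A[:-1, -1] = s_hat                        # add chosen columns to the last one
# Over F_2 this A squares to the identity, so A^{-1} s^T = A s^T.
print(bias(Q, s, theta), bias((Q @ A) % 2, (A @ s) % 2, theta))
\end{verbatim}
Both printed values agree, reflecting the fact that the selected rows, and hence the code $\mathcal{C}_{\mathbf{s}}$, are unchanged by the randomisation.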
The combination of matrix randomisation and the addition of new rows makes it hard for the Server to recover $\mathbf{s}$ from the matrix that it receives, as conjectured in \cite{Temporally_Unstructured_Quantum_Computation} under some computational complexity assumptions. It is then simply a matter for the Server to implement the $X$-program and for the Client to check the bias of the output in the direction $\mathbf{s}$. This is the approach used by \cite{Temporally_Unstructured_Quantum_Computation} to address condition \ref{pt:bias hypothesis test hidden structure}.
\subsection{Our Protocol}
\label{sec:our protocol}
The main contribution of this work is to revisit condition \ref{pt:bias hypothesis test hidden structure}. By giving the Client limited quantum capabilities, we remove the computational assumption of \cite{Temporally_Unstructured_Quantum_Computation}, and therefore provide unconditional security against a powerful quantum Server. In Algorithm \ref{alg:Our hypothesis test protocol} we provide a hypothesis test that uses the blind delegated IQP computation resource of the previous section to verify quantum supremacy.
\begin{theorem}
Algorithm \ref{alg:Our hypothesis test protocol} presents an information-theoretically secure solution to condition \ref{pt:bias hypothesis test hidden structure}.
\end{theorem}
\begin{proof}
Let us begin by recalling that when $\mathcal{C}_{\mathbf{s}}$ in expression \eqref{equ:bias prediction} is the quadratic residue code space, the value of that expression is $\cos^{2} \frac{\pi}{8}$. Notice that, in particular, the all-ones vector is in the quadratic residue code space. As such, the matrix $\mathbf{Q}_{\mathbf{s}}$, introduced on line \ref{alg line: Our hypothesis test protocol - add column of ones} of Algorithm \ref{alg:Our hypothesis test protocol}, which is the quadratic residue code generator matrix $\mathbf{Q}_{r}$ with a column of all ones appended to it, also generates the quadratic residue code. The vector $\mathbf{s} \in \left\{ 0 , 1 \right\}^{n_{p}}$ with all entries zero, with the exception of the last entry which is set to one, is non-orthogonal to all rows of $\mathbf{Q}_{\mathbf{s}}$. Hence, adhering to the notation of Section \ref{sec:SB hypothesis test}, $\mathcal{C}_{\mathbf{s}}$ is the quadratic residue code and expression \eqref{equ:bias prediction} is equal to $\cos^{2} \frac{\pi}{8}$. The reader may refer to Figures \ref{fig:bipartite graph quadratic residue code} and \ref{fig:bipartite graph quadratic residue code extended by all one vector} for some intuition about the above matrices. $\mathbf{A}$, defined in line \ref{alg line: Our hypothesis test protocol - randomisation matrix} of Algorithm \ref{alg:Our hypothesis test protocol}, is the operation which adds those columns of $\mathbf{Q}_{\mathbf{s}}$ chosen when $\widehat{\mathbf{s}}_{i} = 1$ to the last column of $\mathbf{Q}_{\mathbf{s}}$. We know that the resulting matrix, $\mathbf{Q} = \mathbf{Q}_{\mathbf{s}} \mathbf{A}$, is also a generator matrix of the quadratic residue code, as all the columns of $\mathbf{Q}_{\mathbf{s}}$ are in the quadratic residue code space. We also know, from the discussion of Section \ref{sec:SB hypothesis test}, that all the rows of $\mathbf{Q}$ are non-orthogonal to $\mathbf{A}^{-1} \mathbf{s}^{T}$.
As such $\mathcal{C}_{\mathbf{A}^{-1} \mathbf{s}^{T}}$, when $\mathbf{Q}$ is the $X$-program of concern, is the quadratic residue code space, and hence the bias of the $X$-program $\mathbf{Q}$ in the direction $\mathbf{A}^{-1} \mathbf{s}^{T}$ is $\cos^{2} \frac{\pi}{8}$. This matrix may be visualised in Figure \ref{fig:bipartite graph quadratic residue code extended by all one vector randomised} and this fact is exploited in line \ref{alg line:Our hypothesis test protocol - test orthogonality} of Algorithm \ref{alg:Our hypothesis test protocol}. Note also that from any such $\mathbf{Q}$ we can make the IQP extended graph $\widetilde{\mathbf{Q}}$, which is the matrix $\mathbf{Q}_{r}$ with a column of minus ones appended to the end. Observing Figure \ref{fig:bipartite graph quadratic residue code extended by all one vector IQP extended} may help to visualise this. We can now use the blind IQP computation resource of Section \ref{sec:security proof} to perform the computation. By doing so we solve condition \ref{pt:bias hypothesis test hidden structure}, but now with information-theoretic security, as opposed to the reliance on computational complexity assumptions used by \cite{Temporally_Unstructured_Quantum_Computation}. This is true because, as a product of using the resource of Section \ref{sec:security proof}, the Server learns only the distribution $\mathcal{Q}$ over the possible set of graphs $\mathbf{Q}$. By setting $\mathbf{Q} = \mathbf{Q}_{\mathbf{s}} \mathbf{A}$, Algorithm \ref{alg:Our hypothesis test protocol} defines a bijection mapping $\widehat{\mathbf{s}} \in \{0,1\}^{n_{p} - 1}$ to a unique matrix $\mathbf{Q} \in \{0,1\}^{n_{a} \times n_{p}}$. So $\mathcal{Q}$ is equivalent to the distribution from which $\widehat{\mathbf{s}}$ is drawn; in this case it is the uniform distribution over a set of size $2^{n_{p} - 1}$.
\end{proof}
\begin{figure}
\centering
\begin{tikzpicture}[scale = 0.7] \filldraw (1.5,0) circle (3pt) node[anchor = north] {$a_{1}$}; \node at (3,0) {$\dots$}; \filldraw (4.5,0) circle (3pt) node[anchor = north ] {$a_{n_{a}}$}; \filldraw (0,2) circle (3pt) node[anchor = south ] {$p_{1}$}; \draw[very thick, dotted] (0,2) -- (1.5,0); \filldraw (3,2) circle (3pt) node[anchor = south ] {$p_{2}$}; \draw[very thick, dotted] (3,2) -- (4.5,0); \node at (4.5,2) {$\dots$}; \filldraw (6,2) circle (3pt) node[anchor = south ] {$p_{n_{p} - 1}$}; \draw[very thick, dotted] (6,2) -- (1.5,0); \node at (13,1) {$\mathbf{Q}_{r}$}; \end{tikzpicture}
\caption{Quadratic residue code generator matrix, $\mathbf{Q}_{r}$, and the graph that it describes. Note that, to save space, this figure is only illustrative and the connections in this image do not correspond to an actual quadratic residue code.
This is indicated by the dotted lines.}
\label{fig:bipartite graph quadratic residue code}
\end{figure}
\begin{figure}
\centering
\begin{tikzpicture}[scale = 0.7] \filldraw (1.5,0) circle (3pt) node[anchor = north ] {$a_{1}$}; \node at (3,0) {$\dots$}; \filldraw (4.5,0) circle (3pt) node[anchor = north ] {$a_{n_{a}}$}; \filldraw (0,2) circle (3pt) node[anchor = south ] {$p_{1}$}; \draw[very thick, dotted] (0,2) -- (1.5,0); \filldraw (3,2) circle (3pt) node[anchor = south ] {$p_{2}$}; \draw[very thick, dotted] (3,2) -- (4.5,0); \node at (4.5,2) {$\dots$}; \filldraw (6,2) circle (3pt) node[anchor = south ] {$p_{n_{p} - 1}$}; \draw[very thick, dotted] (6,2) -- (1.5,0); \filldraw (9,2) circle (3pt) node[anchor = south ] {$p_{n_{p}}$}; \draw[very thick] (9,2) -- (1.5,0); \draw[very thick] (9,2) -- (4.5,0); \node at (13,1) {$\mathbf{Q}_{\mathbf{s}} = \left( \begin{array}{c|c} & 1\\ \mathbf{Q}_{r} & \vdots\\ & 1 \end{array} \right)$}; \end{tikzpicture}
\caption{A matrix also generating the quadratic residue code space.}
\label{fig:bipartite graph quadratic residue code extended by all one vector}
\end{figure}
\begin{figure}
\centering
\begin{tikzpicture}[scale = 0.7] \filldraw (1.5,0) circle (3pt) node[anchor = north ] {$a_{1}$}; \node at (3,0) {$\dots$}; \filldraw (4.5,0) circle (3pt) node[anchor = north ] {$a_{n_{a}}$}; \filldraw (0,2) circle (3pt) node[anchor = south ] {$p_{1}$}; \draw[very thick, dotted] (0,2) -- (1.5,0); \filldraw (3,2) circle (3pt) node[anchor = south ] {$p_{2}$}; \draw[very thick, dotted] (3,2) -- (4.5,0); \node at (4.5,2) {$\dots$}; \filldraw (6,2) circle (3pt) node[anchor = south ] {$p_{n_{p} - 1}$}; \draw[very thick, dotted] (6,2) -- (1.5,0); \filldraw (9,2) circle (3pt) node[anchor = south ] {$p_{n_{p}}$}; \draw[very thick, dotted] (9,2) -- (1.5,0); \node at (13,1) {$\mathbf{Q} = \left( \begin{array}{c|c} \mathbf{Q}_{r} & \mathbf{1} \oplus \bigoplus_{j = 1}^{n_{p} - 1} \widehat{\mathbf{s}}_{j} {\mathbf{Q}_{r}}_{j} \end{array} \right)$}; \end{tikzpicture}
\caption{A randomised version of Figure \ref{fig:bipartite graph quadratic residue code extended by all one vector}.
Here ${\mathbf{Q}_{r}}_{j}$ is the $j^{\text{th}}$ column of $\mathbf{Q}_{r}$ and $\mathbf{1}$ is the column of all ones.}
\label{fig:bipartite graph quadratic residue code extended by all one vector randomised}
\end{figure}
\begin{figure}
\centering
\begin{tikzpicture}[scale = 0.7] \filldraw (1.5,0) circle (3pt) node[anchor = north ] {$a_{1}$}; \node at (3,0) {$\dots$}; \filldraw (4.5,0) circle (3pt) node[anchor = north ] {$a_{n_{a}}$}; \filldraw (0,2) circle (3pt) node[anchor = south ] {$p_{1}$}; \draw[very thick, dotted] (0,2) -- (1.5,0); \filldraw (3,2) circle (3pt) node[anchor = south ] {$p_{2}$}; \draw[very thick, dotted] (3,2) -- (4.5,0); \node at (4.5,2) {$\dots$}; \filldraw (6,2) circle (3pt) node[anchor = south ] {$p_{n_{p} - 1}$}; \draw[very thick, dotted] (6,2) -- (1.5,0); \filldraw (9,2) circle (3pt) node[anchor = south ] {$p_{n_{p}}$}; \draw[very thick] (9,2) -- (1.5,0); \draw[very thick] (9,2) -- (4.5,0); \filldraw (1.5 + 45/8,6/4) circle (3pt) node[anchor = south] {$b_{1}$}; \filldraw (4.5 + 4.5/3,2/3) circle (3pt) node[anchor = north west] {$b_{n_{b}}$}; \node at (13,1) {$\widetilde{\mathbf{Q}} = \left( \begin{array}{c|c} & -1\\ \mathbf{Q}_{r} & \vdots\\ & -1 \end{array} \right)$}; \end{tikzpicture}
\caption{The IQP extended graph corresponding to all possible $\mathbf{Q}$ of Figure \ref{fig:bipartite graph quadratic residue code extended by all one vector randomised}.}
\label{fig:bipartite graph quadratic residue code extended by all one vector IQP extended}
\end{figure}
\begin{algorithm}
\caption{Our hypothesis test protocol}
\label{alg:Our hypothesis test protocol}
\textbf{Input:} $n_{a}$ prime such that $n_{a} + 1$ is a multiple of $8$.
\textbf{Client output:} $o \in \left\{ 0 , 1 \right\}$
\textbf{Protocol:}
\begin{algorithmic}[1]
\STATE Set $n_{p} = \frac{n_{a} + 1}{2}$.
\STATE \label{alg line: Our hypothesis test protocol - start with QR} Take the quadratic residue code generator matrix $\mathbf{Q}_{r} \in \left\{ 0 , 1 \right\}^{n_{a} \times \left( n_{p} - 1 \right)}$.
\STATE \label{alg line: Our hypothesis test protocol - add column of ones} Let $\mathbf{Q}_{\mathbf{s}} \in \left\{ 0 , 1 \right\}^{n_{a} \times n_{p}}$ be $\mathbf{Q}_{r}$ with a column of ones appended as its last column.
\STATE Pick $\widehat{\mathbf{s}} \in \left\{ 0 , 1 \right\}^{n_p - 1}$ uniformly at random.
\STATE \label{alg line: Our hypothesis test protocol - randomisation matrix} Define the matrix $\mathbf{A} \in \left\{ 0 , 1 \right\}^{n_p \times n_p}$ according to equation \eqref{equ:transformation matrix}.
\begin{equation} \label{equ:transformation matrix} \mathbf{A}_{i,j} = \begin{cases} 1 & \text{if } i = j \\ 0 & \text{if } i \neq j \text{ and } j < n_{p} \\ \widehat{\mathbf{s}}_{i} & \text{if } j = n_{p} \text{ and } i < n_p \end{cases} \end{equation}
\STATE Set $\mathbf{Q} = \mathbf{Q}_{\mathbf{s}} \mathbf{A}$ and $\theta = \frac{\pi}{8}$.
\STATE Set $\widetilde{\mathbf{Q}}$ to be the matrix $\mathbf{Q}_{r}$ with a column of $-1$ appended.
\STATE Set $\mathcal{Q}$ to be the uniform distribution over all possible $\mathbf{Q}$ for different $\widehat{\mathbf{s}}$.
\STATE Perform the IQP computation $\mathbf{Q}$ using Algorithm \ref{alg:real IQP resource honest server} with inputs $\mathbf{Q}$, $\widetilde{\mathbf{Q}}$, $\mathcal{Q}$ and $\theta$, obtaining outputs $\widetilde{x}$ and $\rho_{B}'$.
\STATE Let $\mathbf{s} \in \left\{ 0 , 1 \right\}^{n_p}$ be the vector with all entries equal to zero, with the exception of the last, which is set to one.
\STATE \label{alg line:Our hypothesis test protocol - test orthogonality} Test the orthogonality of the output $\widetilde{x}$ against $\mathbf{A}^{-1} \mathbf{s}^{T}$, setting $o = 1$ if it is orthogonal and $o = 0$ otherwise.
\end{algorithmic}
\end{algorithm}
\section{Conclusion and Future Work}
We have presented a protocol that can be used by a limited quantum Client, able to prepare one-qubit Pauli operator eigenstates, to delegate the construction of IQP circuits to a powerful quantum Server. By giving the Client limited quantum abilities (i.e.\ the manipulation of single qubits), we have managed to remove the computational restriction on the Server required in previous work \cite{Temporally_Unstructured_Quantum_Computation}, and have therefore proven information-theoretic security against a malicious Server. The protocol is also proven to be composable and can therefore be used to verify an IQP machine as part of a larger delegated computation. IQP circuits are also important because they are relatively easy to implement in an experimental setup in comparison to the fully fledged quantum computers needed for universal computations. Our protocol requires two layers of measurements, in order to apply the appropriate corrections resulting from the blind creation of the state on the Server's side, and, for a small number of qubits, it can be implemented even with present technology.
A future avenue of research would therefore be the study of this protocol under realistic experimental errors, in view of a potential implementation.
\section{Acknowledgements}
The authors would like to thank Andru Gheorghiu and Petros Wallden for many enlightening discussions and feedback. This work was supported by grant EP/L01503X/1 for the University of Edinburgh School of Informatics Centre for Doctoral Training in Pervasive Parallelism (http://pervasiveparallelism.inf.ed.ac.uk/) from the UK Engineering and Physical Sciences Research Council (EPSRC) and by grants EP/K04057X/2, EP/N003829/1 and EP/M013243/1, as well as by the European Union's Horizon 2020 Research and Innovation programme under Marie Sklodowska-Curie Grant Agreement No. 705194.
\input{Bibliography.tex}
\section{Introduction} \input{./Introduction.tex} \section{Structure of amorphous zinc oxy-nitride (\lowercase{\textit{a}}-Z\lowercase{n}ON)} \input{./Structure.tex} \section{Electronic structure of \lowercase{\textit{a}}-Z\lowercase{n}ON and \lowercase{\textit{a}}-IGZO} \input{./Electronic.tex} \section{Orbital overlap integral and electronic conduction} \input{./Overlap.tex}
\section{Conclusions}
{ A multi-anion approach towards amorphous semiconductors has distinct advantages over multi-cation based amorphous oxide semiconductors. Comparison of the electronic structures of the two materials systems (\textit{a}-ZnON and \textit{a}-IGZO) highlights the roles of cations and anions in the electronic transport. While the electronic conduction in \textit{a}-IGZO is found to be through overlapping empty \textit{s}-orbitals of mixed cations, and therefore depends on composition (the ratio of cations), in \textit{a}-ZnON the electronic conduction path is through overlapping Zn-\textit{s} orbitals and is independent of composition (the ratio of anions). The ease of conduction in these amorphous structures can be estimated in terms of the extent of overlap between the empty \textit{s} orbitals of the metal cations. The total normalized orbital overlap integral is shown to have a direct correlation with the calculated carrier effective mass; this correlation can be used as a qualitative estimate for the observed electron mobilities. Effective mass calculations based on \textit{ab initio} DFT calculations of the electronic structure are computationally expensive; the total normalized orbital overlap integral, on the other hand, can be evaluated as an estimate for the carrier effective mass for many different amorphous structures, or averaged over many amorphous structures, at very small computational expense. Calculation of the pairwise orbital overlap integrals for different metal cation pairs also provides insight into the role of individual metal cations in forming the conduction path in multi-cation amorphous structures, which is not straightforward to extract from DFT calculations. While single-structure DFT calculations are limited in their scope, calculations based on many structures, or on averages over many structures such as those generated by evolutionary-algorithm-based codes, should give better insight into the average properties and the role of individual cations/anions in these materials, and can be used effectively when predicting properties of new compositions of amorphous semiconductors. }
\begin{acknowledgments}
{ The authors would like to acknowledge support from HPC, IIT Kanpur for the use of their computing facility in the present work. J. Srivastava acknowledges financial support from MHRD, Govt. of India, and A. Gaur acknowledges support from MeitY, Govt. of India under the Visvesvaraya YFRF program. The authors also thank Divya et al. for providing the a-IGZO-2217 structure for comparative studies. }
\end{acknowledgments}
\nocite{*}
\section{Introduction}
High-resolution linear spectropolarimetry measures the change in linear polarization across a spectral line and is a useful probe of circumstellar environments at small spatial scales. Circumstellar disks, rotationally distorted winds, magnetic fields, asymmetric radiation fields (optical pumping), and in general, any scattering asymmetry can produce a change in linear polarization across a spectral line such as H$\alpha$. These signatures can directly constrain the density and geometry of the circumstellar material. Typical spectropolarimetric signals are small, often a few tenths of a percent change in polarization across a spectral line. Measuring them requires very high signal-to-noise observations and careful control of systematics at the 0.1\% level. In this letter, we present the spectropolarimetric variability, as well as the spectroscopic variability, of the Herbig Ae/Be stars AB Aurigae, MWC480, MWC158, MWC120, and HD58647. To date, only a few detections of spectropolarimetric signals in Herbig Ae/Be stars have been reported, and the variability of these signatures has not been studied in detail (Vink et al. 2002, 2005; Mottram et al. 2007). We show that the variability is significant, and show how it can provide information about the near-star environment with future modeling. The H$\alpha$ line in these stars is very strong, having line/continuum ratios of roughly 3 to 12, typically with P-Cygni profiles or central reversals. Our observations of AB Aurigae, MWC480 and MWC120 show P-Cygni profiles with strong variability of the blueshifted absorption component, often over 10-minute timescales. This is entirely consistent with other studies (Catala et al. 1999, Beskrovnaya et al. 1995). The H$\alpha$ lines of MWC158 and HD58647 showed strong central reversals and were much more stable, though we had fewer observations. The amplitude of the change in linear polarization across the H$\alpha$ line is roughly 1\% for AB Aurigae, MWC480, and MWC158, while HD58647 and MWC120 show smaller, but still significant, signatures. These polarization changes are all centered on the absorption component, not on line center, and almost always have a single-loop trajectory in QU space, a so-called QU-loop. Many types of polarization effects are known in optical astronomy, some related to scattering and others related to atomic and molecular processes. Polarization effects can be seen in broad-band continuum polarization, or as changes in polarization resolved across a spectral line. Early analytical studies showed the possibility of spectropolarimetric effects from scattering very close to the central star (McLean 1979, Wood et al. 1993 \& 1994). Recent Monte Carlo modelling of scattering by circumstellar material has shown a wealth of possible polarimetric line-effects from disks, winds, and envelopes (Vink et al. 2005, Harries 2000, Ignace 2004). For example, unpolarized line emission that forms over a broad stellar envelope can produce a depolarization in the line core relative to the stellar continuum. Small clumps in a stellar wind that scatter and polarize significant amounts of light can enhance the polarization at that clump's specific velocity and orientation. This technique probes small spatial scales, being sensitive to the geometry and density of the material near the central star. Even for the closest young stars (150 pc), these spatial scales are smaller than 0.1 milliarcseconds and will not be imaged directly, even by 100m telescopes.
Since the circumstellar material is involved in accretion, outflows, winds and disks, with many of these phenomena happening simultaneously, spectropolarimetry can put unique constraints on the densities and geometries of the material involved in these processes. A preliminary study of the H$\alpha$ line at medium spectral resolution (R$\sim$8500, rebinned heavily) in Herbig Ae/Be stars showed many different morphologies and amplitudes (Vink et al. 2002). Some stars showed polarization changes as high as 2\%, while others showed none at all. Since the models predicting polarization across spectral lines are not currently invertible and predict spectropolarimetric effects centered on the emissive core, we wanted to do an in-depth study of a few sources to see if the variability of the spectropolarimetric line profiles could shed some light on the nature of the near-star environment.
\section{Targets}
Since spectropolarimetry is a photon-hungry technique, we wanted to apply it to bright, well-studied stars with previously detected spectropolarimetric signatures, in order to monitor their variability and the nature of the polarimetric signatures. We chose AB Aurigae and MWC480 for close study, and MWC120, MWC158, and HD58647 as other bright observable targets. The Herbig Ae star AB Aurigae (HD31293, HIP22910) is the brightest of the Northern hemisphere Herbig Ae stars (V=7.1) and is one of the best studied intermediate-mass young stars. It has a near face-on circumstellar disk resolved at many wavelengths (eg: Grady et al. 2005, Fukagawa et al. 2004). It also has an active stellar wind, with its strong emission lines often showing P-Cygni profiles. Spectroscopic measurements put AB Aurigae somewhere between late B and early A spectral types (B9 in Th\'{e} et al. 1994, B9Ve in Beskrovnaya et al. 1995, A0 to A1 in Fernandez et al. 1995) with an effective temperature of around 10000 K. The star has a wind that is not spherically symmetric, with a mass-loss rate of order $10^{-8} {M_\odot}$ per year, and an extended chromosphere reaching $T_{max}\sim$17000 K at 1.5$R_\ast$ (Catala \& Kuasz 1987, Catala et al. 1999). A short-term variability study by Catala et al. (1999) showed that an equatorial wind with a variable opening angle, or a disk-wind originating 1.6$R_\ast$ out with a similar opening angle, could explain the variability. There were only two previous high-resolution spectropolarimetric observations of AB Aurigae. One was a single data set taken in 1999 and published in two different papers (Vink et al. 2002, Pontefract et al. 2000); another was taken in 2004 (Mottram et al. 2007). The polarization varied by roughly 0.4\% to 0.7\% across the line, but to achieve the required S/N in each resolution element, the polarization spectra were rebinned to constant flux with a lower effective resolution of 2700 (25 elements over 60~\AA\ but with varying spectral coverage). We are uncertain why the QU loops changed shape between the Pontefract et al. 2000 and Vink et al. 2002 papers, but there is clearly a shape change between Vink et al. 2002 and Mottram et al. 2007, showing evidence of moderate variability. MWC480 and MWC120 also showed strong blue-shifted absorption components. The H$\alpha$ line and the continuum polarization of MWC480 had been studied in detail by Beskrovnaya \& Pogodin (2004), who concluded that MWC480 also had an inhomogeneous wind which was variable on short timescales. MWC480 had a large amplitude signature ($\sim$1.8\%) in Vink et al.
(2002), but showed a less significant signature in Mottram et al. 2007, again pointing to variability. MWC158 and HD58647 showed strong absorption near line-center. MWC158 is a mid-B type star, and was previously studied for spectroscopic variability as well as with low-resolution spectropolarimetry (Bjorkman et al. 1998, Pogodin 1998, Jaschek \& Andrillat 1998). HD58647 is a late B type star (B9 in Th\'e et al. 1994). H$\alpha$ line spectropolarimetry for MWC120 and MWC158 was presented by Oudmaijer et al. (1999), showing line effects. All stars had signatures in Vink et al. 2002. Clearly the signatures are at least mildly variable on long timescales, partly motivating this study.
\section{Observations}
We observed our targets on 8 nights during the engineering of the HiVIS spectropolarimeter (with 5 more lost to weather) in 2004 and on 27 nights over the fall and winter (with 13 more lost to weather) of 2006-2007. We observed AB Aurigae or MWC480 continuously for several hours on some nights, and all 5 targets intermittently on others, with a focus on AB Aurigae and MWC480. We have a total of 148 polarization measurements for AB Aurigae, 58 for MWC480, 24 for MWC120, 39 for MWC158, and 19 for HD58647, plus 33 unpolarized standard star observations taken over the 40 nights in 2006-2007. We achieved a continuum S/N of typically 500 or better for our observations. The polarization data were subsequently binned by flux to a nearly constant S/N for each resolution element, typically 800-1000, consistent with the 0.1\%-0.2\% noise seen in the continuum polarization measurements. The H$\alpha$ line for all stars showed significant variability in intensity, width, and profile shape that is entirely consistent with other spectroscopic variability studies (Beskrovnaya et al. 1995, Beskrovnaya \& Pogodin 2004, Catala et al. 1999). Figure \ref{fig:lprof} shows the average H$\alpha$ line for AB Aurigae on each night to illustrate the nightly variations. Figure \ref{fig:pcyg} shows the absorption trough of AB Aurigae on five of the most variable nights. On some nights, AB Aurigae and MWC480 showed dramatic spectroscopic variation on a timescale of minutes to hours, mostly in the blueshifted absorption trough. Some general line-width and line-strength variability was also seen on short timescales. All nights showed much smaller but significant variation. Each star showed a significant change in linear polarization above 0.2\%. Figure \ref{fig:rawpol} shows examples of the spectropolarimetry for AB Aurigae before any flux-dependent binning or frame rotation, illustrating the quality of the data. The full spectropolarimetry for all five targets and many unpolarized standards is shown in Figure \ref{fig:swap-all}. The continuum has been removed and the spectra have been de-rotated to a common instrumental QU frame (but not to the celestial sphere). It should be noted that a full polarization calibration of the telescope is not yet complete. The telescope induces continuum polarization at the few percent level and can rotate the plane of polarization, as described in Harrington et al. 2006. We suspect this rotation to be a large contributor to the apparent variability of the polarization in Figure 3. However, these telescope properties have a weak wavelength dependence and are not expected to cause significant polarization effects across a single spectral line, and only amount to rotations, attenuations, and translations of the QU loops in QU space.
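For concreteness, the flux-dependent rebinning can be summarised by the following short sketch (illustrative only: the function and variable names, and the photon shot-noise assumption $\sigma_{p} \approx 1/\sqrt{N}$, are ours, and this is not the actual HiVIS reduction code). It merges adjacent spectral pixels until each bin contains enough flux to reach a target polarimetric S/N.
\begin{verbatim}
import numpy as np

def bin_by_flux(wavelength, counts, stokes_q, target_snr=1000.0):
    """Merge adjacent pixels until each bin holds >= target_snr**2 photons,
    so the shot-noise polarization error per bin is roughly 1/target_snr."""
    needed = target_snr ** 2
    out, start, accum = [], 0, 0.0
    for i, c in enumerate(counts):
        accum += c
        if accum >= needed or i == len(counts) - 1:
            sl = slice(start, i + 1)
            weights = counts[sl] / counts[sl].sum()   # flux weighting
            out.append((wavelength[sl].mean(),
                        float(np.sum(weights * stokes_q[sl]))))
            start, accum = i + 1, 0.0
    return np.array(out)                              # (wavelength, q) rows

# Synthetic example: a flat continuum of 2e5 counts per pixel
wl = np.linspace(6540.0, 6580.0, 400)
cts = np.full(400, 2.0e5)
q = 0.001 * np.ones(400)
print(bin_by_flux(wl, cts, q).shape)
\end{verbatim}
With a target S/N of 1000, each bin must collect roughly $10^6$ photons, consistent with the 0.1\% continuum noise quoted above.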
The change in polarization occurred in the absorption component, either central or P-Cygni, for all five stars. The change was strongest ($\sim$1\%) for the stars with the strongest P-Cygni absorptions (AB Aurigae and MWC480). There was also a significant change of about 0.5\% in the central reversal of MWC158. The change was much weaker in MWC120 and HD58647, which showed a P-Cygni absorption and a strong central reversal, respectively. AB Aurigae exhibits some intrinsic spectropolarimetric variability. Even though we do not yet have a full model of the telescope's polarization effects, observations at a single pointing over many different nights show changes in the shapes and widths of the polarization spectra. The overall width and structure of the polarization spectra certainly change from night to night, regardless of what telescope calibration we will apply to this data in the future.
\section{Discussion}
In the analytic studies of McLean (1979) and Wood et al. (1993 and 1994), and in the subsequent Monte Carlo models of Vink et al. (2005), the simple ``disk-scattering'' polarization models predict spectropolarimetric signatures centered on the emissive core. The models use the stellar surface (chromosphere) as the source of a broad unpolarized emission line. This line flux is then scattered by the circumstellar material, which Doppler-shifts and polarizes the scattered flux. This scattered light causes the polarimetric effects when added to the original unpolarized line. McLean (1979) also mentions a depolarization effect, where the stellar continuum is polarized and unpolarized H$\alpha$ emission depolarizes the starlight across the line. This effect would also be strongest in the emissive core. In our stars, the change in polarization occurred in and around the absorptive component, whether central or blue-shifted, and the polarization near the emission peak was nearly identical to the continuum polarization. McLean (1979) did mention another effect in a P-Cygni absorption trough, but only when there is a signature in the emissive component. The lack of models explaining signatures only in the absorptive component led us to explore other explanations, which would require the absorbing material to also be the polarizing material. We are developing a new model where an anisotropic radiation field causes the absorbing material to polarize the transmitted light. The polarization originates from the anisotropy in the lower-level populations of the $n=2$ to $n=3$ H$\alpha$ transition in the intervening gas. Anisotropic radiation from the star excites the intervening gas and leads to a population anisotropy in the $n=2$ substates (called optical pumping). The anisotropy causes the absorbing material to absorb different incident polarizations by different amounts. The main difference between this model and the scattering model is that only the absorbing material, the material occulting the photosphere and chromosphere, is responsible for the changing polarization, whereas the scattering models integrate scattered light from the entire circumstellar region, with each part contributing to the polarization change. In AB Aurigae, the H$\alpha$ photons are thought to come from an extended chromosphere, out to 1.5$R_\ast$, with the P-Cygni absorption occurring further out where the incident radiation anisotropy is significant (Catala et al. 1999). The optical pumping model would produce polarization where there is absorption of the underlying H$\alpha$ emission.
The optical pumping model we are developing shows good promise of giving a direct constraint on the density and geometry of the absorbing material, and thus a possible way of determining the circumstellar material's physical properties (Kuhn et al. submitted). While the depolarization explanation and the electron-disk scattering models of Vink et al. (2005) could explain many of their observations, they do not explain polarization effects isolated in the absorptive component. The optical pumping model can explain these absorption-only polarization effects. We compiled this very large high-precision data set to show the diversity of spectropolarimetric effects and hope that future modeling efforts will allow us to use such data to constrain the density and geometry of circumstellar material.
\acknowledgements
This work was partially supported by the NSF AST-0123390 grant. We wish to thank Katie Whitman for help during the engineering observations and for discussions about data reduction. We also wish to thank Don Mickey for many stimulating discussions about telescope polarization.
\section{Introduction}
The study of high-energy scattering near the horizon of a black hole has led to an improved perspective on quantum chaos \cite{LO, Dray:1984ha, SS, KitaevTalks, MSS}. The scrambling of information near the horizon of the black hole is related to the chaotic spread of information in the boundary quantum system. The signature of quantum chaos used in this context is the exponential growth of the square of double commutators $\langle |[A(0),B(t)]|^2 \rangle_\beta$ evaluated in a thermal state with inverse temperature $\beta$ \cite{KitaevTalks,LO}. The main object needed to compute this double commutator is the real part of the out-of-time-ordered correlator (OTOC) $\langle A^\dagger(0) B^\dagger(t) A(0) B(t) \rangle_\beta$. The connected part of the OTOC grows exponentially with time in a chaotic system, with a rate defined as the Lyapunov exponent $\lambda$. This growth happens long after the dissipation time $t_d$, which is controlled by the thermalization scale of time-ordered correlators. At a much larger scrambling time $t_{\rm sc}$ the connected part of the OTOC, and also the commutator, saturate. In reference \cite{SS} it was explained in detail how, in holographic theories, the bulk computation of an OTOC is given by a high-energy scattering near the black hole horizon \cite{Dray:1984ha}. The result is a convolution between wavefunctions (bulk-boundary propagators) that evolve the particles from the boundary to the near-horizon region, and a local high-energy S-matrix near the horizon. High-energy gravitational scattering is equivalent to a classical shockwave interaction \cite{Dray:1984ha}. This gives a Lyapunov exponent $\lambda = 2 \pi / \beta$, which was shown in \cite{MSS} to be maximal. When doing this calculation in string theory, and assuming inelastic effects are subleading, reference \cite{SS}, building upon \cite{Brower:2006ea}, explains how the sum over all stringy modes is equivalent to an effective elastic Pomeron which produces a perturbative correction $\lambda = \frac{2\pi }{\beta} (1 - \mathcal{O}(\alpha'))$, where $\alpha'$ is related to the string tension; this is analogous to flat-space Regge asymptotics. On the other hand, the saturation of the OTOC at the scrambling time is related to the decay of the bulk-boundary propagators and therefore to the quasinormal modes of the black hole. So far the attention has focused on OTOC that appear in double commutators, since they are more directly related to the definition of chaos of \cite{LO} as explained above. In this note we will analyze general OTOC between four arbitrary operators. For holographic theories, the importance of these quantities is more evident in the bulk. When the operators are different, the bulk scattering is completely inelastic and the Pomeron controlling these OTOC does not necessarily have the quantum numbers of the vacuum, for example. In particular, gravity plays no role in the local near-horizon bulk interaction, and the growing piece of these correlators probes interactions that are not universal. Therefore it is interesting to study them to get more information about the bulk. In this paper we will extend the chaos bound and constrain arbitrary out-of-time-ordered correlators. The assumptions we will use are the same as in the original chaos bound of \cite{MSS}. In particular, we will focus for simplicity on hermitian operators, although we will relax this assumption later.
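As a toy illustration of this diagnostic (a numerical sketch of our own, not taken from the references; the model is the standard chaotic mixed-field Ising chain, and all names and parameter choices are illustrative), one can evaluate the thermal squared commutator by exact diagonalization:
\begin{verbatim}
import numpy as np
from functools import reduce
from scipy.linalg import expm

X = np.array([[0, 1], [1, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)
I2 = np.eye(2, dtype=complex)

def site_op(op, i, n):
    # op acting on site i of an n-site chain, identity elsewhere
    return reduce(np.kron, [op if j == i else I2 for j in range(n)])

n = 6
H = sum(site_op(Z, i, n) @ site_op(Z, i + 1, n) for i in range(n - 1))
H = H + sum(-1.05 * site_op(X, i, n) + 0.5 * site_op(Z, i, n)
            for i in range(n))

beta = 1.0
rho = expm(-beta * H)
rho = rho / np.trace(rho)
A, B = site_op(Z, 0, n), site_op(Z, n - 1, n)

for t in np.linspace(0.0, 4.0, 9):
    U = expm(1j * H * t)
    Bt = U @ B @ U.conj().T
    comm = A @ Bt - Bt @ A
    print(f"t = {t:.1f}   C(t) = "
          f"{np.trace(rho @ comm.conj().T @ comm).real:.5f}")
\end{verbatim}
For such small system sizes the exponential regime is short, but the printed values grow with $t$ before saturating, in line with the behavior described above.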
For the exponential ansatz of an OTOC we will use the notation
\begin{eqnarray}\label{intro:eq:} {\rm Re}~{\rm Tr} [ y A(0) y B(t) y C(0) y D(t) ] \approx F_d - \varepsilon_{ABCD} e^{\lambda_{ABCD} t}, \end{eqnarray}
where by $F_d$ we denote the order one factorized approximation (which implicitly also depends on the choice of operators) and $\varepsilon$ is a small correction. Following \cite{MSS} we define $y$ such that $y^4=e^{-\beta H}/Z$. For systems with a large number of degrees of freedom $N$ the amplitude of the growing piece is of order $\varepsilon\sim N^{-2}$ while the factorized piece is generically of order one. Unless the four operators $A$, $B$, $C$ and $D$ are all different, the correlator in \eqref{intro:eq:} is real. Correlators involving combinations $ABAB$, $ABCB$ or $ABAD$ are all real. We will refer to configurations such as $ABAB$ that appear in double commutators as `diagonal' or `elastic' OTOC, while we will refer to the generic OTOC as `off-diagonal' or `inelastic' correlators. In this notation, the chaos bound of \cite{MSS} is a statement about the positivity of the prefactor $\varepsilon_{ABAB}>0$ and a bound on the growth rate $\lambda_{ABAB} \leq 2\pi/\beta$. This is valid for any choice of $A$ and $B$, even though in most examples the Lyapunov exponent is independent of the choice of operators. We will show (see details in Section \ref{sec:proof}) that the inelastic OTOC for four different operators is similarly constrained,
\begin{equation} \lambda_{ABCD} \leq \lambda_{\rm diag} \leq \frac{2\pi}{\beta}, \end{equation}
where $\lambda_{\rm diag} \equiv {\rm min} (\lambda_{ABAB},\lambda_{ADAD},\lambda_{CBCB}, \lambda_{CDCD})$. This means a generic off-diagonal OTOC cannot grow faster than diagonal ones. From the gravity side, this puts a bound on the spin of the effective mode controlling this interaction: it cannot be bigger than $2$. It is reasonable to expect all diagonal or off-diagonal OTOC for arbitrary $A$, $B$, $C$, $D$ to grow with the same rate $\lambda_L$, even if not maximal \footnote{For example in the SYK model \cite{Sachdev:1992fk, KitaevTalks,Maldacena:2016hyu,Kitaev:2017awl,Gu:2018jsv} the exponentially growing piece always comes from the same set of ladder diagrams, regardless of how we glue it to external operators.}. With this assumption, we can also bound the amplitude of the growing piece. In the general case of four different operators the constraint is presented in Section \ref{sec:proof}. If we take two of the operators to be the same then we can write a simpler version
\begin{equation}\label{ec:introbound} (\varepsilon_{ABCB})^2 \leq \varepsilon_{ABAB} \varepsilon_{CBCB}, \end{equation}
and similarly for $\varepsilon_{ABAD}$. The same structure is present for the case of an OTOC between four different operators: $\varepsilon_{ABCD}$ is bounded by the prefactors appearing in diagonal correlators. From the gravity side this puts a bound on how strongly matter can couple to the effective mode controlling this interaction. In the context of holographic theories, the coefficient on the left-hand side of the inequality \eqref{ec:introbound} is given by an inelastic scattering between particles in the bulk, which in general does not involve graviton exchange. It is interesting to see that we can bound such a process by the right-hand side, which is universally fixed by gravitational interactions and the equivalence principle.
Even though the inelastic coupling $\varepsilon_{ABCD}$ does not necessarily have a definite sign, its magnitude cannot be bigger than a mean of the gravitational couplings. In this context, this analysis suggests that gravity is the highest-spin interaction, and the strongest with that spin. In Section \ref{sec:nonh} we generalize the chaos bound to non-hermitian operators. Similar constraints on an OTOC between four non-hermitian operators can then be derived. In Section \ref{sec:2dCFT} we make some comments regarding the behavior of inelastic OTOC for 2d CFT. In \cite{Roberts:2014ifa} the authors show how the maximal chaos exponent is controlled by the dominance of the identity Virasoro block. For an OTOC between different operators the identity channel does not appear in the OPE expansion. Using semiclassical expressions for non-vacuum blocks at large central charge $c$ we study the behavior of off-diagonal OTOC. In particular, we see that before the scrambling time, Virasoro descendants are not important on the second sheet. This shows how gravity naively plays no role in the physics of these OTOC. After the scrambling time, at which the exponential growth stops, we show how quasinormal modes dictate the decay of the OTOC. We conclude in Section \ref{Sec:conc} with open questions and future directions.
\section{Constraints on generic OTOC}\label{sec:proof}
In this section we will show how to bound general OTOC between arbitrary operators. The argument is simple but requires some notation. To set it up, we will begin by stating the chaos bound from \cite{MSS}, which we will refer to as the elastic chaos bound. In \cite{MSS} the authors focus on a particular correlator
\begin{equation} F(t) \equiv {\rm Tr} [ y A(0) y B(t) y A(0) y B(t) ], \end{equation}
between hermitian operators $A$ and $B$, where divergences are regularized by placing the operators symmetrically along the Euclidean circle of size $\beta$. This is implemented by inserting the operators $y$ defined as $y^4=e^{-\beta H}/Z$. The motivation for considering such correlators comes from their relation to the squared commutator of the operators $A(0)$ and $B(t)$. We will consider times that are much larger than the dissipation time $t_d$ but smaller than the scrambling time $t_{sc}$, which we assume to be parametrically larger (as in, for example, large $N$ theories). In this regime the OTOC is almost constant and given by its factorized contribution $F_d$ to leading order, where
\begin{equation} F_d \approx {\rm Tr} [ y^2 A y^2 A] {\rm Tr} [ y^2 B y^2 B]. \end{equation}
For chaotic systems we expect the subleading behavior to be exponential,
\begin{equation}\label{eq:ans} F(t) = F_d - \varepsilon ~e^{\lambda t} +\ldots \end{equation}
where $\lambda$ is the Lyapunov exponent of the system. The parameter $\varepsilon$ is a small constant which controls the scrambling time at which the OTOC decays. For a large $N$ system it is of order $\varepsilon \sim N^{-2}$. From now on we will denote the prefactors of exponentially growing terms by $\varepsilon$ to indicate that they are small compared to the factorized term. The chaos bound from \cite{MSS} states that the ansatz \eqref{eq:ans} must satisfy both
\begin{equation} \varepsilon \geq 0 ~~~{\rm and}~~~\lambda \leq \frac{2\pi}{\beta}. \end{equation}
We will take this as our starting point for the generalizations below. Therefore we will implicitly use the same assumptions and caveats as in \cite{MSS}.
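All of the regularised correlators used below are straightforward to evaluate numerically. The following exact-diagonalisation sketch (our own illustration, for arbitrary random hermitian $H$, $A$, $B$, $C$ on a small Hilbert space; all names are ours) constructs $y$ with $y^4 = e^{-\beta H}/Z$, evaluates $F(t)$ and its factorized piece, and checks in passing the reality and $A \leftrightarrow C$ symmetry of the mixed correlator used in the next subsection.
\begin{verbatim}
import numpy as np
from scipy.linalg import expm

rng = np.random.default_rng(1)
d, beta, t = 16, 1.0, 0.7

def rand_herm(d):
    m = rng.normal(size=(d, d)) + 1j * rng.normal(size=(d, d))
    return (m + m.conj().T) / 2

H, A, B, C = [rand_herm(d) for _ in range(4)]
y = expm(-beta * H / 4)
y = y / np.trace(np.linalg.matrix_power(y, 4)).real ** 0.25  # y^4=e^{-bH}/Z
U = expm(1j * H * t)
Bt = U @ B @ U.conj().T

def otoc(P, R):
    # Tr[ y P y B(t) y R y B(t) ]
    return np.trace(y @ P @ y @ Bt @ y @ R @ y @ Bt)

F = otoc(A, A)                              # the correlator F(t)
F_d = np.trace(y @ y @ A @ y @ y @ A) * np.trace(y @ y @ B @ y @ y @ B)
print(F.real, F_d.real)
print(np.allclose(otoc(A, C).imag, 0.0))    # Tr[yAyB(t)yCyB(t)] is real
print(np.allclose(otoc(A, C), otoc(C, A)))  # and symmetric under A <-> C
\end{verbatim}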
\subsection{An inelastic chaos bound}
In this section we will prove the bound stated in the introduction regarding OTOC between four different operators. We will focus first on hermitian operators. The upshot is that the growing piece of a general OTOC cannot grow faster than exponentially with the maximal rate $\lambda = 2\pi/\beta$. We will also see how to put a bound on the magnitude of the growing piece. To simplify the presentation, we will go over the argument in steps. We will first generalize the chaos bound to a correlator ${\rm Tr} [ y A(0) y B(t) y C(0) y B(t) ]$. This OTOC is real for arbitrary operators since
\begin{equation}\label{eq:realABCB} {\rm Tr} [ y A y B(t) y C y B(t) ]^\dagger = {\rm Tr} [ B(t) y C y B(t) y A y ]={\rm Tr} [ y A y B(t) y C y B(t) ], \end{equation}
where we used that the operators are hermitian: in the first equality we applied hermitian conjugation inside the trace and in the second we used the cyclic property of the trace\footnote{Similarly, one can show that $ {\rm Tr} [ y A y B(t) y A y D(t) ]$ is real and symmetric under exchange of $B$ and $D$. }. Moreover, the OTOC is also symmetric under the exchange of $A$ and $C$,
\begin{equation}\label{eq:symABCB} {\rm Tr} [ y A y B(t) y C y B(t) ]={\rm Tr} [ y C y B(t) y A y B(t) ], \end{equation}
which follows from the cyclic property of the trace. To bound ${\rm Tr} [ y A(0) y B(t) y C(0) y B(t) ]$ we will analyze a diagonal correlator of the form
\begin{eqnarray}\label{eq:corrvwvw} F(t)= {\rm Tr} [ y V y B(t) y V y B(t) ] ,~~~ V=\alpha_1 A + \alpha_2 C, \end{eqnarray}
for arbitrary real coefficients $\alpha_1$ and $\alpha_2$. To simplify the notation, we omit the time argument when the operator is inserted at $t=0$. Expanding each term on the right-hand side of equation \eqref{eq:corrvwvw} gives
\begin{eqnarray}\label{eq:ABCBexpand} F(t) &=& \alpha_1^2 {\rm Tr} [ y A y B(t) y A y B(t) ]+\alpha_2^2 {\rm Tr} [ y C y B(t) y C y B(t) ] \nn && + 2 \alpha_1 \alpha_2 {\rm Tr} [ y A y B(t) y C y B(t) ]. \end{eqnarray}
This contains the correlator we want to bound. Therefore, by using the information we learn from the chaos bound on diagonal OTOC, we can bound off-diagonal OTOC such as ${\rm Tr} [ y A y B(t) y C y B(t) ]$. Before we move on we can write an ansatz for these OTOC similar to equation \eqref{eq:ans}. For concreteness and to set notation we write
\begin{eqnarray} {\rm Tr} [ y A y B(t) y A y B(t) ] &=& F^{AA}_d - \varepsilon_{AA}~ e^{\lambda_{AA} t},\\ {\rm Tr} [ y C y B(t) y C y B(t) ] &=& F^{CC}_d-\varepsilon_{CC} ~e^{\lambda_{CC} t},\\ {\rm Tr} [ y A y B(t) y C y B(t) ] &=&F^{AC}_d- \varepsilon_{AC} ~e^{\lambda_{AC} t}, \end{eqnarray}
where we indicate the dependence on the operators of the factorized leading contribution $F_d$, of the amplitude of the growing piece $\varepsilon$, and of the rate $\lambda$. We leave the dependence on the operator $B$ implicit. From \eqref{eq:realABCB} and \eqref{eq:symABCB} we know that $F_d^{AC}=F_d^{CA}$, $\varepsilon_{AC}=\varepsilon_{CA}$ and $\lambda_{AC}=\lambda_{CA}$ are real. To leading order, the right-hand side of equation \eqref{eq:ABCBexpand} above is approximately constant in time, of order one, and given by
\begin{equation} F_d \approx \alpha_1^2 F_d^{AA} + \alpha_2^2 F_d^{CC} + 2 \alpha_1 \alpha_2 F_d^{AC}. \end{equation}
This quantity is positive for any choice of $\alpha_1$ and $\alpha_2$. This can be shown using Cauchy-Schwarz or, more directly, by starting from expression \eqref{eq:corrvwvw} in terms of $V$.
Moreover, we might also diagonalize the $2\times 2$ matrix of two-point functions between $A$ and $C$ such that $F_d^{AC}=0$, making the equation above manifestly positive. Next, we will focus on the subleading piece growing in time. We will consider first the most general case where all growth rates are allowed to be different. Using this ansatz we can write the subleading part of the OTOC as
\begin{equation}\label{eq:ABCBsublead} F_d - F(t)= \alpha_1^2 \varepsilon_{AA} e^{\lambda_{AA} t} + \alpha_2^2 \varepsilon_{CC} e^{\lambda_{CC} t} + 2 \alpha_1 \alpha_2 \varepsilon_{AC} e^{\lambda_{AC} t}. \end{equation}
Since $\alpha_1$ and $\alpha_2$ are arbitrary coefficients, and by the elastic chaos bound, we can conclude that $\lambda_{AA}, \lambda_{CC}, \lambda_{AC} \leq 2\pi/\beta$. Otherwise we could form a linear combination of $A$ and $C$ such that $F(t)$ would violate the chaos bound. Moreover, we can also argue that $\lambda_{AC} \leq {\rm min}(\lambda_{AA},\lambda_{CC})$. Otherwise the mixed term would eventually dominate and we could choose the signs of the $\alpha$'s so as to give a negative prefactor, also violating the chaos bound. In other words, $\lambda_{AC} > {\rm min}(\lambda_{AA},\lambda_{CC})$ would imply $\varepsilon_{AC}=0$. Above we considered the most general case. Now we will assume that all OTOC grow with the same rate $\lambda$. In this case the chaos bound on the sign of the prefactor gives us the bound
\begin{equation} \alpha_1^2 \varepsilon_{AA} + \alpha_2^2 \varepsilon_{CC} + 2 \alpha_1 \alpha_2 \varepsilon_{AC} \geq 0, ~~~~\forall ~\alpha_1,\alpha_2 \end{equation}
coming from the diagonal chaos bound applied to the right-hand side of equation \eqref{eq:ABCBsublead}. Viewing the left-hand side as a quadratic form in $(\alpha_1,\alpha_2)$, its positivity is equivalent to $\varepsilon_{AA},\varepsilon_{CC}\geq0$ together with the non-negativity of its determinant, which gives the constraint
\begin{equation}\label{eq:boundabac} \varepsilon_{AC}^2 \leq \varepsilon_{AA} \varepsilon_{CC}. \end{equation}
From this condition we see that even though we can constrain the growth of $\langle ABCB \rangle$, the chaos bound does not constrain the sign of the correction, which could be positive or negative but with a magnitude bounded by $\sqrt{\varepsilon_{AA} \varepsilon_{CC}}$. This is also analogous to the ANEC case studied in \cite{CMT} (see also \cite{Meltzer:2017rtf}). Having done this, the obvious next step is to consider other linear combinations. An option is
\begin{equation} F(t)= {\rm Tr} [ y A y W(t) y A y W(t) ] ,~~~ W=\alpha_1 B + \alpha_2 D. \end{equation}
The chaos bound applied to this correlator gives bounds analogous to the previous analysis for the (real) correlator $ {\rm Tr} [ y A y B(t) y A y D(t) ]$. Namely, the growing piece cannot grow too fast and the amplitude cannot be bigger than the diagonal ones. Instead, to obtain new bounds, we will consider
\begin{equation}\label{eq:ABCDlc} F(t)= {\rm Tr} [ y A y W(t) y C y W(t) ] ,~~~ W=\alpha_1 B + \alpha_2 D \end{equation}
with real coefficients $\alpha_1$ and $\alpha_2$. Then we can use the inelastic chaos bound derived above to constrain ${\rm Tr} [ y A y B(t) y C y D(t) ]$. We again assume an exponential ansatz for each term. A new feature of the most general case is that the mixed term is no longer real, since
\begin{equation} {\rm Tr} [ y A y B(t) y C y D(t) ]^\dagger = {\rm Tr} [ y A y D(t) y C y B(t) ] = {\rm Tr} [ y C y B(t) y A y D(t) ]. \end{equation}
This means that exchanging $A \leftrightarrow C$ or $B \leftrightarrow D$ is equivalent to complex conjugation.
Only a simultaneous exchange of $A \leftrightarrow C$ and $B \leftrightarrow D$ is a symmetry. From expanding the right-hand side of \eqref{eq:ABCDlc} we see it is only sensitive to the real part of ${\rm Tr} [ y A y B(t) y C y D(t) ]$. To set notation we write the exponential ansatz for the correlators as
\begin{eqnarray} {\rm Tr} [ y A y B(t) y A y B(t) ] &=& F_d^{ABAB} - \varepsilon_{ABAB} ~e^{\lambda_{ABAB} t}, \\ {\rm Tr} [ y A y B(t) y C y B(t) ] &=& F_d^{ABCB} - \varepsilon_{ABCB} ~e^{\lambda_{ABCB} t},\\ {\rm Re}~{\rm Tr} [ y A y B(t) y C y D(t) ] &=& F_d^{ABCD} - \varepsilon_{ABCD} ~e^{\lambda_{ABCD} t}. \end{eqnarray}
In these expressions all quantities on the right-hand side are real. In the third line, after taking the real part, the quantities are symmetric under independently exchanging $A$ and $C$ or $B$ and $D$. Now we can expand the right-hand side of \eqref{eq:ABCDlc}. Again, the factorized contributions $F_d$ give a leading constant piece for the correlator in \eqref{eq:ABCDlc}, and we will focus on the subleading growing piece. From the constraint on the rate of growth in time we obtain the bound quoted in the introduction,
\begin{equation} \lambda_{ABCD} \leq \lambda_{\rm diag} \leq \frac{2\pi}{\beta}, \end{equation}
where $\lambda_{\rm diag} \equiv {\rm min} (\lambda_{ABAB},\lambda_{ADAD},\lambda_{CBCB}, \lambda_{CDCD})$. Namely, if the rates of growth are different for each term, we can say that $\lambda_{ABCD}$ is smaller than the minimum of $\lambda_{ABAB}$, $\lambda_{CBCB}$, etc., which are all smaller than $2\pi/\beta$ by the chaos bound. Similarly to the previous case, we can assume all OTOC have the same rate of growth, and then we can also bound the amplitude $\varepsilon$ of the growing piece. The bound we obtain from the previous analysis, equation \eqref{eq:boundabac}, is
\begin{eqnarray}\label{gencond} &&\hspace{-0.5cm}(\alpha_1^2 \varepsilon_{ABAB} +\alpha_2^2 \varepsilon_{ADAD} + 2 \alpha_1 \alpha_2 \varepsilon_{ABAD} )(\alpha_1^2 \varepsilon_{CBCB} +\alpha_2^2 \varepsilon_{CDCD} + 2 \alpha_1 \alpha_2 \varepsilon_{CBCD} )\nn &&~~~-(\alpha_1^2 \varepsilon_{ABCB} +\alpha_2^2 \varepsilon_{ADCD} + 2 \alpha_1 \alpha_2 \varepsilon_{ABCD} )^2\geq 0, \end{eqnarray}
which should be satisfied for any choice of $\alpha_1$, $\alpha_2$. Since this condition is invariant under a rescaling $\alpha_i \to c\, \alpha_i$, we can fix $\alpha_1=1$. Then condition \eqref{gencond} is equivalent to the positivity of a quartic polynomial in the variable $\alpha_2$ with coefficients depending on the $\varepsilon$'s. \footnote{For a general quartic polynomial $P(x)=a x^4 + b x^3 + c x^2 + d x + e$ to be positive it should have $a,e>0$, and the condition for it to have four complex roots is a non-negative discriminant $\Delta(P)\geq0$ together with $8 ac-3b^2\geq0$.} When these conditions are written in terms of the amplitudes $\varepsilon$ they look algebraically complicated and not very enlightening. To simplify the discussion we can use the previous bounds $\varepsilon_{ABAD}^2 \leq \varepsilon_{ABAB} \varepsilon_{ADAD}$ and $ \varepsilon_{CBCD}^2 \leq \varepsilon_{CBCB} \varepsilon_{CDCD}$ to complete the square in the first line of equation \eqref{gencond}. Then we can derive a non-optimal bound on the most generic $\varepsilon_{ABCD}$ as
\begin{equation} \varepsilon_{ABCD} ^2 \leq 4( \sqrt{ \varepsilon_{ADAD}\varepsilon_{CBCB}} +\sqrt{ \varepsilon_{ABAB}\varepsilon_{CDCD}})^2.
\end{equation}
Even though this bound is not optimal, it shows in a transparent way how the prefactor of the off-diagonal OTOC is bounded by the diagonal ones. This is the main conceptual point: in a holographic setting it shows how a generic interaction is bounded by the gravitational interaction. As a final comment, we can also consider a correlator of the type
\begin{equation}
F(t)= {\rm Tr} [ y V y W(t) y V y W(t) ] ,~~~ V=\alpha_1 A + \alpha_2 C,~~{\rm and}~~W=\alpha_3 B + \alpha_4 D.
\end{equation}
One might wonder whether, by imposing the $\varepsilon>0$ constraint on $F(t)$ and varying all the $\alpha$'s independently, one can derive a bound on $\varepsilon_{ABCD}$ stronger than the one above, which was derived in steps. It is easy to see that this is not the case: considering the most general linear combination does not generate new constraints compared to equation \eqref{gencond}.

\subsection{Non-Hermitian Operators}\label{sec:nonh}

So far we have discussed OTOCs between arbitrary hermitian operators. Some of the steps in the bound-on-chaos argument of \cite{MSS} do not directly work for non-hermitian operators. We will show here that the bound for hermitian operators is enough to prove this generalization. Consider the following OTOC between general non-hermitian operators
\begin{equation}\label{eq:vdwdvw}
F(t)= {\rm Tr} [ y V^\dagger y W^\dagger(t) y V y W(t) ].
\end{equation}
We will show how the bound on ${\rm Re}~F(t)$ follows from the bound for hermitian operators. This quantity is related to the double commutator between the non-hermitian $V$ and $W$ that appears in the definition of chaos. We will expand a general non-hermitian operator $\mathcal{O}$ into two hermitian components $\mathcal{O}_R = (\mathcal{O}+\mathcal{O}^\dagger)/2$ and $\mathcal{O}_I = (\mathcal{O}-\mathcal{O}^\dagger)/(2i)$, for $\mathcal{O}=V,W$. To simplify the expressions below, we will use a shorthand for the OTOC, defining $\langle ABCD\rangle \equiv {\rm Tr} [ y A(0) y B(t) y C(0) y D(t)]$. Starting from the right hand side of \eqref{eq:vdwdvw}, expanding, and using the cyclic property of the trace, it is easy to show that
\begin{eqnarray}\label{eq:nonhermtotl}
{\rm Re} ~F(t)&=&{\rm Re}~{\rm Tr} [ y (V_R-i V_I) y W^\dagger(t) y (V_R+i V_I) y W(t) ] \nn
&=&{\rm Re}~[ \langle V_R W^\dagger V_R W\rangle + \langle V_I W^\dagger V_I W\rangle- i (\langle V_I W^\dagger V_R W\rangle-\langle V_R W^\dagger V_I W\rangle)]\nn
&=&\langle V_R W^\dagger V_R W\rangle + \langle V_I W^\dagger V_I W\rangle,
\end{eqnarray}
where we used that $ \langle V_I W^\dagger V_R W\rangle^* = \langle W^\dagger V_R W V_I \rangle = \langle V_I W^\dagger V_R W\rangle$, and similarly for $\langle V_R W^\dagger V_I W\rangle$, implying that they are both real; therefore the last term on the right hand side of the equation above is purely imaginary. Now we can expand $W$ and use that $\langle A B C B\rangle$ is real for hermitian operators to show
\begin{equation}\label{eqnonh}
{\rm Re} ~F(t) = \langle V_R W_R V_R W_R \rangle + \langle V_R W_I V_R W_I\rangle +\langle V_I W_R V_I W_R \rangle + \langle V_I W_I V_I W_I\rangle.
\end{equation}
Then, if we write ${\rm Re} ~F(t) = F_d - \varepsilon ~e^{\lambda t}$, the chaos bound on the growth rate applies to each term on the right hand side individually, implying $\lambda \leq 2\pi/\beta$. Moreover, since all terms appear with a plus sign, the bound on the sign of the prefactor still applies, implying $\varepsilon\geq0$.
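The identity \eqref{eqnonh} is purely algebraic, relying only on the cyclicity of the trace and the hermiticity of $y$, so it can be verified directly with random matrices. Below is a minimal numerical sketch of such a check (our own illustration, not part of the original argument), assuming the standard thermal regularization $y = e^{-\beta H/4}/Z^{1/4}$:
\begin{verbatim}
import numpy as np
from scipy.linalg import expm

rng = np.random.default_rng(0)
d, beta, t = 8, 1.0, 0.7

def rand_op(d):
    return rng.normal(size=(d, d)) + 1j * rng.normal(size=(d, d))

H = rand_op(d); H = (H + H.conj().T) / 2                      # random Hamiltonian
y = expm(-beta * H / 4) / np.trace(expm(-beta * H)).real ** 0.25  # y^4 = e^{-bH}/Z
U = expm(1j * H * t)                                          # Heisenberg evolution

def otoc(A, B, C, D):
    # <A B(t) C D(t)> = Tr[y A y B(t) y C y D(t)]
    Bt, Dt = U @ B @ U.conj().T, U @ D @ U.conj().T
    return np.trace(y @ A @ y @ Bt @ y @ C @ y @ Dt)

V, W = rand_op(d), rand_op(d)                 # generic non-hermitian operators
VR, VI = (V + V.conj().T) / 2, (V - V.conj().T) / 2j
WR, WI = (W + W.conj().T) / 2, (W - W.conj().T) / 2j

lhs = otoc(V.conj().T, W.conj().T, V, W).real
rhs = sum(otoc(A, B, A, B).real for A in (VR, VI) for B in (WR, WI))
assert np.isclose(lhs, rhs)
\end{verbatim}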
With the usual chaos bound now established for non-hermitian operators, the analysis of the previous section can be repeated to derive analogous off-diagonal results for general non-hermitian operators.

\section{An Example: 2d CFTs}\label{sec:2dCFT}

In the context of 2d CFTs one can show that, at large $c$, a large twist gap is enough to obtain maximal chaos, by using results from semiclassical limits of Virasoro conformal blocks \cite{Roberts:2014ifa}.\footnote{Other studies of chaos in 2d CFT from different perspectives can be found in \cite{Jackson:2014nla, Turiaci:2016cvo} (see also \cite{Perlmutter:2016pkf}).} This is given purely by a product of stress tensors acting on the identity, and therefore can be interpreted in the bulk as coming from a purely gravitational interaction. In this section we want to study, in a simple setup, what role the inelastic version of the chaos bound plays for large $c$ 2d CFTs with a sparse spectrum. Under some assumptions, we will study the general behavior of off-diagonal OTOCs. We propose that a resummation of intermediate channels can be written as a single non-vacuum block corresponding to an operator with an effective dimension and an effective spin. From a bulk perspective this constrains matter interactions (OPE coefficients of arbitrary operators) using the chaos bound.

\subsection{Kinematics}

In any 2d CFT an arbitrary four point function can be written as
\begin{equation}\label{eq:4pftnction}
\langle W_1(z_1,\bar{z}_1)W_2 (z_2,\bar{z}_2) V_3 (z_3, \bar{z}_3) V_4 (z_4,\bar{z}_4) \rangle = \frac{1}{z_{12}^{h_1+h_2} z_{34}^{h_3+h_4} }\frac{1}{\bar{z}_{12}^{\bar{h}_1+\bar{h}_2} \bar{z}_{34}^{\bar{h}_3+\bar{h}_4} } G(z,\bar{z}),
\end{equation}
where $G(z,\bar{z})$ can be expanded in Virasoro conformal blocks, and the cross-ratio is defined as $z=\frac{z_{12}z_{34}}{z_{13}z_{24}}$, with a similar definition for the anti-holomorphic cross-ratio. The operators are arbitrary, but we use the letters $V$ and $W$ to indicate which ones will be at time $0$ ($V$'s) and which at time $t$ ($W$'s). Schematically we expand the four-point function as
\begin{equation}\label{eq:OPEdef}
G(z,\bar{z}) = \sum_{p} C_{12p}C_{34p}~ \mathcal{F} \big[{}^{h_1}_{h_2}{}^{h_3}_{h_4}\big](h_p, z)~ \mathcal{F}\big[{}^{\bar{h}_1}_{\bar{h}_2}{}^{\bar{h}_3}_{\bar{h}_4}\big](\bar{h}_p,\bar{z}),
\end{equation}
where $\mathcal{F} \big[{}^{h_1}_{h_2}{}^{h_3}_{h_4}\big](h_p, z)$ are the Virasoro blocks corresponding to a Virasoro primary operator $\mathcal{O}_p$ with (anti)holomorphic weights $h_p$ ($\bar{h}_p$), dimension $\Delta = h_p + \bar{h}_p$ and spin $s=|h_p-\bar{h}_p|$. The blocks are normalized such that $\mathcal{F}(h,z) = z^h(1+\ldots)$ in a small-$z$ expansion. In going from Euclidean to Lorentzian signature, different orderings of operators are encoded in the monodromy of the blocks \cite{Rehren:1987ur}. Following \cite{Roberts:2014ifa} we choose the kinematics of the correlator to be
\begin{eqnarray}
&&z_1 = e^{\frac{2\pi}{\beta} ( t+i\epsilon_1)},~~~~\hspace{0.08cm}z_2 = e^{\frac{2\pi}{\beta} ( t+i\epsilon_2)},~~~~\hspace{0.08cm}z_3 = e^{\frac{2\pi}{\beta} ( x+i\epsilon_3)},~~~z_4 = e^{\frac{2\pi}{\beta} ( x+i\epsilon_4)},\nn
&&\bar{z}_1 = e^{-\frac{2\pi}{\beta} ( t+i\epsilon_1)},~~~\bar{z}_2 = e^{-\frac{2\pi}{\beta} ( t+i\epsilon_2)},~~~\bar{z}_3 = e^{\frac{2\pi}{\beta} ( x-i\epsilon_3)},~~~\bar{z}_4 = e^{\frac{2\pi}{\beta} ( x-i\epsilon_4)},
\end{eqnarray}
where $\epsilon_1<\epsilon_3<\epsilon_2<\epsilon_4$ and, as we raise the time from $0$ to $t$, the cross-ratio $z$ goes once around $z=1$ while $\bar{z}$ remains on the first sheet.
For times larger than the dissipation time, which is of order $\beta$, we can evaluate the blocks on the second sheet with cross-ratios $z \approx - \epsilon_{12}^\star \epsilon_{34} e^{\frac{2\pi}{\beta}(x-t)}$ and $\bar{z} \approx - \epsilon_{12}^\star \epsilon_{34} e^{-\frac{2\pi}{\beta}(x+t)}$, where $\epsilon_{ij} = i (e^{\frac{2\pi}{\beta} i \epsilon_i} - e^{\frac{2\pi}{\beta} i \epsilon_j})$. For a configuration with operators equally spaced on the thermal circle, $\epsilon_{12}^\star \epsilon_{34}=4i$, so that $z = -4 i e^{\frac{2\pi}{\beta}(x-t)}$ and $\bar{z} = - 4i e^{-\frac{2\pi}{\beta}(x+t)}$.

If we normalize operators by their two-point function on the plane, $\langle O (x) O(0) \rangle =x^{-2\Delta_O}$, then each term in the factorized answer for the four-point function gives a factor of ${\rm Tr} [ y^2 O y^2 O] = (\pi/\beta )^{2\Delta_O}$. This also follows from the position-dependent prefactor on the right hand side of equation \eqref{eq:4pftnction} when the operators are all different.

\subsection{Elastic case}

In this case we take the two operators at $t=0$ and $t$ to be the same, $V_3 = V_4 = V$ and $W_1 = W_2 = W$. Then the identity appears in the intermediate channel of the OPE written above in equation \eqref{eq:OPEdef}. Assuming a large twist gap, we can approximate the full correlator by the vacuum block. Following \cite{Roberts:2014ifa} we will take a large $c$ limit with $h_1/c$ fixed but small and $h_3\gg 1$. Then we can make use of the heavy-light semiclassical blocks found in \cite{Fitzpatrick:2014vua}. The final answer is given by
\begin{eqnarray}\label{eq:vacuumblockapprox}
\frac{ {\rm Tr} [ y V y W(t) y V y W(t)]}{ {\rm Tr}[y^2 V y^2 V]{\rm Tr}[y^2 W y^2 W]} &\approx& \big( 1+\frac{6 \pi h_1}{c} e^{\frac{2 \pi}{\beta}(t-x)} \big)^{-2h_3}\nn
&\approx&1 - \frac{12 \pi h_1 h_3}{c} e^{\frac{2 \pi}{\beta}(t-x)} + \ldots,
\end{eqnarray}
which saturates the chaos bound. In the first line we wrote the chaos limit of the identity Virasoro block, while in the second line we focus on times $\beta \ll t \ll t_{\rm sc}=\frac{\beta}{2\pi} \log c$. This was formally done at infinite gap. Corrections from a finite gap, and how the correlator Reggeizes, were recently studied in \cite{Chang:2018nzm}. Within this approximation, for times larger than the scrambling time, $t\gg t_{\rm sc} $, this OTOC goes to zero exponentially at a rate set by the quasinormal modes.

\subsection{Inelastic case}

In the inelastic case we can take four arbitrary operators $W_1$, $W_2$, $V_3$ and $V_4$. In order to have analytic control over the Virasoro conformal blocks, we will take large $c$ with $1 \ll h_3,h_4$, and with $h_1/c$, $h_2/c$ fixed but small. Moreover, we also take $h_{12}=(h_1-h_2)/2$ and $h_{34}=(h_3 -h_4)/2$ to be of order one, such that the results of \cite{Fitzpatrick:2014vua} apply. We can take a basis of operators such that the identity block does not appear in the intermediate channel (this is automatic if the dimensions are different). Instead, within an approximation similar to the elastic case, we need to consider light intermediate operators of low twist. The semiclassical Virasoro block was also computed in this case, when the channel dimension $h_p$ is of order one \cite{Fitzpatrick:2014vua}.
After going to the second sheet and using the chaos kinematics we get
\begin{equation}\label{eq:nonvacblock}
\mathcal{F} \big[{}^{h_1}_{h_2}{}^{h_3}_{h_4}\big](h, z) = \left(\frac{1}{ 1-\frac{12 \pi i (h_1+h_2)}{cz} }\right)^{h_3+h_4-h} z^h {}_2 F_1 (h - h_{12}, h+ h_{34}, 2h, z)|_{{\rm 2nd~sheet}} ,
\end{equation}
where the hypergeometric function comes from $SL(2)$ descendants and is evaluated on the second sheet. For times between dissipation and scrambling, the first factor on the right hand side of \eqref{eq:nonvacblock} is constant and the exponential growth comes from the hypergeometric function,
\begin{equation}
{\rm Tr} [ y V_3 y W_1(t) y V_4 y W_2(t)] = N_\beta \sum_{p} C_{12p}C_{34p}~d_p~e^{ \frac{2\pi}{\beta}(s_p-1)t}e^{- \frac{2\pi}{\beta}(\Delta_p-1) x},
\end{equation}
where $\Delta_p$ and $s_p$ are the dimension and spin of the intermediate channel operator $O_p$. The normalization coming from the prefactor of \eqref{eq:4pftnction} is $N_\beta = (\pi/\beta)^{\Delta_1 +\Delta_2 + \Delta_3 + \Delta_4}$, and $d_p$ is the coefficient
\begin{equation}
d_p = \frac{8\pi }{(4i)^{h_p-\bar{h}_p }(2h_p-1)}\frac{\Gamma(2h_p)}{\Gamma(h_p\pm h_{12})}\frac{\Gamma(2h_p)}{\Gamma(h_p\pm h_{34})},
\end{equation}
where $\Gamma(a\pm b) =\Gamma(a+b)\Gamma(a-b)$. This factor depends on the dimension of the intermediate channel and comes from the evaluation of the hypergeometric function in \eqref{eq:nonvacblock} on the second sheet.

The growing part of elastic OTOCs for holographic CFTs is dominated by the vacuum block, dual to gravitational interactions in the bulk. The growing part of inelastic OTOCs, by contrast, is not related to gravitational interactions; the fact that Virasoro descendants are irrelevant for the calculation of inelastic OTOCs is a manifestation of this. This is not completely obvious, and we give the details in the appendix. The amplitude of the growing piece of the inelastic OTOC is small because the OPE coefficients are subleading in $N$ and in the gap.

Let us first assume that the spins are bounded. Then, by looking at this expression and assuming a large twist gap, we can conclude that in the low-lying part of the spectrum all particles must have spin $s<2$. If a primary happens to have $s=2$, then its interactions with other particles cannot be stronger than gravity (this bounds its OPE coefficients). If there happens to be a light particle with $s>2$, then its contribution should Reggeize among the low-lying spectrum to give an effective spin $s_{\rm eff} <2$. The statements above can then be read as statements about inelastic Pomerons in the theory.

\begin{figure}[t!]
\begin{center}
\begin{tikzpicture}[scale=1]
\node[inner sep=0pt] (russell) at (2.5,1.5) {\includegraphics[width=.25\textwidth]{curve2.pdf}};
\node[inner sep=0pt] (russell) at (2.5,1.5) {\includegraphics[width=.25\textwidth]{curve.pdf}};
\draw[thick, ->] (0,0) -- (5,0);
\draw[thick, ->] (0,0) -- (0,3);
\draw[thick] (3.5,0-0.1)--(3.5,0+0.1);
\draw (3.5,-0.5) node {$t_{\rm sc}$};
\draw (-1,2) node {$F(t)$};
\end{tikzpicture}
\end{center}
\vspace{-0.85cm}
\caption{\small Sketch of the typical behavior of an inelastic OTOC $F(t)$ in a 2d CFT (black curve) as a function of time, assuming it is approximated by a non-vacuum block with effective dimension $\Delta_{\rm eff}$ and spin $s_{\rm eff}$. Initially the OTOC grows exponentially with rate $\lambda = \frac{2\pi}{\beta} (s_{\rm eff}-1)$. At late times the fast decay is controlled by the quasinormal modes. In blue we show a typical elastic OTOC.
}
\label{fig:temp-check}
\end{figure}

The calculation required in the previous paragraph to evaluate the inelastic OTOC is complicated, even in the elastic case \cite{Chang:2018nzm}. We can conjecture that the result of summing over infinitely many spins is equivalent to a single non-vacuum block with effective weights $h_{\rm eff}$, $\bar{h}_{\rm eff}$, effective dimension $\Delta_{\rm eff}=h_{\rm eff} + \bar{h}_{\rm eff}$ and spin $s_{\rm eff}=|h_{\rm eff} - \bar{h}_{\rm eff}|$. With this assumption the inelastic OTOC is given by
\begin{equation}
N^{-1}_\beta~{\rm Tr} [ y V_3 y W_1(t) y V_4 y W_2(t)] = ~C_{12}C_{34}~\mathcal{F} \big[{}^{h_1}_{h_2}{}^{h_3}_{h_4}\big](h_{\rm eff}, z) ~\bar{z}^{\bar{h}_{\rm eff}},
\end{equation}
where the holomorphic block is given by equation \eqref{eq:nonvacblock}, and $C_{12}$ ($C_{34}$) are effective couplings between the operators $W_1$, $W_2$ ($V_3$, $V_4$) and the effective Pomeron state $h_{\rm eff}, \bar{h}_{\rm eff}$. Depending on the effective spin, $C_{12}C_{34}$ might be bounded by the dimensions of the external operators, following \eqref{ec:introbound}.

With the proposal of the previous paragraph, we can analyze times longer than scrambling, $t_{\rm sc} \lesssim t$. In this case the situation changes, and the part of the block coming from the Virasoro descendants dominates. Namely, the first factor on the right hand side of \eqref{eq:nonvacblock} decays exponentially. Assuming the behavior of the correlator is equivalent to a single block of dimension $\Delta_{\rm eff}$ of order one and spin $s_{\rm eff}<2$, the OTOC decays exponentially as $\sim e^{- \frac{2\pi}{\beta} (h_3+h_4)t} $. Under this assumption, we show in figure \ref{fig:temp-check} the behavior of a typical OTOC between different operators. To summarize, inelastic OTOCs grow exponentially in a way controlled by a Pomeron exchange (unrelated to gravity), until the scrambling time $t_{\rm sc}$, at which point the correlator begins to decay according to the quasinormal mode frequencies. This is expected, since this decay is due to the bulk-boundary propagators appearing in the bulk calculation of the OTOC. The picture and the proposal that emerge from this analysis should be worked out in more detail following \cite{Chang:2018nzm} and \cite{Collier:2018exn} (see also \cite{Liu:2018iki} and \cite{Hampapura:2018otw}), but we leave this for future work.

\section{Open Questions}\label{Sec:conc}

To conclude, we would like to state some open questions. It would be nice to compute the off-diagonal OTOCs introduced here in SYK models.\footnote{In particular, the approach of \cite{Berkooz:2018jqr} might be very useful for this.} Following the notation of \cite{Kitaev:2017awl} and \cite{Gu:2018jsv}, we can write a generic OTOC as a convolution between form factors, describing the coupling of external operators to an effective `scramblon' mode, and the scramblon propagator, which grows exponentially with time. In this perspective, the bound stated here constrains the behavior of the general form factors appearing in these models: their rate of growth in time is bounded, through the elastic OTOC, by the growth of the scramblon mode. Moreover, the mode responsible for the Lyapunov behavior of inelastic OTOCs might not be the same as the one in the elastic case (for example, it might not have the quantum numbers of the vacuum).
Therefore there must be a mode which, similarly to the Schwarzian mode, generates exponential growth.\footnote{From the perspective of \cite{MTV} the problem is analogous to finding a generalization of Liouville theory that allows primary operators with spin.} A simple proposal would be a similar mode living on ${\rm Diff}(S^1)/U(1)$ instead of ${\rm Diff}(S^1)/SL(2)$, but this requires further study. This question can also be extended to higher dimensions. In 2d CFTs the maximal chaos behavior coming from the identity block can be understood in terms of a Goldstone mode of broken reparametrization invariance \cite{Turiaci:2016cvo}. It would be nice to find a description of a similar soft mode, related to non-vacuum representations, producing the exponential growth of off-diagonal OTOCs. To analyze this problem, 2d versions of SYK might be useful as explicit examples \cite{Gu:2016oyy, Turiaci:2017zwd, Murugan:2017eto, Berkooz:2017efq}. In the particular case of 2d CFTs it would be interesting to repeat the analysis of \cite{Chang:2018nzm} for a general OTOC.

The analysis of this paper can be extended to higher-order OTOCs with more than four operators. The diagonal version of these correlators was studied in \cite{Haehl:2017pak} (see also \cite{Haehl:2018izb}); results in this direction were derived in \cite{Basu:2018akv}. Finally, after understanding how correlators Reggeize in the chaos limit, it would be nice to recast the bound derived in this paper as a bound on OPE data. This might also help sharpen the statement that gravity is the highest-spin, strongest interaction.

\bigskip
\begin{center} {\bf Acknowledgements} \end{center}
\vspace{-2mm}

We want to thank G. Horowitz, J. Maldacena, M. Rangamani and D. Stanford for discussions and comments on the draft. This work was supported by a Fundamental Physics Fellowship. We also benefited from the workshop ``Chaos and Order: From strongly correlated systems to black holes'' at KITP, supported in part by the National Science Foundation under Grant No. NSF PHY-1748958 to the KITP.

\begin{appendix}

\section{Semiclassical Virasoro Blocks}\label{app:blocks}

In this paper we used a Virasoro conformal block with two external light operators of weights $h_L\pm \delta_L$ and two external heavy operators of weights $h_H \pm \delta_H$. In the large $c$ limit with $h_H/c$ fixed, these were obtained in \cite{Fitzpatrick:2014vua} for a light intermediate channel $h$. Within this approximation the block is given by
\begin{equation}\label{app:fitzkapblock}
\mathcal{F} \big[{}^{H}_{H}{}^{L}_{L}\big](h, z)= (1-w)^{(h_L+\delta_L)(1-\alpha^{-1})}\left( \frac{w}{\alpha ~z} \right)^{h-2h_L} z^h {}_2 F_1 \left( h - \frac{\delta_H}{\alpha} , h+\delta_L , 2 h , w \right),
\end{equation}
where $\alpha=\sqrt{1-24 h_H/c}$ and $w(z)=1-(1-z)^\alpha$. In the case of pairwise identical operators, $\delta_H=\delta_L=0$, and for the vacuum channel, this expression gives
\begin{equation}
\mathcal{F} \big[{}^{H}_{H}{}^{L}_{L}\big](0, z) = (1-w)^{h_L(1-\alpha^{-1})}\left( \frac{w}{\alpha~z} \right)^{-2h_L}.
\end{equation}
In the limit $h_H/c \ll 1$ ($\alpha\approx 1-12h_H/c$), for small $z$ on the second sheet, the block is approximately $ \mathcal{F} \big[{}^{H}_{H}{}^{L}_{L}\big](0, z) \approx \left(\frac{z}{1-(1-z)^\alpha} \right)^{2h_L}$. This expression reproduces equation \eqref{eq:vacuumblockapprox}, which is the main result of \cite{Roberts:2014ifa}.
For the case of a generic intermediate channel we can use the general formula, go to the second sheet, and evaluate in the chaos limit. This gives the same answer as the analytic continuation of the expression
\begin{equation}\label{app:eq3}
\mathcal{F} \big[{}^{H}_{H}{}^{L}_{L}\big](h, z)= \left( \frac{z}{1-(1-z)^\alpha} \right)^{2h_L-h} z^h {}_2 F_1 \left( h - \frac{\delta_H}{\alpha} , h+\delta_L , 2 h , w \right).
\end{equation}
The first factor is equivalent to the vacuum block with shifted dimensions. This factor is entirely due to Virasoro descendants. Its evaluation on the second sheet is therefore
\begin{equation}\label{eq:prefactorapp}
\left( \frac{z}{1-(1-z)^\alpha} \right)^{2h_L-h} \Bigg|_{2{\rm nd~sheet}} \approx \left( \frac{1}{1-\frac{24 \pi i h_H}{cz} } \right)^{2h_L -h}.
\end{equation}
In the chaos limit, and for times smaller than the scrambling time, $c z \gg 1$ ($t\ll \frac{\beta}{2\pi} \log c$), this gives approximately a constant equal to $1$. After the scrambling time, $c z \ll 1$, this factor decays and controls the decay of correlators. The second factor of \eqref{app:eq3}, evaluated on the second sheet and for small $z$, gives
\begin{equation}
z^h{}_2 F_1 (h - \frac{\delta_{H}}{\alpha}, h+ \delta_{L}, 2h, w)|_{{\rm 2nd~sheet}}= \frac{2\pi i ~e^{i \pi (\delta_{L}-\delta_{H})}}{2h-1}\frac{\Gamma(2h)}{\Gamma(h\pm \frac{\delta_{H}}{\alpha})}\frac{\Gamma(2h)}{\Gamma(h\pm \delta_{L})} \frac{z}{(\alpha z)^h}.
\end{equation}
For $h_H/c \ll 1$, where $\alpha \approx 1$, this gives the same as the evaluation of the global block $z^h{}_2 F_1 (h - \delta_{H}, h+ \delta_{L}, 2h, z)$. Therefore we can approximate the Virasoro block by
\begin{equation}
\mathcal{F} \big[{}^{H}_{H}{}^{L}_{L}\big](h, z)\approx \left( \frac{z}{1-(1-z)^{1-12 h_H/c}} \right)^{2h_L-h} z^h {}_2 F_1 \left( h - \delta_H , h+\delta_L , 2 h , z \right).
\end{equation}
Evaluating this expression on the second sheet for small $z$ gives the same answer as the original \eqref{app:fitzkapblock}. Therefore, in the chaos limit, all the effects of Virasoro descendants come from the prefactor in the equation above.

\end{appendix}
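As a simple numerical sanity check of \eqref{eq:prefactorapp} (our own illustration, not part of the original computation), one can perform the continuation to the second sheet by shifting $\log(1-z) \to \log(1-z) - 2\pi i$ and compare the exact prefactor with the approximation, for arbitrarily chosen parameters in the regime $h_H/c \ll 1$, $z \ll 1$, $cz \sim h_H$:
\begin{verbatim}
import numpy as np

c, hH = 1.0e8, 1300.0      # hH/c small, but c*z of order hH (chaos regime)
hL, h = 5.0, 1.3           # exponent p = 2*hL - h
z = -1.0e-3j               # small cross-ratio along -i, as in the kinematics

alpha = np.sqrt(1 - 24 * hH / c)
# second sheet: z circles z = 1 once, so log(1-z) -> log(1-z) - 2*pi*i
w2 = 1 - np.exp(alpha * (np.log(1 - z) - 2j * np.pi))

p = 2 * hL - h
exact = (z / w2) ** p                                 # left hand side
approx = (1 / (1 - 24j * np.pi * hH / (c * z))) ** p  # right hand side
assert np.allclose(exact, approx, rtol=5e-2)
\end{verbatim}
Setting $h=0$ and $z = -4i\, e^{\frac{2\pi}{\beta}(x-t)}$ on the right hand side reproduces the elastic result \eqref{eq:vacuumblockapprox}.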
\section{Discussion}
\label{sec:conclusion-future}

In this work we have introduced affinity-VAE, a neural network capable of organising the latent representation based on the similarity of the objects. While a $\beta$-VAE is capable of cluster separation and latent disentanglement, we have little control over the learned factorised representations and captured semantics. We have shown that, with guidance from an automatically generated affinity matrix, we can create more homogeneous, rotationally-invariant clusters that could improve the classification accuracy. Furthermore, the affinity metric can be tailored to the data or domain of interest, improving the generality of the method.

Since the affinity metric is only utilised during the training phase (Equation \ref{eq-shape-vae-loss}), we can infer the similarity of unseen objects in the encoded latent representation. A pre-trained network can therefore be easily applied to the discovery of new classes. We have demonstrated that the learned latent spaces can be continuous, which enables the potential use of the method with unseen data (e.g. discovery of new species).

We have demonstrated the potential of the method in a scientific application using the example of subtomogram target identification in volumetric cryo-ET data. While the results are promising and show the potential of the method for discovery of new species in experimental data, more experiments are required to test the effectiveness of the method on a full tomogram. This comes with new challenges, notably the presence of noise, crowding, interactions, multiple conformations and translation.

\section{Methods}
\label{sec:Methods}

\subsection{$\beta$-VAE}
\label{sec:method_VAE}

The Variational Autoencoder (VAE) is a class of generative neural networks aiming to learn the joint distribution of inputs $x$ and their low-dimensional latent generative factors $z$ \cite{Kingma2014}. Through maximising the probability of reconstruction, i.e. minimising the difference between the input $x$ and the decoded output $y$ (reconstruction term), we effectively organise the latent space. Concurrently, the VAE regularises the latent space by minimising the difference between the real and estimated distributions (variational term), encouraging the approximate posterior to reflect the prior. To aid the optimisation, both are set to be Gaussian, and the prior is set to a standard normal, allowing for the ``reparametrisation trick'' \cite{Higgins2017}.

$\beta$-VAE is an iteration of the VAE framework introducing a hyperparameter $\beta$ which modulates the learning constraints applied to the model \cite{Higgins2017}. Values of $\beta>1$ put effective constraints on the capacity of the latent $z$ bottleneck, encouraging factorised representations \cite{burgess2018}. The $\beta$-VAE approach is therefore most commonly used for unsupervised factorised representation learning. The goal of the training is to minimise, over the network parameters, the objective function
\begin{equation}
L = ||x - y||^2 + \beta \times D_{KL}\left[ \mathcal{N}(\mu_{z}, \sigma_{z}), \mathcal{N}(0, 1)\right],
\end{equation}
where $x$ and $y = d(e(x))$ denote the input data and the output reconstruction respectively, $z = e(x)$ is the encoded latent representation, and $e$ and $d$ are the encoder and decoder parts of the neural network, respectively. $D_{KL}$ refers to the Kullback-Leibler divergence between the approximate posterior $ \mathcal{N}(\mu_{z}, \sigma_{z})$ and the prior $ \mathcal{N}(0, 1)$.
The first term of the equation is the reconstruction loss, minimising the difference between the inputs and the decoded outputs. The second term, parameterised by $\beta$, is the variational term, regularising the latent space.

\subsection{affinity-VAE}

In this section we introduce the affinity-VAE neural network, which is characterised by the addition of an affinity-based regularisation to the existing formulation of the $\beta$-VAE. To facilitate this, we also introduce some architectural changes to the network, notably an additional fully-connected layer encoding the pose of the object.

\subsubsection{Affinity-based loss component}

In addition to the reconstruction and KL divergence terms of the standard VAE loss function, we introduce a new shape regularisation term $S(z)$. The hyperparameter $\gamma$ provides fine control of the influence of this regularisation term (in a similar manner to $\beta$):
\begin{equation}
L = ||x - y||^2 + \beta \times D_{KL}\left[N(\mu_{z}, \sigma_{z}), N(0, 1)\right] + \gamma \times S(z)
\label{eq-shape-vae-loss}
\end{equation}
where $S(z)$ is the normalised L1 norm of the difference between a pre-calculated affinity matrix $\mathbf{A}$ and the cosine similarities of the latent representations:
\begin{equation}
S(z) = \frac{ \sum_{i, j}^{N}\ \left| \mathbf{A}_{i j} - \frac{ z_i \cdot z_j }{ ||z_i|| \cdot ||z_j|| } \right| }{ N }
\label{eq-shape}
\end{equation}
with $z$ denoting the latent variables, $i, j$ the indices of the vectors in the batch, and $N$ the batch size.

\begin{wrapfigure}[13]{r}{0.6\textwidth}
\begin{center}
\includegraphics[width=0.6\textwidth]{figures/3-B-shape-VAE-noin.png}
\caption{Architecture of the affinity-VAE.}
\label{fig:shape-VAE}
\end{center}
\end{wrapfigure}

The cosine similarity measures the proximity of two points in latent space, whereas the pre-computed affinity matrix ($\mathbf{A}$) provides feedback on the actual pairwise similarity of the corresponding objects. This effectively organises the latent space so that objects that are similar according to the descriptor in the affinity matrix are placed close together, regardless of their pose. The pre-calculated affinity matrix is generated automatically by computing pairwise similarity scores for the entire training dataset with a target function. In our case this is SOAP for the 2D alphanumeric data and FSC for the 3D protein data (more on data generation in section \ref{sec:Methods-data}), but different metrics could be chosen to facilitate different datasets, or to organise the latent space by different factors (see section \ref{sec:similarity-metrics}). Furthermore, the affinity matrix is only used in the training stage, therefore a pre-trained network can easily be applied to the discovery of new classes/species.

\subsubsection{Pose channel}

The latent space is represented in terms of distributions expressed through learnable fully connected layers for the mean ($\mu$) and the variance ($\sigma$). We have introduced a third learnable fully connected layer to represent the $n$-D pose of the object, in a similar manner to rotation in an rVAE~\cite{Ziatdinov2021, ziatdinov_kalinin_2021}. By feeding the network the same affinity values for a given object regardless of its orientation, we force the latent representation to become rotationally invariant, leaving the additional pose parameter to capture the orientation of the object.
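To make the overall objective concrete, a minimal PyTorch-style sketch of Equation \ref{eq-shape-vae-loss} follows. This is our own illustration rather than a released implementation: the function and tensor names (\texttt{affinity\_vae\_loss}, \texttt{A\_batch}) are hypothetical, the variance is stored as a log-variance (a common parameterisation), and the default $\beta$ and $\gamma$ values are those used for the alphanumeric experiments reported in section \ref{sec:Results}.
\begin{verbatim}
import torch
import torch.nn.functional as F

def affinity_vae_loss(x, y, mu, logvar, z, A_batch, beta=5.0, gamma=8.0):
    # L = ||x - y||^2 + beta * KL + gamma * S(z)
    recon = F.mse_loss(y, x, reduction="sum")
    # KL divergence between N(mu, sigma) and the standard normal prior
    kl = -0.5 * torch.sum(1 + logvar - mu.pow(2) - logvar.exp())
    # S(z): L1 difference between latent cosine similarities and the
    # pre-computed affinity sub-matrix for this batch, divided by N
    zn = F.normalize(z, dim=1)
    cos = zn @ zn.T                  # pairwise cosine similarities
    s = torch.abs(A_batch - cos).sum() / z.shape[0]
    return recon + beta * kl + gamma * s
\end{verbatim}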
\subsection{Latent map}

To ensure that the method generalises to new data/tomograms, so that the network does not need retraining every time it analyses new data, we introduce the ``latent map'' approach. Instead of training the network on the whole tomogram, we create a latent ``map'' pre-seeded with existing proteins. Once pre-trained with a set of objects that is non-redundant (i.e. no duplicates) but exhaustive (i.e. considering all morphologies), the latent space can be used as a dictionary or a reference space for both seen and unseen data. Seen data would form clusters within or near the existing cluster centres in the latent map, while unseen data would do so near or between groups of similar morphology. This holds promise for applying affinity-VAE as an unsupervised, template-and-label-free approach to the classification and detection of unseen samples or novel structures in cryo-ET tomograms.

\subsection{Latent embeddings, clustering and KNN}

The encoded samples are represented in the latent space through their generative factors; however, the dimensionality of that latent space ($z$) is a hyperparameter. We therefore analyse $n$-D latent spaces such that $n$ is high enough to capture the complexity in the original image, but low enough to only capture the true semantics. Even with the simplest 3D datasets this requires $n > 2$. To visualise and interpret any relationships between the latent codes corresponding to different semantics, and to visually assess the degree to which identical or similar objects form homogeneous clusters in the latent space, we produce latent embeddings in which each latent vector is given a location in a lower-dimensional (2D) manifold. We have explored the following methods for visualisation of latent embeddings:
\begin{itemize}
\item UMAP \cite{umap} (Uniform Manifold Approximation and Projection) -- used for our 2D alphanumeric data;
\item t-SNE \cite{tsne} (t-Distributed Stochastic Neighbour Embedding) -- used for our 3D protein and tetramino data.
\end{itemize}
The classification accuracy was evaluated in three different ways:
\begin{itemize}
\item cluster means are calculated and a point is assigned to a cluster if it lies within $\pm 2\sigma$ of the mean -- used for 2D alphanumeric data;
\item $k = 5$ nearest neighbours (KNN) \cite{knn} are selected for each encoded data point and a class is assigned based on majority voting (producing hard assignments) -- used for 3D protein and tetramino data;
\item $k=100$ nearest neighbours \cite{knn} are selected and the assignment fractions for each class are used as the likelihood of belonging to that class (producing soft probabilistic assignments; a code sketch is given below) -- used for 3D protein and tetramino data.
\end{itemize}
For quick evaluation of the 3D data we used hard assignments because they are faster; however, soft assignments provide more insight into the classification, which is particularly useful for evaluating unseen data or novel structures.
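A minimal sketch of the soft $k$-NN assignment described above, using scikit-learn (an illustration under our own assumptions rather than the exact implementation; variable names are hypothetical):
\begin{verbatim}
import numpy as np
from sklearn.neighbors import NearestNeighbors

def soft_assignments(train_z, train_labels, test_z, k=100):
    # class likelihoods from the fraction of each class among the
    # k nearest latent-space neighbours of each test point
    nn = NearestNeighbors(n_neighbors=k).fit(train_z)
    _, idx = nn.kneighbors(test_z)            # (n_test, k) neighbour indices
    classes = np.unique(train_labels)
    votes = train_labels[idx]                 # labels of the neighbours
    probs = np.stack([(votes == c).mean(axis=1) for c in classes], axis=1)
    return classes, probs   # hard assignment: classes[probs.argmax(axis=1)]
\end{verbatim}

\subsection{Data simulation}
\label{sec:Methods-data}

Three datasets were used in the evaluation of the methods: 1) 2D alphanumeric data, 2) 3D tetramino data, and 3) simulated 3D protein data.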
\begin{figure}[h]
\centering
\includegraphics[width=1\textwidth]{figures/3-E-sample_data.pdf}
\caption{Sample data. Left: alphanumeric; middle: tetramino blocks; right: protein volumes from the list in \cite{ecoli}.}
\label{fig:sample_data}
\end{figure}

Images in the alphanumeric dataset are constructed from letters and digits ($x \in \{ \texttt{a, e, b, d, p, i, j, z, 2, k, x, u}\}$), rotated at various angles (Figure \ref{fig:sample_data}, left). Images are created using the Pillow library in Python \cite{clark2015pillow}, rotated to a defined angle $\theta$ where $\left\{\theta \in \mathbb{Z}|-45<\theta<45 \right\}$, and converted to a binary two-dimensional array. We use an $80/20$ split for training and testing.

The tetramino data (Figure \ref{fig:sample_data}, middle) was designed to emulate a 3D scenario similar to, but easier than, real cryo-ET data. Tetraminos were made of combinations of four identical cubes connected wall-to-wall. They were generated on-the-fly with controllable parameters such as rotation and translation. Other controllable aspects include morphological adjustments (e.g. elongation) designed to test the semantic disentangling power of the method; this allows us to generate any desired shape combinations. The training data was constructed from 6 morphologically different tetraminos, which were used to pre-seed the latent map, while for evaluation we constructed a new, previously unseen class that was morphologically similar to two other classes from the training set. All data was augmented using rotations randomly sampled at $\theta = \{10, 20, ..., 360\}$ degrees in 3 different planes. All images were resized prior to training to $32\times32\times32$ voxels. Test and validation data constituted 10\% and 20\% of the training set respectively.

The protein data (Figure \ref{fig:sample_data}, right) was generated from the list of the 50 most abundant \emph{E. coli} proteins \cite{ecoli}. Chimera \cite{chimera} was used to generate a synthetic 3D density map from each protein on the list. The maps were generated at 10 Å resolution without taking atomic $B$-factors into account. The training data was constructed from $n$ randomly selected classes (protein types), which were used to pre-seed the latent map, and a different, previously unseen class was selected for evaluation. All data was augmented with rotations randomly sampled at $\theta = \{10, 20, ..., 360\}$ degrees in 3 different planes. All images were resized prior to training to $64\times64\times64$ voxels. Test and validation data constituted 10\% and 20\% of the whole dataset respectively.

\subsection{Affinity metrics}
\label{sec:similarity-metrics}

Regularisation of the latent space ensures that semantically similar objects are encoded near each other (i.e. at similar positions in latent space) while dissimilar objects are encoded far apart. To achieve this regularisation we require knowledge of the similarity between all pairs of training examples. The affinity between two structures can be described in accordance with the intrinsic properties of the data. The choice of affinity descriptor (shape, or indeed any other metric) should therefore be made with respect to the property of the data that is intended to drive the desired separation.

\subsubsection{2D alpha-numeric data}

In order to quantify the similarity between two structures in the alphanumeric data, we employ the Smooth Overlap of Atomic Positions (SOAP) descriptor \cite{Bartok2010,Bartok2013}.
This shape descriptor uses a combination of radial basis functions and spherical harmonics. In our model, we treat every pixel that is not background as an ``atom''. The SOAP descriptor places a Gaussian density distribution at the location of each selected pixel, and the SOAP kernel is then defined as the overlap of two such local neighbourhood densities, integrated over all three-dimensional rotations.

\subsubsection{3D tetramino and protein data}

Fourier shell correlation (FSC) \cite{fsc} curves are the standard metric for resolution estimation of cryo-EM maps. The method calculates the similarity of two images as a function of spatial frequency, by calculating the correlation between the Fourier coefficients of the two images in thin spherical shells:
\begin{equation}
FSC(k) = \frac{ \sum{F_1(k) F_2(k)^*} }{ \sqrt{ \sum{|F_1(k)|^2}\sum{|F_2(k)|^2} } }
\label{eq-fsc}
\end{equation}
where $F_1(k)$ and $F_2(k)$ are the (complex) coefficients of the Fourier transforms of the two structures in a spherical shell at radius $k$. In this work, to obtain a single value for use as an affinity metric, we take an average of the FSC across all spatial frequency shells, weighted by the number of Fourier coefficients in each shell, according to the method described by Brown \textit{et al.}~\cite{avgfsc}. This gives a measure of similarity between the two 3D objects with a value between $+1$ and $-1$, the former indicating strong agreement.

\section{Results}
\label{sec:Results}

\begin{figure}[t]
\centering
\includegraphics[width = 1\textwidth]{figures/4-A-latent-alphanumeric.pdf}
\caption{Left: UMAP embedding of the latent space trained with 10,000 randomly rotated samples of alphanumeric data. Right: the confusion matrix built from the predictions for 200 samples of seen (a, b, d, e, i, j, z and k) and unseen (2, x and p) data. For this calculation the hyperparameters were $\gamma = 8$ and $\beta=5$.}
\label{fig:4-A-latent_alphanumeric}
\end{figure}

\subsection{Latent clustering for 2D alpha-numeric data}

Given an appropriate choice of hyperparameters, data classes in the alphanumeric data are clustered together in the latent space, and the clusters are arranged in accordance with their shape. The left panel of Figure \ref{fig:4-A-latent_alphanumeric} illustrates the UMAP embedding of 10,000 rotations from a set of samples from the alphanumeric data. For this calculation the hyperparameters were $\gamma = 8$ and $\beta=5$. The right panel shows the confusion matrix constructed from the predictions for 200 samples of the seen data (a, b, d, e, i, j, z and k) and unseen data (2, x and p). The confusion matrix shows that the model predicts a strong affinity between letters with higher shape similarity, for both the seen and unseen data (for example, the unseen numeral 2 shows the closest match to the letter z from the training set).

\subsection{Latent clustering for 3D tetramino and protein data}

\begin{figure}
\centering
\includegraphics[width = 1\textwidth]{figures/4-A-latent-tetraminos.png}
\caption{Left: t-SNE embedding of 1000 (rotationally augmented) tetraminos from a set of morphologies. Different colours correspond to different morphologies and opacity indicates the train/validation/test set. Right: morphology of different tetramino types positioned in areas corresponding to their latent location. Colours in the left and right panels correspond.
The tetramino unseen during training and validation is indicated with a red arrow.}
\label{fig:4-A-latent-tetraminos}
\end{figure}

Results on the tetramino data are illustrated in the latent embedding in Figure \ref{fig:4-A-latent-tetraminos}, and on the protein data in Figure \ref{fig:4-A-latent-proteins}. In both datasets, the cluster separation between different morphologies was very good. Rotated objects of the same morphology were placed in homogeneous clusters regardless of their orientation. Additionally, in the tetramino data the clusters were arranged so that morphologically similar objects (e.g. E and T, or L and I) were closer together in the latent space than dissimilar objects (e.g. I and E). A similar trend was observed in the protein data, where dimeric (two-subunit) proteins were all arranged close together and ordered by the size of the protein, whereas elongated monomeric (single-subunit) proteins were placed separately, forming a more homogeneous area in the latent space.

\begin{figure}
\centering
\includegraphics[width = 1\textwidth]{figures/4-A-latent-proteins.png}
\caption{Left: t-SNE embedding of 1000 (rotationally augmented) proteins from a set of 8 randomly chosen morphologies. Different colours correspond to different morphologies and opacity indicates the train/validation/test set. Right: morphology of different protein types positioned in areas corresponding to their latent location. Colours in the left and right panels correspond. Proteins unseen during training and validation are indicated with red arrows.}
\label{fig:4-A-latent-proteins}
\end{figure}

When the network was presented with samples previously unseen during training, they formed separate clusters in the embedding positioned close to similar morphologies, which suggests that the learned latent spaces are continuous (see subsection \ref{sec:latent-continuity}) and points to the potential use of the method for the discovery of new morphologies. At the same time, cluster homogeneity was preserved within the unseen clusters, regardless of object orientation. In the case of the tetramino data, we introduced a new morphology during evaluation that was a fusion of two similar morphologies existing in training (EL), which, as expected, clustered between the two similar classes (E and L). In the case of the protein data, the two introduced morphologies were selected randomly from a list containing an exhaustive set of proteins from the \textit{E. coli} cytoplasm. The dimeric protein (blue) was placed near other dimers, in roughly the correct position in the size ordering (increasing left to right). The monomeric protein with an elongated domain was placed overlapping another cluster of proteins with elongated domains. The classification accuracy as measured by KNN on the protein test set was up to 90\% with the optimal set of hyperparameters.

\subsection{Latent and pose interpolations}
\label{sec:latent-continuity}

We performed interpolations across the outputs of the pose channel on alphanumeric data. The results of the interpolations are illustrated in Figure \ref{fig:4-B_pose_angle}. We observed a linear correlation between the pose value and the angle of rotation, which demonstrates that the pose channel does indeed capture information about the pose of the object. We also explored the extent of disentanglement present in the generated latent spaces.
Upon visual inspection we were able to identify morphological semantics across different dimensions (Figure \ref{fig:4-B-latent-continuity}, left). In this example using the protein data, dimension 2 appeared to capture the size, whereas dimension 6 described whether the protein was a dimer (two subunits) or a monomer (single subunit). Other dimensions captured other morphological features, including elongation (dim 1), smoothness (dims 3 and 4), and toroidal geometry (dim 5). Additionally, we performed latent interpolations across all dimensions between four existing (encoded) data points in the latent space (Figure \ref{fig:4-B-latent-continuity}, right). Non-existing (not encoded) points in the latent space generated realistic reconstructions, and there was a smooth transition between different morphologies, including across changes in the number of protein subunits. This shows that the generated latent spaces are continuous and suitable for the discovery of new morphologies unseen during training.

\begin{wrapfigure}[22]{r}{0.5\textwidth}
\begin{center}
\includegraphics[width=0.5\textwidth]{figures/pose_angle_images_interpolation.png}
\caption{Linear relationship between the encoded pose and the associated angle of rotation of the input. The nested panel shows the reconstructions obtained using a given latent encoding ($z$) while varying the value of the pose in 5-degree intervals.}
\label{fig:4-B_pose_angle}
\end{center}
\end{wrapfigure}

\begin{figure}[b]
\begin{center}
\includegraphics[width = 1\textwidth]{figures/4-B-latent-interpolations.png}
\caption{Left: latent interpolations on protein data in a 6-dimensional latent space. Rows correspond to dimensions and columns correspond to interpolated values, where the central column is the encoded input. Right: latent interpolations between 4 existing encoded proteins (corners). Decoded reconstructions in between the 4 corners are taken from the latent space at evenly sampled multi-dimensional strides between the encodings of existing proteins.}
\label{fig:4-B-latent-continuity}
\end{center}
\end{figure}

\subsection{Affinity-based loss component and the influence of $\gamma$}

Figure \ref{fig:4-C_VAE_shapeVAE} illustrates a comparison between affinity-VAE and a standard $\beta$-VAE ($\gamma=0$ and no pose channel) on alphanumeric data, including the latent space representations (top row) as well as the proximity matrix displaying the distances between the centres of the clusters (bottom row). Inspection of the two latent space representations shows that affinity-VAE ($\gamma=10$) is more successful at relating the clusters with higher affinity than the $\beta$-VAE framework ($\gamma=0$). This is confirmed in the proximity matrix, where the distances between different clusters (i.e. the off-diagonal elements) are generally much higher in affinity-VAE than in the $\beta$-VAE, which would be expected to improve classification rates due to reduced contamination between nearby clusters. Secondly, letters with similar morphology (e.g. b and d) are in closer proximity in the latent map in affinity-VAE compared with $\beta$-VAE. Conversely, clusters i and j, which lie close to cluster z in $\beta$-VAE, have been pushed apart in affinity-VAE due to their low affinity.
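Such a proximity matrix can be computed in a few lines; a minimal sketch follows (the normalisation to the 0--100 range shown in the figure is one plausible choice, and the function name is hypothetical):
\begin{verbatim}
import numpy as np

def proximity_matrix(z, labels):
    # pairwise Euclidean distances between cluster centres in latent
    # space, normalised to 0-100 as in the proximity matrices
    classes = np.unique(labels)
    centres = np.stack([z[labels == c].mean(axis=0) for c in classes])
    d = np.linalg.norm(centres[:, None, :] - centres[None, :, :], axis=-1)
    return classes, 100 * d / d.max()
\end{verbatim}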
When $\gamma$ was set to 0 and the pose channel was turned off on the tetramino data, the resulting latent spaces generated by the standard $\beta$-VAE were still organised; however, the organisation was by radial offset from the centre and by average intensity of the object within each cluster (Figure \ref{fig:4-C_VAE_shapeVAE-tetramino}), rather than by morphological affinity (as previously demonstrated in Figure \ref{fig:4-A-latent-proteins}). Since $\beta$-VAE is completely data-agnostic, we have little control over the learned factorised representations and the semantics we want to capture.

\begin{figure}
\begin{center}
\includegraphics[width =0.7\textwidth]{figures/4-C-bVAE_vs_aVAE.pdf}
\caption{Latent space representation for the alphanumeric dataset using a VAE where $\gamma=0$ and the pose channel is off (left column), and affinity-VAE where $\gamma=10$ (right column). Bottom row: latent space proximity matrix displaying the distance between cluster centres. The distances are normalised to lie between 0 and 100 in both matrices. The colour map is used as a guide for the eye to emphasise the largest and smallest numbers displayed on the matrix.}
\label{fig:4-C_VAE_shapeVAE}
\end{center}
\end{figure}

Introducing the affinity matrix and a similarity-based loss component organises the latent space by the property captured through the similarity metric.

\begin{figure}[h]
\begin{center}
\includegraphics[width=1\textwidth]{figures/4-C-gamma=0-tetramino.png}
\caption{Latent embedding on the tetramino data from a hyperparameter-tuned $\beta$-VAE ($\gamma=0$). The latent space is organised radially by centre offset (left) and by average intensity within clusters (right).}
\label{fig:4-C_VAE_shapeVAE-tetramino}
\end{center}
\end{figure}

\begin{figure}
\begin{center}
\includegraphics[width = 1\textwidth]{figures/4-D-VAE_beta_variation.pdf}
\caption{The latent space is presented for $\beta=1,2,3,4$ and $5$ using the original VAE framework ($\gamma=0$). A more effective categorisation of classes is observed for values $\beta>1$.}
\label{fig:4-D-VAE-beta}
\end{center}
\end{figure}

\begin{figure}
\begin{center}
\includegraphics[ width = 1\textwidth]{figures/4-D-gamma_beta5.pdf}
\caption{For $\beta = 5$ we explore the effect of shape-affinity regularisation of the latent space for $\gamma =0, 1,3,4,6,10$ and $20$. The colour code is the same as in Figure \ref{fig:4-D-VAE-beta}.}
\label{fig:4-D-AVAE_gamma}
\end{center}
\end{figure}

\newpage

\subsection{The influence of $\beta$, $\gamma$ and $z$}

\begin{wrapfigure}{r}{0.5\textwidth}
\begin{center}
\includegraphics[width=0.5\textwidth, trim={1.25cm 6.3cm 2cm 4.2cm},clip]{figures/4-F-AN_SOAP_FSC.pdf}
\caption{The left and right panels show a comparison of the latent space representation and the reconstruction of the alphanumeric data for the SOAP and FSC shape descriptors respectively.}
\label{fig:AN_SOAP_FSC}
\end{center}
\end{wrapfigure}

In order to achieve an appropriately disentangled latent representation, the value of $\beta$ should be chosen carefully. Since $\beta=1$ corresponds to the original VAE method, a stronger emphasis on the $D_{KL}$ term ($\beta>1$ in our case) encourages learning of a more disentangled latent representation, as shown in Figure \ref{fig:4-D-VAE-beta}. This is in agreement with the existing literature \cite{Higgins2017, burgess2018}. Within the affinity-VAE framework, the value of $\gamma$ should be chosen in a similar range to that of $\beta$, due to the competing effect of the two terms in Equation \ref{eq-shape-vae-loss}.
Figure \ref{fig:4-D-AVAE_gamma} explores the effect of affinity regularisation on the latent space. By switching on the shape affinity ($\gamma>0$), the clusters are immediately grouped based on their shape similarity, unlike in the $\beta$-VAE framework, where the letters form clusters of their own but are not placed closer together in the latent space when they are similar in shape (Figure \ref{fig:4-D-VAE-beta}). As the emphasis on the affinity increases, the latent space becomes increasingly sparse, pushing the clusters further apart. The suitability of this empty space for specific studies can be explored by investigating its continuity through latent interpolations and through classification of unseen data.

For tetraminos, the optimal range of latent dimensions was between 5 and 9. We also observed a relationship between the number of dimensions and the degrees of freedom in the data (e.g. 3 rotations, translations + semantics). Small values of $\beta$ (but $>1$) gave the best accuracy. Increasing the affinity regularisation $\gamma$ produced better KNN accuracy (saturating around a value of 1000); however, it also increased the reconstruction loss. This is expected, as latent regularisation vs. reconstruction quality is always a trade-off, and in our case latent disentanglement (and therefore accurate classification and recognition) is more important than faithful reconstructions.

\subsection{Similarity function}

To illustrate the influence of the choice of similarity metric used to calculate the affinity matrix on the overall cluster separation, we compared various metrics across different datasets. While the SOAP metric was used for the 2D alphanumeric data and FSC for the 3D data, a comparison between the two descriptors on the alphanumeric data is provided in Figure \ref{fig:AN_SOAP_FSC}. SOAP provided better cluster separation as well as increased certainty of reconstruction on the alphanumeric data.

We also explored the choice of similarity metric on the 3D tetramino data. Figure \ref{fig:4-E-MD_FSC} shows a comparison between no affinity descriptor, the mean difference, and the FSC metric. The mean difference, unlike the averaged FSC, is a real-space descriptor which is not frequency-weighted (Equation \ref{eq-md}):
\begin{equation}
\frac{1}{N}\sum_{i=1}^{N} (x_i - y_i)
\label{eq-md}
\end{equation}
where $x_i$ and $y_i$ are the corresponding voxels of the two volumes being compared, and $N$ is the number of voxels (image size). While the mean difference also improved the cluster separation over a standard $\beta$-VAE, some clusters (e.g. L and I) still remained joined. On the other hand, after employing the FSC as a similarity metric, not only did the cluster separation improve, but the organisation of clusters was also morphologically aligned (e.g. elongated shapes like L, I and T clustered near each other).

\section*{Acknowledgement}

This work was supported by the Medical Research Council [grant number MR/V000403/1] and the Ada Lovelace Centre. This work was also supported by Wave 1 of The UKRI Strategic Priorities Fund under the EPSRC Grant EP/W006022/1, particularly the ``AI for Science'' theme within that grant \& The Alan Turing Institute. The authors would like to thank James Parkhurst and Joel Greer for data simulation support.

\bibliographystyle{unsrtnat}

\section{Introduction}
\label{sec:Introduction}

Lying at the core of machine learning research, representation learning is one of the most important problems in our data-driven world.
Over recent decades, performance on a task has largely been driven by an appropriately encoded (crafted or learned) representation of the input \cite{Kingma2014}. Furthermore, data, and especially scientific or image data, is often highly redundant and, given that it is composed of dependent signals, could be meaningfully described with a compressed representation. Additionally, the interpretability of the factorised representations plays an important role, especially for scientific data. Finally, unlike standard benchmark data in machine learning, real-life or scientific scenarios are often open problems with no existing annotations or ground truth. The development of methods for interpretable, factorised representation of data without supervision is therefore crucial to future scientific discoveries.

Many visual recognition, detection, or classification tasks have benefited from disentangled $\beta$-VAE \cite{Higgins2017} representations in recent years. However, despite significant developments \cite{Bepler2019explicitly, Ziatdinov2021}, learning continuous latent representations that capture translation and rotation along with object semantics remains an open problem to this day. In addition, as such representations were designed to be completely data-agnostic, the approach makes no assumptions about the data, resulting in little control over the learned semantics. In certain domains, however, such as visual tasks, common characteristics, such as objects' affinity or similarity, could aid the process of representation learning \cite{neuro, neuro2}.

We illustrate the problem on an open scientific challenge: the identification of molecules in volumetric cryogenic Electron Tomography (cryo-ET) image data. Cryo-ET is an emerging high-resolution imaging technique that has the potential to revolutionise our understanding of molecular and cellular biology. Although powerful in their own right, structures of isolated and purified proteins convey little to no information on the spatial distribution of, and interactions between, the cellular systems in their native environments. Cryo-ET is uniquely capable of 3D \textit{in situ} imaging, spanning molecular to cellular scales. The main promise of cryo-ET is to deliver such a spatial mapping of the cellular landscape \cite{tomo}, otherwise known as visual proteomics \cite{proteo, proteomics}.

Cryo-ET tomograms are generated by collecting a tilt series of a frozen specimen in a transmission electron microscope (TEM). The individual 2D projection images are aligned and back-projected to generate the 3D tomogram \cite{Turk2020}. This enables resolution of the entire proteome of molecules inside whole cells in 3D~\cite{plates,detectors}, with the further promise of increasing the precision of the method to 20 Å \cite{resolution-revolution, resolution}. However, recent advances in instrumentation have not been matched by equivalent methodological developments for the extraction of contextual information from reconstructed volumes \cite{proteo}. Such analysis routinely includes recognition and classification of particles of the same class, followed by subtomogram averaging within a class to obtain structures with higher local resolution and signal-to-noise ratios \cite{resolution, Castano2019, Wan2016}.
However, particle localisation, recognition and classification are inherently challenging for several reasons, including low signal-to-noise ratios (SNR), molecular crowding, compositional and conformational heterogeneity, the random orientation of molecules and the abundance of different protein types. Many computational strategies have been developed to enable subtomogram target identification. \textbf{Template matching:} This method, most commonly used to localise particles in cryo-electron tomograms, is computationally expensive and relies upon the availability of high-resolution template libraries to match against. Furthermore, it disregards the compositional and conformational heterogeneity of structures in the cellular environment. The development of template-free multi-class classification techniques is a significant step towards visual proteomics, a system-wide analysis of the cell's proteome in 3D. \textbf{CNNs for multi-class classification:} Faster than template matching owing to their ability to identify textures and abstract image primitives, Convolutional Neural Networks (CNNs) are widely used to determine the probability that an image contains a member of a given class \cite{Foo:22,Kavitha2019}. They have also been widely used in the context of cryo-EM and cryo-ET \cite{Bepler2019, Wagner2019, Palovcak5015, deeptomo}. Furthermore, some modified versions of the CNN framework have achieved rotational invariance \cite{Chidester2019,delchevalerie2021}. However, the dependency of CNNs on manually labelled data motivates further development. \textbf{Template-free and label-free unsupervised approaches:} \textit{De novo} visual proteomics in single cells through pattern mining \cite{miningtomo} employs the Fourier Shell Correlation (FSC) \cite{fsc} score to detect similar patterns in training subtomogram sets, and thereby introduced the first template-free and label-free subtomogram classification method. However, the main limitation of this method is its reliance on favourable abundances of a given object for successful classification. The authors of \cite{Rice2022}, on the other hand, use pattern mining to organise a lower-dimensional embedding generated with CNNs. \textbf{Variational AutoEncoders (VAEs):} VAEs and $\beta$-VAEs (which we build upon in this work) are a class of CNNs which effectively reduce the dimensionality of 3D volume data by encoding it into a low-dimensional latent representation. During training, they organise the latent space, encouraging factorised representations of semantic patterns. VAE frameworks have been widely used on cryo-EM \cite{Bepler2019explicitly, Zhong2021} and cryo-ET data. A great example of the latter is the work by Zeng \textit{et al.}, where features of macromolecules and membranes, as well as non-cellular features, were coarsely characterised \cite{vaetomo}. This work is an important validation of the use of VAEs on tomograms. More recently, there has been increased interest in rotationally invariant VAEs, which we also explore in this work. Some examples include rVAE/jrVAE \cite{Ziatdinov2021, ziatdinov_kalinin_2021} and spatial-VAE \cite{Bepler2019explicitly}. Also related to our work are contrastive learning frameworks \cite{contrastive}, which maximize the agreement between augmentations of the same data.
In this work, we introduce the Affinity Variational Autoencoder (affinity-VAE), a deep neural network for automatic clustering and classification of multidimensional objects based on their similarity -- in our case, their morphological similarity. We focus on affinity-based latent-space regularisation in addition to a standard $\beta$-VAE loss function. We introduce a $\gamma$-parameterised, affinity-based loss component computed from an automatically generated affinity matrix, along with a pose channel that learns the pose of the sample in an unsupervised manner during training. The performance of this method is first investigated on a 2D alphanumeric dataset, and the application of the method is then demonstrated on a simulated 3D cryo-ET example. The preliminary success of this novel approach in comparison with the existing $\beta$-VAE framework demonstrates its potential for application to experimental cryo-ET data.
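To make the structure of the objective concrete, the following is a minimal PyTorch sketch of how a $\gamma$-parameterised affinity term could sit alongside the standard $\beta$-VAE reconstruction and KL terms. This is an illustrative sketch only, not a released implementation: the helper names (\texttt{affinity\_matrix}, \texttt{affinity\_vae\_loss}), the use of the mean-difference metric of Equation \ref{eq-md}, and the cosine-similarity form of the affinity penalty are assumptions made for exposition, and the affinities are assumed to be pre-normalised to the range of the latent similarities; the pose channel is omitted for brevity.
\begin{verbatim}
import torch
import torch.nn.functional as F

def mean_difference(x, y):
    # Real-space similarity metric of Eq. (eq-md): mean voxel-wise difference.
    return (x - y).mean()

def affinity_matrix(volumes):
    # Pairwise affinities for a batch of volumes (hypothetical helper);
    # values are assumed to be normalised before use in the loss.
    n = len(volumes)
    a = torch.zeros(n, n)
    for i in range(n):
        for j in range(n):
            a[i, j] = mean_difference(volumes[i], volumes[j])
    return a

def affinity_vae_loss(recon, x, mu, logvar, z, affinity,
                      beta=2.0, gamma=1000.0):
    # beta-VAE terms: reconstruction error plus beta-weighted Gaussian KL.
    recon_loss = F.mse_loss(recon, x, reduction="sum")
    kld = -0.5 * torch.sum(1 + logvar - mu.pow(2) - logvar.exp())
    # Illustrative affinity term: push pairwise latent cosine similarities
    # towards the precomputed affinity matrix of the batch.
    z_sim = F.cosine_similarity(z.unsqueeze(1), z.unsqueeze(0), dim=-1)
    affinity_loss = F.mse_loss(z_sim, affinity, reduction="sum")
    return recon_loss + beta * kld + gamma * affinity_loss
\end{verbatim}
The default values mirror the ranges reported above: a small $\beta>1$ and a $\gamma$ of order 1000, beyond which the KNN accuracy was observed to saturate.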
\section{Introduction} \vspace{-1mm} \label{sec:Introd} Recently, federated learning (FL) was introduced in \cite{mcmahan17AISTATS} as an important step towards bringing machine learning closer to everyone. The breakthrough idea of FL is that ``no raw data is sent to third-party companies during the learning process'', which means people can safely participate in FL processes without worrying that their personal data will be exploited. A wide range of applications, such as healthcare and self-driving cars, to name a few \cite{chen20IS,niknam20CM}, can benefit from FL. In FL, the learning is implemented jointly by many users (UEs). First, a local learning model is trained at each UE using local (private) training data and sent to the central server. A global update is then computed at the central server using the local learning models transmitted from all UEs, and finally sent back to the UEs for local training updates. This learning process is iterated until a certain learning accuracy level is reached. To deploy the above iterative FL process over wireless networks, a key challenge is keeping the network energy consumption as low as possible. This is important both due to the battery limitations of the UEs and due to concerns about the ICT carbon footprint. It is thus critical to design an energy-efficient wireless network to support FL. There are several studies of energy-efficient deployments of FL over wireless networks; see, e.g., \cite{yang21TWC,zeng20ICC,hu20WCSP} and references therein. In these works, the authors proposed designs which minimize the energy consumption at the UEs while guaranteeing the learning performance (i.e., the test accuracy) by jointly optimizing learning and communication parameters. However, the energy consumption of the transmission from the central server to the UEs was not taken into account. Also, these works proposed to use frequency-division multiple access (FDMA) and time-division multiple access (TDMA) systems to support FL. This might not be a good choice because FDMA and TDMA systems offer low UE data rates, and hence, yield a very high energy consumption, especially when the number of UEs is large. In addition, all these works only considered the case of a single FL group. On the other hand, it is anticipated that future wireless systems will need to serve multiple groups of UEs that participate in different FL processes. These networks need to simultaneously provide high data rates and high communication reliability to all UEs in all FL groups. Designing such networks is challenging and calls for a suitable, new wireless communication framework. To the best of our knowledge, no prior work has studied energy-efficient wireless networks supporting multiple FL groups. The contributions of this paper are summarized as follows: \vspace{-2mm} \begin{itemize}[noitemsep,nolistsep] \item To support multiple FL groups over wireless networks, we propose using massive MIMO (mMIMO) and letting multiple iterations (each for one FL process of a group) be executed in one large-scale coherence time\footnote{The large-scale coherence time is defined as the time interval during which the large-scale fading coefficients remain approximately constant.}. Thanks to the high array gain and multiplexing gain, mMIMO can simultaneously offer very high quality of service to all UEs in an area of interest \cite{ngo16}, and hence, it is expected to guarantee a stable operation of each iteration (and hence of the whole FL process).
\item We introduce two specific transmission protocols in which the steps within one FL iteration, i.e., the downlink transmission, the computation at the UEs, and the uplink transmission, are either asynchronous or synchronous. These schemes differ from the scheme in \cite{vu20TWC}, which focuses on minimizing the training time of FL. Here, we use a multicast protocol on the downlink and conventional multiuser transmission on the uplink. Both downlink and uplink use zero-forcing (ZF) processing. \item We develop an algorithm that allocates the transmit powers and processing frequencies to minimize the total energy consumption in each FL iteration, under a constraint on the total time taken by one FL iteration. \item Numerical results show that our proposed schemes significantly reduce the energy consumption compared to baseline schemes. They also confirm that the asynchronous scheme outperforms the synchronous scheme for supporting multiple FL groups, at the cost of a higher complexity. \end{itemize} \vspace{-2mm} \section{Proposed Schemes and System Model} \label{sec:SystModel} \vspace{-1mm} \subsection{Multiple Federated Learning Framework} \label{sec:FLframework} \vspace{-1mm} We consider a multiple-FL network which includes multiple FL groups with different learning purposes. Each UE is assumed to participate in only one FL group. The FL frameworks of the groups can differ in their loss functions but share the same following four steps in each iteration \cite{tran19INFOCOM,mcmahan17AISTATS}. \begin{enumerate}[label={(S\arabic*)}] \item A central server sends a global update to the UEs. \item Each UE solves its local learning problem using its local data and then computes its local update. \item Each UE sends its computed local update to the central server. \item The central server computes the global update by aggregating the received local updates from all UEs. \end{enumerate} The above process is repeated iteratively until a certain learning accuracy level is achieved. \vspace{-2mm} \subsection{Proposed Schemes to Serve Multiple FL Groups} \vspace{-1mm} To support the multiple FL groups discussed in Section~\ref{sec:FLframework}, we propose to use mMIMO technology, i.e., Steps (S1) and (S3) of each FL iteration are executed via the downlink and the uplink of a mMIMO system, respectively. Our proposed mMIMO-based multiple-FL system includes one $M$-antenna base station (BS) simultaneously serving $N$ FL groups in the same frequency bands under time-division duplexing operation. We assume that the BS acts as the central server. Each FL iteration of each FL group is assumed to be executed within one large-scale coherence time, which is reasonable because the execution time of one FL iteration is smaller than the large-scale coherence time in many scenarios \cite{vu20TWC}. With this assumption, we then propose two specific transmission schemes to support the learning of the $N$ FL groups in each FL iteration, as shown in Figs.~\ref{fig:time1}(a) and~(b), respectively. \vspace{-1mm} \begin{itemize} \item[(a)]\textbf{Asynchronous scheme}: All groups start their FL iterations at the same time, when the BS switches to a downlink mode. During this mode, the BS simultaneously sends the global updates to all UEs in all groups (corresponding to Step~(S1)). Each UE starts its local computation as soon as it successfully receives the global training update (corresponding to Step~(S2)).
Then, the BS switches to an uplink mode immediately after the reception of the global training update is completed at all the UEs. During this mode, the UEs send their computed local updates to the BS (corresponding to Step~(S3)) once they finish the local computation. \item[(b)]\textbf{Synchronous scheme}: This scheme is similar to the asynchronous scheme except for the synchronization of Steps (S1)--(S3) among all the UEs. Each UE starts each of those steps together with the other UEs and waits for all of them to finish before moving on to the next step. \end{itemize} The time of one FL iteration under both schemes is constrained by a given period of time. Note that in the asynchronous scheme, the times of Steps (S1)--(S3) are optimally allocated (using the proposed algorithm in the next section) to ensure that all the UEs finish one FL iteration and start a new FL iteration at the same time. \vspace{-2mm} \subsection{Massive-MIMO-based Multiple-FL System Model} \vspace{-1mm} \begin{figure}[t!] \centering \includegraphics[width=0.42\textwidth]{FLiterationGLOBECOM.eps} \vspace{-3mm} \caption{Illustration of FL iterations over the considered mMIMO network with two groups $n,n'$ and two UEs per group.} \label{fig:time1} \end{figure} The two schemes share the following common system model. In each large-scale coherence time, the global and local updates in Steps (S1) and (S3) are transmitted in one or multiple small-scale coherence times depending on their sizes. Each coherence block in Step (S1) (or (S3)) involves a channel estimation phase and a downlink (or uplink) payload data phase. Suppose that at the considered time, there are $N$ iterations of $N$ FL groups being served. Let $\NN\triangleq\{1,\dots,N\}$ and $\K_n$ be the set of groups and the set of indices of the UEs in group $n$, respectively. There are $K_n$ single-antenna UEs in each group $n$. The details of each step are presented in the following. \subsubsection{Step (S1)} The BS sends the global updates to all UEs of all groups. Since the global updates intended for all UEs in a given group are the same, the transmission in this step corresponds to multi-group multicasting. Thus, we follow the scheme in \cite{sadeghi18TWC}, assuming orthogonal pilots and ZF processing. \textbf{Uplink channel estimation}: In each coherence block of length $\tau_c$, each UE sends its pilot of length $\tau_{d,p}$ to the BS \cite{sadeghi18TWC}. We assume that the pilots of all the UEs are pairwise orthogonal, which requires $\tau_{d,p}\geq K_{total}\triangleq \sum_{n\in\NN}K_n$. Denote by $\g_{n_k} \!=\! (\beta_{n_k})^{1/2}\tilde{\g}_{n_k}$ the channel vector from UE $k$ of group $n$ to the BS, where $\beta_{n_k}$ and $\tilde{\g}_{n_k}$ are the large-scale fading coefficient and the small-scale fading coefficient vector, respectively. At the BS, $\g_{n_k}$ is estimated from the received pilots using the minimum mean-square error (MMSE) estimation technique. The MMSE estimate $\hat{\g}_{n_k}$ of $\g_{n_k}$ is distributed according to $\CN(\pmb{0},\hat{\sigma}_{n_k}^2\pmb{I}_M)$, where $\hat{\sigma}_{n_k}^2 = \frac{\tau_{d,p} \rho_{p} \beta_{n_k}^2 }{ \tau_{d,p} \rho_{p} \beta_{n_k} +1 }$ \cite{sadeghi18TWC}. We also denote by $\hat{\G}\triangleq [\hat{\G}_1,\dots,\hat{\G}_N]$ the matrix stacking the channel estimates of all the UEs, where $\hat{\G}_n\triangleq [\hat{\g}_{n_1},\dots,\hat{\g}_{n_{K_n}}]$.
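As a quick numerical illustration of the estimate quality, the short Python sketch below evaluates the closed-form variance $\hat{\sigma}_{n_k}^2$ given above, together with the estimation-error variance $\beta_{n_k}-\hat{\sigma}_{n_k}^2$ that appears later in the SINR expressions; the function name and interface are ours, not part of any released code.
\begin{verbatim}
def mmse_estimate_variance(tau_p, rho_p, beta):
    """Per-antenna variance of the MMSE channel estimate,
    sigma_hat^2 = tau_p*rho_p*beta^2 / (tau_p*rho_p*beta + 1),
    and the residual error variance beta - sigma_hat^2.
    tau_p: pilot length, rho_p: normalized pilot power,
    beta: large-scale fading coefficient."""
    sigma_hat2 = tau_p * rho_p * beta**2 / (tau_p * rho_p * beta + 1.0)
    return sigma_hat2, beta - sigma_hat2
\end{verbatim}
Note that longer pilots or higher pilot power drive $\hat{\sigma}_{n_k}^2$ towards $\beta_{n_k}$, i.e., towards perfect channel knowledge.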
\textbf{Downlink payload data transmission}: The BS encodes the global training update intended for the UEs of group $n$ into a common symbol $s_{d,n}$, where $\EEE\{|s_{d,n}|^2\}=1$, and applies the ZF precoding vector $\uu_{n_k}\! =\! \sqrt{\eta_{n_k}\hat{\sigma}_{n_k}^2(M\!-\!K_{total})} \hat{\G}(\hat{\G}^H\hat{\G})^{-1}\e_{n_k,K_{total}} $ to precode the symbol, where $\eta_{n_k}$ is a power control coefficient, $\e_{n_k,K_{total}}$ is the $n_k$-th column of $\pmb{I}_{K_{total}}$, and $M \geq K_{total}$ is required. The transmitted signal at the BS is thus given as $\x_{d}\!\!=\!\! \sqrt{\rho_{d}}\sum_{n'\in\NN}\sum_{\ell\in \K_{n'}}\uu_{n'_\ell} s_{d,n'}$, where $\rho_{d}$ is the maximum normalized transmit power at the BS. The transmitted power at the BS is required to meet the average normalized power constraint, i.e., $\EEE\{\|\x_{d}\|^2\}\leq \rho_d$, which can be expressed through the following constraint: \begin{align} \label{powerdupperbound} \sum_{n\in\NN}\sum_{k\in\K_n}\eta_{n_k} \leq 1. \end{align} The achievable rate $R_{d,n_k}(\ETA)$ at UE $k$ of group $n$ is given as \cite[(10)]{sadeghi18TWC} \vspace{-0mm} \begin{align} \label{RdZF} R_{d,n_k}(\ETA) &= \frac{\tau_c - \tau_{d,p}}{\tau_c}B\log_2 \big( 1 + \text{SINR}_{d,n_k}(\ETA)\big), \end{align} \vspace{-3mm} \noindent where $\ETA\triangleq\{\eta_{n_k}\}_{n\in\NN,k\in\K_n}$, $B$ is the bandwidth, and $\text{SINR}_{d,n_k}(\ETA) = \frac{(M-K_{total})\rho_d \hat{\sigma}_{n_k}^2\eta_{n_k}} {\rho_d (\beta_{n_k} - \hat{\sigma}_{n_k}^2) \sum_{n'\in\NN}\sum_{n'_\ell\in\K_{n'}} \eta_{n'_\ell} +1} $ is the effective downlink SINR\footnote{Although all the UEs of one group are sent the same encoded symbol, their achievable rates can be different (and hence, their transmissions do not finish simultaneously). This is feasible when using a code that sends a number of parity bits matched to the UE with the smallest SINR. Here, each UE stops listening as soon as it successfully decodes its message. Thus, the UEs with higher SINRs can stop listening earlier than those with smaller SINRs.}. \textbf{Downlink delay}: Let $S_{d,n}$ (bits) be the data size of the global training update of group $n$. The transmission time from the BS to UE $k$ of group $n$ is given by \begin{align*} t_{d,n_k}(\ETA) = \frac{S_{d,n}}{R_{d,n_k}(\ETA)}. \end{align*} \textbf{Energy consumption for the downlink transmission}: Denote by $N_0$ the noise power. The energy consumption for transmitting the global update to UE $k$ of group $n$ is the product of the transmit power $\rho_d N_0 \eta_{n_k}$ and the delay of the downlink transmission to this UE. Therefore, the total energy consumption at the BS for all groups is \vspace{-0mm} \begin{align} \nonumber \!E_{d}(\ETA) \!\!=\!\!\! \sum_{n\in\NN}\!\sum_{k\in\K_n}\!\! \rho_d N_0 \eta_{n_k} \! t_{d,n_k}(\ETA) \!\!=\!\!\! \sum_{n\in\NN}\!\sum_{k\in\K_n}\!\! \rho_d N_0 \eta_{n_k}\! \frac{S_{d,n}}{R_{d,n_k}\!(\ETA)}. \end{align} \vspace{-0mm} \subsubsection{Step (S2)} After receiving the global update, each UE executes $L$ local computing rounds over its data set to compute its local update. \textbf{Local computation}: Let $c_{n_k}$ (cycles/sample) be the number of processing cycles for UE $k$ to process one data sample \cite{tran19INFOCOM}. Denote by $D_{n}$ (samples) and $f_{n_k}$ (cycles/s) the size of the local data set and the processing frequency of UE $k$ of group $n$, respectively.
The computation time at UE $k$ of group $n$ is then given by \cite{vu20TWC,tran19INFOCOM} \vspace{-0mm} \begin{align*} t_{C,n_k}(f_{n_k}) = \frac{LD_nc_{n_k}}{f_{n_k}}. \end{align*} \textbf{Energy consumption for local computation at the UEs}: The energy consumption at UE $k$ of group $n$ for computing its local training update is given as \cite{tran19INFOCOM,vu20TWC} \begin{align*} E_{C,n_k}(f_{n_k}) = L\frac{\alpha}{2}c_{n_k}D_nf_{n_k}^2, \end{align*} where $\frac{\alpha}{2}$ is the effective capacitance coefficient of the UEs' computing chipset. \vspace{-1mm} \subsubsection{Step (S3)} In this step, the UEs' local updates are transmitted to the BS. \textbf{Uplink channel estimation}: In each coherence block, each UE sends its pilot of length $\tau_{u,p}$ to the BS. We assume that the pilots of all the UEs are pairwise orthogonal, which requires $\tau_{u,p} \geq K_{total}$. The MMSE estimate $\bar{\g}_{n_k}$ of $\g_{n_k}$ is distributed according to $\CN(\pmb{0},\bar{\sigma}_{n_k}^2\pmb{I}_M)$, where $\bar{\sigma}_{n_k}^2 = \frac{\tau_{u,p} \rho_{p} \beta_{n_k}^2}{\tau_{u,p} \rho_{p}\beta_{n_k}+1}$ \cite{sadeghi18TWC}. \textbf{Uplink payload data transmission}: After computing the local update, UE $k$ of group $n$ encodes this update into symbols denoted by $s_{u,n_k}$, where $\EEE\{|s_{u,n_k}|^2\}=1$, and sends the baseband signal $x_{u,{n_k}}\!=\!\sqrt{\rho_{u}\zeta_{n_k}}s_{u,{n_k}}$ to the BS, where $\rho_{u}$ is the maximum normalized transmit power at each UE and $\zeta_{n_k}$ is a power control coefficient. This signal is subject to the average transmit power constraint, i.e., $\EEE\left\{|x_{u,n_k}|^2\right\}\leq \rho_u$, which can be expressed as the per-UE constraint \begin{align}\label{poweruupperbound} \zeta_{n_k}\leq 1,\forall n\in\NN,n_k\in \K_n. \end{align} After receiving the data from all UEs, the BS uses the estimated channels and the ZF scheme to detect the UEs' message symbols. The ZF detector requires $M \geq K_{total}$. The achievable rate (bps) of UE $k$ in group $n$ is given by \cite[(3.28)]{ngo16} \begin{align} \label{RuZF} R_{u,n_k}(\ZETA) &= \frac{\tau_c-\tau_{u,p}}{\tau_c}B \log_2 \big( 1 + \text{SINR}_{u,n_k}(\ZETA) \big), \end{align} \vspace{-2mm} \noindent where $\text{SINR}_{u,n_k} (\ZETA) \triangleq \frac{(M-K_{total}) \rho_u \bar{\sigma}_{n_k}^2 \zeta_{n_k}} {\rho_u \sum_{n' \in \NN} \sum_{n'_\ell \in \K_{n'}} (\beta_{n'_\ell} - \bar{\sigma}_{n'_\ell}^2) \zeta_{n'_\ell} + 1} $ is the effective uplink SINR. \textbf{Uplink delay}: Denote by $S_{u,n}$ (bits) the data size of the local training update of group $n$. The transmission time from UE $k$ of group $n$ to the BS is given by \begin{align*} t_{u,n_k}(\ZETA) = \frac{S_{u,n}}{R_{u,n_k}(\ZETA)}. \end{align*} \textbf{Energy consumption for the uplink transmission}: The energy consumption for the uplink transmission at a UE is the product of the uplink transmit power and the transmission time. In particular, the energy consumption at UE $k$ of group $n$ is given as \cite{tran19INFOCOM,vu20TWC} \vspace{-1mm} \begin{align*} E_{u,n_k}(\ZETA) &= \rho_u N_0\zeta_{n_k} t_{u,n_k}(\ZETA) = \frac{\rho_u N_0\zeta_{n_k}S_{u,n}} {R_{u,n_k}(\ZETA)}. \end{align*} \vspace{-1mm} \begin{remark} We obtained the achievable downlink and uplink rates in \eqref{RdZF} and \eqref{RuZF}, respectively, for the case in which all UEs participate in the transmission.
However, as shown by the two proposed schemes in Fig.~\ref{fig:time1}, at a particular time some UEs may already have finished their transmission and thus do not participate in the downlink or uplink transmission together with the other UEs. This does not cause any issue for our design: with fewer simultaneously active UEs the interference can only decrease, so the rates \eqref{RdZF} and \eqref{RuZF} remain achievable in this case. \end{remark} \subsubsection{Step (S4)} After receiving all the local updates, the BS computes the global update. Since the central server is computationally far more powerful than the UEs, the delay of computing the global update is negligible. \vspace{-1mm} \section{Problem Formulation and Solution} \label{sec:PF} \vspace{-1mm} In practice, different groups are likely to start their FL processes at different times and to require different numbers of FL iterations depending on their learning targets. Therefore, minimizing the energy consumption of the whole FL processes of all groups at the same time is tremendously difficult due to the complicated synchronization among all groups. Instead, we aim at minimizing the total energy consumption in one FL iteration for all groups, which in turn reduces the total energy consumption of the whole FL processes of all groups. \vspace{-2mm} \subsection{Asynchronous Scheme} \vspace{-1mm} The problem of minimizing the total energy consumption of one FL iteration for all groups is formulated as follows. \begin{subequations}\label{Pmain} \begin{align} \label{CFPmulti} \!\!\!\!\!\underset{\ETA,\f,\ZETA}{\min} & E_{total} \triangleq\! E_{d}(\ETA)\! +\!\!\! \sum_{n\in\NN}\!\sum_{n_k\in\K_n} (E_{C,n_k}(f_{n_k}) \!+\! E_{u,n_k}(\ZETA)) \\ \nonumber \!\!\!\!\!\mathrm{s.t.}\,\, & \eqref{powerdupperbound}, \eqref{poweruupperbound} \\ \label{powerlowerbound} & 0\leq \eta_{n_k}, 0\leq \zeta_{n_k}, \forall n,n_k \\ \label{fbound} & 0 \leq f_{n_k} \leq f_{\max}, \forall n,n_k \\ \label{QoSbound} & t_{d,n_k}(\ETA) + t_{C,n_k}(f_{n_k}) + t_{u,n_k}(\ZETA) \leq t_{\text{QoS}}, \forall n,n_k \\ \label{syncbound} & \max_{n\in\NN}\max_{n_k\in\K_n}\!\! t_{d,n_k} \!\leq\! \min_{n\in\NN}\min_{n_k\in\K_n} \big(t_{d,n_k} \!+\! t_{C,n_k}\big), \end{align} \end{subequations} \noindent where $\f\triangleq\{f_{n_k}\}_{n\in\NN,n_k\in\K_n} $. Here, \eqref{QoSbound} guarantees that the execution time of one FL iteration stays below a threshold $t_{\text{QoS}}$ to maintain the quality of service, and \eqref{syncbound} is introduced to ensure that all the UEs send their local updates during the uplink mode of the BS. The right-hand side of \eqref{syncbound} models the first UE to finish its downlink transmission and local computation, while the left-hand side represents the slowest UE to finish its downlink transmission, as seen in Fig.~\ref{fig:time1}(a). To solve \eqref{Pmain}, we rewrite it in the following more tractable epigraph form \begin{subequations}\label{Pmainepi} \begin{align} \label{CF:shortP:epi} \nonumber \!\!\!\!\!\!\underset{\x}{\min} \,\, & \widetilde{E}_{total} \triangleq \sum_{n\in\NN}\sum_{n_k\in\K_n} \rho_d N_0 S_{d,n} \omega_{n_k} \\ & + \sum_{n\in\NN}\!\sum_{n_k\in\K_n} \!\!\!\Big( L\frac{\alpha}{2}c_{n_k}D_nf_{n_k}^2 \!\!+\!\!
\rho_u N_0\theta_{n_k}S_{u,n} \Big) \\ \mathrm{s.t.}\,\, \nonumber & \eqref{powerdupperbound}, \eqref{poweruupperbound}, \eqref{powerlowerbound}, \eqref{fbound} \\ \label{Rdlowerbound} & r_{d,n_k}\leq R_{d,n_k} (\ETA), \forall n,n_k \\ \label{Rulowerbound} & r_{u,n_k}\leq R_{u,n_k} (\ZETA), \forall n,n_k \\ \label{rlowerbound} & 0 \leq r_{d,n_k}, 0 \leq r_{u,n_k}, \forall n,n_k \\ \label{tdgroupbound} & \eta_{n_k}\leq r_{d,n_k} \omega_{n_k}, \forall n, n_k \\ \label{tubound} & \zeta_{n_k}\leq r_{u,n_k} \theta_{n_k}, \forall n,n_k \\ \label{tbound} & \frac{S_{d,n}}{r_{d,n_k}} + \frac{LD_nc_{n_k}}{f_{n_k}} + \frac{S_{u,n}}{r_{u,n_k}} \leq t_{\text{QoS}}, \forall n, n_k \\ \label{syncbound2a} & S_{d,n}\leq r_{d,n_k} q, \forall n,n_k \\ \label{syncbound2b} & q \leq q_{1,n_k} + q_{2,n_k}, \forall n,n_k \\ \label{syncbound2c} & 0 \leq q_{1,n_k}, 0 \leq q_{2,n_k}, \forall n,n_k \\ \label{syncbound2d} & q_{1,n_k} r_{d,n_k} \leq S_{d,n} , \forall n,n_k \\ \label{syncbound2e} & q_{2,n_k} f_{n_k} \leq LD_nc_{n_k} , \forall n,n_k, \end{align} \end{subequations} where $\x \triangleq \{\ETA,\f,\ZETA,\rr_d,\rr_u,\OOmega,\THeta, q, \q_1, \q_2\}$; here $\rr_d,\rr_u,\OOmega,\THeta,q,\q_1,\q_2$ are additional variables with $\rr_d=\{r_{d,n_k}\}$, $\rr_u=\{r_{u,n_k}\}$, $\OOmega=\{\omega_{n_k}\}$, $\THeta=\{\theta_{n_k}\}$, $\q_1=\{q_{1,n_k}\}$, $\q_2=\{q_{2,n_k}\}$, $\forall n\in\NN,n_k\in\K_n$. Here, \eqref{syncbound2a}--\eqref{syncbound2e} come from \eqref{syncbound}. If we let $\vv \triangleq \{v_{n_k}\}$ and $\uu\triangleq \{u_{n_k}\}, \forall n\in\NN,n_k\in\K_n,$ with $v_{n_k}\triangleq \eta_{n_k}^{1/2}, \,\, u_{n_k} \triangleq \zeta_{n_k}^{1/2}, \forall n,n_k,$ then problem \eqref{Pmainepi} is equivalent to \begin{subequations}\label{Pmainepiequi} \begin{align} \label{CF:shortP:epiequi} \!\!\!\!\!\!\underset{\widetilde{\x}}{\min} \,\, & \widetilde{E}_{total} \\ \mathrm{s.t.}\,\, \nonumber & \eqref{fbound}, \eqref{rlowerbound}, \eqref{tbound}-\eqref{syncbound2c} \\ \label{Rdlowerbound2} & r_{d,n_k}\leq R_{d,n_k} (\vv), \forall n,n_k \\ \label{Rulowerbound2} & r_{u,n_k}\leq R_{u,n_k} (\uu), \forall n,n_k \\ \label{tdgroupbound2} & v_{n_k}^2 - r_{d,n_k}\omega_{n_k} \leq 0, \forall n, n_k \\ \label{tubound2} & u_{n_k}^2 - r_{u,n_k} \theta_{n_k} \leq 0, \forall n,n_k \\ \label{powerdupperbound2} & \sum_{n\in\NN} \sum_{k\in\K_n} v_{n_k}^2 \leq 1 \\ \label{poweruupperbound2} & u_{n_k}^2\leq 1, \forall n,n_k \\ \label{powerlowerbound2} & 0\leq v_{n_k}, 0\leq u_{n_k},\forall n,n_k, \\ \label{syncbound2e2} & q_{1,n_k} r_{d,n_k} - S_{d,n}\leq 0, \forall n,n_k \\ \label{syncbound2f2} & q_{2,n_k} f_{n_k} - LD_{n}c_{n_k}\leq 0, \forall n,n_k, \end{align} \end{subequations} where $\widetilde{\x}\triangleq\{\x,\vv,\uu\}\setminus\{\ETA,\ZETA\}$. Here, \eqref{tdgroupbound2} and \eqref{tubound2} follow from \eqref{tdgroupbound} and \eqref{tubound}, while \eqref{powerdupperbound2}--\eqref{powerlowerbound2} follow from \eqref{powerdupperbound}, \eqref{poweruupperbound}, and \eqref{powerlowerbound}. Problem \eqref{Pmainepiequi} is still difficult to solve due to the nonconvex constraints \eqref{Rdlowerbound2}, \eqref{Rulowerbound2}, \eqref{tdgroupbound2}, \eqref{tubound2}, \eqref{syncbound2e2}, and \eqref{syncbound2f2}. The overall successive-approximation strategy is sketched below; the convex surrogates it requires are derived next.
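The skeleton below is our illustrative Python/CVXPY sketch of this strategy, not the authors' code: the construction of the convex surrogate around the current point (i.e., the concave rate bounds and convex upper bounds derived next) is abstracted into a user-supplied callback, and the stopping rule is a simple objective-decrease test.
\begin{verbatim}
import cvxpy as cp  # the callback is expected to build cvxpy Problems

def sca_solve(build_convex_subproblem, x0, eps=1e-3, max_iter=50):
    """Generic successive convex approximation loop (sketch of the
    iteration underlying Algorithm 1). build_convex_subproblem(x)
    returns a convex cvxpy Problem built around the point x, together
    with its decision variable(s)."""
    x, prev_obj = x0, float("inf")
    for _ in range(max_iter):
        prob, var = build_convex_subproblem(x)  # convex surrogate around x
        prob.solve()
        if prev_obj - prob.value < eps:         # objective has converged
            break
        x, prev_obj = var.value, prob.value     # next linearisation point
    return x
\end{verbatim}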
To deal with these constraints, we first observe that the rates $R_{d,n_k}(\vv)$ and $R_{u,n_k}(\uu)$ in the nonconvex constraints \eqref{Rdlowerbound2} and \eqref{Rulowerbound2} have the following concave lower bounds \cite[(20)]{nguyen17TCOM}: \begin{align}\label{Rdconcave} \nonumber &\widetilde{R}_{d,n_k}(\vv) \triangleq \frac{\tau_c-\tau_{d,p}}{\tau_c\log 2}B\Big[\log \Big(1 + \frac{(\Upsilon_{n_k}^{(i)})^2} {\Pi_{n_k}^{(i)}} \Big) -\frac{(\Upsilon_{n_k}^{(i)})^2}{\Pi_{n_k}^{(i)}} \\ &+2\frac{\Upsilon_{n_k}^{(i)}\Upsilon_{n_k}}{\Pi_{n_k}^{(i)}} -\frac{(\Upsilon_{n_k}^{(i)})^2(\Upsilon_{n_k}^2+\Pi_{n_k})}{\Pi_{n_k}^{(i)}((\Upsilon_{n_k}^{(i)})^2+\Pi_{n_k}^{(i)})}\Big] \leq R_{d,n_k}(\vv), \\ \nonumber &\widetilde{R}_{u,n_k}(\uu) \triangleq \frac{\tau_c-\tau_{u,p}}{\tau_c\log 2} B \Big[ \log\Big(1+\frac{(\Psi_{n_k}^{(i)})^2}{\Xi_{n_k}^{(i)}}\Big) -\frac{(\Psi_{n_k}^{(i)})^2}{\Xi_{n_k}^{(i)}} \\ &+ 2\frac{\Psi_{n_k}^{(i)}\Psi_{n_k}}{\Xi_{n_k}^{(i)}} - \frac{(\Psi_{n_k}^{(i)})^2(\Psi_{n_k}^2+\Xi_{n_k})}{\Xi_{n_k}^{(i)}((\Psi_{n_k}^{(i)})^2+\Xi_{n_k}^{(i)})} \Big] \leq R_{u,n_k}(\uu), \end{align} \noindent where $\Pi_{n_k}(\vv) = \rho_d (\beta_{n_k} - \hat{\sigma}_{n_k}^2) \sum_{n'\in\NN}\sum_{n'_\ell\in\K_{n'}} v_{n'_\ell}^2 +1$, $\Upsilon_{n_k} (v_{n_k}) = \sqrt{(M-K_{total})\rho_d} \hat{\sigma}_{n_k}v_{n_k}$, $\Xi_{n_k}(\uu) = \rho_u \sum_{n' \in \NN} \sum_{n'_\ell \in \K_{n'}} (\beta_{n'_\ell} - \bar{\sigma}_{n'_\ell}^2) u_{n'_\ell}^2 + 1$, and $\Psi_{n_k} = \sqrt{(M-K_{total}) \rho_u} \bar{\sigma}_{n_k} u_{n_k}$. Next, the functions on the left-hand sides of constraints \eqref{tdgroupbound2}, \eqref{tubound2}, \eqref{syncbound2e2}, and \eqref{syncbound2f2} have the following convex upper bounds \cite{vu20TWC}: \vspace{-0mm} \begin{align} \nonumber &\!\!\!\!v_{n_k}^2\! -\! r_{d,n_k}\omega_{n_k} \!\leq\! h_{1,n_k}(v_{n_k},r_{d,n_k},\omega_{n_k}) \triangleq 0.25 \big[4v_{n_k}^2 \!+\!(r_{d,n_k} \!- \\ &\!\!\!\! \omega_{n_k})^2 - 2(r_{d,n_k}^{(i)}\!+\!\omega_{n_k}^{(i)})(r_{d,n_k}\! +\! \omega_{n_k})\! + \!(r_{d,n_k}^{(i)}\!+\!\omega_{n_k}^{(i)})^2\big], \\ \nonumber &\!\!\!\!u_{n_k}^2 \!-\! r_{u,n_k}\theta_{n_k} \!\leq\! h_{2,n_k}(u_{n_k},r_{u,n_k},\theta_{n_k})\triangleq 0.25 \big[4u_{n_k}^2 \!+\! (r_{u,n_k}\!- \\ &\!\!\!\! \theta_{n_k})^2 - 2(r_{u,n_k}^{(i)}\!+\!\theta_{n_k}^{(i)})(r_{u,n_k} \!+\! \theta_{n_k}) \!+\! (r_{u,n_k}^{(i)}\!+\!\theta_{n_k}^{(i)})^2\big], \\ \nonumber &\!\!\!\! q_{1,n_k} r_{d,n_k} \!-\! S_{d,n} \!\leq\! h_{3,n_k}(q_{1,n_k},r_{d,n_k})\triangleq 0.25 \big[ (q_{1,n_k}\!+ r_{d,n_k})^2 \\ &\!\!\!\! \!\! -\! 2(q_{1,n_k}^{(i)}\!\!-\!r_{d,n_k}^{(i)}) (q_{1,n_k} \!\!-\! r_{d,n_k}) \!+\! (q_{1,n_k}^{(i)}\!\!-\!r_{d,n_k}^{(i)})^2 \!-\! 4S_{d,n}\big], \\ \nonumber &\!\!\!\! q_{2,n_k} f_{n_k} \!-\! LD_{n}c_{n_k} \!\leq\! h_{4,n_k}(q_{2,n_k},f_{n_k})\triangleq 0.25 \big[ (q_{2,n_k}\!+ f_{n_k})^2 \\ &\!\!\!\! \!\! -\! 2(q_{2,n_k}^{(i)}\!\!-\!f_{n_k}^{(i)}) (q_{2,n_k} \!\!-\! f_{n_k}) \!+\! (q_{2,n_k}^{(i)}\!\!-\!\!f_{n_k}^{(i)})^2 \!-\! 4LD_{n}c_{n_k}\big].
\end{align} As such, constraints \eqref{Rdlowerbound2}, \eqref{Rulowerbound2}, \eqref{tdgroupbound2}, \eqref{tubound2}, \eqref{syncbound2e2}, and \eqref{syncbound2f2} can now be approximated, respectively, by the following convex constraints \vspace{-2mm} \begin{align} \label{Rdlowerboundapprox} &r_{d,n_k} \leq \widetilde{R}_{d,n_k}(\vv), \forall n,n_k \\ \label{Rulowerboundapprox} &r_{u,n_k} \leq \widetilde{R}_{u,n_k}(\uu), \forall n,n_k \\ \label{tubounddapprox} &h_{1,n_k}(v_{n_k},r_{d,n_k},\omega_{n_k}) \leq 0, \forall n,n_k \\ \label{tuboundd2approx} &h_{2,n_k}(u_{n_k},r_{u,n_k},\theta_{n_k}) \leq 0, \forall n,n_k \\ \label{syncbound2eapprox} &h_{3,n_k}(q_{1,n_k},r_{d,n_k}) \leq 0, \forall n,n_k \\ \label{syncbound2fapprox} &h_{4,n_k}(q_{2,n_k},f_{n_k}) \leq 0, \forall n,n_k. \end{align} \vspace{-4mm} \noindent At iteration $(i+1)$, for a given point $\widetilde{\x}^{(i)}$, problem \eqref{Pmainepi} can finally be approximated by the following convex problem: \begin{align}\label{Pmainepiapprox} \underset{\widetilde{\x}\in\widetilde{\FF}}{\min} \,\, & \widetilde{E}_{total}, \end{align} where $\widetilde{\FF}\triangleq\!\{ \eqref{fbound}, \eqref{rlowerbound},\eqref{tbound}-\eqref{syncbound2c}, \eqref{powerdupperbound2}-\eqref{powerlowerbound2}, \eqref{Rdlowerboundapprox}-\eqref{syncbound2fapprox} \}$ is a convex feasible set. In Algorithm~\ref{alg}, we outline the main steps to solve problem \eqref{Pmain}. Let $\FF\triangleq\{ \eqref{powerdupperbound}, \eqref{poweruupperbound}, \eqref{powerlowerbound} - \eqref{syncbound}\}$ be the feasible set of \eqref{Pmain}. Starting from a random point $\widetilde{\x}^{(0)}\in\FF$, we solve \eqref{Pmainepiapprox} to obtain its optimal solution $\widetilde{\x}^*$, and use $\widetilde{\x}^*$ as the initial point of the next iteration. The algorithm terminates when an accuracy level of $\varepsilon$ is reached. If $\widetilde{\FF}$ satisfies Slater's constraint qualification condition, Algorithm~\ref{alg} converges to a Karush-Kuhn-Tucker solution of \eqref{Pmainepi} (and hence of \eqref{Pmain}) \cite[Theorem 1]{Marks78OR}; otherwise, it converges to a Fritz John solution of \eqref{Pmainepi} (and hence of \eqref{Pmain}). \begin{algorithm}[!t] \caption{Solving problem \eqref{Pmain}} \begin{algorithmic}[1]\label{alg} \STATE \textbf{Initialize}: Set $i\!=\!0$ and choose a random point $\widetilde{\x}^{(0)}\!\in\!\FF$. \REPEAT \STATE Update $i=i+1$ \STATE Solve \eqref{Pmainepiapprox} to obtain its optimal solution $\widetilde{\x}^*$ \STATE Update $\widetilde{\x}^{(i)}=\widetilde{\x}^*$ \UNTIL{convergence} \end{algorithmic} \vspace{+0mm} \textbf{Output}: $(\ETA^*,\ZETA^*,\f^*)$ \end{algorithm} \vspace{-3mm} \subsection{Synchronous Scheme} \vspace{-2mm} The optimization problem for this scheme is formulated as \begin{subequations}\label{Pmainsyn} \begin{align} \label{CFPmultisyn} \!\!\!\!\!\underset{\ETA,\f,\ZETA}{\min} \,\, & E_{d}(\ETA) + \!\!\sum_{n\in\NN}\!\sum_{k\in\K_n} \!\!(E_{C,n_k}(f_{n_k})\! +\! E_{u,n_k}(\ZETA)) \\ \nonumber \mathrm{s.t.}\,\, & \eqref{powerdupperbound}, \eqref{poweruupperbound}, \eqref{powerlowerbound}, \eqref{fbound} \\ \nonumber \label{QoSboundsyn} &\max_{n\in\NN}\max_{n_k\in\K_n}t_{d,n_k}(\ETA) + \max_{n\in\NN}\max_{n_k\in\K_n}t_{C,n_k}(\f) \\ &\qquad+ \max_{n\in\NN}\max_{n_k\in\K_n}t_{u,n_k}(\ZETA) \leq t_{\text{QoS}}.
\end{align} \end{subequations} Here, constraint \eqref{QoSboundsyn} captures the ``step-by-step'' nature of the scheme, i.e., every UE needs to wait for the UEs of all groups to finish one step before starting the next step, as seen in Fig.~\ref{fig:time1}(b). Compared to \eqref{QoSboundsyn}, \eqref{QoSbound} provides more flexibility for allocating the times of Steps (S1)--(S3) to each UE, since the UEs do not need to wait for other UEs before starting a new step. Using a procedure similar to that used to solve problem \eqref{Pmain} above, we approximate \eqref{Pmainsyn} by the following convex problem \begin{subequations}\label{Pmainsyncepiapprox} \begin{align} \underset{\widehat{\x}}{\min} \,\, & \widetilde{E}_{total} \\ \nonumber \mathrm{s.t.}\,\, & \eqref{fbound}, \eqref{rlowerbound}, \eqref{powerdupperbound2}-\eqref{powerlowerbound2}, \eqref{Rdlowerboundapprox}-\eqref{tuboundd2approx} \\ & t_d + t_C + t_u \leq t_{\text{QoS}} \\ & \frac{S_{d,n}}{r_{d,n_k}} \leq t_d, \forall n,n_k \\ & \frac{LD_nc_{n_k}}{f_{n_k}} \leq t_C, \forall n,n_k \\ & \frac{S_{u,n}}{r_{u,n_k}} \leq t_u , \forall n,n_k, \end{align} \end{subequations} where $\rr_d,\rr_u,\OOmega,\THeta,t_d,t_C,t_u$ are additional variables and $\widehat{\x} \triangleq \{\vv,\f,\uu,\rr_d,\rr_u,\OOmega,\THeta,t_d,t_C,t_u\}$. Problem \eqref{Pmainsyn} can then be solved by using Algorithm~\ref{alg} to iteratively solve \eqref{Pmainsyncepiapprox}. \vspace{-2mm} \subsection{Complexity Analysis} \vspace{-1mm} Problem \eqref{Pmainepiapprox} can be transformed into an equivalent problem that involves $V_1\triangleq (9K_{total}+1)$ real-valued scalar variables, $L_1\triangleq (8K_{total}+4)$ linear constraints, and $Q_1\triangleq 11K_{total}$ quadratic constraints. Therefore, problem \eqref{Pmainepiapprox} requires a complexity of $\OO(\sqrt{L_1+Q_1}(V_1+L_1+Q_1)V_1^2)$ \cite{tam16TWC}. The transformed version of problem \eqref{Pmainsyncepiapprox} involves a smaller number of variables and constraints than that of problem \eqref{Pmainepiapprox}, i.e., $V_2\triangleq (6K_{total}+3)$ real-valued scalar variables, $L_2\triangleq (6K_{total}+1)$ linear constraints, and $Q_2\triangleq 5K_{total}$ quadratic constraints. Therefore, problem \eqref{Pmainsyncepiapprox} has a complexity of $\OO(\sqrt{L_2+Q_2}(V_2+L_2+Q_2)V_2^2)$, which is lower than that of problem \eqref{Pmainepiapprox}. As such, the synchronous scheme is expected to require a lower complexity than the asynchronous scheme. However, the synchronous scheme requires more signaling overhead than the asynchronous scheme in order to achieve synchronization. \vspace{-1mm} \section{Numerical Examples} \vspace{-1mm} \label{sec:sim} \subsection{Network Setup and Parameter Setting} \vspace{-1mm} Consider a mMIMO network in a square of $D\times D$ km$^2$, where the BS is at the center and the UEs are randomly located. We set $\tau_c\!=\!200$ samples. The large-scale fading coefficients $\beta_{n_k}$ are modeled in the same manner as \cite[Eqs. (37), (38)]{emil20TWC}. For ease of presentation, we assume that all groups have the same number of UEs, i.e., $K_n =K, \forall n$. The total number of UEs is thus $NK$. We choose $\tau_{d,p} =\tau_{u,p} \!=\! NK$, $S_d\!=\!S_u\!=\!20$ MB, noise power $\sigma_0^2\!=\!-92$ dBm, $L=50$, $f_{\max}=4 \times 10^9$ cycles/s, $D_n = 5\times 10^6$ samples, $c_{n_k} = 20$ cycles/sample \cite{tran19INFOCOM}, for all $n,n_k$, $\alpha=5\times 10^{-30}$, and $t_{\text{QoS}} = 5$ s.
Let $\tilde{\rho}_d\!=\!6$ W, $\tilde{\rho}_u\!=\!0.2$ W and $\tilde{\rho}_p\!=\!0.2$ W be the maximum transmit powers of the BS, of the UEs and of the uplink pilot sequences, respectively. The maximum transmit powers $\rho_d$, $\rho_u$ and $\rho_p$ are normalized by the noise power. \vspace{-1mm} \subsection{Results and Discussions} \vspace{-1mm} Note that there are no other existing works studying wireless networks that support multiple FL groups. Therefore, to evaluate the effectiveness of our proposed asynchronous scheme (\textbf{OPT\_Async}) and synchronous scheme (\textbf{OPT\_Sync}), we consider the following heuristic schemes: \begin{itemize} \item \textbf{Heuristic\_Async} (heuristic solution for the asynchronous scheme): The downlink powers to the UEs of all groups are the same, i.e., $\eta_{n_k}\!=\!\frac{1}{NK}$, and each UE transmits at full power, i.e., $\zeta_{n_k}=1, \forall n, n_k$. The processing frequencies are $f_{n_k} = \frac{LD_nc_{n_k}}{t_{\text{QoS}}-t_{d,n_k} - t_{u,n_k}}, \forall n,n_k$. \item \textbf{Heuristic\_Sync} (heuristic solution for the synchronous scheme): Similar to Heuristic\_Async except for the processing frequencies, which are set as $f_{n_k} = \frac{LD_nc_{n_k}}{t_{\text{QoS}}-\max_{n\in\NN}\max_{n_k\in\K_n}t_{d,n_k} - \max_{n\in\NN}\max_{n_k\in\K_n}t_{u,n_k}}, \forall n,n_k$. \end{itemize} \begin{figure}[t!] \centering \vspace{-0mm} {\includegraphics[width=0.38\textwidth]{overM.eps}\label{fig:a}} \caption{Comparison among the proposed approach and baselines ($K=10$ (users per group), $N\!=\!3$ groups, $D=0.25$ km).} \label{Fig:sim1} \vspace{-5mm} \end{figure} \begin{figure}[t!] \centering {\includegraphics[width=0.39\textwidth]{overK.eps}\label{fig:b}} \vspace{-0mm} \caption{Comparison among the proposed approach and baselines ($M = 100$ (antennas), $N\!=\!3$ groups, $D=0.25$ km).} \label{Fig:sim2} \vspace{-0mm} \end{figure} Figs.~\ref{Fig:sim1} and~\ref{Fig:sim2} compare the total energy consumption of one FL iteration among the considered schemes. As seen, our proposed schemes give the best performance. Specifically, compared to the heuristic schemes, the energy reductions are up to $52\%$ with $M=100$, $K=10$, and up to $66\%$ with $M=100$, $K=4$. The figures not only demonstrate the significant advantage of a joint allocation of power and processing frequency, but also show the benefit of using massive MIMO to support FL. Thanks to massive MIMO technology, the data rate of each UE increases with the number of antennas, leading to lower delays and hence a decrease of $31\%$ in the total energy consumption of one FL iteration, as shown in Fig.~\ref{Fig:sim1}. Figs.~\ref{Fig:sim1} and~\ref{Fig:sim2} also show that the asynchronous scheme slightly outperforms the synchronous scheme. In particular, the energy reduction in one FL iteration is only up to $14\%$ with $M=100$, $K=10$. This is reasonable because the UEs in the asynchronous scheme do not need to wait for other UEs. As such, they have more time resources, and hence can save more energy by using lower processing frequencies than those in the synchronous scheme. However, minimizing the energy consumption tends to maximize the lowest data rate. Therefore, the data rates obtained by the asynchronous scheme are relatively similar to those of the synchronous scheme, which leads to a similar performance of both schemes.
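As a sanity check on the energy model, the following Python sketch (our own illustrative code; the function names and the per-UE scalar interface are simplifications, not part of any released implementation) assembles the three per-UE energy terms of Section II together with the Heuristic\_Async allocation; the rates $R_d$ and $R_u$ would come from \eqref{RdZF} and \eqref{RuZF}.
\begin{verbatim}
def iteration_energy(eta, zeta, f, R_d, R_u, S_d, S_u,
                     L, D, c, rho_d, rho_u, N0, alpha):
    """Per-UE energy of one FL iteration: downlink transmit energy at
    the BS, local-computation energy, and uplink transmit energy."""
    t_d = S_d / R_d                         # downlink delay
    t_u = S_u / R_u                         # uplink delay
    E_d = rho_d * N0 * eta * t_d            # downlink energy for this UE
    E_C = L * (alpha / 2.0) * c * D * f**2  # local computation energy
    E_u = rho_u * N0 * zeta * t_u           # uplink energy at the UE
    return E_d + E_C + E_u

def heuristic_async(N, K, t_QoS, t_d, t_u, L, D, c):
    """Heuristic_Async choices for one UE: equal downlink power split,
    full uplink power, and the slowest frequency meeting the deadline."""
    eta = 1.0 / (N * K)
    zeta = 1.0
    f = L * D * c / (t_QoS - t_d - t_u)
    return eta, zeta, f
\end{verbatim}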
\vspace{-1mm} \section{Conclusion} \vspace{-1mm} \label{sec:con} This work has proposed two novel mMIMO-based schemes as energy-efficient solutions for future wireless networks to support multiple FL groups. Using successive convex approximation techniques, we have also proposed an algorithm that allocates power and processing frequency in order to minimize the energy consumption in each FL iteration. Numerical results showed that our proposed schemes significantly reduce the energy consumption of each FL iteration compared to heuristic schemes. They also confirmed that, in terms of energy savings, the asynchronous scheme is a better choice for supporting multiple FL groups than the synchronous scheme, though at the cost of a higher complexity. \vspace{-1mm} \ifCLASSOPTIONcaptionsoff \newpage \fi \vspace{-0mm} \section*{Acknowledgment} \vspace{-0mm} The work of T.~T.~Vu and H.~Q.~Ngo was supported by the U.K. Research and Innovation Future Leaders Fellowships under Grant MR/S017666/1. The work of Erik~G.~Larsson was supported in part by ELLIIT and the Knut and Alice Wallenberg Foundation. The work of Minh~N.~Dao was partially supported by Federation University Australia under Grant RGS21-8. \vspace{-2mm} \begin{spacing}{1} \bibliographystyle{IEEEtran}
\section{Introduction} Organic solar cells are flexible, stretchable and possibly wearable. If they could be integrated into our clothing, they would power our portable phones and computers. $\rm C_{60}$/pentacene solar cells are a prime example in organic photovoltaics \cite{bredas}, where pentacene (Pc) serves as an electron donor and $\rm C_{60}$ as an electron acceptor. When light strikes pentacene, a singlet is formed in the complex and is subsequently split into two optically inactive triplets, a process known as singlet fission \cite{chan,thor,zimmerman,wilson,sanders}. Such a unique feature, where one single photon creates two triplets, greatly improves the quantum efficiency of charge photogeneration \cite{congreve}. But the high quantum efficiency is only the first step for the photovoltaic cell \cite{congreve}. What is more important are the states that channel the initially bound electrons and holes into free charge carriers. Bakulin {\it et al. } \cite{bakulin} showed that the formation of delocalized states facilitates photoconversion. In 2014, Gelinas {\it et al. } \cite{gelinas} suggested that a rapid (40 fs) charge separation proceeds through delocalized $\pi$-electron states in ordered regions of the fullerene acceptor material. Chen {\it et al. } \cite{chen} also found that charge photogeneration occurs predominantly via those delocalized hot exciton states. Paraecatti and Banerji \cite{para} pointed out more directly that exciton delocalization provides an efficient charge separation pathway. These prior studies established beyond any doubt the importance of delocalized states, but what are these delocalized channel states \cite{savoie,savoie2}? To this question, there has been no obvious answer. This is the focus of our study. In this paper, we carry out an extensive first-principles density functional calculation to show that there is a group of unusually large superintermolecular orbitals (SIMOs) in the $\rm C_{60}$/Pc complex that bridge both $\rm C_{60}$ and pentacene. We employ a real-space grid mesh method, so we can treat both ordinary molecular orbitals and SIMOs on an equal footing. We find that, energetically, SIMOs are close to the native superatom molecular orbitals of $\rm C_{60}$, but spatially SIMOs are much larger, with a spatial extension over 1 nm. They are optically silent. By computing over 3000 Coulomb and exchange integrals, we find that both the Coulomb and exchange interactions among SIMOs are in general much smaller than those among ordinary molecular orbitals, a necessary condition for initially bound electrons and holes to dissociate into free charge carriers. Interestingly, in both the edge-on and face-on geometries, SIMOs retain their original shapes. These features strongly suggest that they are good candidates for the channel states in $\rm C_{60}$/Pc solar cells. The rest of the paper is arranged as follows. In Sec. II, we present our theoretical formalism and the details of our first-principles calculation. Section III is devoted to the results and discussion. We conclude this paper in Sec. IV. An appendix at the end provides additional details about our hybrid MPI/OpenMP parallel implementation. \section{Method} Our calculation is based on the first-principles density functional code Octopus \cite{octopus}, which employs the pseudopotential method and a real-space grid mesh, and has the important advantage that it treats localized and delocalized states on an equal footing.
To start with, we solve the Kohn-Sham (KS) equation in atomic units, \begin{equation} \left [-\frac{1}{2}\nabla^2+V_{eff} ({\bf r})\right ] \phi_i ({\bf r})=E_i\phi_i ({\bf r})\label{ks} \end{equation} where $\phi_i ({\bf r})$ is the Kohn-Sham wavefunction and $E_i$ is the eigenvalue of state $i$. The first term on the left-hand side of Eq. (\ref{ks}) is the kinetic energy operator. The effective potential ($V_{eff}$) consists of the electron-nuclei interaction, the Hartree potential (due to the electron-electron Coulomb interaction), and the exchange-correlation interaction, \begin{equation} V_{eff} ({\bf r})=v({\bf r})+ \int d{\bf r'} \frac{\rho({\bf r'})}{|{\bf r}-{\bf r'}|} +V_{xc}({\bf r}) \end{equation} where the exchange-correlation potential $V_{xc}$ is $\delta E_{xc}[\rho]/\delta \rho({\bf r})$, taken in the local density approximation (LDA). We find that LDA is sufficient for our purpose; using GGA raises the energy by 0.5 eV \cite{ijmp2015}. The new charge density is computed by summing over all the occupied orbitals ($N_{occ}$), \begin{equation} \rho({\bf r})=\sum_{i=1}^{N_{occ}}|\phi_i ({\bf r})|^2. \end{equation} The next iteration then starts, and this process repeats until the charge density converges. With the converged wavefunctions, we then compute the Coulomb and exchange integrals using National Energy Research Scientific Computing Center machines. However, these integrals over six degrees of freedom are extremely time consuming given the large number of mesh grid points (see the appendix for details). We employ a submatrix technique in which we compute the action of the Coulomb term on states $n$ and $m$ and then multiply the result by the two remaining wavefunctions. We developed a hybrid MPI/OpenMP code that breaks the integral into segments and distributes them to processors and nodes; finally, the master node sums up all the partial results. This speeds up our calculation greatly. We use the norm-conserving pseudopotential developed by Troullier and Martins \cite{tm}. Our simulation box is a cylinder with radius $r=30~\rm \AA$ and length 80 $\rm \AA$. The grid spacing is $m=0.22~\rm \AA$ and the total number of grid mesh points is 22814131. We have checked the convergence with respect to the grid spacing and find that our results are well converged. The $\rm C_{60}$/pentacene complex has 342 valence electrons, so 171 orbitals are doubly occupied. To obtain the unoccupied states, we add 129 extra states (in Octopus, the command is ExtraStates=129), so we have eigenstates all the way up to state 300. This covers the entire spectrum of interest to us. The threshold for the charge density convergence is set to $10^{-4}$, and the threshold for the absolute energy convergence is set to $5\times 10^{-7}$ eV. All the Octopus calculations are run on our university Silicon cluster, where each computing node has dual Intel Xeon E5-2680 v2 CPUs running at 2.80 GHz. Each CPU has 10 cores and a cache size of 25 MB. The total memory of each node is 132 GB. The entire calculation needs 80 GB of memory and takes nearly two months to finish. After the calculation is finished, we export the wavefunctions from state number 97 up to 300 in two different formats, one for XCrySDen rendering of orbital images and the other in the Cartesian format. The latter contains the actual wavefunctions in three-dimensional space, $\phi(x,y,z)$. These wavefunctions are extremely convenient for calculating other properties of interest.
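To make the integral evaluation concrete, the following is a minimal NumPy sketch of the Coulomb $J(nm|nm)$ and exchange $K(nm|mn)$ integrals for a pair of real orbitals sampled on a toy grid. This is our illustrative code, not the production implementation: it uses a direct $O(N^2)$ pair sum with the singular $r=r'$ term simply dropped, which is viable only for very small grids; the full 22-million-point calculation requires the submatrix technique and the hybrid MPI/OpenMP parallelization described above.
\begin{verbatim}
import numpy as np

def coulomb_kernel(points):
    """Pairwise 1/|r - r'| on the grid (atomic units), with the singular
    diagonal removed -- a crude regularisation adequate only for a toy demo."""
    diff = points[:, None, :] - points[None, :, :]
    dist = np.linalg.norm(diff, axis=-1)
    np.fill_diagonal(dist, np.inf)
    return 1.0 / dist

def J_and_K(phi_n, phi_m, points, dv):
    """Coulomb J(nm|nm) and exchange K(nm|mn) for real orbitals phi_n,
    phi_m sampled at 'points' (Bohr); dv is the volume element per grid
    point, so the result is in Hartree."""
    G = coulomb_kernel(points)
    J = dv**2 * (phi_n**2) @ G @ (phi_m**2)  # density-density repulsion
    pair = phi_n * phi_m                     # overlap (transition) density
    K = dv**2 * pair @ G @ pair              # exchange integral
    return J, K
\end{verbatim}
In the production code, the middle contraction (the action of the Coulomb kernel on an orbital-pair density) is what the submatrix technique computes segment by segment, before the final multiplication by the two remaining wavefunctions and the summation at the master node.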
\section{Results: Superintermolecular orbitals} \begin{figure} \centerline{\includegraphics[angle=0,width=1\columnwidth]{first.eps}} \caption{Light first strikes the $\rm C_{60}$/pentacene complex and creates a singlet, followed by singlet fission into triplets. But the electrons and holes have to dissociate from each other to become free charge carriers. The central question is what the channel states for charge generation in organic solar cells are. We show that the superintermolecular orbitals may offer an answer. } \label{fig1} \end{figure} Photovoltaic effects depend on an efficient charge transfer from a donor (D) to an acceptor (A) \cite{jpc95}. Figure \ref{fig1} schematically illustrates that light first strikes $\rm C_{60}$/Pc but the subsequent charge transfer relies on channel states. While there is no detailed study of these channel states, several studies have estimated the size of channel states to be around 3-4 nm \cite{gelinas,barker,bernardo,heiber,dutton,davino}. This distance corresponds to a binding energy of about 0.1 eV, which can be reasonably approximated by $E_B=q^2/4\pi\epsilon r_{CT}$, where $q$ is the charge and $r_{CT}$ is the average electron-hole separation in the parent charge-transfer (CT) state; indeed, writing $\epsilon=\epsilon_r\epsilon_0$ gives $E_B \approx 1.44~{\rm eV\,nm}/(\epsilon_r r_{CT})$, which for a typical organic dielectric constant $\epsilon_r\approx 3.5$ and $r_{CT}\approx 4$ nm yields $E_B\approx 0.1$ eV. However, no ordinary molecular orbital can be as big as 4 nm. Although unaware of a possible relevance to photovoltaics in $\rm C_{60}$/Pc, Feng and her coworkers \cite{feng} reported some very peculiar molecular orbitals in $\rm C_{60}$, resembling atomic orbitals but with a much larger radius. They are not localized around the atoms of the cluster; rather, they belong to the entire cluster, for which reason they were called superatomic molecular orbitals, or SAMOs. They detected these SAMOs using the scanning tunneling microscope (STM), where the voltage bias is gradually tuned. Their appearance is due to the partial delocalization of the outer shells of the carbon atoms, which jointly create a potential. Such a potential allows electrons to partially delocalize around the entire molecule. These orbitals have a distinctive shell structure from $1s$ up to $1d$ and are optically dark states. When we were investigating SAMOs \cite{ijmp2015}, we were keenly aware of their large size. We noticed that the $1s$ orbital has a size close to that of $\rm C_{60}$ \cite{ijmp2015}. To begin with, we employ Gaussian09 \cite{gaussian} to separately optimize the $\rm C_{60}$, pentacene and $\rm C_{60}$/Pc structures. We use the Becke, 3-parameter, Lee-Yang-Parr (B3LYP) method and a correlation-consistent polarized valence double-zeta (cc-pVDZ) basis. The results are fully consistent with our own and other previous calculations \cite{pra11,johansson12} in both the eigenenergies and the wavefunctions. The optimized coordinates from Gaussian09, without further optimization, are used as an input for Octopus \cite{octopus}. The reason is that Octopus uses a grid mesh and slightly breaks the symmetry of degenerate eigenstates. Although the change in energy is small, we worry that the introduced forces may be too large if we were to use Octopus to optimize our $\rm C_{60}$/Pc complex. As done by many researchers \cite{Yi,Yang,li,ryno}, we consider both edge-on and face-on configurations. In the edge-on configuration one end of Pc aims at the hexagons/pentagons of $\rm C_{60}$, while in the face-on configuration the plane of Pc faces the hexagons/pentagons of $\rm C_{60}$. \begin{widetext} \begin{center} \begin{table} \caption{Coulomb and exchange matrix elements (in units of eV) among the LUMOs (orbitals 172 to 174) and LUMO+1 (orbital 175).
} \begin{tabular}{c|ccccllll} \hline\hline ~~\hspace{0.5cm}&\multicolumn{4}{l}{$K(nm|mn)$ (eV)}& \multicolumn{4}{l}{$J(nm|nm)$ (eV)}\\ \hline $n$\textbackslash$m$ &172 & 173 & 174 &175 \hspace{1cm} &172 & 173 &174 &175 \\ \hline 172 &3.64 & 3.42 & 3.42 &0.92 \hspace{1cm} &-- & 0.107 &0.107& 0.555$\times 10^{-6}$ \\ 173 & 3.42 & 3.64 &3.42 &0.93 \hspace{1cm} &0.107 & -- &0.108 &0.169$\times 10^{-5}$ \\ 174 &3.42 & 3.42 &3.64 &0.92 \hspace{1cm} &0.107 & 0.018 &-- &0.916$\times 10^{-6}$ \\ 175 &0.92 & 0.93 &0.92 &4.51 \hspace{1cm} &0.555$\times 10^{-6}$ &0.169$\times 10^{-5}$ &0.916$\times 10^{-6}$ &-- \\ \hline \hline \end{tabular} \label{table1} \end{table} \end{center} \end{widetext} \subsection{Edge-on} We start with the edge-on geometry. The distance between the frontier carbon atoms of Pc and the hexagons of $\rm C_{60}$ is 7.1 $\rm \AA$, larger than in previous investigations \cite{minami,Yi}. The distance between the far-left carbon atoms of Pc and the far-right carbon atoms of $\rm C_{60}$ is 19.3 $\rm \AA$. The left panel of Fig. \ref{simo} shows one example for the edge-on configuration: the wavefunction $\psi_{205}({\bf r})$ of orbital 205, plotted at an isovalue of 0.005 $\rm \AA^{-3/2}$. The color difference denotes the sign of $\psi_{205}({\bf r})$. Different from SAMOs \cite{feng}, this superorbital covers both the Pc and $\rm C_{60}$ molecules; we therefore call it a superintermolecular orbital, or SIMO for short. In the language of SAMOs, this could be called a $1p$ SIMO, but for SIMOs the orbital character is only approximate due to the symmetry reduction. We find that in general SIMOs have special orientations, just as ordinary molecular orbitals do. In some cases, SIMOs are more like the SAMOs of an isolated $\rm C_{60}$. This spatial preference is crucial, since it allows the electrons to transfer from Pc to $\rm C_{60}$ unidirectionally. One special feature, inherited from SAMOs, is that the dipole transition matrix elements between SIMOs and ordinary molecular orbitals are very small. For this reason, we do not expect an optically induced charge transfer from ordinary molecular orbitals to SIMOs. Instead, the initially bound exciton must dissociate into SIMOs through tunneling. We recall that Feng {\it et al. } \cite{feng} detected SAMOs using STM. Quantum tunneling is also closer to what happens in solar cells. \begin{figure}[tb] \centerline{\includegraphics[angle=0,width=0.8\columnwidth]{1p.eps}} \caption{ Superintermolecular orbitals in $\rm C_{60}$/pentacene for the edge-on configuration (left) and the face-on configuration (right). We show one representative $1p$ SIMO for each configuration. The $1p$ SIMO has orbital number 205. } \label{simo} \end{figure} However, in photovoltaics, electrons are first excited into the low-lying lowest unoccupied molecular orbitals (LUMOs), which have been the focus of recent investigations. For free charge generation, the majority of theoretical studies start from an initial state $\phi_i^m({\bf r})$ localized on D, where $m$ is the multiplicity of state $i$ and ${\bf r}$ is the electron coordinate. One hopes that this initial state ends up in a final state $\phi_f^n({\bf r})$ localized on A. In the many-body picture \cite{aki}, one often starts from configurations like \begin{equation} |\Psi\rangle=a |\phi_i^m({\bf r}_1) \phi_f^n({\bf r}_2) \rangle + {\rm high-order~ terms.} \end{equation} If $|\phi_i^m({\bf r}_1) \phi_f^n({\bf r}_2)\rangle$ carries a significant weight in the many-body wavefunction, CT is realized.
This idea is simple and attractive, but faces a dilemma. To have a large contribution from the configuration $|\phi_i^m({\bf r}_1) \phi_f^n({\bf r}_2)\rangle$, the Coulomb and exchange interaction matrix elements must be large, but this leads to a large binding energy, detrimental to free charge carrier generation \cite{jail}. On the other hand, if the above elements are small, then the coupling is weak and the transition to CT states is less likely. We compute all the Coulomb and exchange integrals from the lowest unoccupied molecular orbital (LUMO) up to LUMO+1; there are in total four orbitals, since the LUMO is nearly threefold degenerate. Table I shows all the Coulomb and exchange integrals among these states. We find that the strongest interaction is 4.51 eV, which is more than forty times larger than the disorder energy of 0.1 eV estimated by Clarke and Durrant \cite{clarke}. This leads to a high binding energy for excitons and is thus detrimental to free charge carrier generation \cite{jail}. This simple estimate highlights that the states initially excited by the light are unlikely to be the same states that are responsible for the final charge transfer and charge separation. A different group of states engages the final step of charge generation. \begin{figure} \centerline{\includegraphics[angle=270,width=0.8\columnwidth]{fig1.eps}} \caption{Coulomb and exchange matrix elements between pairs of states in native $\rm C_{60}$. There are 3081 elements. The magnitudes of the matrix elements are proportional to the radii of the circles, and all the Coulomb elements are rescaled by multiplying by 0.15. Since the matrix is symmetric, the upper triangle shows the Coulomb integrals, while the lower triangle shows the exchange integrals. } \label{coulombc60} \end{figure} The above calculation is limited to four unoccupied orbitals. Before we present results for $\rm C_{60}$/Pc, we decided to completely map out all the matrix elements for all the states from the highest occupied molecular orbital (HOMO)$-4$ through the $1g$ SAMOs in native $\rm C_{60}$. There are 3081 Coulomb and exchange integrals. All these calculations were carried out at Lawrence Berkeley National Laboratory's National Energy Research Scientific Computing Center. Figure \ref{coulombc60} shows the complete set of matrix elements. Since these matrix elements are symmetric with respect to state permutation (other combinations have a much smaller amplitude and are thus not shown), we only show the upper triangle for the Coulomb integrals and the lower triangle for the exchange integrals. Both the horizontal and vertical axes denote the states. The SAMO state labels are placed slightly off the axis for clarity. Along the horizontal axis, the second $H_u$ state from the left is our HOMO, and the first $T_{1u}$ is our LUMO. The radii of the circles are proportional to the magnitudes of the integrals. The Coulomb integrals are in general much larger, so when we plot them we reduce their size by multiplying them by 0.15. A general pattern emerges. For ordinary molecular orbitals, the Coulomb and exchange integrals are much larger. The exchange integrals are much less uniform than the Coulomb integrals, since the former depend greatly on the phases of the wavefunctions. The SAMO integrals are also sizable, in particular for the $1s$ SAMO, but once we are above the $1p$ SAMO, both the Coulomb and exchange integrals drop very quickly. This opens a door for delocalized and weakly interacting SAMOs to participate in the charge generation process.
\begin{figure}
\centerline{\includegraphics[angle=0,width=0.8\columnwidth]{last1.eps}}
\caption{Coulomb and exchange matrix elements between pairs of states. The radius of each circle is proportional to the magnitude of the matrix element. Since the matrix is symmetric, the upper-left triangle shows the Coulomb integrals, while the lower-right triangle shows the exchange integrals. The largest Coulomb integral (4.51 eV) is for LUMO+1 (orbital number 175), localized on Pc; the smallest (0.43 eV) is among the $1f$ SIMOs (orbital numbers 226 and 228). The exchange integrals, shown as small red circles, are extremely small.}
\label{coulomb}
\end{figure}

To build a case for SIMOs, we also compute their Coulomb and exchange integrals, and we find that, similar to SAMOs, their values are an order of magnitude smaller. Figure \ref{coulomb} compares the Coulomb (upper triangle) and exchange (lower triangle) integrals for LUMOs and SIMOs. The largest circle represents 4.51 eV (the LUMO+1 integral). The Coulomb interaction drops quickly once we are above the $1s$ SIMO. The smallest Coulomb interaction is for the $1f$ SIMOs, only 0.43 eV. If we take the dielectric constant of the medium to be about 3, this interaction is reduced to 0.14 eV, very close to the disorder energy. These small Coulomb and exchange integrals are also reflected in the small transfer integral used by Smith and Chin \cite{smith}. They concluded that the transfer integrals are no larger than 8 meV, extremely small in comparison to those in $\rm C_{60}$ \cite{prb94}. In 2016, working on poly(3-hexylthiophene)/fullerene blends, D'Avino {\it et al.}~\cite{davino} argued that bound localized charge-transfer (LCT) states coexist with delocalized space-separated states because LCT states hybridize with singlets. In a later study \cite{davino2}, pointing out a sizable intermolecular delocalization of the electron wavefunction, they also suggested that both $\rm C_{60}$ and its derivative may sustain high-energy states that spread over a few tens of molecules. Here our SIMOs present an alternative.

\subsection{Face-on}

We also consider the face-on geometry. In this configuration, the distance between Pc and $\rm C_{60}$ is 10.6 $\rm \AA$, also larger than in many prior studies. This configuration is considered to be the most favorable one for charge transfer and charge separation. The right panel of Fig.~\ref{simo} shows the $1p$ SIMO. It is interesting that although the face-on geometry is so different from the edge-on geometry, the SIMO retains its shape well. The wavefunction on Pc has a larger amplitude than that for the edge-on configuration. This may explain why this configuration is more efficient, since the orbital is very delocalized. This meets one of the requirements for the channel states. Therefore, once an electron tunnels into this orbital, it has an excellent chance to transfer to $\rm C_{60}$.

\begin{figure}
\centerline{\includegraphics[angle=0,width=0.8\columnwidth]{energy.eps}}
{\includegraphics[angle=0,width=0.3\columnwidth]{wf-st0210.green.eps}}
{\includegraphics[angle=0,width=0.3\columnwidth]{wf-st0228.green.eps}}
\caption{(Top panel) Energy level comparison between SIMOs of $\rm C_{60}$/pentacene and SAMOs of $\rm C_{60}$. The energies of the $1s$ states are similar. A small splitting is noticed in the $1p$ states. The $1d$ state has the largest shift, 0.16 eV, due to an overlap in the $d$ orbitals. The change in $1f$ is small. (Bottom panel) Left: Wavefunction of the $1d$ SIMO; Right: Wavefunction of the $1f$ SIMO.
}
\label{energy}
\end{figure}

Energetically, the SIMOs appear in the same energy window as native SAMOs in $\rm C_{60}$. Naturally, this also depends on the spatial orientation of Pc and $\rm C_{60}$. Figure \ref{energy} (top panel) compares the SIMO energies in $\rm C_{60}$/Pc with those of SAMOs in $\rm C_{60}$. We see that the $1s$ SIMO and the $1s$ SAMO are aligned with each other. However, the $1p$ SAMO is now split into three nearly degenerate levels, due to the symmetry reduction explained above. The difference becomes larger for the $1d$ SIMOs. We see that the lowest $1d$ SIMO is shifted down by 0.16 eV, and we can understand why. The lower-left panel of Fig.~\ref{energy} shows that the orbital wavefunction is delocalized over Pc and $\rm C_{60}$, with nodal lines in between. The lobes of the orbital overlap strongly, which lowers the orbital energy significantly. By contrast, the other orbitals do not change much. The $1f$ SAMO is also split. To show that orbital 228 has $f$ character, we rotate the complex so that the pentacene points out of the page. It is clear that the orbital retains the $f$ character, but it is elongated along the Pc-$\rm C_{60}$ axis. Quantitatively, the $1f$ SAMO in native $\rm C_{60}$ is at 0.204 eV, while the $1f$ SIMOs lie between 0.186 and 0.212 eV. Interestingly, Pavlyukh and Berakdar \cite{Pavlyukha} showed that SAMOs are long-lived, consistent with the experimental observation \cite{chep}. This is what a channel state is supposed to be. These agreements constitute strong evidence that the SIMOs may serve as a possible channel for charge separation. However, it is difficult to detect these delocalized excited states \cite{rao} optically since they may have a low absorption cross section \cite{osterloh}. One possible method to test our prediction is a transport measurement. Such a measurement is in fact more directly related to charge transfer in photovoltaics than optical means. Finally, we notice that there is an ongoing debate about how, or whether, hot charge-transfer excitons assist free carrier generation \cite{nan,shen}. But even if hot CT excitons do play a role \cite{jail}, the binding energy of the interfacial CT exciton after the initial excitation is too high for thermal activation to climb out of the Coulomb trap. We argue that the SIMOs reported here may provide an alternative path to CT.

\section{Conclusions}

We have carried out first-principles density functional calculations on the $\rm C_{60}$/Pc complex and find that there exists a group of large superintermolecular orbitals. These orbitals are very delocalized and cover both Pc and $\rm C_{60}$. We find that SIMOs have the right spatial and energetic characters to channel the initially bound electrons and holes into free charge carriers. Spatially, they are much larger than ordinary molecular orbitals, close to 1 nm, a critical distance for CT. They have a clear spatial orientation from Pc to $\rm C_{60}$, a crucial element that greatly facilitates triplet dissociation into free charge carriers. Energetically, both exchange and Coulomb interactions of SIMOs are very small, on the order of 0.1 eV, a value that matches the Clarke-Durrant disorder energy of 0.1 eV \cite{clarke}. Thus, our finding highlights an unexpected benefit from SIMOs and points to a possible strategy for tailoring material properties toward highly efficient organic solar cells. One possible method to enhance charge generation is to employ a larger fullerene, where SIMOs can be made even larger.
This is consistent with experiment \cite{chen}, which showed that large fullerene crystals can enhance charge-separation yields. We expect that our finding will motivate further experimental and theoretical investigations of these exciting opportunities at the frontier of photovoltaics.

\acknowledgments
This work was supported by the U.S. Department of Energy under Contract No. DE-FG02-06ER46304 (GPZ). This research used resources of the National Energy Research Scientific Computing Center, which is supported by the Office of Science of the U.S. Department of Energy under Contract No. DE-AC02-05CH11231.

$^*$gpzhang@indstate.edu\\
\section{Introduction}

A fundamental peculiarity of XML is the description of regular properties. For example, in XML schema languages the content types of element definitions rely on regular expressions. In addition, selecting nodes in such constrained trees is also done by means of regular path expressions (\`a la XPath). In both cases, it is often interesting to be able to express conditions on the frequency of occurrences of nodes.

Even if we consider simple strings, it is well known that some formal languages easily described in English may require voluminous regular expressions. For instance, as pointed out in \cite{klarlund-tacas95}, the language $L_{2a2b}$ of all strings over $\Sigma=\{a,b,c\}$ containing at least two occurrences of $a$ \emph{and} at least two occurrences of $b$ requires a large expression, such as:
\begin{align*}
&& \Sigma^*a\Sigma^*a\Sigma^*b\Sigma^*b\Sigma^* &&\cup&& \Sigma^*a\Sigma^*b\Sigma^*a\Sigma^*b\Sigma^* \\
&\cup& \Sigma^*a\Sigma^*b\Sigma^*b\Sigma^*a\Sigma^* &&\cup&& \Sigma^*b\Sigma^*b\Sigma^*a\Sigma^*a\Sigma^* \\
&\cup& \Sigma^*b\Sigma^*a\Sigma^*b\Sigma^*a\Sigma^* &&\cup&& \Sigma^*b\Sigma^*a\Sigma^*a\Sigma^*b\Sigma^*.
\end{align*}
If we add $\cap$ to the operators for forming regular expressions, then the language $L_{2a2b}$ can be expressed more concisely as $(\Sigma^*a\Sigma^*a\Sigma^*) \cap (\Sigma^*b\Sigma^*b\Sigma^*)$. In logical terms, conjunction offers a dramatic reduction in expression size, which is crucial when the complexity of the decision procedure depends on formula size. If we now consider a formalism equipped with the ability to describe numerical constraints on the frequency of occurrences, we get a second (exponential) reduction in size. For instance, the above expression can be formulated as $(\Sigma^*a\Sigma^*)^2 \cap (\Sigma^*b\Sigma^*)^2$. We can even write $(\Sigma^*a\Sigma^*)^{2^{n}} \cap (\Sigma^*b\Sigma^*)^{2^{n}}$ (for any natural $n$) instead of a (much) larger expression.

Different extensions of regular expressions with intersection, counting constraints, and interleaving have been considered over strings, and for describing content models of sibling nodes in XML type languages \cite{ghelli-icdt09,Gelade-siam08,Kilpelainen-ic07}. The complexity of the inclusion problem over these different language extensions and their combinations typically ranges from polynomial time to exponential space (see \cite{Gelade-siam08} for a survey). The main distinction between these works and the work presented here is that we focus on counting nodes located along deep and recursive paths in trees.

When considering regular \emph{tree} languages instead of regular \emph{string} languages, succinct syntax such as the one presented above is even more useful, as branching results in a higher combinatorial complexity. In the case of trees, it is often useful to express cardinality constraints not only on the sequence of children nodes, but also in a particular region of a tree, such as a subtree. Suppose, for instance, that we want to define a tree language over $\Sigma$ in which there are no more than 2 ``b'' nodes. This requires quite a large regular tree type expression, such as:
\newcommand{\xtag}[2]{#1 \texttt{[} #2 \texttt{]}}
$$\begin{array}{lcl} x_\text{root} \!\!\!
&\rightarrow&\!\!\ \xtag{b}{x_{b\leq1}} \mid \xtag{c}{x_{b\leq2}} \mid \xtag{a}{x_{b\leq2}} \vspace{0.1cm} \\
x_{b\leq2}\!\!\! &\rightarrow&\!\!\! x_{\neg b}, \xtag{b}{x_{\neg b}},x_{\neg b},\xtag{b}{x_{\neg b}},x_{\neg b} \mid x_{\neg b}, \xtag{b}{x_{b\leq1}},x_{\neg b} \vspace{0.1cm}\\
&& \mid x_{\neg b}, \xtag{a}{x_{b\leq2}}, x_{\neg b} \mid x_{\neg b}, \xtag{c}{x_{b\leq2}}, x_{\neg b} \mid x_{b\leq1} \vspace{0.1cm}\\
x_{b\leq1} \!\!\! &\rightarrow&\!\!\ x_{\neg b} \mid x_{\neg b}, \xtag{b}{x_{\neg b}}, x_{\neg b} \mid \xtag{a}{x_{b\leq1}} \mid \xtag{c}{x_{b\leq1}} \vspace{0.1cm}\\
x_{\neg b} \!\!\! &\rightarrow&\!\!\ (\xtag{a}{x_{\neg b}} \mid \xtag{c}{x_{\neg b}})^*
\end{array}$$
where $x_\text{root}$ is the starting non-terminal; $x_{\neg b}, x_{b\leq1}, x_{b\leq2}$ are non-terminals; the notation $\xtag{a}{x_{\neg b}}$ describes a subtree whose root is labeled $a$ and in which there is no $b$ node; and ``,'' is concatenation.

More generally, the widely adopted notations for regular tree grammars produce very verbose definitions for properties involving cardinality constraints on the nesting of elements\footnote{This is typically the reason why the standard DTD for XHTML does not syntactically prevent the nesting of anchors, whereas this nesting is actually prohibited in the XHTML standard.}. The problem with regular tree (and even string) grammars is that one is forced to fully expand all the patterns of interest using concatenation, union, and Kleene star. Instead, it is often tempting to rely on another kind of (formal) notation that just describes a simple pattern together with additional constraints on it, in a way that is intuitive and compact with respect to size. For instance, one could imagine denoting the previous example as follows, where the additional constraint is described using XPath notation (we return to this constraint with a small executable example below):
$$\left(x \!\rightarrow\!\! (\xtag{a}{x} \mid \xtag{b}{x} \mid \xtag{c}{x})^*\right) \vspace{0.1cm} ~\wedge~ \text{count(/descendant-or-self::}b) \leq 2$$
Although this kind of counting operator does not increase the expressive power of regular tree grammars, it can have a drastic impact on succinctness, thus making reasoning over these languages harder (as noticed in \cite{DBLP:conf/mfcs/Gelade08} in the case of strings). Indeed, reasoning on such extensions without relying on their expansion (in order to avoid syntactic blow-ups) is often tricky \cite{DBLP:conf/mfcs/GeladeGM09}. Determining satisfiability, containment, and equivalence over these classes of extended regular expressions typically requires involved algorithms with higher complexity \cite{DBLP:conf/focs/MeyerS72} compared to ordinary regular expressions.

In the present paper, we propose a succinct logical notation, equipped with a satisfiability checking algorithm, for describing many sorts of cardinality constraints on the frequency of occurrence of nodes in regular tree types. Regular tree types encompass most of the XML types (DTDs, XML Schemas, Relax NGs) used in practice today. XPath is the standard query language for XML documents, and it is an important part of other XML technologies such as XSLT and XQuery. XPath expressions are regular path expressions interpreted as sets of nodes selected from a given context node. One of the reasons why XPath is popular for web programming lies in its ability to express multidirectional navigation. Indeed, XPath expressions may use recursive navigation, to access descendant nodes, and also backward navigation, to reach previous siblings or ancestor nodes.
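As a small executable illustration of the counting constraint above: any off-the-shelf XPath 1.0 engine can evaluate \texttt{count(/descendant-or-self::b) <= 2} directly on a given tree. The following sketch uses Python's \texttt{lxml} (an arbitrary choice of engine) on two toy trees of our own:
\begin{verbatim}
# Checking count(/descendant-or-self::b) <= 2 with an
# XPath 1.0 engine; comparisons evaluate to booleans.
from lxml import etree

ok  = etree.fromstring("<a><c><b/></c><b/></a>")      # two b nodes
bad = etree.fromstring("<a><b/><c><b/><b/></c></a>")  # three b nodes

constraint = "count(/descendant-or-self::b) <= 2"
print(ok.xpath(constraint))   # True
print(bad.xpath(constraint))  # False
\end{verbatim}
Such an engine checks one particular tree, whereas the logic developed in this paper reasons about \emph{all} trees satisfying a formula.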
Expressing cardinality restrictions on nodes accessible by recursive multidirectional paths may introduce an extra-exponential cost \cite{DBLP:conf/doceng/GenevesR05,Balder09}, or may even lead to undecidable formalisms \cite{Balder09,DBLP:conf/cade/DemriL06}. We present in this paper a decidable framework capable of succinctly expressing cardinality constraints along deep multidirectional paths.

A major application of this logical framework is the decision of problems found in the static analysis of programming languages manipulating XML data. For instance, since the logic is closed under negation, it can be used to solve subtyping problems such as XPath containment in the presence of tree constraints. Checking that a query $q$ is contained in a query $p$ with this logical approach amounts to verifying the validity of $q\Rightarrow p$, or equivalently, the unsatisfiability of $q \wedge \neg p$.

\paragraph{Contributions}
We extend a tree logic with a succinct notation for counting operators. These operators allow arbitrarily deep and recursive counting constraints. We present a sound and complete algorithm for checking satisfiability of logical formulas. We show that its complexity is exponential in the size of the succinct form.

\paragraph{Outline}
We introduce the logic in Section~\ref{sec:logic}. Section~\ref{sec:application} shows how the logic can be applied in the XML setting, in particular for the static analysis of XPath expressions and of common schemas containing constraints on the frequency of occurrence of nodes. The decision procedure and the proofs of soundness, completeness, and complexity are presented in Section~\ref{sec:algo}. Finally, we review related work in Section~\ref{sec:relatedwork} before concluding in Section~\ref{sec:conclusion}.
\section{Counting Tree Logic}
\label{sec:logic}

We introduce our syntax for trees, define a notion of trails in trees, then present the syntax and semantics of logical formulas.

\subsection{Trees}\label{subsec:trees}

We consider finite trees which are node-labeled and sibling-ordered. Since there is a well-known bijective encoding between $n\!-\!$ary and binary trees, we focus on binary trees without loss of generality. Specifically, we use the encoding represented in Figure~\ref{fig:depthlevels}, where the binary representation preserves the first child of a node and appends sibling nodes as second successors.

\begin{figure}
\begin{center}
\includegraphics[keepaspectratio,width=7cm]{depthlevels}
\end{center}
\caption{$n\!-\!$ary to binary trees}
\label{fig:depthlevels}
\end{figure}

The structure of a tree is built upon modalities ``$\fc$'' and ``$\ns$''. Modality ``$\fc$'' labels the edge between a node and its first child. Modality ``$\ns$'' labels the edge between a node and its next sibling. Converse modalities ``$\invfc$'' and ``$\medtriangleleft$'' respectively label the same edges in the reverse direction. We define a Kripke semantics for our tree logic, similar to that of modal logics \cite{685998}. We write $\Modals=\{\fc,\ns,\invfc,\medtriangleleft\}$ for the set of modalities. For $m\in \Modals$ we denote by $\dual{m}$ the corresponding inverse modality ($\dual{\fc}=\invfc$, $\dual{\ns}=\medtriangleleft$, $\dual{\invfc}=\fc$, $\dual{\medtriangleleft}=\ns$). We also consider a countable alphabet $\Props$ of {\em propositions} representing names of nodes. A node is always labeled with exactly one proposition.
A tree is defined as a tuple $(\Nodes,\brel,\tlabel)$, where $\Nodes$ is a finite set of nodes; $\brel$ is a partial mapping from $\Nodes\times\Modals$ to $\Nodes$ that defines a tree structure;\footnote{For all $n,n' \in \Nodes,m \in \Modals$, $\brel(n,m) = n' \iff \brel(n', \dual{m}) = n$; for all $n \in \Nodes$ except one (the \emph{root}), exactly one of $\brel(n,\invfc)$ or $\brel(n,\medtriangleleft)$ is defined; for the root, neither $\brel(n,\invfc)$ nor $\brel(n,\medtriangleleft)$ is defined.} and $\tlabel$ is a labeling function from $\Nodes$ to $\Props$.

\subsection{Trails}\label{subsec:trails}

Trails are defined as regular expressions formed by modalities, as follows:
\begin{align*}
\trail &::= \trail_0 \mid \trail_0^\star \mid \trail_0^\star, \trail \\
\trail_0 &::= \modals \mid \trail_0,\trail_0 \mid \trail_0\shortmid\trail_0
\end{align*}
We restrict trails to sequences of repeated subtrails (which themselves contain no repetition) followed by a subtrail (with no repetition). Since we do not consider infinite paths, we also disallow trails where both a subtrail and its converse occur under the scope of the recursion operator, thus ensuring cycle-freeness (see Section \ref{cyclefreeness}). These restrictions on trails allow us to prove the completeness of our approach while retaining the ability to express many counting formulas, such as the ones of XPath.

Trails are interpreted as sets of \emph{paths}. A path, written $\rho$, is a sequence of modalities that belongs to the regular language denoted by the trail, written $\rho \in \trail$. In a given tree, we say that there is a {\em trail $\trail$ from the node $\node_0$ to the node $\node_\natn$}, written $\path{n_0}{\trail}{n_\natn}$, if and only if there is a sequence of nodes $\node_0, \ldots, \node_\natn$ and a path $\rho = \modals_1, \ldots, \modals_\natn$ such that $\rho \in \trail$ and $\brel(\node_j,\modals_{j+1})=\node_{j+1}$ for every $j=0,\ldots,\natn-1$.

\subsection{Syntax of Logical Formulas}\label{subsec:syntax}

The syntax of logical formulas is given in Figure \ref{normalformsyn}, where $m \in \Modals$ and $k \in \mathbb{N}$. Formulas written $\form$ may contain counting subformulas, whereas formulas written $\restrictedform$ cannot. We thus disallow counting under counting or under fixpoints. We also restrict formulas to \emph{cycle-free} formulas, as detailed in Section~\ref{cyclefreeness}. The syntax is shown in negation normal form. The negation of any closed formula (i.e., with no free variable) built using the syntax of Figure \ref{normalformsyn} may be transformed into negation normal form using the usual De Morgan rules together with the rules given in Figure~\ref{normalneg}. When we write $\neg\form$, we mean its negation normal form.
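Before moving on, the tree and trail definitions above can be made concrete with a minimal executable sketch (the representation and names are ours, not part of the formal development): the partial map $\brel$ becomes a dictionary from modalities to neighbours, and a trail is approximated by an explicit finite set of paths.
\begin{verbatim}
# Binary trees with modalities fc, ns and converses ifc, ins.
DUAL = {"fc": "ifc", "ns": "ins", "ifc": "fc", "ins": "ns"}

class Node:
    def __init__(self, label):
        self.label = label
        self.edges = {}        # modality -> neighbour (partial map R)

    def link(self, modality, other):
        # R(n, m) = n'  iff  R(n', dual(m)) = n
        self.edges[modality] = other
        other.edges[DUAL[modality]] = self

def reachable(source, paths):
    """Nodes reached from `source` along any of the given paths.
    A trail denotes a regular (possibly infinite) set of paths;
    here we enumerate a finite subset explicitly."""
    found = set()
    for path in paths:
        node = source
        for m in path:
            node = node.edges.get(m)
            if node is None:
                break
        else:
            found.add(node)
    return found

# a with children b, c: binary encoding a -fc-> b -ns-> c
a, b, c = Node("a"), Node("b"), Node("c")
a.link("fc", b)
b.link("ns", c)
print({n.label for n in reachable(a, [("fc",), ("fc", "ns")])})
# {'b', 'c'}: the trail "fc, ns*" truncated at one ns step
\end{verbatim}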
\begin{figure}
\smallsyntax{
\Formulas \ni \entry \form [formula]
\top \quad | \quad \neg\top [true, false]
\oris \prop ~\quad | \quad \neg \prop [atomic prop (negated)]
\oris x [recursion variable]
\oris \phi \vee \phi [disjunction]
\oris \phi \wedge \phi [conjunction]
\oris \modalf{\modals}\phi \quad | \quad \neg \modalf{\modals}\true [modality (negated)]
\oris \countf{}{\trail}{\restrictedform}{\leq}{\natn} \quad | \quad \countf{}{\trail}{\restrictedform}{>}{\natn} [counting]
\oris \ufixpf{\var}{\restrictedform} [fixpoint operator]
\\
\entry \restrictedform []
\top \quad | \quad \neg\top []
\oris \prop ~\quad | \quad \neg \prop []
\oris x []
\oris \restrictedform \vee \restrictedform []
\oris \restrictedform \wedge \restrictedform []
\oris \modalf{\modals}\restrictedform \quad | \quad \neg \modalf{\modals}\true []
\oris \ufixpf{\var}{\restrictedform} []
}
\caption{Syntax of Formulas (in Normal Form).}
\label{normalformsyn}
\end{figure}

\begin{figure}
\begin{align*}
\neg \modalf{\modals}{\form} &\equiv \neg \modalf{\modals}{\true} \vee \modalf{\modals}{\neg \form} & \neg \ufixpf{\var}{\restrictedform} &\equiv \ufixpf{\var}{\neg \restrictedform\{\subst{\var}{\neg \var}\}}\\
\neg \countf{}{\trail}{\restrictedform}{\leq}{\natn} &\equiv \countf{}{\trail}{\restrictedform}{>}{\natn} & \neg \countf{}{\trail}{\restrictedform}{>}{\natn} &\equiv \countf{}{\trail}{\restrictedform}{\leq}{\natn}
\end{align*}
\caption{Reduction to Negation Normal Form.}\label{normalneg}
\end{figure}

Defining an {\em equality} operator for counting formulas is straightforward using the other counting operators:
\begin{align*}
\countf{}{\trail}{\restrictedform}{=}{\natn} &\equiv \countf{}{\trail}{\restrictedform}{>}{(\natn-1)} \wedge \countf{}{\trail}{\restrictedform}{\leq}{\natn} &&\text{if $k>0$}\\
\countf{}{\trail}{\restrictedform}{=}{0} &\equiv \countf{}{\trail}{\restrictedform}{\leq}{0}
\end{align*}

\subsection{Semantics of Logical Formulas}\label{subsec:semantics}

A formula is interpreted as a set of nodes in a tree. A model of a formula is a tree in which the formula denotes a non-empty set of nodes. A counting formula $\countf{}{\trail}{\restrictedform}{>}{\natn}$ satisfied at a given node $n$ means that there are at least $\natn + 1$ nodes satisfying $\restrictedform$ that can be reached from $n$ through the trail $\trail$. A counting formula $\countf{}{\trail}{\restrictedform}{>}{\natn}$ is thus interpreted as the set of nodes for which the previously described condition holds. For example, the formula $\prop_1\wedge \modalf{\fc}{\countf{}{\ns^*}{\prop_2}{>}{5}}$ denotes $\prop_1$ nodes with strictly more than $5$ children named $\prop_2$.

In order to present the formal semantics of formulas, we introduce valuations, written $\valuation$, which relate variables to sets of nodes. We write $\valuation[\subst{\Nodes^\prime}{x}]$, where $\Nodes^\prime$ is a subset of the nodes, for the valuation defined as $\valuation[\subst{\Nodes^\prime}{x}](y) = \valuation(y)$ if $x \neq y$, and $\valuation[\subst{\Nodes^\prime}{x}](x) = \Nodes^\prime$. Given a tree $\tree=(\Nodes,\brel, \tlabel)$ and a valuation $\valuation$, the formal semantics of formulas is given in Figure~\ref{formsem}. Note that the function $f : Y \mapsto \semf{\psi}{\tree}{\valuation[\subst{Y}{x}]}$ is monotone, so that the denotation of $\ufixpf{\var}{\restrictedform}$ is its least fixed point \cite{tarski1955}.
\begin{figure}
\begin{align*}
& \semf{\true}{\tree}{\valuation} &&=&& \Nodes \\
& \semf{\neg \true}{\tree}{\valuation} &&=&& \emptyset \\
& \semf{\prop}{\tree}{\valuation} &&=&& \{\node \mid \tlabel(\node)=\prop\} \\
& \semf{\neg \prop}{\tree}{\valuation} &&=&& \{\node \mid \tlabel(\node)\neq \prop\} \\
& \semf{\var}{\tree}{\valuation} &&=&& \valuation(\var)\\
& \semf{\form_1\vee \form_2}{\tree}{\valuation} &&=&& \semf{\form_1}{\tree}{\valuation} \cup \semf{\form_2}{\tree}{\valuation}\\
& \semf{\form_1\wedge \form_2}{\tree}{\valuation} &&=&& \semf{\form_1}{\tree}{\valuation} \cap \semf{\form_2}{\tree}{\valuation}\\
& \semf{\modalf{\modals}{\form}}{\tree}{\valuation} &&=&& \{\node \mid \brel(\node,\modals) \in \semf{\form}{\tree}{\valuation}\}\\
& \semf{\neg \modalf{\modals}{\true}}{\tree}{\valuation} &&=&& \{\node \mid \brel(\node,\modals) \text{ undefined}\} \\
& \semf{\countf{}{\alpha}{\restrictedform}{\leq}{\natn}}{\tree}{\valuation} &&=&& \{\node \mid~~ \card{\{\node^\prime\in \semf{\restrictedform}{\tree}{\valuation} \mid \path{\node}{\alpha}{\node^\prime}\}}\leq \natn \}\\
& \semf{\countf{}{\alpha}{\restrictedform}{>}{\natn}}{\tree}{\valuation} &&=&& \{\node \mid~~ \card{\{\node^\prime\in \semf{\restrictedform}{\tree}{\valuation} \mid \path{\node}{\alpha}{\node^\prime}\}}> \natn \}\\
& \semf{\ufixpf{\var}{\restrictedform}}{\tree}{\valuation} &&=&& \bigcap\{\Nodes^\prime \mid \semf{\restrictedform}{\tree}{\valuation[\subst{\Nodes^\prime}{x}]}\subseteq \Nodes^\prime\}
\end{align*}
\caption{Semantics of Formulas.}
\label{formsem}
\end{figure}

Intuitively, propositions denote the nodes where they occur; negation is interpreted as set complement; disjunction and conjunction are respectively set union and intersection; the least fixpoint operator performs finite recursive navigation; and a counting formula denotes the nodes from which the set of nodes reachable through the trail and satisfying the subformula fulfills the cardinality restriction. A logical formula is said to be {\em satisfiable} iff it has a model, i.e., there exists a tree for which the semantics of the formula is not empty.

\subsection{Cycle-Freeness}
\label{cyclefreeness}

A formal definition of cycle-freeness can be found in \cite{geneves-pldi07}. Intuitively, in a cycle-free formula, fixpoint variables must occur under a modality but cannot occur in the scope of both a modality and its converse. For instance, the formula $\ufixpf{\var}{\modalf{\fc}{\var}\vee\modalf{\invfc}{\var}}$ is not cycle-free. In a cycle-free formula, the number of modality cycles (of the form $m \dual{m}$) is bounded independently of the number of times fixpoints are unfolded (i.e., by replacing a fixpoint variable with the fixpoint itself). A fundamental consequence of the restriction to cycle-free formulas is that, when considering only finite trees, the interpretations of the greatest and least fixpoints coincide. This greatly simplifies the logic.

Here, we also restrict our approach to cycle-free formulas. We thus need to extend this notion to the counting operators, and more precisely to the trails that occur in them. Cycle-free trails are trails in which a subtrail and its converse never both occur under the scope of the recursion operator. We thus restrict the formulas under consideration to cycle-free formulas whose counting operators contain cycle-free trails.

\begin{lemma}\label{lem:finite_unfolding}
Let $\form$ be a cycle-free formula, and $\tree$ be a tree for which $\semf{\form}{\tree}{\emptyset} \neq \emptyset$.
Then there is a finite unfolding $\form^\prime$ of the fixpoints of $\form$ such that $\semf{\form^{\prime}\{\subst{\neg\true}{\ufixpf{\var}{\restrictedform}}\}}{\tree}{\emptyset} = \semf{\form}{\tree}{\emptyset}$.
\end{lemma}
\begin{proof}
As cycle-free counting formulas may be translated into (exponentially larger) cycle-free non-counting formulas, the proof is identical to the one in \cite{geneves-pldi07}.
\end{proof}

As a consequence, our logic is closed under negation even without greatest fixpoints.

\subsection{Global Counting Formulas and Nominals}\label{subsec:globalformula}

To conclude this section, we turn to an illustration of the expressive power of our logic. An interesting consequence of the inclusion of backward axes in trails is the ability to reach every node in the tree from any node of the tree, using the trail $(\invfc|\medtriangleleft)^\star,(\fc|\ns)^\star$.\footnote{Note that this trail is cycle-free.} We can thus select nodes depending on a global counting property. Consider the following formula, where $\#$ stands for one of the comparison operators $\leq$, $>$, or $=$.
\[\countf{}{(\invfc|\medtriangleleft)^\star,(\fc|\ns)^\star}{\form_1}{\#}{\natn}\]
Intuitively, this formula counts how many nodes in the whole tree satisfy $\form_1$. It selects each node of the tree if and only if the count is compatible with the comparison considered. The interpretation of this formula is thus either every node of the tree, or none. It is then easy to restrict the selected nodes to those that satisfy another formula $\form_2$, using intersection.
\[(\countf{}{(\invfc|\medtriangleleft)^\star,(\fc|\ns)^\star}{\form_1}{\#}{\natn}) \wedge \form_2\]
This formula selects every node satisfying $\form_2$ if and only if there are $\#\natn$ nodes satisfying $\form_1$, which we write as follows.
\[ \form_1\#\natn \implies \form_2 \]
We can now express existential properties, such as ``select every node satisfying $\form_2$ if there exists a node satisfying $\form_1$''.
\[ \form_1 > 0 \implies \form_2 \]
We can also express universal properties, such as ``select every node satisfying $\form_2$ if every node satisfies $\form_1$''.
\[ (\neg\form_1) \leq 0 \implies \form_2 \]

Another way to interpret global counting formulas is as a generalization of the so-called nominals from the modal logic community~\cite{DBLP:conf/cade/SattlerV01}. Nominals are special propositions whose interpretation is a singleton (they occur exactly once in the model). They come for free with our logic: a nominal, denoted ``$@n$'', corresponds to the following global counting formula, where $n$ is a fresh atomic proposition.
\[[\countf{}{(\invfc|\medtriangleleft)^\star,(\fc|\ns)^\star}{n}{=}{1}] \wedge n \]
One may, however, need nominals to occur in the scope of counting formulas.
As we disallow counting under counting, we propose the following alternative encoding of nominals for these cases:
\begin{align*}
@n \equiv n \wedge\neg [& \text{descendant}(n) \vee \text{ancestor}(n) \vee \\
& \text{anc$\!-\!$or$\!-\!$self}(\text{siblings}(\text{desc$\!-\!$or$\!-\!$self}(n))) ],
\end{align*}
where:
\begin{align*}
\text{descendant}(\restrictedform)&= \modalf{\fc}{\ufixpf{\var}{\restrictedform \vee \modalf{\fc}{\var} \vee \modalf{\ns}{\var}}};\\
\text{foll$\!-\!$sibling}(\restrictedform) &= \ufixpf{\var}{\modalf{\ns}{\restrictedform} \vee \modalf{\ns}{\var}};\\
\text{prec$\!-\!$sibling}(\restrictedform) &= \ufixpf{\var}{\modalf{\medtriangleleft}{\restrictedform} \vee \modalf{\medtriangleleft}{\var}};\\
\text{desc$\!-\!$or$\!-\!$self}(\restrictedform) &= \ufixpf{x}{\restrictedform \vee \modalf{\fc}{\ufixpf{y}{x \vee \modalf{\ns}{y}} }};\\
\text{ancestor}(\restrictedform) &= \ufixpf{\var}{\modalf{\invfc}{(\restrictedform \vee \var)} \vee \modalf{\medtriangleleft}{\var}};\\
\text{anc$\!-\!$or$\!-\!$self}(\restrictedform) &= \ufixpf{x}{\restrictedform \vee \ufixpf{y}{\modalf{\invfc}{(y \vee x)} \vee \modalf{\medtriangleleft}{y}} };\\
\text{siblings}(\restrictedform) &= \text{foll$\!-\!$sibling}(\restrictedform) \vee \text{prec$\!-\!$sibling}(\restrictedform).
\end{align*}

\section{Application to XML Trees}
\label{sec:application}

\subsection{XPath Expressions}\label{typespaths}
\label{sec:xpath}

XPath \cite{xpath} was introduced as part of the W3C XSLT transformation language as a non-XML format for selecting nodes and computing values from an XML document (see \cite{geneves-pldi07} for a formal presentation of XPath). Since then, XPath has become part of several other standards; in particular, it forms the ``navigation subset'' of the XQuery language. In their simplest form, XPath expressions look like ``directory navigation paths''. For example, the XPath expression
\begin{verbatim}
/company/personnel/employee
\end{verbatim}
navigates from the root of a document through the top-level ``company'' node to its ``personnel'' child nodes and on to its ``employee'' child nodes. The result of the evaluation of the entire expression is the set of all the ``employee'' nodes that can be reached in this manner. At each step in the navigation, the selected nodes for that step can be filtered with a predicate test. Of special interest to us are the predicates that count nodes or that test the position of the selected node in the previous step's selection. For example, if we ask for
\begin{verbatim}
/company/personnel/employee[position()=2]
\end{verbatim}
then the result is \emph{all} employee nodes that are the \emph{second} employee node (in document order) among the employee child nodes of each personnel node selected by the previous step. XPath also makes it possible to combine the capability of searching along ``axes'' other than the ``children of'' axis shown above with counting constraints. For example, if we ask for
\begin{verbatim}
/company[count(descendant::employee)<=300]/name
\end{verbatim}
then the result consists of the names of the companies with at most 300 employees in total (the axis ``descendant'' is the transitive closure of the default -- and often omitted -- axis ``child''). The syntax and semantics of Core XPath expressions are given in Figure~\ref{fig:xpath-syntax} and Figure~\ref{fig:xpath-semantics}, respectively. An XPath expression is interpreted as a relation between nodes.
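Counting qualifiers of this kind are executed directly by standard XPath engines; it is their \emph{static} analysis that is hard. As a concrete illustration, the last expression above can be evaluated with Python's \texttt{lxml} on a toy document of our own (wrapped in a root element):
\begin{verbatim}
# Evaluating /company[count(descendant::employee)<=300]/name
# on a toy two-company document.
from lxml import etree

doc = etree.fromstring(
    "<root>"
    "<company><name>small</name><personnel>"
    + "<employee/>" * 10 +
    "</personnel></company>"
    "<company><name>big</name><personnel>"
    + "<employee/>" * 301 +
    "</personnel></company>"
    "</root>")

q = "/root/company[count(descendant::employee)<=300]/name/text()"
print(doc.xpath(q))  # ['small']
\end{verbatim}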
The considered XPath fragment allows absolute and relative paths, path union, intersection, and composition, as well as node tests and qualifiers with counting operators, conjunction, disjunction, negation, and path navigation. Furthermore, it supports all XPath axes, allowing multidirectional navigation.

\begin{figure}
\begin{align*}
\text{Axis} ::= & \text{self} \mid \text{child} \mid \text{parent} \mid \text{descendant} \mid \text{ancestor} \mid\\
& \text{following-sibling} \mid \text{preceding-sibling} \mid \\
& \text{following} \mid \text{preceding} \\
\text{NameTest} ::= & \text{QName} \mid * \\
\text{Step} ::= & \text{Axis}\text{::}\text{NameTest} \\
\text{PathExpr} ::= & \text{PathExpr}/\text{PathExpr} \mid \text{PathExpr}[\text{Qualifier}] \mid \text{Step} \\
\text{Qualifier} ::= & \text{PathExpr} \mid \text{CountExpr} \mid\text{not}~ \text{Qualifier} \mid \\
& \text{Qualifier} ~\text{and}~ \text{Qualifier} \mid \text{Qualifier} ~\text{or}~ \text{Qualifier} \mid @n \\
\text{CountExpr} ::= & \qualifcount{\text{PathExpr}'}~\text{Comp}~k \\
\text{PathExpr}' ::= & \text{PathExpr}'/\text{PathExpr}' \mid \text{PathExpr}'[\text{Qualifier}'] \mid \text{Step} \\
\text{Qualifier}' ::= & \text{PathExpr}' \mid\text{not}~ \text{Qualifier}' \mid \text{Qualifier}' ~\text{and}~ \text{Qualifier}' \\
& \mid \text{Qualifier}' ~\text{or}~ \text{Qualifier}' \mid @n \\
\text{Comp} ::= & \leq \mid > \mid \geq \mid < \mid =\\
\text{XPath} ::= & \text{PathExpr} \mid /\text{PathExpr} \mid \text{XPath} ~\text{union}~ \text{PathExpr} \mid\\
& \text{XPath} ~\text{intersect}~ \text{PathExpr} \mid \text{XPath} ~\text{except}~ \text{PathExpr}
\end{align*}
\caption{Syntax of Core XPath Expressions.}\label{fig:xpath-syntax}
\end{figure}

\begin{figure}
\begin{align*}
\pathsem{ \text{Axis}\text{::}\text{NameTest} } =& \{ (x,y) \in N^2 \mid x(\text{Axis})y \text{ and } \\
& y \text{ satisfies } \text{NameTest} \} \\
\pathsem{ /\text{PathExpr} } =& \{ (r,y) \in \pathsem{\text{PathExpr} } \mid \\
& r \text{ is the root} \} \\
\pathsem{ P_1/P_2} =& \pathsem{P_1} \circ \pathsem{P_2}\\
\pathsem{P_1 ~\text{union}~ P_2} =& \pathsem{P_1} \cup \pathsem{P_2}\\
\pathsem{ P_1 ~\text{intersect}~ P_2 } =& \pathsem{P_1} \cap \pathsem{P_2}\\
\pathsem{ P_1 ~\text{except}~ P_2 } =& \pathsem{P_1} \setminus \pathsem{P_2}\\
\pathsem{ \text{PathExpr}[\text{Qualifier}] } = & \{ (x,y) \in \pathsem{\text{PathExpr}} \mid \\
& y \in \qualifsem{\text{Qualifier}} \}\\
\\
\qualifsem{ \text{PathExpr} } =& \{ x \mid \exists y. (x,y) \in \pathsem{\text{PathExpr}} \} \\
\qualifsem{\qualifcount{\text{PathExpr}}~\text{Comp}~k } =& \{ x \in N \mid \\& \card{\left\{y \mid (x,y) \in \pathsem{\text{PathExpr}} \right\}} \\
& \text{satisfies } \text{Comp}~k \} \\
\qualifsem{ \text{not}~ Q } =& N \setminus \qualifsem{Q}\\
\qualifsem{Q_1 ~\text{and}~ Q_2} =& \qualifsem{Q_1} \cap \qualifsem{Q_2} \\
\qualifsem{ Q_1 ~\text{or}~ Q_2 } =& \qualifsem{Q_1} \cup \qualifsem{Q_2}
\end{align*}
\caption{Semantics of Core XPath Expressions.}\label{fig:xpath-semantics}
\end{figure}

It was already observed in \cite{DBLP:conf/doceng/GenevesR05,Balder09} that using positional information in paths reduces to counting (at the cost of an exponential blow-up). For example, the expression
\begin{verbatim}
child::a[position()=5]
\end{verbatim}
first selects the ``\texttt{a}'' nodes occurring as children of the current context node, and then keeps the one occurring at the $5$th position.
This expression can be rewritten into the semantically equivalent expression
\begin{verbatim}
child::a[count(preceding-sibling::a)=4]
\end{verbatim}
which constrains the number of preceding siblings named ``\texttt{a}'' to $4$, so that the qualifier becomes true only for the $5$th ``\texttt{a}'' child. A general translation of positional information in terms of counting operators \cite{DBLP:conf/doceng/GenevesR05,Balder09} is summarized in Figure~\ref{fig:positional-info}, where $\ll$ denotes the document order (depth-first left-to-right) relation in a tree. Note that translated path expressions can in turn be expressed in the core XPath fragment of Figure~\ref{fig:xpath-syntax} (at the cost of another exponential blow-up). Indeed, expressions like $\text{PathExpr}/(\text{PathExpr}_2 ~\text{except}~ \text{PathExpr}_3)/\text{PathExpr}_4$ must be rewritten into expressions where binary connectives for paths occur only at top level, as in:
\begin{align*}
& \text{PathExpr}/\text{PathExpr}_2/\text{PathExpr}_4 ~\text{except}~ \\
& \text{PathExpr}/\text{PathExpr}_3/\text{PathExpr}_4
\end{align*}

\begin{figure}
\begin{align*}
\text{PathExpr}[\text{position}()=1] \equiv & \text{PathExpr} ~\text{except}~ (\text{PathExpr}/\ll) \\
\text{PathExpr}[\text{position}()=k+1] \equiv & (\text{PathExpr} ~\text{intersect}~ \\
& (\text{PathExpr}[k]/\!\ll))[\text{position}()\!=\!1] \\
\ll \equiv& (\text{descendant::*}) ~\text{union}~ (\text{a-or-s::*}/ \\
& \text{following-sibling::*}/\text{d-or-s::*}) \\
\text{a-or-s::*} \equiv & \text{ancestor::*} ~\text{union}~ \text{self::*}\\
\text{d-or-s::*} \equiv & \text{descendant::*} ~\text{union}~ \text{self::*}
\end{align*}
\caption{Positional Information as Syntactic Sugar \cite{DBLP:conf/doceng/GenevesR05,Balder09}.}\label{fig:positional-info}
\end{figure}

We focus on Core XPath expressions involving the counting operator (see Figure~\ref{fig:xpath-syntax}). The XPath fragment without the counting operator (the navigational fragment) was already linearly translated into the $\mu$-calculus in \cite{geneves-pldi07}. The contributions presented in this paper allow us to equip this navigational fragment with counting features such as the ones formulated above.

Logical formulas capture the aforementioned XPath counting constraints. For example, consider the following XPath expression:
\begin{verbatim}
child::a[count(descendant::b[parent::c])>5]
\end{verbatim}
This expression selects the child nodes named ``\texttt{a}'' provided they have more than $5$ descendants that (1) are named ``\texttt{b}'' and (2) have a parent named ``\texttt{c}''. The logical formula denoting the set of child nodes named ``\texttt{a}'' is:
\[\psi = a\wedge \modalf{\medtriangleleft^*,\invfc}\top \]
The logical translation of the above XPath expression is:
\[\psi\wedge\modalf{\fc}{\countf{}{(\fc|\ns)^\star}{(b \wedge \ufixpf{\var}{\modalf{\invfc}{c}\vee \modalf{\medtriangleleft}{\var}} )}{>}{5}}\]
This formula holds for the nodes selected by the XPath expression. A correspondence between the main XPath axes over unranked trees and modal formulas over binary trees is given in Figure~\ref{fig:axes-modalities}. In this figure, each logical formula holds for the nodes selected by the corresponding XPath axis from a context $\gamma$.
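The positional-to-counting rewriting above is easy to sanity-check with an off-the-shelf XPath engine. The following small test (our own, again using \texttt{lxml}) compares both expressions on a sample tree:
\begin{verbatim}
# child::a[position()=5] versus its counting rewrite.
from lxml import etree

ctx = etree.fromstring("<r><a/><b/><a/><a/><b/><a/><a/><a/></r>")

positional = ctx.xpath("child::a[position()=5]")
counting   = ctx.xpath("child::a[count(preceding-sibling::a)=4]")
assert positional == counting and len(positional) == 1
\end{verbatim}
Of course, such testing only checks particular trees; deciding the equivalence for \emph{all} trees is precisely the kind of problem delegated to the logic.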
\begin{figure}
\begin{center}
$\begin{array}{r|l}
\text{Path} & \text{Logical formula}\\
\hline
\gamma/\text{self::*} & \gamma \\
\gamma/\text{child::*} & \modalf{\medtriangleleft^*, \invfc}\gamma \\
\gamma/\text{parent::*} & \modalf{\fc}{\modalf{\ns^*}{\gamma}} \\
\gamma/\text{descendant::*} & \modalf{(\medtriangleleft\mid\invfc)^*, \invfc}\gamma\\
\gamma/\text{ancestor::*} & \modalf{\fc}{\modalf{(\fc\mid\ns)^*}{\gamma}}\\
\gamma/\text{following-sibling::*} & \modalf{\medtriangleleft}{\modalf{\medtriangleleft^*}{\gamma}}\\
\gamma/\text{preceding-sibling::*} & \modalf{\ns}{\modalf{\ns^*}{\gamma}} \\
\end{array}$
\end{center}
\caption{XPath axes as modalities over binary trees.}\label{fig:axes-modalities}
\end{figure}

Let us consider another example (XPath expression $e_1$):
\begin{verbatim}
child::a/child::b[count(child::e/descendant::h)>3]
\end{verbatim}
Starting from a given context in a tree, this XPath expression navigates to child nodes named ``a'' and selects their children named ``b''. Finally, it retains only those ``b'' nodes for which the qualifier between brackets holds. The first path can be translated into the logic as follows:
$$ \vartheta = b \wedge \ufixpf{\var}{\modalf{\invfc}{(a \wedge \ufixpf{\var^\prime}{\modalf{\invfc}{\top} \vee \modalf{\medtriangleleft}{\var^\prime}}}) \vee \modalf{\medtriangleleft}{\var}}$$
The counting part requires a more sophisticated translation in the logic. This is because it leaves implicit that ``e'' nodes (whose existence is simply tested for counting purposes) must be children of selected ``b'' nodes. The translation of the full aforementioned XPath expression is as follows:
$$\vartheta \wedge @n \wedge \countf{}{(\invfc\mid\medtriangleleft)^*, (\fc\mid\ns)^*}{\eta}{>}{3}$$
where $@n$ is a fresh nominal used to mark a ``b'' node filtered by the qualifier, and the formula $\eta$ describes the counted ``h'' nodes:
$$\eta = h \wedge \ufixpf{\var}{\modalf{\invfc}{(e \wedge \ufixpf{\var^\prime}{\modalf{\invfc}{@n} \vee \modalf{\medtriangleleft}{\var^\prime}}}) \vee \modalf{\medtriangleleft}{\var} \vee \modalf{\invfc}{\var}}$$
Intuitively, the general idea behind the translation is to first translate the leading path, use a fresh nominal to mark a node which is filtered, then find more than $3$ instances of ``h'' nodes from which the marked node can be reached back via the inverse path of the counting formula. Since trails make it possible to navigate but not to test properties (like the existence of labels), we test for labels in the counted formula $\eta$ and we use a general navigation $(\invfc\mid\medtriangleleft)^*, (\fc\mid\ns)^*$ to look for counted nodes everywhere in the tree. Introducing the nominal is necessary to bind the context properly (without loss of information). Indeed, the XPath expression $e_1$ leaves implicit that an ``e'' node must be a child of a ``b'' node selected by the outer path. Using the nominal, we restore this property by connecting the counted nodes to the initial single context node.

\begin{lemma}
The translation of Core XPath expressions with counting constraints into the logic is linear.
\label{xpathtranslationlemma}
\end{lemma}
This is proven by structural induction in a similar manner to \cite{geneves-pldi07} (in which the translation is proven for expressions without counting constraints).
For counting formulas, the use of nominals and of the general (constant-size) counting trail makes it possible to avoid duplicating trails, so that the translation remains linear. We can now address several decision problems such as equivalence, containment, and emptiness of XPath expressions. These decision problems are reduced to testing satisfiability in the logic (in the manner of \cite{geneves-pldi07}). We present in Section~\ref{sec:algo} a satisfiability testing algorithm with a single exponential complexity with respect to the formula size. In \cite{geneves-pldi07}, it was shown that the logic is also able to capture XML schema languages. This allows testing the XPath decision problems in the presence of XML types. We now show that our logic can also succinctly express cardinality constraints on XML types.
\subsection{Regular Tree Languages with Cardinality Constraints}

Regular tree grammars capture most of the schemas in use today \cite{DBLP:journals/toit/MurataLMK05}. The logic can express all regular tree languages: it is easy to prove that regular expression types in the manner of, e.g., \cite{DBLP:journals/toplas/HosoyaVP05} can be linearly translated into the logic (see \cite{geneves-pldi07}).
In practice, schema languages often provide shorthands for expressing cardinality constraints on node occurrences. XML Schema notably offers two attributes {\em minOccurs} and {\em maxOccurs} for this purpose. For instance, the following XML schema definition:
\begin{Verbatim}[fontsize=\small]
<xsd:element name="a">
 <xsd:complexType>
  <xsd:sequence>
   <xsd:element name="b" minOccurs="4" maxOccurs="9"/>
  </xsd:sequence>
 </xsd:complexType>
</xsd:element>
\end{Verbatim}
\noindent is a notation that restricts the number of occurrences of ``\texttt{b}'' nodes to be at least 4 and at most 9, as children of ``\texttt{a}'' nodes. The goal here is to have a succinct notation for expressing regular languages which could otherwise be exponentially large if written with the usual regular expression operators. The above regular requirement can be translated as the formula:
\[\phi \wedge \modalf{\fc}{\left(\countf{}{\ns^\star}{b}{>}{3} \wedge \countf{}{\ns^\star}{b}{\leq}{9}\right)}\]
where $\phi$ corresponds to the regular tree type $a[b^*]$ as follows:
\[\begin{array}{ll} \phi =&(a \wedge (\neg\modalf{\fc}{\top} \vee \modalf{\fc}{\psi})) \wedge \neg\modalf{\ns}{\top} \vspace{0.1cm}\\ \psi =&\mu x. \left(b \wedge \neg\modalf{\fc}{\top} \wedge \neg\modalf{\ns}{\top}\right) \vee \left(b \wedge \neg\modalf{\fc}{\top} \wedge \modalf{\ns}{x}\right) \end{array}\]
This example only involves counting over children nodes. The logic allows counting along more general trails, in particular arbitrarily deep ones. Trails corresponding to the XPath axes ``preceding'', ``ancestor'', and ``following'' can be used to constrain the context of a schema. The ``descendant'' trail can be used to specify additional constraints over the subtree defined by a given schema. For instance, suppose we want to forbid webpages containing nested anchors ``$a$'' (whose interpretation makes no sense for web browsers). We can build the logical formula $f$ which is the conjunction of a considered schema for webpages (e.g., XHTML) with the translation of the XPath expression $a/\text{descendant::}a$. Nested anchors are forbidden by the considered schema iff $f$ is unsatisfiable. As another example, suppose we want paragraph nodes (``$p$'' nodes) not to be nested inside more than 3 unordered lists (``$ul$'' nodes), regardless of the schema defining the context. One may check for the unsatisfiability of the following formula:
\[ p\wedge \countf{}{(\invfc|\invns)^\star,\invfc}{ul}{>}{3} \]
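As an aside, the {\em minOccurs}/{\em maxOccurs} translation is mechanical enough to be scripted. The following is a minimal sketch of ours (not part of the formal development), assuming a plain textual syntax for the logic in which \texttt{<1>} is the first-child modality, \texttt{2*} the next-sibling trail, and \texttt{\#(t) f > n} a counting formula:
\begin{Verbatim}[fontsize=\small]
# A minimal sketch (ours), assuming the textual syntax described above.

def occurs(name: str, min_occurs: int, max_occurs: int) -> str:
    """Translate minOccurs/maxOccurs bounds on children named `name`
    into counting formulas like the one used in the example above."""
    at_least = f"#(2*) {name} > {min_occurs - 1}"  # at least min_occurs
    at_most = f"#(2*) {name} <= {max_occurs}"      # at most max_occurs
    if min_occurs > 0:
        return f"<1>({at_least} & {at_most})"
    # minOccurs = 0: having no child at all is also allowed
    return f"<1>({at_most}) | ~<1>T"

print(occurs("b", 4, 9))  # <1>(#(2*) b > 3 & #(2*) b <= 9)
\end{Verbatim}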
\section{Satisfiability Algorithm}
\label{sec:algo}
We present a tableau-based algorithm for checking satisfiability of formulas. Given a formula, the algorithm seeks to build a tree containing a node selected by the formula. We show that our algorithm is correct and complete: a satisfying tree is found if and only if the formula is satisfiable. We also show that the time complexity of our algorithm is exponential in the size of the formula.

\subsection{Overview}
The algorithm operates in two stages. First, a formula $\form$ is decomposed into a set of subformulas, called the \emph{lean}. The lean gathers all subformulas that are useful for determining the truth status of the initial formula, while eliminating redundancies. For instance, conjunctions and disjunctions are eliminated at this stage. More precisely, the lean (defined in \ref{sec:lean}) mainly gathers atomic propositions and modal subformulas. From the lean, one may gather a finite number of formulas, called a $\form\!-\!$node, which may be satisfied at a given node of a tree. Trees of $\form\!-\!$nodes represent the exhaustive search universe in which the algorithm looks for a satisfying tree.

The second stage of the algorithm consists in building sets of such trees in a bottom-up manner, ensuring consistency at each step. Initially, all possible leaves (i.e., $\form\!-\!$nodes that do not require children) are considered. During further steps, the algorithm considers every possible $\form\!-\!$node that can be connected with a tree of the previous steps, checking for consistency. For instance, if a formula at a $\form\!-\!$node $n$ involves a forward modality $\modalf{\fc}{\form'}$, then $\form'$ must be verified at the first child of $n$. Conversely, due to converse modalities, a $\form\!-\!$node may impose restrictions on its possible parent nodes. The new trees that are built may involve converse modalities, which will be satisfied during further steps of the algorithm. To ensure that the algorithm terminates, a bound is imposed on the number of times each $\form\!-\!$node may occur in the tree. Finally, the algorithm terminates whenever:
\begin{itemize}
\item either a tree that satisfies the initial formula has been found, and its root does not contain any pending (unproven) backward modality; or
\item every tree has been considered (the exploration of the whole search universe is complete): the formula is unsatisfiable.
\end{itemize}
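Before detailing each notion, we summarize the overall loop. The following is a minimal sketch of ours, in which \texttt{leaves}, \texttt{extensions} and \texttt{satisfies} are hypothetical helpers standing for the notions made precise below (possible leaf $\form\!-\!$nodes, consistent one-step extensions, and the final satisfaction check, respectively):
\begin{Verbatim}[fontsize=\small]
# A minimal sketch (ours) of the two-stage search described above;
# the helpers are placeholders for the notions of this section.

def satisfiable(phi) -> bool:
    ST = set(leaves(phi))            # stage 1: all possible leaf nodes
    while not satisfies(ST, phi):    # stop when a witness tree is found
        new = extensions(ST, phi)    # plug earlier trees under new nodes
        if new <= ST:                # no new tree: search space exhausted
            return False
        ST |= new
    return True
\end{Verbatim}

\subsection{Preliminaries}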
To track where counting formulas are satisfied, we annotate each one with a fresh \emph{counting proposition} $c$, yielding formulas of the form $\countf{c}{\trail}{\form}{\#}{\natn}$. To define the notions of lean and $\form\!-\!$nodes, we need to extract navigating formulas from counting formulas (Figure~\ref{fig-nav}).
\begin{figure}
\begin{align*}
nav(\var) &= \var & nav(\prop) &= \prop \\
nav(\top) &= \top & nav(c) &= c\\
nav(\neg \prop) &= \neg \prop & nav(\neg \modalf{\modals}{\top})&= \neg \modalf{\modals}{\top}
\end{align*}
\begin{align*}
nav(\form_1\wedge \form_2)&= nav(\form_1) \wedge nav(\form_2) \\
nav(\form_1\vee \form_2)&= nav(\form_1) \vee nav(\form_2) \\
nav(\modalf{\modals}{\form}) &= \modalf{\modals}{nav(\form)} \\
nav(\ufixpf{\var}{\restrictedform}) &= \ufixpf{\var}{nav(\restrictedform)}\\
nav(\countf{c}{\trail}{\restrictedform}{>}{\natn})&= \nav{\trail}{\restrictedform\wedge c}\\
nav(\countf{c}{\trail}{\restrictedform}{\leq}{\natn})&= \nav{\trail}{(\restrictedform \land c) \lor (\neg\restrictedform \land \neg c)}\\
\nav{\epsilon}{\restrictedform} &= \restrictedform \\
\nav{\modals}{\restrictedform} &= \modalf{\modals}{\restrictedform}\\
\nav{\trail_1,\trail_2}{\restrictedform}&= \nav{\trail_1}{\nav{\trail_2}{\restrictedform}} \\
\nav{\trail_1\mid\trail_2}{\restrictedform} &= \nav{\trail_1}{\restrictedform} \vee \nav{\trail_2}{\restrictedform}\\
\nav{\trail^\star}{\restrictedform} &= \ufixpf{\var}{nav(\restrictedform)\vee\nav{\trail}{\var}}
\end{align*}
\caption{Navigation extraction from counting formulas}
\label{fig-nav}
\end{figure}
We now define the {\em Fischer-Ladner} relation to extract subformulas. In the following, $i$ ranges over $1$ and $2$.
\begin{align*}
& \fishrel{\form_1\wedge\form_2}{\form_i}, && \fishrel{\form_1\vee\form_2}{\form_i}, \\
& \fishrel{\ufixpf{\var}{\form}}{\form[\subst{\ufixpf{\var}{\form}}{\var}]}, && \fishrel{\countf{c}{\trail}{\restrictedform}{\#}{\natn}}{nav(\countf{c}{\trail}{\restrictedform}{\#}{\natn})},\\
& \fishrel{\modalf{\modals}\phi}{\phi}.
\end{align*}
The {\em Fischer-Ladner} closure of a formula $\form$, written $\flcl{\form}$, is the set defined as follows:
\begin{align*}
& \flcl{\form}_0 &&=&& \{\form\}, \\
& \flcl{\form}_{i+1} &&=&& \flcl{\form}_i \cup \{\form^\prime \mid \fishrel{\form^{\prime\prime}}{\form^\prime}, \form^{\prime\prime}\in \flcl{\form}_i\},\\
& \flcl{\form}&&=&&\flcl{\form}_k,
\end{align*}
where $k$ is the smallest integer such that $\flcl{\form}_k=\flcl{\form}_{k+1}$. Note that this set is finite, since only one expansion of each fixpoint formula is required to produce all its subformulas in the closure.
\label{sec:lean}
The {\em lean} of a formula $\form$ is a set of formulas containing the navigating formulas of the form $\modalf{\modals}{\top}$, every navigating formula of the form $\modalf{\modals}{\restrictedform}$ (i.e., that does not contain counting formulas) from $\flcl{\form}$, every proposition occurring in $\form$, written $\Props_{\form}$, every counting proposition, written $C$, and an extra proposition that does not occur in $\form$, used to represent all other names, written $p_{\overline\form}$.
\begin{equation*}
\lean{\form}=\{\modalf{\modals}{\top}\}\cup \{\modalf{\modals}{\restrictedform}\in \flcl{\form}\} \cup \Props_{\form} \cup C \cup \{p_{\overline\form}\}
\end{equation*}
A {\em $\form\!-\!$node}, written $\fnode{\form}$, is a subset of $\lean{\form}$ such that:
\begin{itemize}
\item exactly one proposition from $\Props_{\form} \cup \{p_{\overline\form}\}$ is present;
\item when $\modalf{\modals}{\restrictedform}$ is present, then $\modalf{\modals}{\true}$ is present; and
\item $\modalf{\invfc}{\true}$ and $\modalf{\invns}{\true}$ cannot both be present (a node has at most one parent edge).
\end{itemize}
The set of $\form\!-\!$nodes is written $\fNodes{\form}$. Intuitively, a node $\fnode{\form}$ corresponds to the formula
\[ \fnode{\form} = \bigwedge_{\psi \in \fnode{\form}} \psi \wedge \bigwedge_{\psi \in \lean{\form} \setminus \fnode{\form}} \neg \psi \]
When the formula $\form$ under consideration is fixed, we often omit the superscript.
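Seen operationally, $\form\!-\!$nodes can be enumerated directly from the lean. The following minimal sketch of ours illustrates the three side conditions, in an ad-hoc representation (propositions as strings; modal formulas as pairs $(m,\psi)$ with $m$ among \texttt{1}, \texttt{2}, \texttt{-1}, \texttt{-2}, and \texttt{T} standing for $\top$); counting propositions are left out for brevity:
\begin{Verbatim}[fontsize=\small]
from itertools import chain, combinations

# A minimal sketch (ours): propositions are plain strings; modal
# formulas are pairs (m, psi); counting propositions are omitted.

def phi_nodes(lean_modal, props):
    """Yield every phi-node buildable from the given lean."""
    subsets = chain.from_iterable(
        combinations(lean_modal, r) for r in range(len(lean_modal) + 1))
    for ms in subsets:
        node = set(ms)
        # <m>psi present implies <m>T present
        if any((m, "T") not in node for (m, psi) in node if psi != "T"):
            continue
        # <-1>T and <-2>T cannot both be present
        if ("-1", "T") in node and ("-2", "T") in node:
            continue
        for p in props:  # exactly one proposition per phi-node
            yield frozenset(node | {p})
\end{Verbatim}
For a lean with $q$ modal formulas, this enumerates at most $2^q\cdot|\Props_{\form} \cup \{p_{\overline\form}\}|$ candidates, in line with the $2^n$ bound used in Section~\ref{complexity}.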
A \emph{\psitree{\form}} is either the empty tree $\emptyset$, or a triple $(\fnode{\form}, \stree_1,\stree_2)$ where $\stree_1$ and $\stree_2$ are \psitree{\form}s. When clear from the context, we usually refer to \psitree{\form}s simply as trees.
\begin{figure}
\begin{mathpar}
\inferrule*{ }{\node \vdash^{\form} \top} \and
\inferrule*{\restrictedform\in\node}{\node\vdash^{\form} \restrictedform} \and
\inferrule*{\restrictedform \not\in \node}{\node\vdash^{\form} \neg \restrictedform} \and
\inferrule*{\node\vdash^{\form} \restrictedform_1 \\ \node\vdash^{\form} \restrictedform_2}{\node\vdash^{\form} \restrictedform_1\wedge\restrictedform_2} \and
\inferrule*{\node\vdash^{\form} \restrictedform_1}{\node\vdash^{\form} \restrictedform_1\vee\restrictedform_2} \and
\inferrule*{\node\vdash^{\form} \restrictedform_2}{\node\vdash^{\form} \restrictedform_1\vee\restrictedform_2} \and
\inferrule*{\node\vdash^{\form} \restrictedform\{\subst{\ufixpf{\var}{\restrictedform}}{\var}\}}{\node\vdash^{\form} \ufixpf{\var}{\restrictedform}}
\end{mathpar}
\caption{Local entailment relation: between nodes and formulas}
\label{fig:entailmentnode}
\end{figure}
We now turn to the definition of consistency of a \psitree{\form}. To this end, we define an entailment relation between a node and a formula in Figure~\ref{fig:entailmentnode}. Two nodes $\node_1$ and $\node_2$ are consistent under a modality $\modals \in \{\fc,\ns\}$, written $\fbrel{\form}{\node_1}{\modals}{\node_2}$, iff
\begin{align*}
\forall \modalf{\modals}\psi \in \lean{\form}&,\modalf{\modals}\psi \in \node_1 \iff \node_2 \vdash^{\form} \psi\\
\forall \modalf{\dual\modals}\psi \in \lean{\form}&, \modalf{\dual\modals}\psi \in \node_2 \iff \node_1 \vdash^{\form} \psi
\end{align*}
Consistency is checked each time a node is added to the tree, ensuring that the forward modalities of the node are indeed satisfied by the nodes below, and that the pending backward modalities of the nodes below are consistent with the added node. Note that counting formulas are not considered at this point, as they are globally verified in the next step.
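In the same ad-hoc representation as the previous sketch, the consistency check admits a direct reading (a minimal sketch of ours; \texttt{entails} is a hypothetical helper implementing the local entailment relation of Figure~\ref{fig:entailmentnode}):
\begin{Verbatim}[fontsize=\small]
# A minimal sketch (ours): checking that a parent node n1 and a child
# node n2 are consistent under modality m ("1" = first child,
# "2" = next sibling), given the modal formulas of the lean.

def consistent(n1, n2, m, lean_modal, entails):
    inv = {"1": "-1", "2": "-2"}[m]
    for (mm, psi) in lean_modal:
        # forward modalities of n1 must hold at n2, and only those
        if mm == m and (((mm, psi) in n1) != entails(n2, psi)):
            return False
        # backward modalities of n2 must hold at n1, and only those
        if mm == inv and (((mm, psi) in n2) != entails(n1, psi)):
            return False
    return True
\end{Verbatim}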
Upon generation of a finished tree, i.e., a tree with no pending backward modality, one may check whether a node of this tree satisfies $\form$. To this end, we first define forward navigation in a \psitree{\form} $\stree$. Given a path $\rho$ consisting of forward modalities, $\stree(\rho)$ is the node at that path; it is undefined if there is no such node.
\begin{align*}
(\node,\stree_1,\stree_2)(\epsilon) &= \node \\
(\node,\stree_1,\stree_2)(\fc\rho) &= \stree_1(\rho) \\
(\node,\stree_1,\stree_2)(\ns\rho) &= \stree_2(\rho)
\end{align*}
We also allow extending the path with backward modalities when they match the last modality of the path.
\begin{align*}
(\node,\stree_1,\stree_2)(\rho \fc \invfc) &= (\node,\stree_1,\stree_2)(\rho) \\
(\node,\stree_1,\stree_2)(\rho \ns \invns) &= (\node,\stree_1,\stree_2)(\rho)
\end{align*}
Now, we are able to define an entailment relation along paths in \psitree{\form}s in Figure~\ref{fig:countingentailment}. This relation extends the local entailment relation (Figure~\ref{fig:entailmentnode}) with checks for counting formulas. Note that the case for fixpoints is contained in the case for formulas with no counting subformula. In the ``less than'' case, we need to make sure that every node reachable through the trail is taken into account, either as counted if it satisfies $\restrictedform$, or as not counted otherwise (here $\neg \restrictedform$ denotes the negation normal form).
\begin{figure}
\begin{mathpar}
\inferrule*{\form^\prime \text{ does not contain counting formulas} \\ \stree(\rho) \vdash^\form \form^\prime}{\rho \vdash^{\form}_{\stree} \form^\prime}\and
\inferrule*{\rho\vdash^{\form}_{\stree} \form_1 \\ \rho\vdash^{\form}_{\stree} \form_2}{\rho\vdash^{\form}_{\stree} \form_1\wedge\form_2} \and
\inferrule*{\rho\vdash^{\form}_{\stree} \form_1}{\rho\vdash^{\form}_{\stree} \form_1\vee\form_2} \and
\inferrule*{\rho\vdash^{\form}_{\stree} \form_2}{\rho\vdash^{\form}_{\stree} \form_1\vee\form_2} \and
\inferrule*{\rho \modals \vdash^{\form}_{\stree}\form^\prime}{\rho \vdash^{\form}_{\stree} \modalf{\modals}\form^\prime}\and
\inferrule*{ | \{ \node^\prime \mid \rho^\prime \in \trail \wedge \stree(\rho \rho^\prime) = \node^\prime \wedge \node^\prime \vdash^{\form} \restrictedform \land c \} | > \natn }{ \rho \vdash^{\form}_{\stree} \countf{c}{\trail}{\restrictedform}{>}{\natn}} \and
\inferrule*{ | \{ \node^\prime \mid \rho^\prime \in \trail \wedge \stree(\rho \rho^\prime) = \node^\prime \wedge \node^\prime \vdash^{\form} \restrictedform \land c \} | \leq \natn \\ \forall \rho^\prime \in \trail, \stree(\rho\rho^\prime) \vdash^\form (\restrictedform \land c) \lor (\neg \restrictedform \land \neg c) }{ \rho \vdash^{\form}_{\stree} \countf{c}{\trail}{\restrictedform}{\leq}{\natn}}
\end{mathpar}
\caption{Global entailment relation (incl. counting formulas)}
\label{fig:countingentailment}
\end{figure}
We conclude these preliminaries by introducing some final notations. The {\em root} of a \psitree{\form} is defined as follows.
\begin{align*}
\stroot{\emptyset} & = \emptyset\\
\stroot{(\node,\stree_1,\stree_2)} & = \node
\end{align*}
A \psitree{\form} $\stree$ {\em satisfies} a formula $\form$, written $\stree\vdash \form$, if neither $\modalf{\invfc}\top$ nor $\modalf{\invns}\top$ occurs in $\stroot{\stree}$, and if there is a path $\rho$ such that $\rho\vdash^{\form}_{\stree} \form$. A set of trees $ST$ {\em satisfies} a formula $\form$, written $ST\vdash \form$, when there is a tree $\stree\in ST$ such that $\stree\vdash \form$.
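The forward navigation and backward extension rules above also have a direct recursive reading. The following minimal sketch of ours represents a \psitree{\form} as \texttt{None} (empty) or a triple \texttt{(node, t1, t2)}, and a path as a list of steps among \texttt{1}, \texttt{2}, \texttt{-1}, \texttt{-2}; it cancels matching forward/backward pairs, a mild generalization of the rule above:
\begin{Verbatim}[fontsize=\small]
# A minimal sketch (ours): looking up the node at a path in a psi-tree.

def at_path(tree, path):
    """Return the node reached by `path`, or None if undefined."""
    if tree is None:
        return None
    # a backward step cancels the matching forward step before it
    for i in range(len(path) - 1):
        if (path[i], path[i + 1]) in (("1", "-1"), ("2", "-2")):
            return at_path(tree, path[:i] + path[i + 2:])
    node, t1, t2 = tree
    if not path:
        return node
    if path[0] == "1":
        return at_path(t1, path[1:])
    if path[0] == "2":
        return at_path(t2, path[1:])
    return None  # unmatched backward step: undefined
\end{Verbatim}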
\subsection{The Algorithm}
We are now ready to present the algorithm, which is parameterized by $K(\form)$ (defined in Figure~\ref{fig:boundK}), the maximum number of occurrences of a given node in a path from the root of the tree to a leaf. The algorithm builds consistent candidate trees from the bottom up, and checks at each step whether one of the built trees satisfies the formula, returning $1$ when it does. As the set of nodes from which to build the trees is finite, it eventually stops and returns $0$ if no satisfying tree has been found.
\begin{algorithm}
\caption{Check Satisfiability of $\form$}
\label{satalgo}
\begin{algorithmic}
\STATE $ST \leftarrow \emptyset$
\REPEAT
\STATE $AUX \leftarrow \{(\node,\stree_1,\stree_2) \mid$ ~~\quad\qquad \textcolor{gray}{\COMMENT{we extend the trees}}\\
\quad $\mathtt{nmax}(\node,\stree_1,\stree_2) \leq K(\form) + 2$ ~\textcolor{gray}{\COMMENT{with an available node}}\\
\quad for $i$ in $\fc,\ns$ ~~~\qquad\qquad\qquad \textcolor{gray}{\COMMENT{and each child is either}}\\
\quad $\stree_i = \emptyset$ and $\modalf{i}{\top} \notin \node$ \qquad\qquad\textcolor{gray}{\COMMENT{an empty tree}}\\
\quad or $\stree_i\in ST$ ~~\quad\qquad\qquad\qquad\textcolor{gray}{\COMMENT{or a previously built tree}}\\
\quad\quad $\modalf{\dual{i}}\top \in \stroot{\stree_i}$ ~~\textcolor{gray}{\COMMENT{with pending backward modalities}}\\
\quad\quad $\fbrel{\form}{\node}{i}{\stroot{\stree_i}}\}$ \qquad\textcolor{gray}{\COMMENT{checking consistency}}
\IF{$AUX \subseteq ST$}
\RETURN{$0$} \qquad\qquad\qquad\qquad\textcolor{gray}{\COMMENT{No new tree was built}}
\ENDIF
\STATE $ST \leftarrow ST \cup AUX$
\UNTIL{$ST \vdash \form$}
\RETURN{$1$}
\end{algorithmic}
\end{algorithm}
To bound the size of the trees that are built, we restrict the number of identical nodes on a path from the root to any leaf to $K(\form) + 2$, defined in Figure \ref{fig:boundK}, using $\mathtt{nmax}$ defined as follows.
\begin{align*}
\mathtt{nmax}(\node,\stree_1,\stree_2) &= \mathtt{max}(\mathtt{nmax}(\node,\stree_1),\mathtt{nmax}(\node,\stree_2))\\
\mathtt{nmax}(\node,(\node,\stree_1,\stree_2)) & = 1 + \mathtt{nmax}(\node,\stree_1,\stree_2)\\
\mathtt{nmax}(\node,(\node^\prime,\stree_1,\stree_2)) & = \mathtt{nmax}(\node,\stree_1,\stree_2) \quad\text{if $\node \neq \node^\prime$}\\
\mathtt{nmax}(\node, \emptyset) &= 0
\end{align*}
\begin{figure}
\begin{align*}
& K(\prop) = K(\neg\prop) = K(\neg \modalf{\modals}{\true}) = K(\true) = K(\ufixpf{\var}{\restrictedform})=0 \\
& K(\form_1\wedge\form_2) = K(\form_1\vee\form_2)= K(\form_1)+K(\form_2) \\
& K(\modalf{\modals}{\form}) = K(\form) \\
& K(\countf{}{\trail}{\restrictedform}{\#}{\natn}) = \natn+1
\end{align*}
\caption{Occurrences bound}
\label{fig:boundK}
\end{figure}
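The bound $K$ is a simple structural recursion, as the following minimal sketch of ours illustrates (formulas as tagged tuples; any construct not listed contributes $0$, as in Figure~\ref{fig:boundK}):
\begin{Verbatim}[fontsize=\small]
# A minimal sketch (ours) of the bound K, for formulas represented as
# tagged tuples: ("and", f1, f2), ("or", f1, f2), ("mod", m, f),
# ("count", trail, f, op, n); anything else contributes 0.

def K(f):
    tag = f[0] if isinstance(f, tuple) else f
    if tag in ("and", "or"):
        return K(f[1]) + K(f[2])
    if tag == "mod":
        return K(f[2])
    if tag == "count":
        return f[4] + 1     # a counting bound n contributes n+1
    return 0                # propositions, top, negations, fixpoints

# Example below: p1 /\ <1>(#(2*) p2 > 2) gives K = 3
example = ("and", "p1", ("mod", "1", ("count", "2*", "p2", ">", 2)))
print(K(example))           # prints 3
\end{Verbatim}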
Consider for instance the formula $\form = \prop_1\wedge \modalf{\fc}{\countf{}{\ns^\star}{\prop_2}{>}{2}}$. The computed lean is as follows, where $\psi = \ufixpf{\var}{(\prop_2 \land c) \vee \modalf{\ns}{\var}} $.
\[\{\prop_1,\prop_2,\prop_3,c,\modalf{\fc}{\top},\modalf{\ns}{\top},\modalf{\invfc}{\top}, \modalf{\invns}{\top}, \modalf{\fc}{\psi}, \modalf{\ns}{\psi} \}\]
Names other than $\prop_1$ and $\prop_2$ are represented by $\prop_3$; $c$ identifies counted nodes. Computing the bound on nodes, we get $K(\form) = 3$. After the first step, $ST$ consists of the trees $(\{\prop_i\}, \emptyset, \emptyset)$, $(\{\prop_i,c\}, \emptyset, \emptyset)$, $(\{\prop_i, \modalf{\dual{j}}{\top}\}, \emptyset, \emptyset)$, and $(\{\prop_i, c, \modalf{\dual{j}}{\top}\}, \emptyset, \emptyset)$ with $i \in \{1,2,3\}$ and $j \in \{\fc,\ns\}$. At this point the finished trees in $ST$ are tested and found not to satisfy $\form$. After the second iteration many trees are created, but the one of interest is the following.
\[\tree_0 = (\{\prop_2,c,\modalf{\ns}{\top},\modalf{\invns}{\top}, \modalf{\ns}{\psi}\},\emptyset,(\{\prop_2,c,\modalf{\invns}{\top}\},\emptyset,\emptyset))\]
The third iteration yields the following tree.
\[\tree_1 = (\{\prop_2,c,\modalf{\ns}{\top},\modalf{\invfc}{\top}, \modalf{\ns}{\psi}\},\emptyset,\tree_0)\]
We can conclude at the fourth iteration, when we find the tree $(\{\prop_1, \modalf{\fc}{\psi}, \modalf{\fc}{\top}\},\tree_1,\emptyset)$, which satisfies $\form$ at path $\epsilon$. As the nodes at every step are different, the limit is not reached. Figure \ref{algfig} depicts a graphical representation of the example, where counted nodes (containing $c$) are drawn as thick circles.
\begin{figure}
\centering
\begin{tikzpicture}[scale=0.6]
\draw [thin,dotted] (-1,3) -- (9,3);
\draw [thin,dotted] (-1,5) -- (9,5);
\draw [thin,dotted] (-1,7) -- (9,7);
\node (p1) at (0,2) [circle,draw] {$\prop_1$};
\node (p2) at (2,2) [circle,draw] {$\prop_2$};
\node (p3) at (4,2) [circle,draw] {$\prop_3$};
\node (p10) at (5,2) {$\ldots$};
\node (p5) at (7,2) [circle,draw,very thick] {$\prop_2$};
\node (p6) at (6,2) {$\ldots$};
\node (p7) at (6,4) [circle,draw,very thick] {$\prop_2$};
\node (p8) at (5,6) [circle,draw,very thick] {$\prop_2$};
\node (p9) at (6,8) [circle,draw] {$\prop_1$};
\draw [->] (p9) to node[auto] {$\fc$} (p8);
\draw [->] (p8) to node[auto] {$\ns$} (p7);
\draw [->] (p7) to node[auto] {$\ns$} (p5);
\end{tikzpicture}
\caption{Checking $\form = \prop_1\wedge \modalf{\fc}{\countf{}{\ns^\star}{\prop_2}{>}{2}}$}
\label{algfig}
\end{figure}

\subsection{Termination}
Proving termination of the algorithm is straightforward, as only a finite number of trees may be built and the algorithm stops as soon as it cannot build a new tree.

\subsection{Soundness}
If the algorithm terminates with a candidate, we show that the initial formula is satisfiable. Let $\stree$ and $\rho$ be the \psitree{\form} and path such that $\rho \vdash^\form_\stree \form$. We build a tree from $\stree$ and show that the interpretation of $\form$ for this tree includes the node at path $\rho$. We write $\tree(\stree)$ for the tree $(\Nodes,\brel,\tlabel)$ defined as follows. We first rewrite $\stree$ such that each node $\node$ is replaced by the path to reach it (i.e., nodes are identified by their path).
\begin{align*}
path(\node,\stree_1,\stree_2) & \rightarrow (\epsilon, path(\fc,\stree_1), path(\ns,\stree_2))\\
path(\rho, (\node,\stree_1,\stree_2)) & \rightarrow (\rho, path(\rho \fc,\stree_1), path(\rho \ns,\stree_2))\\
path(\rho, \emptyset) & \rightarrow \emptyset
\end{align*}
We then define:
\begin{itemize}
\item $\Nodes = \stnodes{path(\stree)}$;
\item for every $(\rho,\stree_1,\stree_2)$ in $path(\stree)$ and $i=\fc,\ns$, if $\stree_i\neq\emptyset$ then $\brel(\rho,i)=\rho i$ and $\brel(\rho i, \dual{i}) = \rho$; and
\item for all $\rho\in\Nodes$, if $\prop\in\stree(\rho)$ then $\tlabel(\rho)=\prop$.
\end{itemize}
\begin{lemma}\label{lem:localsound}
Let $\restrictedform$ be a subformula of $\form$ with no counting formula. If $\stree(\rho) \vdash^\form \restrictedform$ then we have $\rho \in \semf{\restrictedform}{\tree(\stree)}{\emptyset}$.
\end{lemma}
\begin{proof}
We proceed by induction on the lexicographic ordering of the number of unfoldings of $\restrictedform$ required for $\tree(\stree)$, as defined by Lemma~\ref{lem:finite_unfolding}, and of the size of the formula. The base cases are $\true$, atomic or counting propositions, and negated forms. These are immediate by definition of $\semf{\restrictedform}{\tree(\stree)}{\emptyset}$. The cases for disjunction and conjunction are immediate by induction (the formula is smaller).
The case for fixpoints is also immediate by induction, as the number of unfoldings required decreases, and as $\semf{\ufixpf{\var}{\restrictedform}}{\tree(\stree)}{\emptyset} = \semf{\restrictedform\{\subst{\ufixpf{\var}{\restrictedform}}{\var}\}}{\tree(\stree)}{\emptyset}$. The last case is the presence of a modality $\modalf{\modals}\restrictedform$ in the $\form\!-\!$node $\stree(\rho)$. In this case we rely on the fact that the nodes $\stree(\rho \modals)$ and $\stree(\rho)$ are consistent to derive $\stree(\rho \modals) \vdash^\form \restrictedform$. We then conclude by induction, as the formula is smaller.
\end{proof}
\begin{theorem}[Soundness]\label{thm:soundness}
If $\rho \vdash^\form_\stree \form$ then $\rho \in \semf{\form}{\tree(\stree)}{\emptyset}$.
\end{theorem}
\begin{proof}
The proof proceeds by induction on the derivation of $\rho \vdash^\form_\stree \form$. Most cases are immediate (or rely on Lemma \ref{lem:localsound}). For the ``greater than'' counting case, we rely on the $\natn+1$ selected nodes that have to satisfy $\restrictedform \land c$, thus $\restrictedform$. In addition, in the ``less than'' case, every node that is not counted has to satisfy $\neg \restrictedform \land \neg c$, so in particular $\neg \restrictedform$. In both cases we conclude by induction.
\end{proof}

\subsection{Completeness}
Our proof proceeds in two steps. We first build a \psitree{\form} that satisfies the formula, then we show it is actually built by the algorithm. As the proof is quite complex, we devote some space to detailing it.

Assume that the formula $\form$ is satisfied by some tree $\tree$. We consider the smallest such tree (i.e., the tree with the fewest nodes) and fix $\node^\star$, a node witnessing satisfiability. We now build a \psitree{\form} homomorphic to $\tree$, called the lean-labeled version of $\tree$, written $\stree(\tree,\form)$. To this end, we start by annotating counted nodes with their corresponding counting propositions, yielding a new tree $\tree_c$. Starting from $\node^\star$ and by induction on $\form$, we proceed as follows. For formulas with no counting subformula, including recursion, we stop. For conjunctions and disjunctions of formulas, we recursively annotate according to both subformulas. For modalities, we recursively annotate from the node under the modality. For $\countf{c}{\trail}{\restrictedform}{\leq}{\natn}$, we annotate every selected node with the counting proposition corresponding to the formula. For $\countf{c}{\trail}{\restrictedform}{>}{\natn}$, we annotate exactly $\natn+1$ selected nodes.

We now extend the semantics of formulas to take counting propositions and annotated nodes into account, written $\semfc{\cdot}{\tree}{\valuation}$. The definition is identical to Figure \ref{formsem}, with one addition and two changes. The addition is for counting propositions, which we define as $\node \in \semfc{c}{\tree}{\valuation}$ iff $\node$ is annotated by $c$. The two changes are for counting formulas, which we define as follows, where we select only nodes that are annotated.
\begin{align*}
\semfc{\countf{c}{\alpha}{\form^\prime}{\leq}{\natn}}{\tree}{\valuation} &= \{\node \mid |\{\node^\prime\in \semfc{\form^\prime}{\tree}{\valuation} \cap \semfc{c}{\tree}{\valuation} \mid \path{\node}{\alpha}{\node^\prime}\}|\leq \natn \}\\
\semfc{\countf{c}{\alpha}{\form^\prime}{>}{\natn}}{\tree}{\valuation} &= \{\node \mid |\{\node^\prime\in \semfc{\form^\prime}{\tree}{\valuation} \cap \semfc{c}{\tree}{\valuation} \mid \path{\node}{\alpha}{\node^\prime}\} |> \natn \}
\end{align*}
We show that this modification of the semantics does not change the satisfiability of the formula.
\begin{lemma}\label{lem:annotated_satisfiability}
We have $\node^\star \in \semfc{\form}{\tree}{\emptyset}$.
\end{lemma}
\begin{proof}
We proceed by induction on the derivation $\node^\star \in \semf{\form}{\tree}{\emptyset}$. The cases where no counting formula is involved, thus including fixpoints, are immediate, as the selected nodes are identical. The disjunction, conjunction, and modality cases are also immediate by induction. The interesting cases are the counting formulas. For $\countf{c}{\trail}{\restrictedform}{>}{\natn}$, as exactly $\natn+1$ nodes are annotated, the property holds by induction. For $\countf{c}{\trail}{\restrictedform}{\leq}{\natn}$, we rely on the fact that every counted node is annotated. We conclude by remarking that $\restrictedform$ does not contain a counting formula, thus we have $\semfc{\restrictedform}{\tree}{\valuation} = \semf{\restrictedform}{\tree}{\valuation}$ and $\semfc{\neg \restrictedform}{\tree}{\valuation} = \semf{\neg \restrictedform}{\tree}{\valuation}$.
\end{proof}
To every node $\node$, we associate $\fnode{\form}$, the largest subset of formulas of the lean selecting the node:
\begin{equation*}
\fnode{\form} = \{\form_0\mid \node \in \semf{\form_0}{\tree}{\emptyset} ,\form_0\in\lean{\form}\}
\end{equation*}
This is a $\form\!-\!$node, as it contains exactly one proposition, and if it includes a modal formula $\modalf{\modals}{\psi}$, then it also includes $\modalf{\modals}{\top}$. The tree $\stree(\tree,\form)$ is then built homomorphically to $\tree$. In the remainder of this section, we write $\stree$ for $\stree(\tree,\form)$. We now check that $\stree$ is consistent, starting with local consistency.
\begin{figure}
\begin{mathpar}
\inferrule*{\psi \in \lean{\form}}{\psi \stackrel{.}{\in} \lean{\form}}\and
\inferrule*{\psi_1 \stackrel{.}{\in} \lean{\form} \\ \psi_2 \stackrel{.}{\in} \lean{\form}}{\psi_1 \land \psi_2 \stackrel{.}{\in} \lean{\form}} \and
\inferrule*{\psi_1 \stackrel{.}{\in} \lean{\form} \\ \psi_2 \stackrel{.}{\in} \lean{\form}}{\psi_1 \lor \psi_2 \stackrel{.}{\in} \lean{\form}} \and
\inferrule*{ }{\true \stackrel{.}{\in} \lean{\form}} \and
\inferrule*{\psi \in (\Props_{\form} \cup \{\modalf{\modals}{\true}\} \cup C) }{\neg\psi \stackrel{.}{\in} \lean{\form}}
\end{mathpar}
\caption{Formula induced by a lean}
\label{fig:induced_by_lean}
\end{figure}
In the following, we say a formula $\psi$ is induced by the lean of $\form$, written $\psi \stackrel{.}{\in} \lean{\form}$, if it is a boolean combination of subformulas from the lean, as defined in Figure~\ref{fig:induced_by_lean}.
\begin{lemma}\label{lem:formulas_induced}
Let $\modalf{\modals}\psi$ be a formula in $\lean{\form}$, and let $\psi^\prime$ be $\psi$ after unfolding its fixpoint formulas not under modalities. We have $\psi^\prime \stackrel{.}{\in} \lean{\form}$.
\end{lemma}
\begin{proof}
By definition of the lean and of the $\stackrel{.}{\in}$ relation.
\end{proof}
\begin{lemma}\label{lem:complete_entailment}
Let $\psi$ be a formula induced by $\lean{\form}$. We have $\node \in \semfc{\psi}{\tree}{\emptyset}$ if and only if $\fnode{\form} \vdash^{\form} \psi$.
\end{lemma}
\begin{proof}
We proceed by induction on $\psi$. The base cases (the formula is in the $\form\!-\!$node, or is the negation of a lean formula not in the $\form\!-\!$node) hold by definition of $\fnode{\form}$. The inductive cases are straightforward, as these formulas only contain fixpoints under modalities.
\end{proof}
\begin{lemma}\label{lem:complete_consistent}
Let $\node_1$ and $\node_2$ be such that $\brel(\node_1,\modals) = \node_2$ with $\modals \in \{\fc,\ns\}$. We have $\fbrel{\form}{\fnode{\form}_1}{\modals}{\fnode{\form}_2}$.
\end{lemma}
\begin{proof}
Let $\modalf{\modals}\psi$ be a formula in $\lean{\form}$. We show that $\modalf{\modals}\psi \in \fnode{\form}_1 \iff \fnode{\form}_2 \vdash^\form \psi$. We have $\modalf{\modals}\psi \in \fnode{\form}_1$ if and only if $\node_1 \in \semfc{\modalf{\modals}\psi}{\tree}{\emptyset}$ by definition of $\fnode{\form}_1$, which in turn holds if and only if $\node_2 = \brel(\node_1,\modals) \in \semfc{\psi}{\tree}{\emptyset}$. We now consider $\psi^\prime$, which is $\psi$ after unfolding its fixpoint formulas not under modalities. We have $\semfc{\psi^\prime}{\tree}{\emptyset} = \semfc{\psi}{\tree}{\emptyset}$, and we conclude by Lemmas \ref{lem:formulas_induced} and \ref{lem:complete_entailment}.
\end{proof}
We now turn to global consistency, taking counting formulas into account.
\begin{lemma}\label{comple1}
Let $\form_s$ be a subformula of $\form$, and $\rho$ be a path from the root in $\tree$ such that $\tree(\rho) \in \semfc{\form_s}{\tree}{\emptyset}$. We then have $\rho \vdash^{\form}_{\stree} \form_s$.
\end{lemma}
\begin{proof}
We proceed by induction on $\form_s$. If $\form_s$ does not contain any counting formula, we consider $\form_s^\prime$, which is $\form_s$ after unfolding its fixpoint formulas not under modalities. We have $\semfc{\form_s^\prime}{\tree}{\emptyset} = \semfc{\form_s}{\tree}{\emptyset}$ and $\form_s^\prime \stackrel{.}{\in} \lean{\form}$. We conclude by Lemma \ref{lem:complete_entailment}. For most inductive cases, the proof is immediate by induction, as the formula size decreases. For $\countf{c}{\trail}{\restrictedform}{>}{\natn}$, we have by induction, for every counted node, $\rho\rho^\prime \vdash^\form_\stree \restrictedform$ and $\rho\rho^\prime \vdash^\form_\stree c$. We conclude by the conjunction rule and by the counting rule of Figure \ref{fig:countingentailment}. For $\countf{c}{\trail}{\restrictedform}{\leq}{\natn}$, we proceed as above for the counted nodes. For the nodes that are not counted, we have $\stree(\rho\rho^\prime) \vdash^\form \neg\restrictedform$ by Lemma \ref{lem:complete_entailment} (since $\neg\restrictedform \stackrel{.}{\in} \lean{\form}$). We conclude by remarking that the node is not annotated by $c$, hence $\stree(\rho\rho^\prime) \vdash^\form \neg c$.
\end{proof}
We next show that the \psitree{\form} $\stree$ is actually built by the algorithm. The proof closely follows the one from \cite{geneves-pldi07}, with a crucial exception: we need to make sure there are enough instances of each $\form\!-\!$node. Indeed, in \cite{geneves-pldi07}, the algorithm uses a given $\form\!-\!$node (there called a $\form$type, a subset of $\lean{\form}$) at most once on each branch from the root to a leaf of the built tree. This yields a simple condition to stop the algorithm and conclude that the formula is unsatisfiable.
However, in the presence of counting formulas, a given $\form\!-\!$node may occur more than once on a branch. To maintain termination of the algorithm, we bound the number of identical $\form\!-\!$nodes that may be needed by $K(\form)$, as defined in Figure \ref{fig:boundK}. We thus need to check that this bound is sufficient to build a tree for any satisfiable formula. We recall that $\form$ is a satisfiable formula, $\tree$ is a smallest tree such that $\form$ is satisfied, and $\node^\star$ is a witness of satisfiability. We proceed in two steps: first we show that counted nodes (with counting propositions) imply a bound on the number of identical $\form\!-\!$nodes on a branch for a smallest tree. Second, we show that this minimal marking is bounded by $K(\form)$.

In the following, we call counted nodes and the node $\node^\star$ \emph{annotations}. We define the \emph{projection} of an annotation on a path. Let $\rho$ be a path from the root of the tree to a leaf. An annotation projects on $\rho$ at $\rho_1$ if $\rho = \rho_1 \rho_2$, the annotation is at $\rho_1 \rho_m$, and $\rho_2$ and $\rho_m$ share no non-empty prefix.
\begin{lemma}\label{lem:annotation_present}
Let $\stree^\prime$ be the annotated tree, $\rho$ a path from the root of the tree to a leaf, and $\node_1$ and $\node_2$ two distinct nodes of $\rho$ such that $\fnode{\form}_1 = \fnode{\form}_2$. Then either annotations project on $\rho$ at both $\node_1$ and $\node_2$, or an annotation projects strictly between $\node_1$ and $\node_2$.
\end{lemma}
\begin{proof}
We proceed by contradiction: we assume there is no annotation that projects between $\node_1$ and $\node_2$, and that at most one of them has an annotation that projects on it. Without loss of generality, we assume that $\node_2$ is below $\node_1$ in the tree.

Assume neither $\node_1$ nor $\node_2$ is annotated (through projection). We consider the tree $\stree_s$ where $\node_2$ is ``grafted'' upon $\node_1$. Formally, let $\rho_1$ be the path to $\node_1$ and $\rho_1\rho_2$ the path to $\node_2$. We remove every node whose path is of the form $\rho_1\rho_3$ where $\rho_2$ is not a prefix of $\rho_3$, and we also remove node $\node_2$. The mapping $\brel^\prime$ from nodes and modalities to nodes is the same as before for the nodes that are kept, except for $\node_1$, where $\brel^\prime(\node_1,\fc) = \brel(\node_2,\fc)$ and $\brel^\prime(\node_1,\ns) = \brel(\node_2,\ns)$. For every path $\rho^\prime$ of $\stree$, let $\rho^\prime_s$ be the potentially shorter path, if it exists (i.e., if it was not removed when pruning the tree). More precisely, if $\rho^\prime = \rho^\prime_1 \rho^\prime_3$ where $\rho^\prime_1$ is a prefix of $\rho_1$ and the paths are disjoint from there, then $\stree_s(\rho^\prime) = \stree(\rho^\prime)$. If $\rho^\prime = \rho_1 \rho_2 \rho_3$, then $\stree_s(\rho_1\rho_3) = \stree(\rho^\prime)$.

We now show that $\stree_s$ still satisfies $\form$ at $\node^\star$, a contradiction, since this tree is strictly smaller than $\stree$. First, as no annotation was projected, $\node^\star$ is still part of this tree at a path $\rho_s$. We show that we have $\rho_s \vdash^{\form}_{\stree_s} \form$ by induction on the derivation $\rho \vdash^{\form}_{\stree} \form$. Let $\rho^\prime \vdash^{\form}_{\stree} \form^\prime$ be in the derivation, assuming that $\rho^\prime_s$ is defined. The case where $\form^\prime$ does not mention any counting formula is trivial: $\stree(\rho^\prime) = \stree_s(\rho^\prime_s)$, thus local entailment is immediate.
Conjunction and disjunction are also immediate by induction. We now turn to the modality case, $\modalf{\modals} \form^\prime$ where $\form^\prime$ contains a counting formula. If $\rho^\prime$ is neither $\rho_1$ nor $\rho_1\rho_2$, we deduce from the fact that $\rho^\prime_s$ is defined that $(\rho^\prime \modals)_s$ is also defined, and we conclude by induction. We now assume that $\rho^\prime$ is either $\rho_1$ or $\rho_1\rho_2$ and find a contradiction. First, remark that $\rho^\prime \vdash^{\form}_{\stree} \modalf{\modals} \form^\prime$ implies that the navigation generated by $\modalf{\modals} \form^\prime$ is in $\stree(\rho_1) = \stree(\rho_1 \rho_2)$. As each syntactic occurrence of a counting formula mentions a distinct counting proposition $c$, this is possible only if the counting formula is under a fixpoint or under another counting formula, both of which are impossible.

We finally turn to the counting case $\countf{c}{\trail}{\restrictedform}{\#}{\natn}$. We say that a path \emph{does not cross over} when it contains neither $\node_1$ nor $\node_2$. For nodes that are reached using paths that do not cross over, we conclude by induction that they are also counted. We show that the remaining nodes, reached through a crossover, remain reachable (there cannot be any counted node in the part of the tree that is removed, since counted nodes are annotated and there was no annotation in the part removed). Without loss of generality, assume that $\rho^\prime$ is a prefix of $\rho_1$ (the counting formula is in the ``top'' part of the tree), and let $\rho_n$ be the path from the counting formula to the counted node ($\rho_n$ is an instance of the trail $\trail$). This path is of the shape $\rho^\prime_1 \rho_2 \rho_c$, with $\rho_1 = \rho^\prime \rho^\prime_1$. We now show that the path $\rho^\prime_1 \rho_c$ is an instance of $\trail$ if and only if $\rho_n$ is an instance of the trail, thus the same node is still reached.

Recall that $\trail$ is of the shape $\trail_1, \ldots, \trail_n, \trail_{n+1}$ where $\trail_1$ to $\trail_n$ are of the form $\trail_{r_i}^\star$ and where $\trail_{n+1}$ does not contain a repeated trail. We say that a prefix $\rho_p$ of a path $\rho$ \emph{stops at $i$} if there is a suffix $\rho_s$ such that $\rho_p\rho_s$ is still a prefix of $\rho$, $\rho_p\rho_s \in \trail_1, \ldots, \trail_i$, and there is no shorter suffix $\rho^\prime_s$ and $j$ such that $\rho_p\rho^\prime_s \in \trail_1, \ldots, \trail_j$. (Intuitively, $\trail_i$ is the trail being used when matching the end of $\rho_p$.) If there are several satisfying indices $i$, we consider the smallest.

We first show that a counting proposition is necessarily mentioned in a formula of $\fnode{\form}_2$, by contradiction. Assume no counting proposition is mentioned, yet the counting crossed over. This can only occur for a ``less than'' counting formula that reaches $\node_2$, which is not counted (because the formula was false), and if there is no path whose $\rho_n$ is a strict prefix that is an instance of $\trail$ (otherwise, by definition of the lean and of $nav$ (Figure \ref{fig-nav}), a formula of the form $\nav{\trail^\prime}{(\restrictedform \land c) \lor (\neg\restrictedform \land \neg c)}$ would be true and thus would be present, contradicting the assumption that no counting proposition is mentioned). Since $\fnode{\form}_1 = \fnode{\form}_2$, the same is true for $\fnode{\form}_1$, a direct contradiction to the fact that $\node_2$ is also reached by the trail.
Thus counting propositions are mentioned in $\fnode{\form}_1$ and $\fnode{\form}_2$. We next show that there are $i \leq j \leq n$ such that $\rho^\prime_1$ stops at $i$ and $\rho^\prime_1 \rho_2$ stops at $j$, i.e., neither $i$ nor $j$ may be $n+1$. Recall that $\trail_{n+1}$ does not contain a repeated subtrail. Thus every formula of $\fnode{\form}_2$ mentioning $c$ is of the form $\nav{\trail^\prime}{\restrictedform}$, where $\trail^\prime$ does not contain a repetition. We consider the largest such formula. Since $\node_1$ is before $\node_2$ in the path from the counting node to the counted node, a similar formula with a larger trail or with a repetition must occur in $\fnode{\form}_1$, contradicting $\fnode{\form}_1 = \fnode{\form}_2$.

Consider next the suffixes $\rho_s^1$ and $\rho_s^2$ computed when stating that the paths stop at $i$ and $j$. These suffixes correspond to the path matching the end of $\trail_i$ and $\trail_j$, respectively (before the next iteration or switching to the next subtrail). They have matching formulas in $\fnode{\form}_1$ and $\fnode{\form}_2$. As the formulas are present in both nodes, the remainders of the paths ($\rho_2\rho_c$ and $\rho_c$) are instances of $(\rho_s^1 | \rho_s^2) \trail_i \ldots \trail_{n+1}$, thus $\rho^\prime_1 \rho_c$ is an instance of $\trail$ if and only if $\rho_n$ is.

In the case of ``greater than'' counting, we conclude immediately by induction, as the same nodes are selected (thus there are enough). In the case of ``less than'', we need to check that no new node is counted in the smaller tree. Assume otherwise for some formula $\countf{}{\trail}{\restrictedform}{\leq}{\natn}$: there is then a path $\rho_n \in \trail$ to a node satisfying $\restrictedform$. As the same node can be reached in $\stree$, and as we have $\stree(\rho^\prime \rho_n) \vdash^{\form} \neg \restrictedform$ by induction, we have a contradiction.

This concludes the proof when neither $\node_1$ nor $\node_2$ is annotated. The proof is identical when $\node_2$ is annotated. If $\node_1$ is annotated, we look at the first modality between $\node_1$ and $\node_2$. If it is a $\fc$, then we build the smaller tree by setting $\brel^\prime(\node_1,\fc) = \brel(\node_2,\fc)$ (we remove the $\ns$ subtree from $\node_2$ instead of $\node_1$). Symmetrically, if the first modality is a $\ns$, we consider the smaller tree with $\brel^\prime(\node_1,\ns) = \brel(\node_2,\ns)$. The rest of the proof proceeds as above.
\end{proof}
\begin{theorem}[Completeness]\label{thm:completeness}
If $\form$ is satisfiable, then a satisfying tree is built.
\end{theorem}
\begin{proof}
The proof proceeds as in \cite{geneves-pldi07}; we only need to check that there are enough copies of each node to build every path. Let $\rho$ be a path from the root of the tree to the leaves. By Lemma~\ref{lem:annotation_present}, there are at most $n+1$ identical nodes in this path, where $n$ is the number of annotations. The number of annotations is $c+1$, where $c$ is the number of counted nodes. We show by an immediate induction on the formula $\form$ that $c$ is bounded by $K(\form)$ as defined in Figure~\ref{fig:boundK}. We conclude by remarking that $K(\form) + 2$ is the number of identical nodes we allow in the algorithm.
\end{proof}

\subsection{Complexity}
\label{complexity}
We now show that the time complexity of the satisfiability algorithm is exponential in the formula size.
This is achieved in two steps: we first show that the lean size is linear in the formula size, then we show that the algorithm has single exponential complexity with respect to the lean size.
\begin{lemma}
The lean size is linear in terms of the original formula size.
\end{lemma}
\begin{proof}[Proof Sketch]
First note that the size of the lean is the number of elements it contains; the size of each element does not matter. It was shown in \cite{geneves-pldi07} that the size of the lean generated by a non-counting formula is linear with respect to the formula size. We now describe the case for counting formulas. The lean consists of propositions and of modal subformulas, including the ones generated by the navigation of counting formulas (Figure~\ref{fig-nav}). Moreover, each counting formula adds one fresh counting proposition. In the case of ``less than'' formulas $\countf{}{\trail}{\psi}{\leq}{k}$, a duplication occurs due to the consideration of the negation normal form of $\psi$. Since there is no counting under counting, this duplication, together with the fact that the negation normal form of a formula is linear in the size of the original formula (Figure~\ref{normalneg}), results in the lean remaining linear. Another duplication occurs in the case of counting formulas of the form $\countf{}{\trail_1|\trail_2}{\restrictedform}{\#}{k}$. This duplication does not double the size of the lean, however, since $\restrictedform$ still occurs only once in the lean; thus the number of elements in the lean induced by $\nav{\trail_1}{\restrictedform} \vee \nav{\trail_2}{\restrictedform}$ is the sum of the ones in $\nav{\trail_1}{\restrictedform}$ and in $\nav{\trail_2}{\restrictedform}$.
\end{proof}
\begin{theorem}\label{complexitythm}
Satisfiability of a formula of the logic is decided by the algorithm in time $2^{O(n)}$, where $n$ is the size of the lean.
\end{theorem}
\begin{proof}[Proof Sketch]
The maximum number of considered nodes is the number of distinct tree nodes, which is $2^n$, the number of subsets of the lean. For a given formula $\form$, the number of occurrences of the same node in the tree is bounded by $K(\form)\leq (k+1)\cdot m$, where $k$ is the greatest constant occurring in the counting formulas and $m$ is the number of counting subformulas of $\form$. Hence the number of steps of the algorithm is bounded by $2^n\cdot(k+1)\cdot m$. At each iteration, the main operation performed by the algorithm is the composition of trees stored in $AUX$. The cost of each iteration consists in the different searches needed to form the necessary triples \((\node,\stree_1,\stree_2)\), the $\mathtt{nmax}$ function, and the consistency relation $R^\form$. Since the total number of nodes is exponential, and so is the number of distinct subtrees, the maximum number of newly formed trees (triples) at each step is also exponentially bounded. The function $\mathtt{nmax}$ performs a single traversal of the tree, whose size is at most exponential. Since the entailment relation involved in the definition of $R^\form$ is local, $R^\form$ is computed in linear time. Computing the containment $AUX \subseteq ST$ and the union $ST \cup AUX$ are linear operations over sets of exponential size. The stop condition of the algorithm is checked by the global entailment relation. It involves traversals parameterized by the number of trees, the number of nodes in each tree, the number of traversals for the entailment relation of counting formulas, and $K(\form)$. Its time complexity is bounded by $(2^n\cdot(k+1)\cdot m)^3$. Hence, the total time complexity of the algorithm is bounded by $(2^n\cdot(k+1)\cdot m)^{k^\prime}$, for some constant $k^\prime$.
\end{proof}
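For a rough sense of the magnitude of these bounds, consider again the example formula $\form = \prop_1\wedge \modalf{\fc}{\countf{}{\ns^\star}{\prop_2}{>}{2}}$ of the previous subsection (the numbers below are ours, obtained by instantiating the bounds above). Its lean has $n = 10$ elements, the greatest constant is $k = 2$, and there is $m = 1$ counting subformula, so the number of iterations is at most
\[ 2^{n}\cdot(k+1)\cdot m = 2^{10}\cdot 3 \cdot 1 = 3072, \]
although, as seen above, the algorithm actually concludes at the fourth iteration.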
\section{Related Work}
\label{sec:relatedwork}

\paragraph{Counting over trees}
The notion of Presburger automata for trees, combining both regular constraints on the children of nodes and numerical constraints given by Presburger formulas, was independently introduced by Dal Zilio and Lugiez \cite{964013} and Seidl et al. \cite{DBLP:conf/icalp/SeidlSMH04}. Specifically, Dal Zilio and Lugiez \cite{964013} propose a modal logic for unordered trees called the Sheaves logic. This logic can impose certain arithmetical constraints on children nodes, but lacks recursion (i.e., fixpoint operators) and inverse navigation. Dal Zilio and Lugiez consider the satisfiability and membership problems. Demri and Lugiez \cite{DBLP:conf/cade/DemriL06} showed, by means of an automata-free decision procedure, that this logic is only PSPACE-complete. Restrictions like {\em $\prop_1$ nodes have no more ``children'' than $\prop_2$ nodes} are succinctly expressible by this approach. Seidl et al. \cite{DBLP:conf/icalp/SeidlSMH04} introduce a fixpoint Presburger logic which, in addition to numerical constraints on children nodes, also supports recursive forward navigation. For example, expressions like {\em the descendants of $\prop_1$ nodes have no more ``children'' than the number of children of descendants of $\prop_2$ nodes} are efficiently represented. This means that constraints can be imposed on sibling nodes (even if they are deep in the tree) by forward recursive navigation, but not on distant nodes which are not siblings. Compared to the work presented here, neither of the two previous approaches can efficiently support constraints like {\em the current node has more than $5$ ancestors named ``$\prop$''}. Furthermore, due to the lack of backward navigation, the works found in \cite{964013,DBLP:conf/icalp/SeidlSMH04,DBLP:conf/cade/DemriL06} are not suited to succinctly capturing XPath expressions. Indeed, it is well-known that expressions with backward modalities are exponentially more succinct than their forward-only counterparts \cite{DBLP:conf/doceng/GenevesR05,685998}.

There is little hope of pushing the decidability envelope much further for counting constraints. Indeed, it is known from \cite{DBLP:conf/icalp/KlaedtkeR03,DBLP:conf/cade/DemriL06,Balder09} that the equivalence problem is undecidable for XPath expressions with counting operators of the form:
\begin{itemize}
\item $\text{PathExpr}_1[\qualifcount{\text{PathExpr}_2} = \qualifcount{\text{PathExpr}_3}]$, or
\item $\text{PathExpr}_1[\text{position}() = \qualifcount{\text{PathExpr}_2}]$.
\end{itemize}
This is the reason why logical frameworks that allow comparisons between counting operators limit counting by restricting the $\text{PathExpr}$ to immediate children nodes \cite{964013,DBLP:conf/icalp/SeidlSMH04}. In this paper, we chose a different tradeoff: comparisons are restricted to constants, but at the same time counting along more general paths is permitted.

\paragraph{Counting over graphs}
The $\mu$-calculus is a propositional modal logic augmented with least and greatest fixpoint operators \cite{DBLP:conf/icalp/Kozen82}. Kupferman, Sattler and Vardi study a $\mu$-calculus with graded modalities where one can express, e.g., that a graph node has at least $n$ successors satisfying a certain property \cite{DBLP:conf/cade/KupfermanSV02}.
The modalities are limited in scope, since they only count immediate successors of a given node. A similar notion in trees consists in counting immediate children nodes, as performed by the counting formula $\modalf{\fc}{\countf{}{\ns^*}{\form}{\#}{k}}$, where $\form$ describes the property to be counted. Compared to the graded modalities of \cite{DBLP:conf/cade/KupfermanSV02}, we consider trees, and we can extend the ``immediate successor'' notion to nodes reachable from regular paths, involving reverse and recursive navigation.

A recent study \cite{Bonatti-et-al-ICALP-06} focuses on extending the $\mu$-calculus with inverse modalities \cite{685998}, nominals \cite{DBLP:conf/cade/SattlerV01}, and the graded modalities of \cite{DBLP:conf/cade/KupfermanSV02}. If only two of the above constructs are considered, satisfiability of the enriched calculus is EXPTIME-complete \cite{Bonatti-et-al-ICALP-06,Bianco-lics09}. However, if all of the above constructs are considered simultaneously, the calculus becomes undecidable \cite{Bonatti-et-al-ICALP-06}. The present work shows that this undecidability result in the case of graphs does not preclude decidable tree logics combining such features.

\paragraph{XPath-like counting extensions}\label{sec:gradedpaths}
The proposed logic can be the target for the compilation of a few more sophisticated counting features, considered as syntactic sugar (possibly at the extra cost of their translation). In particular, XPath allows nested counting, as in the expression
$$\text{self::book}[\text{chapter}[\text{section}>1]>1]$$
which selects the current ``book'' node provided it has at least two ``chapter'' child nodes, which in turn must each contain at least two ``section'' nodes. For a simple class of formulas, namely those that count only over children nodes, such nesting can be translated into ordinary logical formulas. For instance, the logical formulation of the above XPath expression can be captured as follows:
\[\text{book} \wedge \modalf{\fc}{\ufixpf{x}{\left(\text{chapter} \wedge \psi \wedge \modalf{\ns}{\ufixpf{y}{(\text{chapter} \wedge \psi) \vee \modalf{\ns}{y} }}\right) \vee \modalf{\ns}{x} }} \]
where \(\psi=\modalf{\fc}{\ufixpf{x}{(\text{section} \wedge \modalf{\ns}{\ufixpf{y}{\text{section} \vee \modalf{\ns}{y} }}) \vee \modalf{\ns}{x} }} \).

In \cite{marx-tods05}, Marx introduced an ``until'' operator for extending XPath's expressive power to be complete with respect to first-order logic over trees. This operator is trivially expressible in the present logic, owing to the fixpoint binder. We can even combine counting features with the ``until'' operator and express properties that go beyond the expressive power of the XPath standard. For instance, the following formula states that ``\emph{starting from the current node, and until we reach an ancestor named $a$, every ancestor has at least 3 children named $b$}'':
$$\ufixpf{x}{\left(\modalf{\fc}{\countf{}{\ns^*}{b}{>}{2}} \wedge \ufixpf{y}{\modalf{\invfc}{x}\vee \modalf{\invns}{y}}\right) \vee a}$$
These extensions come at an extra cost, however.
It is not difficult to observe (by induction) that, given a formula $\form$ with subformulas $\psi_1,\dots,\psi_n$ counting only over children nodes, if the formulas $\psi_1,\dots,\psi_n$ are replaced by their expansions in $\form$, yielding a formula $\form'$, then $|\lean{\form'}|\leq |\lean{\form}|\cdot k^l$, where $k$ is the greatest numerical constraint of the counting subformulas and $l$ is the greatest nesting level of counting subformulas. As a consequence of Theorem~\ref{complexitythm}, the logic extended with nested formulas counting on children nodes, and with formulas counting on children nodes under the scope of a fixpoint operator, can be decided in time $2^{O(n\cdot k^l)}$.
\section{Conclusion}
\label{sec:conclusion}
We introduced a modal logic of trees equipped with (1) converse modalities, which succinctly express forward and backward navigation, (2) a least fixpoint operator for recursion, and (3) cardinality constraint operators for expressing numerical occurrence constraints on tree nodes satisfying some regular properties. A sound and complete algorithm is presented for testing satisfiability of logical formulas. This result is surprising, since the corresponding logic for graphs is undecidable \cite{Bonatti-et-al-ICALP-06}. The decision procedure for the logic runs in exponential time w.r.t. the formula size. The logic captures regular tree languages with cardinality restrictions, as well as the navigational fragment of XPath equipped with counting features. Similarly to backward modalities, numerical constraints do not extend the logical expressivity beyond regular tree languages.
Nevertheless, they enhance the succinctness of the formalism, as they provide useful shorthands for otherwise exponentially large formulas. This exponential gain in succinctness makes it possible to extend static analysis to a larger set of XPath and XML schema features in a more efficient way. We believe the field of application of this logic may go beyond the XML setting. For example, in the verification of linked data structures \cite{DBLP:conf/lfcs/MannaSZ07,kuncak08,DBLP:journals/acta/HabermehlIV10}, reasoning on tree structures with in-depth cardinality constraints appears to be a major issue. Our result may help in building solvers that are attractive alternatives to those based on non-elementary logics such as SkS \cite{Thatcher68}, e.g., Mona~\cite{mona_user_manual}. \bibliographystyle{plain}
\section{Introduction} Superconducting qubits coupled to a quantum mechanical harmonic oscillator have reached the strong-coupling regime, where the coupling strength is much larger than the decay rates of the cavity and the qubit. The circuit quantum electrodynamics (QED) scheme has been applied to the superconducting charge qubit \cite{Blais,Blais07}, phase qubit \cite{Sillanpaa}, and flux qubit \cite{Abd,Yang,Niem,Fedorov}. Recently, in Refs. \cite{Abd,Niem,Fedorov} the phase-biasing scheme was studied to provide a strong coupling strength between the resonator and the flux qubit by sharing the qubit loop with the resonator. This galvanic coupling, however, makes it difficult to switch the coupling between the qubit and the resonator on and off, which is essential for a scalable design. In this study, by introducing a {\it current-biasing} scheme for the flux qubit, we offer a new circuit QED architecture, where the flux qubit is coupled with the {\it current mode of the resonator}, to obtain a strong coupling between the flux qubit and the resonator without the galvanic link. In this scheme, the flux qubit is biased by the oscillating current mode of the resonator. The current-biased dc-SQUID qubit (phase qubit) \cite{Martinis,Berkley,Steffen} is controlled by an electric field, which provides fast operation. On the other hand, the flux qubit \cite{Mooij,Niskanen} is operated at the optimal point where the first-order phase fluctuations vanish. The present current-biased flux qubit is likewise operated at a point optimally biased with respect to both the bias current and the external magnetic flux, which may provide a long coherence time. An oscillating bias current induces a Rabi oscillation between the qubit states. For bias-current operation of the flux qubit, the number of Josephson junctions in the qubit loop should be three (in general, an odd number), and the junctions should be arranged asymmetrically. \section{Current-biased flux qubit} Figure \ref{fig1}(a) shows the current-biased flux qubit, where the bias current flows through the three-Josephson-junction qubit loop. In the circuit QED architecture with the flux qubit, which we will discuss later, the bias current comes from the oscillating current mode of the resonator. For the time being, we consider an externally applied oscillating bias current. The Hamiltonian of this system can be derived semiclassically by using the quantum Kirchhoff relation. First of all, let us consider a superconducting loop without a Josephson junction, where the usual fluxoid quantization condition reads \cite{Tinkham} \begin{eqnarray} \label{Tc} -\Phi_t+(m_c/q_c)\oint \vec{v}_c \cdot d\vec{l}=n\Phi_0, \end{eqnarray} with $\vec{v}_c$ being the average velocity of Cooper pairs, $q_c=2e$, and $m_c=2m_e$. The total magnetic flux threading the loop, $\Phi_t$, is the sum of the external and the induced flux, $\Phi_{\rm t}=\Phi_{\rm ext}+\Phi_{\rm ind}$; $\Phi_0=h/2e$ is the superconducting flux quantum. Then, the condition of Eq. (\ref{Tc}) can be written as \begin{eqnarray} k{\it l}=2\pi (n+ f_t), \end{eqnarray} where $f_t\equiv \Phi_t/\Phi_0=f+f_{\rm ind}$ with $f=\Phi_{\rm ext}/\Phi_0$ and $f_{\rm ind}=\Phi_{\rm ind}/\Phi_0$, $l$ is the circumference of the loop, and $k$ is the wave vector of the Cooper pair wavefunction. For the current-biased flux qubit in Fig. \ref{fig1}(a), the fluxoid quantization condition in Eq.
(\ref{Tc}) is changed due to the phase differences $\phi_i$ along the circumference of the qubit loop as follows: \begin{eqnarray} \label{fqc} (k_1+k_2)\frac{l}{2}=2\pi(n+f_t)-\phi_1-\phi_2-\phi_3. \end{eqnarray} Usually, for flux qubits, the contribution $(k_1+k_2)l/2$ in Eq. (\ref{fqc}) is negligible, and the condition reduces to $2\pi(n+f_t)-\phi_1-\phi_2-\phi_3=0$. For rigor, however, we keep this term for the time being to derive the Hamiltonian of our qubit system. The induced flux $\Phi_{\rm ind}$ is given by $\Phi_{\rm ind}=L_s (I_1-I_2)/2$ with the self-inductance $L_s$, and the current $I_{1(2)}$ in Fig. \ref{fig1}(a) is \begin{eqnarray} \label{I} I_{1(2)}=\mp(n_cAq_c/m_c)\hbar k_{1(2)}, \end{eqnarray} with $n_c$ being the Cooper pair density and $A$ the cross section of the superconducting loop. Then, by using Eq. (\ref{I}) and $\Phi_0=h/q_c$, $f_{\rm ind}=\Phi_{\rm ind}/\Phi_0$ can be represented as $f_{\rm ind}= -(L_s/L_K)(l/2)(k_1+k_2)/2\pi$ with $L_K = m_cl/An_cq^2_c$ being the kinetic inductance \cite{flux}. With this relation, the condition in Eq. (\ref{fqc}) becomes \begin{eqnarray} \label{fqc2} \left(1+\frac{L_s}{L_K}\right)(k_1+k_2)\frac{l}{2} =2\pi\left(n+f-\frac{1}{2\pi}\sum^3_{i=1}\phi_i\right). \end{eqnarray} The dynamics of a Josephson junction is described by the capacitively-shunted junction model, where the current relation is given by $I=-I_{\rm c}\sin\phi+C\dot{V}$ with the junction capacitance $C$, $\dot{V}=dV/dt$, and the voltage $V$ across the junction. This relation can be rewritten by using the Josephson voltage-phase relation $V=-(\Phi_0/2\pi){\dot \phi}$ as \begin{eqnarray} \label{Jc} -(n_cAq_c/m_c)\hbar k_i=-I_{{\rm c}i}\sin\phi_{i}-C_{i}(\Phi_0/2\pi)\ddot{\phi}_{i}, \end{eqnarray} with the critical current of the Josephson junction $I_{{\rm c}i}=2\pi E_{Ji}/\Phi_0$ ($i=1,2,3$) and the Josephson coupling energy $E_{Ji}$. Then, from Eqs. (\ref{fqc2}) and (\ref{Jc}) with $k_0=k_1-k_2$, we have the quantum Kirchhoff relation \begin{eqnarray} \label{motion} \left(\frac{\Phi_0}{2\pi}\right)^2C_{i}\ddot{\phi}_{i}&=& \frac{\Phi^2_0}{2(L_s+L_K)\pi}\left(n+f-\frac{1}{2\pi}\sum^3_{j=1}\phi_j\right)\nonumber\\ &&-E_{Ji}\sin\phi_{i}\mp\frac{\Phi_0I_0}{4\pi}, \end{eqnarray} where $I_{0}=-(n_cAq_c/m_c)\hbar k_{0}$. Here, for $i=1,3$, the last term of Eq. (\ref{motion}) is $+\Phi_0I_0/4\pi$, whereas for $i=2$ the sign is reversed, $-\Phi_0I_0/4\pi$. For usual flux qubits, $L_K/L_s \sim 0.01$ \cite{Mooij}; thus, hereafter $L_K$ is neglected for simplicity. The equation of motion, Eq. (\ref{motion}), can be obtained from the Lagrange equation $(d/dt)\partial {\cal L}/\partial \dot{\phi}_i-\partial {\cal L}/\partial \phi_i=0$ with the Lagrangian \begin{eqnarray} \label{Lag} {\cal L}(\phi_i,\dot{\phi}_i)&=&\sum^3_{i=1}\frac12 C_i\left(\frac{\Phi_0}{2\pi}\right)^2\dot{\phi}^2_i -U_{\rm eff}(\{\phi_i\}),\\ \label{Ueff} U_{\rm eff}(\{\phi_i\})&=&\sum^3_{i=1}E_{Ji}(1-\cos\phi_i)+\frac{\Phi_0I_0}{4\pi}(\phi_1+\phi_3-\phi_2)\nonumber\\ &&+\frac{\Phi^2_0}{2L_s}\left(n+f-\frac{1}{2\pi}\sum^3_{i=1}\phi_i\right)^2, \end{eqnarray} where the first term in Eq. (\ref{Lag}) comes from the charging energy of the qubit system, $E_C= \sum^3_{i=1}Q^2_i/2C_i$ with $Q_i=C_i(\Phi_0/2\pi)\dot{\phi_i}$. The second term of Eq. (\ref{Ueff}) has a finite value owing to the asymmetry of the qubit loop, giving rise to the coupling between the bias current and the flux qubit.
\begin{figure}[t] \vspace{-0cm} \includegraphics[width=0.45\textwidth]{fpqubit.eps} \vspace{-0cm} \caption{(Color online) (a) Current-biased flux qubit. Two current states of the three-Josephson-junction loop interact with the bias currents. $I_{0}$ is the bias current applied across the capacitance $C_{\mu\!\rm w}$, and the $k_i$'s are the wave vectors of the Cooper pair wave function. (b) Plot of the effective potential $U_{\rm eff}(\{\phi_\pm\})$ with $f=0.5$ and $\phi_-=0$. The potential tilts due to a finite bias current $I_0$. The solid (dashed) line corresponds to positive (negative) $I_0$.} \label{fig1} \end{figure} In the usual experiments for flux qubits, $\Phi^2_0/2L_s \sim O(10^3E_J)$ is much larger than the other energy scales in the Lagrangian of Eq. (\ref{Lag}). Hence, the last term in Eq. (\ref{Ueff}) can be removed, leaving the constraint $g(\phi_i)=\phi_1+\phi_2+\phi_3-2\pi(n+f)=0$, which is the familiar flux quantization condition. The Lagrangian then has the effective potential \begin{eqnarray} U_{\rm eff}(\{\phi_i\})\!=\!\!\!\sum^3_{i=1}E_{Ji}(1\!-\!\cos\phi_i) \!+\!\!\frac{\Phi_0I_0}{4\pi}(\phi_1\!+\!\phi_3\!-\!\phi_2) \end{eqnarray} with the constraint $g(\phi_i)=0$. The Lagrange equation of motion with the above constraint produces the Kirchhoff equations in the qubit loop of Fig. \ref{fig1}(a): $I_0=I_1+I_2$ and $I_1=I_3$. We introduce the coordinate transformation $\phi_+=(\phi_2+\phi_3)/2$ and $\phi_- = (\phi_2-\phi_3)/2$. In the usual flux qubit experiment, the two Josephson junctions are nearly identical; thus, we set $E_{J2}=E_{J3}=E_J$. Although one can treat the general case numerically, this case gives a clear and intuitive picture of our qubit system. In this case, the effective potential is given by \begin{eqnarray} \label{UeffT} U_{\rm eff}(\{\phi_\pm\})&=&-E_{J1}\cos(2\pi f-2\phi_+) -2E_{J}\cos\phi_+\cos\phi_- \nonumber\\ &&+\frac{\Phi_0I_0}{2\pi}(\pi f-\phi_+ -\phi_-). \end{eqnarray} Without the last term, which represents the linear coupling of the phase to the external bias current $I_0$, the effective potential of Eq. (\ref{UeffT}) can have local minima only for $\cos\phi_-=\pm 1$, {\it i.e.}, $\phi_-= j\pi$ with $j$ an integer, where the qubit states are formed (see Fig. \ref{fig2}(a)). Let the flux qubit be at the degeneracy point $f=0.5$. When there is no bias current, $I_0=0$, $\phi_+ \approx \pi/3~(-\pi/3)$ for the counterclockwise (clockwise) current state $|\downarrow\rangle ~(|\uparrow\rangle)$ with $\phi_-= 0$, and the qubit energy levels corresponding to the local minima of $U_{\rm eff}(\{\phi_\pm\})$ are degenerate, $E_{\downarrow}=E_{\uparrow}$. In this case, the qubit is optimally biased with respect to both the current $I_0$ and the magnetic flux $f$. A finite bias current tilts the effective potential as shown in Fig. \ref{fig2}(b), where the direction of the energy-level tilt depends on the sign of $I_0$. Consequently, the effective potential of Eq. (\ref{UeffT}) can be expressed, apart from a constant, as $(E_{\downarrow}-\Phi_0I_0\alpha/2\pi)|\downarrow\rangle\langle\downarrow| + (E_{\uparrow}+\Phi_0I_0 \alpha/2\pi)|\uparrow\rangle\langle\uparrow|$, where \begin{eqnarray} \alpha \approx |\phi_+|\approx \pi/3. \end{eqnarray} \begin{figure}[t] \vspace{0.5cm} \includegraphics[width=0.4\textwidth]{pot.eps} \vspace{0.cm} \caption{(Color online) (a) Effective potential of the current-biased flux qubit in the plane of $(\phi_+, \phi_-)$, where the effective potential at the local minima decreases as $\phi_+$ or $\phi_-$ increases. Here, we set $I_0/I_{\rm c}=0.05$, $f=0.5$ and $E_{J1}/E_J=0.8$. (b) Plot of $U_{\rm eff}(\{\phi_\pm\})$ along the dashed line in the upper panel.} \label{fig2} \end{figure}
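As a quick numerical check of the double-well structure of Eq.~(\ref{UeffT}), the following sketch (ours, not part of the original analysis; energies are in units of $E_J$ and the ratio $E_{J1}/E_J=0.8$ follows Fig.~\ref{fig2}) locates the potential minimum at zero bias current. For these values the minimum falls at $\phi_+\simeq 0.90$~rad, close to the $\phi_+\approx\pi/3$ quoted above, which holds exactly for $E_{J1}=E_J$:
\begin{verbatim}
import numpy as np
from scipy.optimize import minimize_scalar

# Illustrative parameters, in units of E_J (E_J1/E_J = 0.8 as in Fig. 2)
EJ, EJ1, f = 1.0, 0.8, 0.5   # f = 0.5: the degeneracy point
tilt = 0.0                   # Phi_0*I_0/(2*pi*E_J); zero bias current

def U_eff(phi_p, phi_m=0.0):
    # Effective potential of Eq. (UeffT), in units of E_J
    return (-EJ1 * np.cos(2*np.pi*f - 2*phi_p)
            - 2*EJ * np.cos(phi_p) * np.cos(phi_m)
            + tilt * (np.pi*f - phi_p - phi_m))

# Right-hand well; the left one sits at -phi_+ by symmetry when tilt = 0
res = minimize_scalar(U_eff, bounds=(0.0, np.pi/2), method="bounded")
print(f"phi_+ minimum: {res.x:.3f} rad (pi/3 = {np.pi/3:.3f} rad)")
\end{verbatim}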
The transitions between the two states, $|\downarrow\rangle$ and $|\uparrow\rangle$, are induced by the charging energy given by the first term of the Lagrangian in Eq. (\ref{Lag}). Using the tight-binding approximation, we can write the Hamiltonian for our qubit system as \begin{eqnarray} \label{H} {\cal H} \!&=&\! E_{\downarrow}|\downarrow\rangle\langle\downarrow|\!+\! E_{\uparrow}|\uparrow\rangle\langle\uparrow| - t_q(|\downarrow\rangle\langle\uparrow|\!+\!|\uparrow\rangle\langle\downarrow|) \nonumber\\ && -\frac{\Phi_0I_0}{2\pi}\alpha (|\downarrow\rangle\langle\downarrow| - |\uparrow\rangle\langle\uparrow|), \end{eqnarray} where $t_q$ is the transition rate between the $|\downarrow\rangle$ and $|\uparrow\rangle$ states. In order to manipulate the single-qubit states, we apply an oscillating current, \begin{eqnarray} I_0=-I_{\rm b}\sin\omega t. \end{eqnarray} As shown in Fig. \ref{fig1}(b), the effective potential oscillates with the frequency $\omega$, which produces oscillating diagonal terms in Eq. (\ref{H}). In transformed coordinates, these terms appear off-diagonal, giving rise to a Rabi oscillation. The qubit states, $|0\rg$ and $|1\rg$, are the symmetric and the antisymmetric superpositions of $|\da\rg$ and $|\ua\rg$: \begin{eqnarray} |0\rg&=&(|\da\rg+|\ua\rg)/\sqrt{2},\nonumber\\ |1\rg&=&(|\da\rg-|\ua\rg)/\sqrt{2}. \end{eqnarray} In the basis of $\{|0\rangle, |1\rangle\}$, the Hamiltonian is represented as \begin{eqnarray} \tilde{\cal H}=\frac{\hbar\omega_0}{2}\sigma_z +g\sin\omega t \sigma_x, \label{OneT} \end{eqnarray} where $\omega_0=2t_q/\hbar$ is the qubit frequency and $\sigma_{x,z}$ are the Pauli matrices. The coupling strength $g$ between the oscillating current and the qubit is given by \begin{eqnarray} \label{g} g\equiv \frac{\Phi_0I_{\rm b}}{2\pi}\alpha. \end{eqnarray} Near resonance, $\omega\approx \omega_0$, with a weak coupling $g/\hbar \ll \omega_0$, one can apply the rotating wave approximation~\cite{Jaynes}. Then, we can observe a Rabi oscillation between the two qubit states $|0\rangle$ and $|1\rangle$ with a Rabi frequency $\Omega_{\rm R}^0=g/\hbar$. Although we do not present it explicitly, in general the number of Josephson junctions can be any odd integer larger than one. In this case, at the two sides of the qubit loop divided by the bias current line, the numbers of Josephson junctions should differ from each other so that the symmetry of the flux qubit loop is broken. Then, the resulting coupling between the current and the junction phases enables current-driven operation of the flux qubit. For the dc-SQUID (2-junction) qubit, the bias current flows across the Josephson junctions, but the symmetry of the dc-SQUID does not allow bias-current operation of the qubit states.
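To make the Rabi oscillation of Eq.~(\ref{OneT}) explicit, the short sketch below (ours; it assumes $\hbar=1$ units and illustrative frequencies) integrates the Schr\"odinger equation for the driven two-level system. On resonance, the population of $|1\rangle$ oscillates with the Rabi frequency $\Omega_{\rm R}^0=g/\hbar$:
\begin{verbatim}
import numpy as np
from scipy.integrate import solve_ivp

hbar = 1.0
w0 = 2*np.pi*10.0    # qubit frequency (illustrative units)
g  = 2*np.pi*0.25    # drive strength; Rabi frequency g/hbar expected
w  = w0              # resonant drive

sz = np.array([[1, 0], [0, -1]], dtype=complex)
sx = np.array([[0, 1], [1, 0]], dtype=complex)

def schrodinger(t, psi):
    # H(t) of Eq. (OneT): (hbar*w0/2) sigma_z + g sin(w t) sigma_x
    H = 0.5*hbar*w0*sz + g*np.sin(w*t)*sx
    return (-1j/hbar) * (H @ psi)

psi0 = np.array([1, 0], dtype=complex)    # start in |0>
t = np.linspace(0.0, 8.0, 2000)           # about two Rabi periods
sol = solve_ivp(schrodinger, (t[0], t[-1]), psi0,
                t_eval=t, max_step=1e-3)

p1 = np.abs(sol.y[1])**2                  # population of |1>
print(f"max P(|1>) = {p1.max():.3f}; "
      f"Rabi period = {2*np.pi*hbar/g:.2f}")
\end{verbatim}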
\section{Circuit QED} The present current-biasing scheme for the flux qubit is implemented in the circuit QED architecture. In Fig. \ref{fig3}, the qubit is coupled to the resonator by an ac current flowing through a capacitance. The Lagrangian ${\cal L}_r$ of the transmission line resonator is written in terms of the charge density mode $q(x,t)$ and the current density mode $I(x,t)$: \begin{eqnarray} {\cal L}_r=\int^{\frac{L}{2}}_{-\frac{L}{2}}dx\left(\frac{l}{2}I^2(x,t)-\frac{1}{2c} q^2(x,t)\right), \end{eqnarray} where $l$ and $c$ represent the inductance and the capacitance per unit length, and $L$ the length of the resonator. Introducing the variable $\theta(x,t)\equiv \int^x_{-L/2}dx'q(x',t)$, the Lagrangian becomes \begin{eqnarray} {\cal L}_r=\int^{L/2}_{-L/2}dx[(l/2)(\partial_t\theta)^2-(1/2c) (\partial_x \theta)^2], \end{eqnarray} where the voltage and the current of the resonator are given by $V(x,t)=\frac{1}{c}\frac{\partial\theta(x,t)}{\partial x}$ and $I(x,t)=\frac{\partial\theta(x,t)}{\partial t}$, respectively. For example, the voltage and the current of the resonator for the second harmonic mode can be represented in terms of the boson creation and annihilation operators $a^\dagger(t)$ and $a(t)$ as \cite{Blais} \begin{eqnarray} \label{VI} V(x,t)&=&\sqrt{\frac{\hbar\omega_r}{cL}}\cos\left(\frac{2\pi x}{L}\right)[a(t)+a^\dagger(t)],\nonumber\\ I(x,t)&=&-i\sqrt{\frac{\hbar\omega_r}{lL}}\sin\left(\frac{2\pi x}{L}\right)[a(t)-a^\dagger(t)], \end{eqnarray} where $\omega_r=2\pi v/L$ and $v=\sqrt{1/lc}$. The current profile for the second harmonic mode is shown in Fig. \ref{fig3}. \begin{figure}[t] \vspace{0.5cm} \includegraphics[width=0.48\textwidth]{cQED.eps} \vspace{0.3cm} \caption{(Color online) Schematic diagram and equivalent lumped circuit representation of a three-junction flux qubit coupled to a transmission line resonator. The flux qubit interacts with the current mode of the resonator. The arrows in the schematic diagram and equivalent circuit show the flow of oscillating current. As shown in the equivalent circuit, the oscillating current flows between the resonator and the qubit through the capacitance. Here, $L$ is the length of the resonator, and $d$ is the width of the qubit loop.} \label{fig3} \end{figure} In the circuit QED scheme with a superconducting charge qubit, the qubit interacts with the voltage mode $V(x,t)$. In contrast, the present current-biased flux qubit couples to the current mode $I(x,t)$, i.e., to the temporal fluctuation of the local charge density, which provides the bias current applied to the qubit. In Fig. \ref{fig3} the resonator and the qubit are coupled by a capacitance through the region $-d/2<x<d/2$, where the charge fluctuation ${\dot q}(x,t)$ in the resonator produces the current flow into the qubit, $I_0(t)=\int^{d/2}_{-d/2}{\dot q}(x,t)dx.$ As a result, using the current conservation condition in the resonator, ${\dot q}(x,t)=\partial I(x,t)/\partial x$, and Eq. (\ref{VI}), the current flowing into the qubit of width $d$ is given by \begin{eqnarray} \label{I0} I_0(t)&=&I(d/2,t)-I(-d/2,t)\nonumber\\ &=&-2i\sqrt{\frac{\hbar\omega_r}{lL}}\sin\left(\frac{\pi d}{L}\right)[a(t)-a^\dagger(t)]. \end{eqnarray} The interaction between the flux qubit and the bias current $I_0(t)$ of the resonator is given by the last term of Eq. (\ref{H}). Then, from Eqs. (\ref{H}) and (\ref{I0}), the total Hamiltonian at the degeneracy point ($E_\downarrow=E_\uparrow$) is given by \begin{eqnarray} {\cal H}&=&\hbar\omega_r\left(a^\dagger a+\frac12\right)-t_q(|\downarrow\rangle\langle\uparrow|\!+\!|\uparrow\rangle\langle\downarrow|)\nonumber\\ &&+i g(|\downarrow\rangle\langle\downarrow| - |\uparrow\rangle\langle\uparrow|)(a-a^\dagger), \end{eqnarray} where the first term comes from the oscillating mode of the resonator.
The first two terms of Eq. (\ref{H}) reduce to an irrelevant constant at the degeneracy point. In the basis of $|0\rangle$ and $|1\rangle$, the Hamiltonian is written as \begin{eqnarray} \label{Hc} {\tilde {\cal H}}=\hbar\omega_r a^\dagger a +\frac{\hbar\omega_a}{2}\sigma_z+i g\sigma_x(a-a^\dagger), \end{eqnarray} with $\hbar \omega_a=2t_q$. Here, the single-qubit gate is performed by a resonant external driving microwave field \cite{Blais07}. Since usually $d\ll L$, the coupling $g$ takes the form \begin{equation} \label{gc} g \approx \alpha \Phi_0 \sqrt{\frac{\hbar\omega_r}{lL}}\left(\frac{d}{L}\right). \end{equation} Using typical experimental values for the parameters \cite{Abd}, $d=3 ~\mu$m, $L$=5 mm, $lL$=2.5 nH, $\omega_r/2\pi=$15 GHz, and $\alpha=\pi/3$, we estimate the coupling strength to be $g/h\sim 120$ MHz. For inductive coupling between the qubit and the resonator, the coupling strength, $g_{\rm IC}=\Phi I_c\sin\alpha$ with $\Phi$ being the magnetic flux threading the qubit loop due to the resonator current, can be estimated as \begin{eqnarray} g_{\rm IC}\approx \frac{\mu_0 d^2}{2\pi R}\sqrt{\frac{\hbar\omega_r}{lL}}I_c\sin\alpha, \end{eqnarray} where $R$ is the mean distance between the resonator and the qubit loop and $I_c=2\pi E_J/\Phi_0$. With the same parameter values, together with $R=5~\mu$m and $E_J/h=200$ GHz, we obtain $g_{\rm IC}/h\sim 12$ MHz, smaller than $g$ by an order of magnitude. Note that since the qubit is located at the nodal point of the current oscillation for the second harmonic mode, the inductive coupling at this point is negligible; the above value of the inductive coupling has been obtained for the first harmonic mode. We have also assumed that the capacitance density is nearly uniform along the resonator. In reality, the capacitance around the center of the resonator can be much higher due to the presence of the qubit loop, which can potentially provide a much stronger coupling $g$ in the present scheme.
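Both coupling estimates follow from simple arithmetic on the quoted parameter values; the sketch below (ours) reproduces them directly from Eq.~(\ref{gc}) and the inductive-coupling expression above:
\begin{verbatim}
import numpy as np

h    = 6.626e-34            # Planck constant [J s]
hbar = h / (2*np.pi)
Phi0 = 2.068e-15            # flux quantum h/2e [Wb]
mu0  = 4*np.pi*1e-7         # vacuum permeability [H/m]

# Parameter values quoted in the text
d, L  = 3e-6, 5e-3          # coupling region and resonator length [m]
lL    = 2.5e-9              # total resonator inductance l*L [H]
wr    = 2*np.pi*15e9        # resonator frequency [rad/s]
alpha = np.pi/3
R     = 5e-6                # qubit-resonator distance [m]
EJ    = 200e9 * h           # Josephson energy [J]
Ic    = 2*np.pi*EJ/Phi0     # critical current [A]

g    = alpha*Phi0*np.sqrt(hbar*wr/lL)*(d/L)                  # Eq. (gc)
g_IC = (mu0*d**2/(2*np.pi*R))*np.sqrt(hbar*wr/lL)*Ic*np.sin(alpha)

print(f"g/h    = {g/h/1e6:.0f} MHz")     # ~120 MHz, as quoted
print(f"g_IC/h = {g_IC/h/1e6:.0f} MHz")  # ~12 MHz, an order less
\end{verbatim}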
\section{Decoherence property} The qubit state of the present current-biased flux qubit is driven in a different way from that of the flux-driven flux qubit; thus, the decoherence properties will also differ. According to a recent review of the phase qubit \cite{QIP}, the dephasing times of the phase qubit and of the three-Josephson-junction flux qubit are comparable. Since the only difference between the present current-biased flux qubit and the usual current-biased dc-SQUID qubit (phase qubit) is the number of Josephson junctions in the qubit loop, the decoherence properties of both qubits can be analyzed in a similar way. In a recent experiment \cite{Steffen2} on the phase qubit, the capacitance of the Josephson junction is $\sim 50$~fF, while the three-Josephson-junction flux qubit typically has a smaller junction capacitance of $\sim 3$~fF~\cite{Wal} owing to the small area of its junctions. For both qubits, a large shunt capacitance may reduce the decoherence from charge fluctuations. We employ the typical parameters of the three-Josephson-junction flux qubit for the present current-biased qubit. Since both the capacitance and the critical current of a Josephson junction scale with the junction area, the critical current $I_c$ of the junction is also much smaller. For the present current-biased flux qubit, the dephasing due to bias-current noise may cause decoherence of the qubit state, as in the phase qubit \cite{QIP}. The current noise is related to the $1/f$ charge noise with a spectral density $S^*_q(1 {\rm Hz})/f$. The phase noise is given by $\langle\phi^2\rangle\approx [S^*_q(1{\rm Hz})/C\Delta U] (2/3)\ln(1.778\,\omega_{01}t)$~\cite{Nam}, where $C$ is the capacitance of the Josephson junction, $\Delta U$ is the barrier height of the potential, and $\omega_{01}$ is the qubit level spacing. Although for the present qubit the spectral density $S^*_q(1 {\rm Hz})$ and the capacitance $C$ are small, these contributions cancel each other in $\langle\phi^2\rangle$ because both the spectral density and the capacitance scale with the area of the junction \cite{Nam}. Further, the qubit level spacing $\omega_{01}$ is similar for the two qubits. For the operation of the present qubit, we need not tilt the potential; thus, the barrier height of the potential $\Delta U$ remains large. Hence, we expect the dephasing due to current noise in the present qubit to still be comparable to that of the flux-driven flux qubit. On the other hand, the phase noise due to the critical-current fluctuation and the flux $1/f$ fluctuations is given by $\langle\phi^2\rangle\approx (S^*_I(1{\rm Hz})L_J/\Delta U) \ln(0.401/f_m t)(\omega_{01}t)^2/6$ \cite{Nam}, with the Josephson inductance $L_J=\Phi_0/2\pi I_c\cos\phi$ and the low-frequency cutoff $f_m$. For the phase noise, the argument is similar because both the spectral density $S^*_I(1{\rm Hz})$ and the critical current $I_c$ scale with the area of the junction. \section{Summary} We propose a new circuit QED architecture for the three-Josephson-junction flux qubit, where the flux qubit is coupled to the temporal charge-density fluctuations of the transmission line resonator. The flux qubit is controlled by using a bias current. When the three Josephson junctions are arranged asymmetrically, the energy levels of the two qubit states with different chiralities couple to the external bias current. Rabi oscillations can be induced by an ac current at a point optimally biased with respect to both the bias current and the external magnetic flux. Remarkably, the coupling between the qubit and the resonator is strongly enhanced compared to conventional inductive coupling. \begin{acknowledgments} We wish to thank S. Girvin for useful discussions and valuable suggestions. This work was supported by the Korea Research Foundation Grant funded by the Korean Government (MOEHRD, Basic Research Promotion Fund) through KRF-2008-313-C00243. \end{acknowledgments}
\subsection{Type II Quasars} Type II quasars are the obscured counterparts of the classical quasar population predicted by AGN unification models. In the optical, Type II quasar candidates are traditionally selected as objects with narrow permitted emission lines and high ionization line ratios. Zakamska et al (2003) were the first to compile large samples of Type II quasar candidates in the Sloan Digital Sky Survey. Their objects were selected to lie in the redshift range $0.3 < z < 0.83$ in order to disfavour selection of low luminosity objects. The main disadvantage of applying such a redshift cut in SDSS is that the Type II quasars are not selected from a magnitude-limited survey of galaxies, so their demographics and contribution to black hole growth cannot be studied in detail, nor can direct comparisons be made to other classes of AGN. Zakamska et al's Type II selection was based on a cut in the [OIII]/H$\beta$ line ratio as well as the presence of very high-ionization lines such as [NeV]. The [OIII] line luminosities of the sample range from $3 \times 10^7 L_{\odot}$ to close to $10^{10} L_{\odot}$ and the [OIII]/[OII] line ratios lie in the range 1-10, i.e. very similar to the most luminous objects in our sample. In follow-up work, Zakamska et al (2004) found that 143 of these objects had counterparts in the FIRST radio catalogue. They speculate that this may represent an overestimate of the true fraction, because the SDSS targeted many FIRST radio sources for spectroscopy. Hubble Space Telescope imaging of the host galaxies of a subset of 9 Type II quasars with [OIII] line luminosities greater than $3 \times 10^8 L_{\odot}$ reveals that 6 out of the 9 are elliptical galaxies well-fit by de Vaucouleurs light profiles and the other 3 have a minor disk component (Zakamska et al 2006). Most recently, Liu et al (2013a,b) have obtained IFU data for 11 of the most luminous, radio-quiet objects in their sample and show that the [OIII] emission is very extended, with a mean diameter of 28 kpc, and is spherical in morphology. The majority of nebulae show blue-shifted excesses in their line profiles across most of their extents, signifying gas outflows. These authors estimate a median outflow velocity of 760 km/s, similar to or above the escape velocities from the host galaxies. In Figure 15, we present a compilation of images of the mid-IR excess AGN in our sample with the highest [OIII] luminosities ($> 10^8 L_{\odot}$). As can be seen, the majority also have elliptical morphologies and centrally concentrated light profiles. The galaxy in the bottom row with a strange red handle-like protuberance is the famous ``teacup'' AGN, the nearest known radio-quiet Type II quasar, with a redshift z=0.08056, first identified by Reyes et al (2008). In recent work, Harrison et al (2015) have studied the ionized gas kinematics in this object and find evidence for an outflow with velocity 740 km/s. We thus believe it is likely that there is a close correspondence between the Type II quasar population and the brightest objects in our mid-IR excess sample. We have also checked whether there is a significant population of optically-selected AGN with [OIII] luminosities greater than $10^8 L_{\odot}$ that are not included in our mid-IR excess sample. We find that only 25 out of 128 such objects are missed, indicating that the mid-IR and Type II quasar selection techniques yield essentially the same set of objects at the very highest [OIII] luminosities.
\begin{figure*} \includegraphics[width=135mm]{radiocentral_mostlum.eps} \caption{A compilation of SDSS cut-out images of the mid-IR excess AGN in our sample with the highest [OIII] luminosities ($> 10^8 L_{\odot}$). \label{fig:mostlum}} \end{figure*} What about the less luminous mid-IR excess AGN? In Figure 16, we present a compilation of objects with D$_n$(4000) in the range 1.0-1.2, indicative of current bursts of star formation. As shown in Figures 9 and 10, these galaxies have more moderate [OIII] luminosities, in the range $10^7-10^8 L_{\odot}$. As can be seen, there are many more interacting pairs and triples in this sample than in the high-luminosity sample. If the galaxies in Figures 15 and 16 constitute different phases of the same merger-induced black hole fuelling events, the host galaxies shown in Figure 16 could be said to represent an earlier stage of the merging process. \begin{figure*} \includegraphics[width=135mm]{radiocentral_moststarf.eps} \caption{A compilation of SDSS cut-out images of the mid-IR excess AGN in our sample with D$_n$(4000) in the range 1.0-1.2, indicative of current bursts of star formation. \label{fig:moststarf}} \end{figure*} \subsection{AGN selected only by a mid-IR colour criterion} As discussed in Paper I, the selection of AGN using WISE photometry has generally been carried out using colour cuts designed to avoid the main locus of star-forming galaxies (e.g. Stern et al 2012). We showed in this paper that the W1-W2 colours of a significant fraction of such objects remain very red out to large ($>$ 5 kpc) radii, suggesting that a significant fraction of the mid-IR emission may arise from an extended distribution of dust in the galaxy heated by collisions with electrons from the surrounding hot halo gas, rather than from a central, parsec-scale torus. We include an additional radio-loud criterion to increase the likelihood that the sample contains a high proportion of galactic nuclei with black holes that are currently accreting. In other words, our goal is to maximize the purity of the AGN sample with the data at hand. The danger with the procedure adopted in Paper I, compared with past mid-IR selection procedures, is that such an AGN sample is not complete. This may occur, for example, if the emission comes from a jet that is variable over timescales that are short compared to the lifetime of the torus. Figure 17 compares some of the key properties of the full mid-IR excess sample and the sample with the additional cut on radio luminosity. Mid-IR excess galaxies are identified as outliers in SFR/$M_*$ versus D$_n$(4000) space for galaxies with $\log$ SFR/$M_* > -11$ and in H$\delta_A$ versus D$_n$(4000) space for galaxies with lower specific star formation rates. We bin up the two planes in intervals of 0.15 in D$_n$(4000), 0.25 dex in $\log$ SFR/$M_*$ and 0.1 in H$\delta_A$, and calculate the distribution of W1-W2 colours in each bin. Outliers, or mid-IR excess galaxies, are defined to have colours that lie above the upper 95th percentile point of the distribution. In Figure 17, we plot galaxy properties as a function of the quantity $\Delta$(W1-W2), the difference between the measured W1-W2 colour of the galaxy and the colour that delineates the upper 95th percentile cut.
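The outlier definition is straightforward to implement. The sketch below is ours, with hypothetical column names, and shows the SFR/$M_*$ versus D$_n$(4000) plane only; the H$\delta_A$ versus D$_n$(4000) plane for low specific star formation rates is handled analogously:
\begin{verbatim}
import numpy as np
import pandas as pd

# df is assumed to hold one row per galaxy, with columns
# 'd4000', 'log_ssfr' (= log SFR/M*) and 'w1w2' (WISE W1-W2 colour).
def delta_w1w2(df, d4000_step=0.15, ssfr_step=0.25, pct=95.0):
    """Per-bin 95th-percentile W1-W2 cut and Delta(W1-W2)."""
    ix = np.floor(df['d4000'] / d4000_step).astype(int)
    iy = np.floor(df['log_ssfr'] / ssfr_step).astype(int)
    cut = df.groupby([ix, iy])['w1w2'].transform(
        lambda c: np.percentile(c, pct))
    return df['w1w2'] - cut      # outliers have Delta(W1-W2) > 0

# Mid-IR excess galaxies in the star-forming regime (log SFR/M* > -11):
# sf = df[df['log_ssfr'] > -11]
# excess = sf[delta_w1w2(sf) > 0]
\end{verbatim}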
In the top left panel of Figure 17, we plot the fraction of galaxies as a function of $\Delta$(W1-W2) for the mid-IR excess sample without the radio-loud cut (black histogram) compared to the fiducial sample (black triangles). As can be seen, the sample with the radio-loud cut includes a much more pronounced tail of objects with large $\Delta$(W1-W2), i.e. with mid-IR colours that are very far from the stellar locus. This supports our claim that the radio selection is increasing the purity of the AGN sample. \begin{figure} \includegraphics[width=90mm]{threshprope.ps} \caption{ The top left panel shows the fraction of the sample in bins of $\Delta (W1-W2)$, where $\Delta (W1-W2)$ is the difference between the W1-W2 colour and the colour that delineates the 95th percentile cut. The result for the mid-IR excess sample without any constraint from the radio is shown as a black histogram, while black triangles show the result for the fiducial sample. In the next three panels, stellar mass, [OIII] line luminosity and ionization parameter are plotted as a function of $\Delta (W1-W2)$. Each plotted point corresponds to a bin containing a fixed number (200) of galaxies, so the noise due to Poisson sampling of the underlying distribution remains constant in each diagram. Black lines/triangles show the median of the distribution, red lines/triangles the upper 75th percentile of the distribution and blue lines/triangles the lower 25th percentile. Once again, lines denote the full mid-IR excess sample, while triangles denote the fiducial sample. \label{fig:threshprop}} \end{figure} In the next three panels, trends in the stellar mass of the host galaxy, [OIII] line luminosity and ionization parameter [OIII]/[OII] are shown as a function of $\Delta$(W1-W2) for the two samples. As can be seen, [OIII] luminosity and ionization parameter increase for systems with larger W1-W2 colours in a similar manner for both classes of object. This implies that the main effect of the radio selection is to boost the fraction of galaxies with the most extreme W1-W2 colours, which are also the most optically luminous systems with the highest ionization parameters. More detailed studies using spatially resolved data will be necessary to figure out the physical nature of the central radio emission (jet or central starburst) and whether or not the radio source is influencing the structure of the dust emission in the central regions of the galaxy. \section {Summary} We now summarize the main results of our analysis. We have studied the narrow emission line properties and stellar populations of a sample of 1385 radio-detected, mid-IR excess AGN in order to understand the physical conditions in the interstellar medium of these objects. We compare these systems with a control sample of 50,000 AGN selected by their optical emission line ratios that do not have a significant mid-IR excess. Our main conclusions are the following: \begin{itemize} \item The mid-IR excess AGN populate the high ionization branches of the [OIII]/H$\beta$ versus [OI]/H$\alpha$ and [SII]/H$\alpha$ BPT diagrams, whereas the control sample AGN cluster near the star-forming locus and have lower ionization parameters on average. \item The mid-IR excess AGN have [OIII] luminosities that are an order of magnitude larger on average than the control sample AGN. \item The mid-IR excess AGN have higher electron densities, but similar metal abundances to the control sample. \item The H$\delta_A$ versus D$_n$(4000) diagrams show that a much larger fraction of the host galaxies of mid-IR excess AGN have experienced recent bursts of star formation. These recent starburst galaxies have lower stellar metallicities and higher Mg/Fe ratios.
\item The number densities of mid-IR excess AGN are 1000 times smaller than those of control sample AGN at low [OIII] luminosities ($\sim 10^{6} L_{\odot}$), but at the very highest [OIII] luminosities probed by our sample ($\sim 10^{9} L_{\odot}$), mid-IR excess AGN become more populous by a factor of 10. \item Mid-IR excess AGN contribute about half the total present-day black hole growth in galaxies with stellar masses larger than $10^{11} M_{\odot}$, whereas control sample AGN are currently the dominant contributor in lower mass systems. \end{itemize} It is well known that the AGN population evolves strongly to higher luminosities at higher redshifts, and it is likely that AGN similar to the mid-IR excess population studied in this paper become much more populous. We note that more than 95\% of all AGN in the parent sample with [OIII] luminosities greater than $10^{8}$ L$_{\odot}$ are included in the mid-IR excess/radio sample studied in this paper, suggesting that the two selection techniques converge at the highest luminosities. The future usefulness of our low redshift sample will lie in spatially resolved spectroscopic follow-up studies of various kinds, as in the Harrison et al study of the teacup AGN. These studies are required in order to understand how accretion onto the central supermassive black hole is occurring, the physical origin and location of the very high ionization gas in these systems, and the impact of the energetic processes occurring near the black hole on the interstellar medium of the host galaxy. The establishment of a technique that selects a {\em population} of AGN seen at different phases along a starburst cycle is also interesting for more detailed follow-up programs. Although `AGN feedback' in the form of extended outflowing gas is now established in a variety of AGN sub-populations, such as the most luminous Type II quasars and radio galaxies, understanding the global ubiquity, energetics and duty cycle of the feedback will require more carefully controlled statistical approaches. Finally, our sample is an interesting one for understanding the relation between AGN activity, galaxy-galaxy interactions, mergers between black holes, and the origin of powerful AGN-driven outflows of gas. It is rather interesting that although the number densities of mid-IR excess and control sample AGN are very different, the integrated [OIII] emissivity in both classes of objects peaks at a stellar mass of $\sim 10^{10.5} M_{\odot}$. We note that this was shown for the AGN population as a whole in Heckman \& Kauffmann (2004, see their Figure 4). This value ($10^{10.5} M_{\odot}$) corresponds closely to the transition mass where the galaxy population switches over from a blue, star-forming, disk-dominated population to a red, passive, bulge-dominated one. Energetic feedback from AGN has been hypothesized to cause this transition, but considerable uncertainty remains as to how this occurs in practice. Some models assume that a feedback mode associated with radio galaxies accreting hot gas at the centers of massive dark matter halos is responsible for this transition (Croton et al 2006), while others invoke quasar-driven feedback triggered by galaxy-galaxy mergers (Hopkins et al 2006). Identification of a population of very luminous AGN clearly associated with galaxy-galaxy mergers is the first step to answering this question empirically. \section*{Acknowledgments} I thank Patricia Sanchez-Blazquez and Tim Heckman for helpful discussions and Mazda Adli for his support.
\section{Introduction} \label{section:introduction} \enlargethispage{0.5 cm} Radioactive decays in which photons with energies above $Q=2039$~keV are emitted are expected to be a significant source of background for the GERmanium Detector Array, {\sc GERDA}~\cite{proposal}. {\sc GERDA} is an experiment currently under construction whose aim is the search for the neutrinoless double beta-decay ($0\nu\beta\beta$) of the germanium isotope $^{76}$Ge. \\ Methods to distinguish between electrons and multiply scattered photons using the time structure of the germanium detector response, or pulse shape, are presented in this paper. The pulse shape depends on the location and the spatial distribution over which energy is deposited inside the detector in a single event. Photons in the MeV-energy region will predominantly Compton-scatter and deposit energy at locations separated by centimeters. These events are referred to as {\it multi-site events}. In contrast, electrons in the same energy region have a range of the order of a millimeter. Events of this kind are referred to as {\it single-site events}. \\ Pulse shape analysis methods have been developed for nuclear experiments such as {\sc AGATA}~\cite{agata} and {\sc GRETA}~\cite{greta} as well as for double beta-decay experiments~\cite{Hellmig:2000xp,HM,IGEX,Aalseth:2000hy,Elliott:2005at}. In the context of the latter, these techniques are now extended to segmented detectors. In this study the focus is on pulse shape analysis after the application of a single segment requirement as presented in~\cite{Abt:2007rg}. The performance of the pulse shape analysis with and without segment information is compared based on data taken with an 18-fold segmented {\sc GERDA} prototype detector. \\ The experimental setup and the collected data sets are described in Section~\ref{section:setup}. The accompanying Monte Carlo simulation is introduced in Section~\ref{section:simulation}. A parameter accessible in simulations, which is a measure of the volume over which energy is deposited inside the detector, is defined. A definition of single-site and multi-site events is derived from the Monte Carlo data sets based on this parameter. The fractions of single-site and multi-site events in the data sets are estimated. Three analysis methods are presented in Section~\ref{section:methods}, and these methods are applied to the data sets taken with the prototype detector. The results are summarized in Section~\ref{section:results}. Conclusions are drawn in Section~\ref{section:conclusions}. \section{Experimental setup and data sets} \label{section:setup} \subsection{Experimental setup and data taking} The segmented germanium detector under study is the first segmented {\sc GERDA} prototype detector. The true coaxial cylindrical crystal has a height of 70~mm, an outer diameter of 70~mm and a central bore with a diameter of 10~mm. It is 18-fold segmented, with a 6-fold segmentation in the azimuthal angle $\phi$ and a 3-fold segmentation in the height $z$. It was operated in a conventional test cryostat. Signals from the core and the segment electrodes were amplified and subsequently digitized using a 14-bit ADC with a sampling rate of 75~MHz. The energy and the pulse shapes of the core and the 18~segment electrodes were recorded for each event. The pulse shape data consist of 300 samples of the integrated charge amplitude, taken every 13.3~ns. The onset of the signal was delayed by one~$\mu$s.
The full width at half maximum (FWHM) energy resolution of the core electrode was 2.6~keV at energies around 1.3~MeV; the energy resolutions of the segment electrodes ranged from 2.4~keV to 4.8~keV, with an average segment energy resolution of 3.3~keV. Details of the experimental setup and the detector performance can be found in~\cite{Abt:2007rf}. \\ A 100~kBq $^{228}$Th source was placed at $z=0$~cm and $r=17.5$~cm with respect to the detector center ($z=0$~cm, $r=0$~cm), facing towards the center of a segment, $S$, located in the middle row. Two data sets were taken with different trigger conditions, labeled $TR_{C}$ and $TR_{S}$. The former trigger condition requires the core electrode to show an energy above 1~MeV; the collected data set is referred to as the {\it core data set} and contains $127\,000$~events. The latter trigger condition requires segment~$S$ to show an energy above 1~MeV; the collected data set is referred to as the {\it segment data set} and contains $420\,000$~events. As an example, Figure~\ref{fig:example} shows a pulse shape measured with the core (left) and with the segment~$S$ electrode (right) for an event in the segment data set. The core-energy spectra will be shown in Section~\ref{subsection:application}. \\ \begin{figure}[ht!] \center \mbox{\epsfig{file=example_7_new.eps,width=0.9\textwidth}} \caption{Pulse shape measured with the core (left) and with the segment~$S$ electrode (right) for an event in the segment data set. The energy of $1758$~keV seen in the core is completely contained in segment~$S$. The starting time is chosen arbitrarily in this example. The amplitude is in arbitrary units, but the scale is the same for both pulse shapes. The pulse shapes are dominated by different charge carrier types. \label{fig:example}} \end{figure} \pagebreak \subsection{Event selection} \label{subsection:selection} A pre-selection applied to the segment data set collects events with energy deposited in only one segment. It requires the energy measured in segment~$S$ to be the same as the energy measured in the core within $\pm5$~keV, corresponding to about $\pm 4~\sigma$ given the energy resolution. In total, $150\,396$~events fulfill the pre-selection criterion. \\ Four data samples each are selected from the core and segment data sets. The data samples are defined by the energy measured in the core and are labeled: \begin{itemize} \item $DEP$: The sample contains events with a core energy in the region of \mbox{$(1593\pm5)$~keV}. These events are associated with the double escape peak of the $2615$~keV $^{208}$Tl photon. The photon produces an electron-positron pair; the positron subsequently annihilates, and both 511~keV annihilation photons escape the detector. The energy is predominantly deposited on a millimeter-scale; i.e., locally. \item $\Gamma_{1}$: The sample contains events with a core energy in the region of \linebreak \mbox{$(1620\pm5)$~keV}. These events are associated with photons of this energy produced in the decay of $^{212}$Bi. The photons mostly scatter multiple times before their energy is fully deposited inside the detector. \item $\Gamma_{2}$: The sample contains events with a core energy in the region of \linebreak \mbox{$(2615\pm5)$~keV}. These events are associated with photons of this energy produced in the decay of $^{208}$Tl. The photons mostly scatter multiple times before their energy is fully deposited inside the detector. \item $ROI$: The sample contains events with a core energy in the region of interest, $(2039\pm50)$~keV.
These events are predominantly associated with Compton-scattered photons from $^{208}$Tl. \end{itemize} The requirements of the trigger, pre-selection and event selection are listed in Table~\ref{table:datasets}. Also the number of events in the corresponding data samples are shown. The amount of background in each data sample, as estimated from taking spectra without the $^{228}$Th source present, was found to be less than 1\%. \\ \begin{table}[ht!] \caption{Requirements of the trigger, pre-selection and event selection, and the number of events in the corresponding data samples. $E_{C}$ and $E_{S}$ are the energies seen in the core and in segment~$S$, respectively. \label{table:datasets}} \center \begin{tabular}{llr} \\ \hline Cut & Condition & Events \\ \hline Trigger ($TR_{C}$) & $E_{C} > 1$~MeV & $127\,000$ \\ Pre-selection & - & $127\,000$ \\ Selection ($DEP$) & $\left| E_{C} - 1593\mathrm{~keV} \right| < \phantom{0}5$~keV & $1673$ \\ Selection ($\Gamma_{1}$) & $\left| E_{C} - 1620\mathrm{~keV} \right| < \phantom{0}5$~keV & $1965$ \\ Selection ($\Gamma_{2}$) & $\left| E_{C} - 2615\mathrm{~keV} \right| < \phantom{0}5$~keV & $22\,924$ \\ Selection ($ROI$) & $\left| E_{C} - 2039\mathrm{~keV} \right| < 50$~keV & $6\,431$ \\ \hline Trigger ($TR_{S}$) & $E_{S} > 1$~MeV & $420\,000$ \\ Pre-selection & $\left| E_{C} - E_{S} \right| < 5$~keV & $150\,396$ \\ Selection ($DEP$) & $\left| E_{C} - 1593\mathrm{~keV} \right| < \phantom{0}5$~keV & $3492$ \\ Selection ($\Gamma_{1}$) & $\left| E_{C} - 1620\mathrm{~keV} \right| < \phantom{0}5$~keV & $1972$ \\ Selection ($\Gamma_{2}$) & $\left| E_{C} - 2615\mathrm{~keV} \right| < \phantom{0}5$~keV & $19\,243$ \\ Selection ($ROI$) & $\left| E_{C} - 2039\mathrm{~keV} \right| < 50$~keV & $7707$ \\ \hline \end{tabular} \end{table} \section{Monte Carlo simulation} \label{section:simulation} The GEANT4~\cite{geant4} based MaGe~\cite{MaGe} framework was used to simulate the prototype detector setup (for details and a validation of this particular simulation see~\cite{Abt:2007rg}). A Monte Carlo study was performed to estimate the spatial distribution over which energy is deposited in the detector for events in the different data samples. A $^{228}$Th source was simulated. The trigger, pre-selection and event selection requirements discussed in the previous section were applied to the Monte Carlo data. The data sets are referred to as {\it core} and {\it segment Monte Carlo data sets}. A measure for the spatial distribution over which energy is distributed inside the detector is the radius $R_{90}$. This is defined as the radius inside which 90\% of the energy in a single event is deposited; for a detailed discussion see~\cite{segmentation}. Figure~\ref{fig:R90} shows the distribution of $R_{90}$ for the $DEP$, $\Gamma_{1}$, $\Gamma_{2}$ and $ROI$ samples for the core (left) and segment (right) Monte Carlo data sets. All distributions are normalized to unity. The $R_{90}$ distributions range from 0.1~mm ($\log_{10}(R_{90})=-1$) up to 7~cm ($\log_{10}(R_{90})=1.8$). The $DEP$ samples are dominated by events with $R_{90}$ in a region from 0.1~mm to 1~mm. A long tail towards larger radii is visible and mostly due to events in the underlying Compton-shoulder of $^{208}$Tl and events in which electrons undergo hard bremsstrahlung processes. The $R_{90}$ distributions for the $\Gamma_{1}$ and $ROI$ samples have two prominent regions each, one at radii from 0.3~mm to 1~mm and a second from 3~mm to 6~cm. 
The latter is due to multiply scattered photons, whereas the former is due to higher-energy photons which scatter only once and then leave the detector. The $R_{90}$ distributions for the $\Gamma_{2}$ samples range from 0.3~mm to about 7~cm, with a maximum at around 2~cm for the core Monte Carlo data sample and at around 1~cm for the segment Monte Carlo data sample. The sample is dominated by events in which photons scatter multiple times. No peak at small $R_{90}$ is visible. \\ It is expected that the single segment requirement rejects events with large values of $R_{90}$. Indeed, the distributions of $R_{90}$ in the segment Monte Carlo data samples are suppressed in the region above 1~cm. The peaks between 0.1~mm and 1~mm in the $DEP$, $\Gamma_{1}$ and $ROI$ samples are more pronounced in this case. \\ \begin{figure}[ht!] \center \begin{tabular}{cc} \begin{minipage}[ht!]{0.45\textwidth} \mbox{\epsfig{file=R90_core.eps,width=\textwidth}} \end{minipage} & \begin{minipage}[ht!]{0.45\textwidth} \mbox{\epsfig{file=R90_segment.eps,width=\textwidth}} \end{minipage} \\ \end{tabular} \caption{Normalized distributions of $R_{90}$ for the $DEP$, $\Gamma_{1}$, $\Gamma_{2}$ and $ROI$ samples for the Monte Carlo core (left) and segment data sets (right). Single-site events (SSE) and multi-site events (MSE) are defined by requiring $R_{90} < 2$~mm and $R_{90} > 2$~mm (dashed line), as discussed in the text. \label{fig:R90}} \end{figure} Single-site and multi-site events are defined by requiring $R_{90}<\overline{R}$ and $R_{90}>\overline{R}$, respectively, where $\overline{R}$ is a chosen parameter value. The distributions of $R_{90}$ for the $DEP$ samples suggest $\overline{R}=2$~mm ($\log_{10}(\overline{R})=0.3$). Also, due to the sampling rate of 75~MHz and the average drift velocity of charge carriers ($\mathcal{O}(10^{8})$~mm/s), energy deposits separated by less than about 2~mm cannot be resolved. With this choice, the fractions of single-site events in the Monte Carlo data samples are defined; they are summarized in Table~\ref{table:fraction}. Also listed are the corresponding systematic uncertainties of the fractions, which are derived by varying the parameter $\overline{R}$ by $\pm1$~mm. \begin{table}[ht!] \caption{Fractions of single-site events in the Monte Carlo data samples. The errors are derived by varying the parameter $\overline{R}$ by $\pm1$~mm. \label{table:fraction}} \center \begin{tabular}{lcccc} \\ \hline Monte Carlo data samples & $DEP$ & $\Gamma_{1}$ & $\Gamma_{2}$ & $ROI$ \\ & ($1593$~keV) & ($1620$~keV) & ($2615$~keV) & (2039~keV) \\ \hline Core samples & $(77.9^{+1.6}_{-3.4})$\% & $(30.5^{+4.0}_{-3.6})$\% & $(12.2^{+\phantom{0}6.0}_{-\phantom{0}7.6})$\% & $(52.4^{+3.8}_{-7.6})$\% \\ Segment samples & $(89.0^{+1.1}_{-3.0})$\% & $(55.0^{+5.0}_{-4.4})$\% & $(30.0^{+10.0}_{-16.8})$\% & $(77.6^{+3.4}_{-6.7})$\% \\ \hline \end{tabular} \end{table} The Monte Carlo data samples are not purely composed of single-site or multi-site events. The $DEP$ samples are dominated by single-site events, while the $\Gamma_{1}$ and $\Gamma_{2}$ samples have large fractions of multi-site events. Events in the $DEP$ samples are referred to as {\it electron-like}, while events in the $\Gamma_{1}$ and $\Gamma_{2}$ samples are referred to as {\it photon-like} in the following. Note that these two labels do not describe an intrinsic property of an event (such as the range of energy deposition); they are used to emphasize the different probabilities of the event being single-site or multi-site.
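Since $R_{90}$ drives the single-site/multi-site classification, a minimal computational sketch may be helpful. The code below is ours, not taken from the MaGe framework; it assumes Monte Carlo hits given as positions (in mm) with deposited energies and uses the energy-weighted barycentre as reference point, one possible convention (see~\cite{segmentation} for the precise definition):
\begin{verbatim}
import numpy as np

def r90(positions, energies):
    """Smallest radius around the energy-weighted barycentre that
    contains 90% of the energy deposited in a single event."""
    pos = np.asarray(positions, dtype=float)  # shape (n_hits, 3), mm
    e = np.asarray(energies, dtype=float)
    centre = np.average(pos, axis=0, weights=e)
    r = np.linalg.norm(pos - centre, axis=1)
    order = np.argsort(r)
    cum = np.cumsum(e[order]) / e.sum()
    return r[order][np.searchsorted(cum, 0.90)]

# Single-site vs multi-site classification with R_bar = 2 mm:
# is_single_site = r90(hits_xyz, hits_e) < 2.0
\end{verbatim}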
\section{Analysis methods} \label{section:methods} Three analysis methods were tested. Half of the $DEP$ and $\Gamma_{1}$ data samples were used to train the methods. The other half of these samples, together with the $\Gamma_{2}$ and $ROI$ samples, were used to test the analysis methods. The $DEP$ and $\Gamma_{1}$ samples were selected for training because they are close in energy, in order to avoid biases due to the difference in energy of events in the two samples. For the same reason, the maximum of each pulse shape was normalized to unity for each event. \\ The analyses were applied to the core and segment data samples in order to study the effect of pulse shape analysis before and after the application of a single segment requirement. In the former case, only the core pulse shape was used. In the latter case, the core pulse shape was used and, optionally, the segment~$S$ pulse shape in addition. \subsection{Likelihood discriminant method} Four quantities are calculated for each pulse shape. These quantities provided separation power in previous studies~\cite{Aalseth:2000hy,Elliott:2005at}. Interpolation algorithms were applied to the pulse shapes to obtain continuous distributions. Figure~\ref{fig:quantities} shows an ideal pulse on which the calculated quantities are indicated. All quantities are given subscripts $C$ and $S$ for the core and segment pulse shapes, respectively. \enlargethispage{0.5 cm} \begin{itemize} \item Risetime $\tau_{10-30}$, defined as the difference between the times the integrated charge amplitude has reached 10\% and 30\% of its maximal amplitude; \item risetime $\tau_{10-90}$, defined as the difference between the times the integrated charge amplitude has reached 10\% and 90\% of its maximal amplitude; \item left-right asymmetry $\zeta$, defined as the asymmetry of the areas below the left and the right half of the current pulse, $A_{l}$ and $A_{r}$, measured from the maximum\footnote{The definition differs from the one given in~\cite{Aalseth:2000hy,Elliott:2005at}.}, $\zeta = \frac{A_{l}-A_{r}}{A_{l}+A_{r}}$; \item current pulse width $\delta$, defined as the full width at half maximum of the current pulse. \end{itemize} \begin{figure}[ht!] \center \mbox{\epsfig{file=quantities.eps,width=0.45\textwidth}} \caption{Ideal pulse shape: the integrated charge (thick line) and the current (thin line). Indicated are the quantities $\tau_{10-30}$, $\tau_{10-90}$, $\delta$, $A_{l}$ and $A_{r}$ (see text). \label{fig:quantities}} \end{figure} The variables are histogrammed for both training samples and their integrals are normalized to unity. As an example, Figure~\ref{fig:quantities_distributions} shows the normalized distributions of the four quantities calculated from the core pulse shape in the two segment data samples. The average risetime of pulses in the $DEP$ sample is larger than that of pulses in the $\Gamma_{1}$ sample~\footnote{This behavior was also found in a simple calculation of pulse shapes assuming a perfect crystal and not taking into account any effects from the electronics.}.
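To make the four quantities concrete, the following sketch (ours; it assumes a baseline-subtracted charge pulse sampled every 13.3~ns that starts below 10\% of its maximum, and uses simple sample-level estimates in place of the interpolation mentioned above) computes them for one digitized pulse:
\begin{verbatim}
import numpy as np

DT = 13.3   # sampling interval [ns]

def crossing(q, frac):
    # Linearly interpolated time (ns) at which the normalized charge
    # pulse q first reaches the fraction `frac` of its maximum;
    # assumes q[0] < frac (baseline below threshold).
    i = np.argmax(q >= frac)
    return (i - 1 + (frac - q[i-1]) / (q[i] - q[i-1])) * DT

def pulse_quantities(q):
    q = q / q.max()                    # normalize the maximum to unity
    tau_10_30 = crossing(q, 0.3) - crossing(q, 0.1)
    tau_10_90 = crossing(q, 0.9) - crossing(q, 0.1)
    i_cur = np.gradient(q, DT)         # current pulse, dQ/dt
    imax = np.argmax(i_cur)
    A_l, A_r = i_cur[:imax+1].sum(), i_cur[imax+1:].sum()
    zeta = (A_l - A_r) / (A_l + A_r)   # left-right asymmetry
    above = np.where(i_cur >= 0.5*i_cur[imax])[0]
    delta = (above[-1] - above[0])*DT  # coarse FWHM of current pulse
    return tau_10_30, tau_10_90, zeta, delta
\end{verbatim}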
The relative frequencies are used to define discriminants, given that the event is electron-like ($DEP$ sample) or photon-like ($\Gamma_{1}$ sample). The respective overall discriminants, $p_{e^{-}}$ and $p_{\gamma}$, are calculated by multiplying the individual discriminants: \begin{eqnarray} p_{e^{-}}^{k} & = & p(\tau_{\mathrm{10-30},k}|e^{-}) \cdot p(\tau_{\mathrm{10-90},k}|e^{-}) \cdot p(\zeta_{k}|e^{-}) \cdot p(\delta_{k}|e^{-}) \ , \\ && \nonumber \\ p_{\gamma}^{k} & = & p(\tau_{\mathrm{10-30},k}|\gamma) \cdot p(\tau_{\mathrm{10-90},k}|\gamma) \cdot p(\zeta_{k}|\gamma) \cdot p(\delta_{k}|\gamma) \ , \end{eqnarray} with $k=C$~or~$S$ for the core and segment pulses, respectively. Note that no correlations among these quantities are taken into account. Likelihood discriminants (LHD) are constructed from $p_{e^{-}}$ and $p_{\gamma}$ for each individual event: \begin{eqnarray} D^{C} & = & \frac{p_{e^{-}}^{C}}{p_{e^{-}}^{C} + p_{\gamma}^{C}} \ , \\ D^{C+S} & = & \frac{p_{e^{-}}^{C} \cdot p_{e^{-}}^{S}}{p_{e^{-}}^{C} \cdot p_{e^{-}}^{S}+ p_{\gamma}^{C} \cdot p_{\gamma}^{S}} \ , \end{eqnarray} \noindent where $D^{C}$ uses information from the core electrode only and $D^{C+S}$ uses information from the core and segment electrodes. $D$ varies between~0 and~1 by construction. $D$ peaks at~1 for electron-like events; for photon-like events $D$ peaks at~0. Events are identified as electron-like for $D>\overline{D}$ and as photon-like for $D<\overline{D}$, where $\overline{D}$ is a chosen parameter. \begin{figure}[ht!] \center \begin{tabular}{cc} \begin{minipage}[ht!]{0.45\textwidth} \mbox{\epsfig{file=quantities_risetime1030.eps,width=\textwidth}} \end{minipage} & \begin{minipage}[ht!]{0.45\textwidth} \mbox{\epsfig{file=quantities_risetime1090.eps,width=\textwidth}} \end{minipage} \\ \begin{minipage}[ht!]{0.45\textwidth} \mbox{\epsfig{file=quantities_zeta.eps,width=\textwidth}} \end{minipage} & \begin{minipage}[ht!]{0.45\textwidth} \mbox{\epsfig{file=quantities_delta.eps,width=\textwidth}} \end{minipage} \end{tabular} \caption{Quantities calculated from the core pulse shapes in the $DEP$ (open histogram) and $\Gamma_{1}$ (hatched histogram) segment data samples. Top left: risetime $\tau_{10-30}$; top right: risetime $\tau_{10-90}$; bottom left: left-right asymmetry $\zeta$; bottom right: current pulse width $\delta$. \label{fig:quantities_distributions}} \end{figure} \subsection{Library method} The training $DEP$ samples are interpreted as libraries of electron-like reference pulses. An average $\chi^{2}$ with respect to all reference pulses is calculated for each pulse shape in the test samples. For the $k$th reference pulse and the $l$th pulse shape under study, the average $\chi^{2}$ is defined as \begin{equation} \chi_{k,l}^{2} = \frac{1}{N}\sum_{i = 1}^{N} \frac{(x_{k, i} - x_{l,i})^{2}}{\sigma^{2}} \ , \end{equation} where $N$ is the number of bins of the pulse shapes and $x_{k,i}$ and $x_{l,i}$ are the pulse heights in bin $i$ of the $k$th reference pulse and the $l$th pulse under study. $\sigma^{2}$ is defined as \begin{equation} \sigma^{2} = \sigma_{\mathrm{k}}^{2} + \sigma_{\mathrm{l}}^{2}, \end{equation} where $\sigma_{\mathrm{k}}$ and $\sigma_{\mathrm{l}}$ are the noise amplitudes of the reference pulse shape and of the pulse shape under study. The noise amplitude is the RMS of the baseline measured during the one~$\mu$s before the onset of the pulse. \\ The minimum $\chi^{2}$ with respect to the reference pulses is selected and denoted $\chi^{2}_\mathrm{min}=\chi^{2}_{k_{\mathrm{min}},l}$ for each pulse shape in the test sample. Ideally, the minimum $\chi^{2}$ for electron-like events should be smaller than that for photon-like events. Events are identified as electron-like for $\chi^{2}_{\mathrm{min}} < \overline{\chi^{2}}$ and as photon-like for $\chi^{2}_{\mathrm{min}} > \overline{\chi^{2}}$, where $\overline{\chi^{2}}$ is a chosen parameter.
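Both discriminants can be summarized in a few lines; the sketch below is ours, with the binned relative-frequency histograms and the reference library assumed to be prepared from the training samples as described above:
\begin{verbatim}
import numpy as np

def lhd(vals, hists_e, hists_g):
    """Likelihood discriminant D = p_e / (p_e + p_gamma); `vals` holds
    (tau_10_30, tau_10_90, zeta, delta), and hists_e/hists_g map each
    quantity to (bin_edges, normalized_freqs) built from the DEP and
    Gamma_1 training samples."""
    p_e = p_g = 1.0
    for v, (ed_e, f_e), (ed_g, f_g) in zip(vals, hists_e, hists_g):
        p_e *= f_e[np.clip(np.searchsorted(ed_e, v) - 1,
                           0, len(f_e) - 1)]
        p_g *= f_g[np.clip(np.searchsorted(ed_g, v) - 1,
                           0, len(f_g) - 1)]
    return p_e / (p_e + p_g)   # in [0, 1]; near 1 for electron-like

def chi2_min(pulse, sigma_l, library, sigmas_k):
    """Library method: smallest average chi^2 of a normalized test
    pulse against all DEP reference pulses (equal-length arrays)."""
    return min(np.mean((ref - pulse)**2 / (sk**2 + sigma_l**2))
               for ref, sk in zip(library, sigmas_k))
\end{verbatim}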
Events are identified as electron-like for $\chi^{2}_{\mathrm{min}} < \overline{\chi^{2}}$ and as photon-like for $\chi^{2}_{\mathrm{min}} > \overline{\chi^{2}}$, where $\overline{\chi^{2}}$ is a chosen parameter. \subsection{Neural network method} Artificial neural networks (ANNs) are used to separate electron-like from photon-like events. The input neurons are fed with samples of the normalized pulse shape, starting from the time when the amplitude has reached 10\%. Forty consecutive samples per pulse shape are used. The ANN consists of 40 input neurons, 40 hidden neurons and one output neuron for the core data samples. An additional 40 input neurons are used optionally for the segment data samples. \\ The ANNs are trained by feeding them with pulse shapes from the two training samples while simultaneously providing the information as to which sample each pulse belongs (0:~$DEP$ sample, 1:~$\Gamma_{1}$ sample). The ANNs adjust their internal weights iteratively using the Broyden--Fletcher--Goldfarb--Shanno (BFGS) learning method~\cite{BFGS}. Each ANN is trained in about $1000$~iterations. The output quantity, $NN$, is on average larger for photon-like events than for electron-like events. Events are identified as electron-like for $NN < \overline{NN}$ and as photon-like for $NN > \overline{NN}$, where $\overline{NN}$ is a chosen parameter. \pagebreak \section{Results} \label{section:results} The three analysis methods are applied to the data samples defined in Section~\ref{subsection:selection}. The likelihood discriminant and neural network analyses are performed on the segment data samples (a) with information from the core electrode only and (b) with information from the core and the segment~$S$ electrode. As an example, Figure~\ref{fig:PSA_output} shows the output distributions for the two segment training data samples $DEP$ and $\Gamma_{1}$ for the likelihood method (left), the library method (middle) and the neural network (right). The segment pulse shapes have not been taken into account for these examples. \begin{figure}[ht!] \center \begin{tabular}{ccc} \begin{minipage}[ht!]{0.30\textwidth} \mbox{\epsfig{file=lhd.eps,width=\textwidth}} \end{minipage} & \begin{minipage}[ht!]{0.30\textwidth} \mbox{\epsfig{file=chi2.eps,width=\textwidth}} \end{minipage} & \begin{minipage}[ht!]{0.30\textwidth} \mbox{\epsfig{file=NN_1.eps,width=\textwidth}} \end{minipage} \\ \end{tabular} \caption{Output distributions for the two segment training data samples $DEP$ (open histograms) and $\Gamma_{1}$ (hatched histograms) for the likelihood method (left), the library method (middle) and the neural network (right). The segment pulse shapes were not taken into account in these examples. \label{fig:PSA_output}} \end{figure} The results of the analysis are interpreted in the following. First, it is shown that the electron-like and photon-like event samples can be distinguished. In a second step, the results are interpreted to distinguish between single-site and multi-site events. Estimating the power of such a distinction requires knowledge of the fractions of single-site and multi-site events in the data samples. These fractions are taken from the Monte Carlo simulation presented in Section~\ref{section:simulation}, based on the parameter $R_{90}$. \subsection{Distinction between electron-like and photon-like event samples} \label{subsection:electronlike} \enlargethispage{0.5 cm} The power to distinguish between electron-like and photon-like event samples is estimated.
The events in the $DEP$ sample are assumed to give the same output in the analyses as events from neutrinoless double beta-decay. The cut values are chosen to keep 90\% of the events in the $DEP$ training samples for the three analysis methods and thus ensure a high detection efficiency. The fractions of events in each test data sample identified as electron-like are summarized in Table~\ref{table:fractions}. The uncertainties are estimated from the deviation of the fraction of events identified as electron-like in the $DEP$ test data samples from the nominal 90\% and are found to be about 2\%. Note that no deviation is found in the case of the library method, since the $DEP$ training data sample is used as a reference library. \begin{table}[ht!] \center \caption{Fraction of events in the test data samples identified as electron-like for the three analyses. The uncertainties are estimated to be about 2\%. \label{table:fractions}} \center \begin{tabular}{lcccc} \\ \hline Data samples & $DEP$ & $\Gamma_{1}$ & $\Gamma_{2}$ & $ROI$ \\ & ($1593$~keV) & ($1620$~keV) & ($2615$~keV) & (2039~keV) \\ \hline Likelihood method & & & & \\ \hline Core samples & 89.3\% & 76.5\% & 75.4\% & 83.4\% \\ Segm. samples, core only & 89.3\% & 67.1\% & 64.1\% & 84.8\% \\ Segm. samples, core \& segm. & 88.0\% & 66.7\% & 61.1\% & 83.4\% \\ \hline Library method & & & & \\ \hline Core samples & 90.0\% & 86.9\% & 85.8\% & 86.7\% \\ Segm. samples, core only & 90.0\% & 68.4\% & 56.4\% & 83.1\% \\ \hline Neural network method & & & & \\ \hline Core samples & 90.4\% & 65.8\% & 63.2\% & 79.9\% \\ Segm. samples, core only & 89.3\% & 54.1\% & 44.3\% & 80.8\% \\ Segm. samples, core \& segm. & 89.3\% & 56.1\% & 49.9\% & 79.6\% \\ \hline \end{tabular} \end{table} The fraction of events identified as electron-like is significantly lower than 90\% in the $\Gamma_{1}$, $\Gamma_{2}$ and $ROI$ samples. The fraction in the $\Gamma_{1}$ sample is found to be larger than that in the $\Gamma_{2}$ sample with each method. This is expected, as the mean free path of photons increases with the photon energy. \\ The fraction of events identified as electron-like in the $\Gamma_{1}$ and $\Gamma_{2}$ segment data samples (using the core pulse shape only) is found to be lower than that in the core data samples with all three methods. The additional usage of the segment pulse shape in the analyses reduces the fraction by at most 3\%; in the case of the neural network it even increases the fraction by up to 5\%. This demonstrates that the additional information is highly correlated with the existing information and only marginally contributes to the analysis. \\ The neural network shows the best performance. This is expected, since the ANN uses the largest fraction of the available information and also takes correlations between input variables into account. \\ \subsection{Selection of single-site events and discrimination against multi-site events} As demonstrated in Table~\ref{table:fraction}, neither the $DEP$ nor the $\Gamma_{1}$, $\Gamma_{2}$ and $ROI$ samples are solely composed of single-site or multi-site events.
The probabilities to correctly identify single-site and multi-site events as such, $\epsilon$ and $\eta$, can be deduced from the fractions of single-site and multi-site events in each sample (estimated from Monte Carlo) and the output of the analyses, $D$, $\chi^{2}_\mathrm{min}$ and $NN$: \begin{eqnarray} \epsilon & = & \frac{ N_{id}^{SSE}/N_{true}^{MSE} - M_{id}^{SSE}/M_{true}^{MSE} }{ N_{true}^{SSE}/N_{true}^{MSE} - M_{true}^{SSE}/M_{true}^{MSE} } \ , \label{eqn:epsilon} \\ && \nonumber \\ \eta & = & \frac{ N_{id}^{MSE}/N_{true}^{SSE} - M_{id}^{MSE}/M_{true}^{SSE} }{ N_{true}^{MSE}/N_{true}^{SSE} - M_{true}^{MSE}/M_{true}^{SSE} } \ , \label{eqn:eta} \end{eqnarray} \noindent where $N_{id}^{SSE}$ and $N_{id}^{MSE}$ are the numbers of events in the $DEP$ sample identified as single-site and multi-site events, respectively. These numbers depend on the cut value chosen for each analysis. $N_{true}^{SSE}$ and $N_{true}^{MSE}$ are the true numbers of single-site and multi-site events in the same sample and are estimated from the Monte Carlo simulation discussed in Section~\ref{section:simulation}. $M_{id}^{SSE}$ and $M_{id}^{MSE}$ are the numbers of events in the $\Gamma_{1}$ sample identified as single-site and multi-site events, respectively. $M_{true}^{SSE}$ and $M_{true}^{MSE}$ are the true numbers of single-site and multi-site events in the same sample. These expressions follow from inverting the linear relations $N_{id}^{SSE} = \epsilon\, N_{true}^{SSE} + (1-\eta)\, N_{true}^{MSE}$ and $M_{id}^{SSE} = \epsilon\, M_{true}^{SSE} + (1-\eta)\, M_{true}^{MSE}$, and analogously for the multi-site counts. The probabilities $\epsilon$ and $\eta$ are assumed to be the same for all samples. This assumption is reasonable for the $DEP$ and $\Gamma_{1}$ samples as the average energies are very close. \\ The cut values for the three analysis methods are chosen to maximize the figure of merit, the identification efficiency $\sqrt{\epsilon\cdot\eta}$. Note that these cut values differ from those used in Section~\ref{subsection:electronlike}. The probabilities obtained from the data samples using Equations~\ref{eqn:epsilon} and~\ref{eqn:eta} are listed in Table~\ref{table:efficiencies}. \begin{table}[ht!] \caption{Probabilities $\epsilon$ and $\eta$ obtained for all three analysis methods. The errors are introduced by the choice of $\overline{R}$, which determines the fractions of single-site and multi-site events. \label{table:efficiencies}} \center \begin{tabular}{lccc} \\ \hline Analysis & $\epsilon$ & $\eta$ & $\sqrt{\epsilon\cdot\eta}$ \\ \hline Likelihood method & & & \\ \hline Core samples & ($74.8^{+1.8}_{-0.3}$)\% & ($\phantom{0}84.7^{+\phantom{0}3.4}_{-\phantom{0}2.4}$)\% & ($79.6^{+1.4}_{-0.2}$)\% \\ Segm. samples, core only & ($84.3^{+1.8}_{-0.2}$)\% & ($\phantom{0}97.7^{+10.4}_{-\phantom{0}5.9}$)\% & ($90.8^{+4.8}_{-1.9}$)\% \\ Segm. samples, core \& segm. & ($83.9^{+1.7}_{-0.1}$)\% & ($\phantom{0}94.0^{+\phantom{0}9.9}_{-\phantom{0}5.6}$)\% & ($88.8^{+4.6}_{-1.8}$)\% \\ \hline Library method & & & \\ \hline Core samples & ($68.7^{+\phantom{0}0.8}_{-\phantom{0}0.1}$)\% & ($56.1^{+\phantom{0}1.4}_{-\phantom{0}1.0}$)\% & ($62.1^{+0.7}_{-0.2}$)\% \\ Segm. samples, core only & ($90.9^{+\phantom{0}0.1}_{-13.4}$)\% & ($80.4^{+10.1}_{-\phantom{0}9.1}$)\% & ($85.6^{+4.8}_{-1.7}$)\% \\ \hline Neural network method & & & \\ \hline Core samples & ($85.6^{+2.4}_{-0.4}$)\% & ($\phantom{0}91.0^{+\phantom{0}4.3}_{-\phantom{0}0.3}$)\% & ($\phantom{0}88.3^{+1.9}_{-0.3}$)\% \\ Segm. samples, core only & ($96.4^{+2.5}_{-0.2}$)\% & ($121.6^{+15.0}_{-\phantom{0}8.5}$)\% & ($108.3^{+6.6}_{-2.5}$)\% \\ Segm. samples, core \& segm. & ($90.6^{+2.3}_{-0.2}$)\% & ($115.4^{+13.4}_{-\phantom{0}7.7}$)\% & ($102.3^{+5.9}_{-2.2}$)\% \\ \hline \end{tabular} \end{table}
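The extraction of $\epsilon$ and $\eta$ amounts to solving a $2\times2$ linear system; Equations~\ref{eqn:epsilon} and~\ref{eqn:eta} are its solution by Cramer's rule. The following Python sketch is numerically equivalent; the event counts used here are hypothetical and serve only to illustrate the procedure.

\begin{verbatim}
import numpy as np

def unfold(N_id_SSE, N_true, M_id_SSE, M_true):
    # Solve  N_id^SSE = eps*N_true^SSE + (1-eta)*N_true^MSE  together
    # with the analogous relation for the Gamma_1 sample (M counts).
    A = np.array([[N_true[0], N_true[1]],
                  [M_true[0], M_true[1]]])
    b = np.array([N_id_SSE, M_id_SSE])
    eps, one_minus_eta = np.linalg.solve(A, b)
    return eps, 1.0 - one_minus_eta

# hypothetical counts: (SSE, MSE) truth estimated from Monte Carlo
eps, eta = unfold(6900.0, (7000.0, 3000.0), 3400.0, (2000.0, 8000.0))
print(eps, eta, (eps * eta) ** 0.5)   # 0.9, 0.8 and the figure of merit
\end{verbatim}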
The likelihood and library methods work better on events with only one segment hit. The additional usage of the segment pulse shape in the likelihood method does not improve the analysis results. \\ The analysis of the neural network output yields probabilities larger than one for the segment data samples. The calculation of $\epsilon$ and $\eta$ depends on the real fractions of single-site and multi-site events and is therefore model dependent. The current model assumes the fractions of single-site and multi-site events to be completely reflected by the parameter $R_{90}$. The validity of the assumed model is limited and the extraction of the probabilities $\epsilon$ and $\eta$ carries systematic uncertainties. The results should therefore be interpreted with care. The efficiencies do not exceed unity for the chosen cut parameter for the core data samples. Figure~\ref{fig:PSA_efficiency} shows $\epsilon$ and $\eta$ together with the identification efficiency as a function of the neural network cut parameter for the core data samples. \begin{figure}[ht!] \center \epsfig{file=eta_epsilon_1000it.eps,width=0.5\textwidth} \caption{Probabilities to correctly identify single-site, $\epsilon$, and multi-site events, $\eta$, and the efficiency, $\sqrt{\epsilon\cdot\eta}$, for the neural network analysis of the core data samples. Probabilities above one are caused by uncertainties in the extraction process. \label{fig:PSA_efficiency}} \end{figure} \subsection{Application to the $^{228}$Th data set} \label{subsection:application} Figure~\ref{fig:spectrum} (left) shows the energy spectrum resulting from a $^{228}$Th source in the region from 1.3~MeV to 2.7~MeV as seen by the core electrode. The black line corresponds to all events with only segment~$S$ hit; the gray line represents events with only segment~$S$ hit after pulse shape analysis with the ANN has been applied. Only the pulse shape of the core was used, and the cut parameter was chosen to keep 90\% of the events in the $DEP$ training data sample. \\ The gray spectrum is suppressed with respect to the black spectrum. The suppression reaches a factor of about two at the photon peaks and is weak in the double escape peak. Figure~\ref{fig:spectrum} (right) shows a close-up of the spectrum in the region from 1560~keV to 1650~keV. The application of the pulse shape analysis removes photon-induced events (1620~keV photon line from the decay of $^{212}$Bi) but keeps most of the electron-induced events (double escape peak of the $2615$~keV $^{208}$Tl photon at $1593$~keV). Pulse shape analysis is thus suitable to confirm the signal process. \\ \begin{figure}[ht!] \center \begin{tabular}{cc} \begin{minipage}[ht!]{0.45\textwidth} \mbox{\epsfig{file=spectrum_1000it.eps,width=\textwidth}} \end{minipage} & \begin{minipage}[ht!]{0.45\textwidth} \mbox{\epsfig{file=spectrum_closeup_1000it.eps,width=\textwidth}} \end{minipage} \\ \end{tabular} \caption{Spectrum of a $^{228}$Th source as seen by the core electrode. The black line corresponds to all events with only segment~$S$ hit; the gray line represents events with only segment~$S$ hit after pulse shape analysis with the ANN has been applied. Only the pulse shape of the core was used, and the cut parameter was chosen to keep 90\% of the $DEP$ events. Left: Spectrum from 1.3~MeV to 2.7~MeV. Right: Close-up of the region from 1560~keV to 1650~keV. For a discussion see text.
\label{fig:spectrum}} \end{figure} \pagebreak \section{Conclusions and outlook} \label{section:conclusions} Three methods using pulse shapes were introduced to distinguish electrons from multiply scattered photons. They were applied to data collected with a {\sc GERDA} prototype detector. Single-site dominated samples were distinguished from multi-site dominated samples. The probability to correctly identify single-site and multi-site events was estimated based on Monte Carlo calculations. \\ All three methods were trained with double escape events and events from a nearby photon peak. The former are expected to resemble $0\nu\beta\beta$-events. \\ The methods are based on information from the core electrode and may optionally include information from the segment electrode. The power to identify photon-induced events does not increase with the straightforward inclusion of the pulse shape of the segment. \\ The performance of the three methods is slightly worse than what was reported in~\cite{Elliott:2005at}. A reason for this is the limited purity of the training samples. Also, the spatial distribution of the energy deposited inside the detector is not homogeneous in the $DEP$ sample. Methods to select cleaner and more homogeneous training samples are currently being tested. \\ The artificial neural network shows a better performance than both the likelihood discriminant and the library method. Photon peaks remaining after a single segment cut are suppressed by a factor of about two at energies around 1.5~MeV. At the same time, 90\% of the events in the single-site dominated sample are kept. This demonstrates that the association of a particular peak with the signal process can be substantiated by this kind of analysis. \\ The calculation of the efficiency to correctly identify single-site and multi-site events is limited by the assumed model based on the $R_{90}$ parameter. Further studies are required; in particular, a simulation of pulse-shape formation is important and is currently under development. Studies using additional information from neighboring segments to distinguish single-site from multi-site events are also planned, as is an improved experimental setup. \\ The rejection of events in the $1620$~keV peak using segment anti-coincidences as presented in~\cite{Abt:2007rg} is about a factor of two better than the sole application of pulse shape analysis as presented in this paper. Nevertheless, the application of pulse shape analysis after a single segment cut can further reject events in this peak by an additional factor of about two. \section{Acknowledgements} The authors would like to thank A.~Bettini, P.~Grabmayr, L.~Pandola and B.~Schwingenheuer for their helpful comments and suggestions. The authors would also like to thank the {\sc GERDA} and {\sc Majorana} Monte~Carlo groups for their fruitful collaboration and cooperation on the {\sc MaGe} project. \addcontentsline{toc}{section}{Bibliography}
\section{Introduction} Soft-gamma repeaters (SGRs) are strongly-magnetized neutron stars that produce frequent, short-duration bursts ($\lesssim 1$ s) of $\lesssim 10^{41}$ ergs in hard x-rays and soft gamma-rays. SGRs occasionally produce giant flares that last $\sim 100$ s; the first giant flare to be detected occurred in SGR 0526-66 on March 5, 1979 \citep{barat_etal79,mazets_etal79,cline_etal80}, releasing $\sim 2\times 10^{45}$ erg \citep{fenimore_etal96}. The August 27, 1998 giant flare from SGR 1900+14 liberated $\gtrsim 4\times 10^{43}$ erg, with a rise time of $<4$ ms \citep{hurley_etal99,feroci_etal99}. The duration of the initial peak was $\sim 1$ s \citep{hurley_etal99}. On December 27, 2004, SGR 1806-20 produced the largest flare yet recorded, with a total energy yield of $\gtrsim 4\times 10^{46}$ ergs.\footnote{These energy estimates assume isotropic emission.} In both short bursts and in giant flares, the peak luminosity is reached in under 10 ms. Measured spin-down parameters imply surface dipole fields of $6\times 10^{14}$ G for SGR 0526-66 \citep{tiengo_etal09}, $7\times 10^{14}$ G for SGR 1900+14 \citep{mereghetti_etal06}, and $2\times 10^{15}$ G for SGR 1806-20 \citep{nakagawa_etal08}, establishing these objects as magnetars. The giant flares in SGR 1806-20 (hereafter SGR 1806) and SGR 1900+14 (hereafter SGR 1900) showed rotationally phase-dependent, quasi-periodic oscillations (QPOs). QPOs in SGR 1806 were detected at 18$\pm 2$ Hz, 26$\pm 3$ Hz, 30$\pm 4$ Hz, 93$\pm 2$ Hz, 150$\pm 17$ Hz, 626$\pm 2$ Hz, and 1837$\pm 5$ Hz \citep{israel_etal05,ws06,sw06,hambaryan_etal11}. QPOs in the giant flare of SGR 1900 were detected at 28$\pm 2$ Hz, 53$\pm 5$ Hz, 84 Hz (width unmeasured), and 155$\pm 6$ Hz \citep{sw05}. Recently, oscillations at 57$\pm 5$ Hz were identified in the short bursts of SGR 1806 \citep{huppenkothen_etal14a}, and at 93$\pm 12$ Hz, 127$\pm 10$ Hz, and possibly 260 Hz in SGR J1550-5418 \citep{huppenkothen_etal14b}.\footnote{\citet{elib10} reported evidence for oscillations in the short, recurring bursts of SGR 1806, but this analysis was shown by \citet{huppenkothen_etal13} to be flawed.} To summarize, SGRs 1806 and 1900 have QPOs that begin at about 20 Hz, with a spacing of some tens of Hz below 160 Hz, and that are sharp, with typical widths of 2--4 Hz. The observed QPOs are generally attributed to oscillations of the star excited by an explosion of magnetic origin that creates the flare. The oscillating stellar surface should modulate the charge density in the magnetosphere, creating variations in the optical depth for resonant Compton scattering of the hard x-rays that accompany the flare \citep{timokhin_etal08,dw12}. In this connection, the problem of finding the oscillatory modes of a strongly-magnetized neutron star has received much attention and has proven formidable. To make the problem tractable, most theoretical treatments of the QPO problem have assumed smooth field geometries, usually dipolar or variants ({\it e.g.}, \citealt{levin06,gsa06,levin07,sotani_etal08a,sotani_etal08b,cerda_etal09,cbk09,csf09,ck11,vl11,gabler_etal11,gabler_etal12,vl12,pl13,gabler_etal13,gabler_etal_sf13,gabler_etal14}). Smooth field geometries support a problematic Alfv\'en continuum that couples to the discrete natural spectrum of the crust.
As pointed out by \citet{levin06}, if energy is deposited in the crust at one of the natural frequencies of the crust, and this frequency lies within a portion of the core continuum, the energy is lost to the core continuum in less than 0.1 s as the entire core continuum is excited. The crust excitation is effectively damped through {\em resonant absorption}, a familiar process in MHD; see {\it e.g.}, \citet{gp04}. The problem has been addressed by assuming field geometries with gaps in the Alfv\'en continuum. Under this assumption, long-lived quasi-normal modes can exist inside the gaps or near the edges of the Alfv\'en continuum. \cite{vl11} showed for a ``box'' neutron star that the introduction of a magnetic tangle breaks the Alfv\'en continuum. \cite{lv15} showed that for magnetic tangling in a spherical neutron star the problematic Alfv\'en continuum disappears.\footnote{\cite{sotani15} added general relativity in the treatment of the magnetic field, and confirmed some of the results of \cite{lv15}.} They found that the star acquires discrete normal modes, and quantified the mode spacing. It is clear from these investigations that the unknown magnetic field geometry is the most important ingredient in determining the oscillation spectrum of a magnetar. As no model presented so far has provided good quantitative agreement with observed QPOs, we take a new direction in this Letter. We begin by arguing that stability considerations and observational evidence show that magnetars do not possess the smooth fields considered in most previous work, but rather have highly tangled fields. We propose that magnetar QPOs represent torsional normal modes that are supported by the magnetic tangle, and we present a simple model that supports this hypothesis. Keeping the energy in the magnetic tangle as a free parameter, we adjust it to accommodate the 28 Hz QPO observed in SGR 1900 while maintaining consistency with QPOs observed at higher frequencies. We obtain a rough {\em measurement} of the energy density in the tangled field of $\sim 14$ times that in the dipole field. Our model, though simple, is the first to give reasonable quantitative agreement with the data. Our model also provides useful scaling relations of the QPO frequencies with bulk neutron star parameters, and provides insight into the problem that might not emerge so clearly from more detailed numerical simulations. In particular, the model shows that if strong field tangling occurs, the normal-mode spectrum of a magnetar is determined principally by field tangling, and less so by crust rigidity, the dipole field, relativistic effects, and detailed stellar structure. We conclude that the effects of a tangled field cannot be neglected in the QPO problem, and we outline what we see to be interesting research directions on this issue. \section{Theoretical and observational evidence for field tangling} A pure dipole field is unstable, and a strong toroidal field is needed to stabilize the field \citep{fr77,bs06}. Purely toroidal fields are also unstable \citep{wright73,tayler73}. There has been considerable progress recently on the identification of magnetic equilibria. \citet{bn06} found a ``twisted torus'' configuration, which consists of a torus of flux near the magnetic equator that stabilizes the linked poloidal plus toroidal configuration.
The twisted torus is topologically distinct from any poloidal field, or twisted poloidal field, in the sense that the twisted torus cannot be continuously deformed into a dipole field: the field is {\em tangled}. This topological complexity is required to establish hydromagnetic stability. Simulations by \citet{braithwaite08} show that the magnetic field can evolve from initially-turbulent configurations to configurations other than the twisted torus, generally non-axisymmetric equilibria with highly tangled fields; see, {\it e.g.}, Fig. 12 of that paper. \citet{braithwaite09} studied the relative strengths of the poloidal and toroidal components in stable, axi-symmetric configurations, and found that the energy in the toroidal component typically exceeds that in the poloidal component. By what factor the toroidal energy exceeds the poloidal energy in an actual neutron star depends on initial conditions and the equation of state; \cite{braithwaite09} finds examples in which this ratio is 10--20, and he argues that this ratio could plausibly be $\sim 10^3$, since a proto-magnetar should be in a highly turbulent state that winds up the natal field \citep{td93,brandenburg_etal05}. In this process, energy injected at some scale propagates down to the dissipative scale as well as up to large scales, giving a large-scale mean field with complicated structure at many scales. The chief conclusion of these theoretical studies is that a topologically distinct tangle is needed to stabilize the dipolar component. Most theoretical work on QPOs has assumed simple field geometries that are demonstrably unstable. Observational evidence that the internal fields of neutron stars are highly tangled can be found in the `low-field SGRs'. In these objects, the interior fields must be stronger than the inferred dipole fields in order to power the observed burst activity. SGR 0418+5729 has a dipole field inferred from spin-down of $\sim 6\times 10^{12}$ G \citep{esposito_etal10,ret10,horst_etal10,turolla_etal11}. Two other examples are Swift J1822-1606, with an inferred dipole field of $\sim 3\times 10^{13}$ G \citep{rea_etal12}, and 3XMM J1852+0033 \citep{rvi14}, with an inferred dipole field of less than $4\times 10^{13}$ G. Energetics indicate that the interior field consists of strong multipolar components, while stability considerations require these components to be tangled \citep{braithwaite08,braithwaite09}. \section{A simple model of QPOs} \label{model} Much of the work cited above on the QPO problem has included realistic stellar structure, specific magnetic field geometries, and the effects of general relativity. The normal mode frequencies are determined principally by the strength of the tangled field and bulk stellar properties, with realistic stellar structure and general relativity coming in as secondary effects. In \S 5 of the supplementary materials \citep{lv16s}, we show that realistic structure has a relatively small effect on the normal mode spectrum for an isotropic tangle. Our approach, therefore, is to proceed with a very simple model that elucidates the consequences of a tangled field. We do not expect refinements of the simple model given here to alter our chief conclusions. We treat the magnetic field as consisting of a smooth dipolar contribution $\Bbf_d$, plus a tangled component $\Bbf_t$ that stabilizes the field. At location $\rbf$, the field is \begin{equation} \Bbf(\rbf)=\Bbf_d(\rbf)+\Bbf_t(\rbf).
\end{equation} We assume that $\Bbf_t(\rbf)$ averages to nearly zero over a dimension of order the stellar radius or smaller, that is, $\ev{\Bbf_t(\rbf)}\simeq 0$, where $\ev{...}$ denotes a volume average; see the supplementary materials \citep{lv16s} for details. The magnetic energy density in the tangle is $\ev{B_t^2}/8\pi$. We define a dimensionless measure of the strength of the tangle as the ratio of the energy density in the tangle to that in the dipolar component: \begin{equation} b_t^2\equiv \frac{\ev{B_t^2}}{B_d^2}. \end{equation} We regard the magnetic tangle as approximately isotropic, and the dominant source of magnetic stress, so that $b^2_t\gg 1$, as supported by the simulations and arguments of \cite{braithwaite09}. In this limit, the fluid core acquires an effective shear modulus given by \begin{equation} \mu_B\equiv \frac{\ev{B_t^2}}{4\pi}. \end{equation} We limit the analysis to low-frequency QPOs, as the approximation of an isotropic tangle could break down at high frequencies. Using realistic structure calculations (see \S \ref{crust}), the volume-averaged shear modulus of the crust is $\bar{\mu}_c\simeq 3\times 10^{29}$ \hbox{\rm\hskip.35em erg cm}$^{-3}$. The magnetic rigidity of the tangle will dominate the material rigidity of the crust when \begin{equation} \frac{\bar{\mu}_c}{\mu_B}\ll 1 \longrightarrow b_t^2 \gg \left(\frac{\bar{\mu}_c}{3\times 10^{29}\mbox{ \hbox{\rm\hskip.35em erg cm}$^{-3}$}}\right) \left(\frac{B_d}{10^{15}\mbox{ G}}\right)^{-2}, \label{crust_criterion} \end{equation} and so the crust is dynamically negligible in the same limit (coincidentally) that the isotropic tangle becomes the dominant form of magnetic stress for a typical magnetar dipole field of $10^{15}$ G; we quantify the small effect of the crust in \S \ref{crust}. We ignore the crust in our simple model, and treat the star as a self-gravitating, constant-density, magnetized fluid whose torsional normal modes are determined by the isotropic stresses of the tangled field. General relativity is included as a redshift factor that reduces the oscillation frequencies observed at infinity by about 20\%. Electrical resistivity is negligible for the modes of interest, and we work with ideal MHD. Our normal-mode analysis is restricted entirely to toroidal modes. The equations of motion are derived using a mean-field formalism in \S 1 of the supplementary materials \citep{lv16s}; the equation of motion for small displacements of the fluid $\ubf$ is \begin{equation} c_t^2\nabla^2\ubf+\omega^2\ubf=0, \label{waveeqn} \end{equation} where $c_t\equiv(\ev{B_t^2}/4\pi x_p\rho)^{1/2}$ is the Alfv\'en wave speed through the tangle, $\rho$ is the mass density, $x_p\simeq 0.1$ is the proton mass fraction, and $\omega$ is an eigenfrequency. In evaluating $c_t$, we have assumed that the neutrons are superfluid. If the protons are normal, the neutrons do not scatter off the protons (ignoring scattering processes with the vortices of the neutron superfluid). If the protons are superconducting, the neutron fluid is entrained by the proton fluid, but this effect is negligible \citep{ch06}. In either case, the dynamical mass density is essentially $x_p\rho$. At this point we have reduced the normal mode problem to that of an elastic sphere of constant density and rigidity. Torsional modes have the form $\ubf = u_\phi(r,\theta) \hat{\phi}$ in spherical coordinates $(r,\theta,\phi)$ with the origin at the center of the star. The solutions to eq.
(\ref{waveeqn}) are (see \S 3 of the supplementary materials \citep{lv16s} for further details): \begin{equation} u_\phi(r)=A j_l(kr)\,\frac{dP_l(\theta)}{d\theta}, \label{sol} \end{equation} where $k\equiv\omega/c_t$ and $A$ is a normalization constant. The eigenfunctions and associated eigenfrequencies are determined by the boundary condition that the traction vanish at the stellar surface: \begin{equation} \left[\frac{dj_l}{dr}-\frac{j_l}{r}\right]_{r=R}=0. \label{bc_isotropic} \end{equation} For each value of $l$, eq. (\ref{bc_isotropic}) has solutions $x_{ln}\equiv k_{ln} R$, where $n=0,1,2,\dots$, the overtone number, gives the number of nodes in $j_l(kr)$. The eigenfrequencies are \begin{equation} \omega_{ln}=z \left(\frac{\ev{B_t^2}R}{3x_pM}\right)^{1/2} x_{ln}, \end{equation} where a redshift factor $z\equiv(1-R_s(M)/R)^{1/2}$ has been introduced; $R_s$ is the Schwarzschild radius.\footnote{For $l=1$, eq. (\ref{waveeqn}) has a solution $u_\phi =A\, r\, dP_1/d\theta= A\,r$ for $\omega=k=0$. This solution, which we label $n=0$, corresponds to rigid-body rotation, and so is of no physical significance to the mode problem we are addressing. All other modes have zero angular momentum.} In terms of fiducial values, \begin{equation} \nu_{ln}=\frac{\omega_{ln}}{2\pi}= 4.3\, \left(\frac{z}{0.77}\right) \left(\frac{R}{10\mbox{ km}}\right)^{1/2} \left(\frac{M}{1.4M_\odot}\right)^{-1/2} \left(\frac{x_p}{0.1}\right)^{-1/2} \left(\frac{\ev{B_t^2}^{1/2}}{10^{15}\mbox{ G}}\right) \, x_{ln}\mbox{ Hz}. \label{nu_isotropic} \end{equation} The energy in a torsional mode $(l,n)$ is \begin{equation} E_{ln}=\frac{1}{2} \int d^3r\, x_p \rho\, \omega^2 (u_\phi^{ln})^2. \label{E} \end{equation} For the purpose of comparing the energies of different normal modes of the same amplitude, we choose the normalization $A$ in eq. (\ref{sol}) so that \begin{equation} \bar{u}^2=\int_{4\pi}d\Omega\, u_\phi(R)^2, \label{norm} \end{equation} where $\bar{u}^2$ is the squared mode amplitude integrated over solid angle at the stellar surface. We evaluate the energy in a mode $(l,n)$, normalized by the $l=2$, $n=0$ mode energy, with $\bar{u}^2$ set equal for both modes. In the second row of Table \ref{nu_SGR1900_values}, we give example frequencies of the fundamental for each $l$ for $M=1.4M_\odot$, $R=10$ km, and $x_p=0.1$. Numbers in parentheses give the normalized energy in the mode. By tuning the parameter $b_t^2=\ev{B_t^2}/B_d^2$ to 14.4, the spectrum of the fundamentals for each $l$ agrees with the QPOs seen in SGR 1900. A more accurate analysis given in \S 4 of the supplementary materials \citep{lv16s} changes $b_t^2$ slightly to 13.3. Taking $B_d$ equal to the inferred dipole field for SGR 1900 of $7\times 10^{14}$ G, this value of $b_t^2$ corresponds to $\ev{B_t^2}^{1/2}=2.7\times 10^{15}$ G. We also show some of the eigenfrequencies for the first three overtones. The overtones begin at higher frequencies; they also require more energy to excite to the same root-mean-square amplitude (eq. \ref{norm}) than the fundamental modes and so are energetically suppressed. To the extent that the surface amplitude of a mode determines its observability through variations of magnetospheric emission, the overtones might not be as important as the fundamental modes. The amplitude of a given mode depends on the excitation process, and we note that overtones could prove relevant in a more detailed treatment that addresses the initial-value problem of mode excitation; we discuss this issue further below.
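As a numerical illustration, the roots $x_{ln}$ of eq. (\ref{bc_isotropic}) and the corresponding frequencies from eq. (\ref{nu_isotropic}) are easily computed. The following Python sketch assumes the fiducial stellar parameters above and $\ev{B_t^2}^{1/2}=2.7\times 10^{15}$ G; it is illustrative only and is not the calculation used for Table \ref{nu_SGR1900_values}.

\begin{verbatim}
import numpy as np
from scipy.optimize import brentq
from scipy.special import spherical_jn

def traction(x, l):
    # free-surface condition of eq. (bc_isotropic): j_l'(x) - j_l(x)/x = 0
    return spherical_jn(l, x, derivative=True) - spherical_jn(l, x) / x

def x_ln(l, n_max=3, x_hi=40.0):
    # bracket sign changes on a fine grid, then refine the roots
    grid = np.linspace(0.05, x_hi, 4000)
    roots = []
    for a, b in zip(grid[:-1], grid[1:]):
        if traction(a, l) * traction(b, l) < 0.0:
            roots.append(brentq(traction, a, b, args=(l,)))
            if len(roots) == n_max:
                break
    return roots

# prefactor of eq. (nu_isotropic) for the fiducial star, scaled to
# <B_t^2>^(1/2) = 2.7e15 G; the root x_20 ~ 2.50 then gives nu ~ 29 Hz
nu0 = 4.3 * 2.7
for l in (2, 3, 4):
    print(l, [round(nu0 * x, 1) for x in x_ln(l)])
\end{verbatim}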
The fundamental modes are nearly evenly spaced in $l$, with frequencies given by \begin{equation} \nu_l\simeq 0.5\, l\, \nu_2, \label{sequence} \end{equation} where $\nu_2$ is the frequency of the $l=2$ fundamental. The mode spacing is about half of $\nu_2$. From eq. (\ref{nu_isotropic}), the lowest-frequency mode and the mode spacing both scale as $z(M/R)^{-1/2}$. The observed QPO spacing follows eq. (\ref{sequence}), though only four of the 11 modes in the range $2\le l\le 12$ are seen; we discuss this point further below. The highest fundamental frequency given in Table \ref{nu_SGR1900_values} is 156 Hz for $l=12$. The wavelength of this mode is $\simeq 0.5R$. Hence, for modes in the 28 to 156 Hz range, the approximation of an isotropic tangle is required to hold over stellar dimensions, as supported by the simulations of \cite{braithwaite09}. This model of a tangle that dominates the dipole field does not apply to SGR 1806. If we attempt to explain the lowest-frequency QPO observed (18 Hz) as an $l=2$ fundamental mode for the inferred dipole field strength of $2\times 10^{15}$ G, eq. (\ref{nu_isotropic}) gives $b_t^2\simeq 0.5$, which is inconsistent with the approximation of a strong, nearly isotropic tangle. For this case, the magnetic stress of the smooth field must be included. This problem is solved in \S 4 of the supplementary materials \citep{lv16s}. A match to the 18 Hz QPO implies $b_t^2=0.17$. The spectrum is very dense, with a spacing of about 2 Hz. If the dipole field has been overestimated by a factor of several for this object, which is not implausible, then the predicted spectrum is much less dense, and more similar to SGR 1900. For example, taking $B_d$ equal to 0.62 of the reported value, a value of $b_t^2=1.0$ matches the 18 Hz QPO, and the predicted spectrum is less dense, with a spacing of about 7 Hz in the fundamentals. \section{Effects of the crust are negligible} \label{crust} So far we have ignored the crust under the assumption that magnetic stresses throughout the star dominate material stresses in the crust. Here we show that crust rigidity increases the eigenfrequencies calculated above for SGR 1900 by 3\% or less. A more detailed treatment is given in \S 4 of the supplementary materials \citep{lv16s}. To estimate the effects of the crust, we use a two-zone crust plus core model, assuming a nearly isotropic tangle ($b_t^2\gg1$). The core has constant density $\rho$ and constant effective shear modulus $\mu_B$. The crust has inner radius $R_c$, outer radius $R$, thickness $\Delta R$, average density $\bar{\rho}_c$, average material shear modulus $\bar{\mu}_c$, and average effective shear modulus $\bar{\mu}_{\rm crust}=\bar{\mu}_c+\mu_B$. \citet{chamel05,chamel12} finds that the neutron fluid is largely entrained by ions in the inner crust; in evaluating the wave speed in the crust, we use the total mass density, so that the wave propagation speed in the crust is $\bar{c}_{\rm crust}=\sqrt{\bar{\mu}_{\rm crust}/\bar{\rho}_c}$. In the core the propagation speed is $c_t=\sqrt{\mu_B/x_p\rho}$, where we take the proton mass fraction to be $x_p=0.1$ (see discussion after equation \ref{waveeqn}). In the core, the solution to the mode problem is $u_{\rm core}=j_l(kr)$. The solution in the crust is $u_{\rm crust}=aj_l(k^\prime r)+bn_l(k^\prime r)$, where $n_l$ are spherical Neumann functions, and $a$ and $b$ are constants. The wavenumbers are related through $\omega=c_tk=\bar{c}_{\rm crust}k^\prime$.
The boundary conditions are continuity in value and traction at $r=R_c$, and vanishing traction at $r=R$: \begin{equation} u_{\rm core}(R_c)=u_{\rm crust}(R_c), \end{equation} \begin{equation} \mu_B\left[\frac{du_{\rm core}}{dr}-\frac{u_{\rm core}}{r} \right]_{r=R_c}= \bar{\mu}_{\rm crust}\left[\frac{du_{\rm crust}}{dr}-\frac{u_{\rm crust}}{r} \right]_{r=R_c}, \end{equation} \begin{equation} \left[\frac{du_{\rm crust}}{dr}-\frac{u_{\rm crust}}{r} \right]_{r=R}=0. \end{equation} For $\bar{c}_{\rm crust}$ and $\bar{\mu}_{\rm crust}$, we use volume-averaged quantities obtained in the following way. The shear modulus in the crust as a function of density, ignoring magnetic effects, is, in cgs units \citep{strohmayer_etal91}, \begin{equation} \mu_c=0.1194\,\frac{n_i(Ze)^2}{a}, \end{equation} where $n_i$ is the number density of ions of charge $Ze$, and $a$ is the Wigner-Seitz cell radius given by $n_i4\pi a^3/3=1$. For the composition of the inner crust, we use the results of \citet{dh01}, conveniently expressed analytically by \citet{hp04}. We solve for the crust structure using the Newtonian equation of hydrostatic equilibrium for a neutron star of 1.4 $M_\odot$. The volume-averaged density and shear modulus in the crust are $\bar{\rho}_c=0.06\rho$ and $\bar{\mu}_c=2.5\times 10^{29}$ \hbox{\rm\hskip.35em erg cm}$^{-3}$, respectively. The crust thickness is $\Delta R=0.1R$. We take the value $\ev{B_t^2}^{1/2}=2.7\times 10^{15}$ G assumed above for SGR 1900, corresponding to $\mu_B=5.8\times 10^{29}$ \hbox{\rm\hskip.35em erg cm}$^{-3}$. To evaluate the effects of the crust, we solve the two-zone model for a particular eigenmode first taking $\bar{\mu}_{\rm crust}=\bar{\mu}_c+\mu_B$, then taking $\bar{\mu}_{\rm crust}=\mu_B$, and evaluating the difference between the two eigenfrequencies. We find that the finite shear modulus of the crust increases the frequencies of the fundamental normal modes by only about 3\% for $l=2,3,4$, and 2\% for $l=12$. We have confirmed that the effects are even smaller for overtones. The crust is nearly dynamically irrelevant in this limit of a strong magnetic tangle, and neglect of the crust is a good approximation. \section{Discussion and Conclusions} \label{conclusions} Theoretical study of magnetar QPOs is seriously hampered by the fact that we do not know the detailed magnetic structure within a magnetar. Even if the magnetic structure were known or assumed, solution of the full problem is very difficult. Motivated by stability considerations and observational evidence, we have argued that the magnetic field should be highly tangled; {\em significant tangling of the magnetic field is needed to stabilize the linked poloidal and toroidal fields} \citep{bn06,braithwaite08}. We have shown with a simple model that a highly-tangled field, under the assumption that the tangle is approximately isotropic, supports normal modes with frequencies consistent with the QPOs observed in SGR 1900. In comparison with the data, we have obtained a rough measurement of the ratio of the energy density in the tangle to that in the dipolar field of $\sim 14$. Given the approximations we have made, this model should not be taken as quantitatively very accurate, but what is significant is that the assumption of a strong magnetic tangle leads naturally to a normal mode spectrum with frequencies that lie in the range of observed QPOs. Our model predicts about three times as many modes below 160 Hz as are observed.
That the normal mode spectrum is more dense than the observed QPO spectrum is crucial, for if the opposite were true, we would be forced to abandon the model as unviable. The main point that we would like to emphasize is that field tangling has important effects that cannot be ignored in the QPO problem, and that the magnetic tangle is likely to be the principal factor that determines the normal-mode spectrum. Use of dipolar magnetic geometries and their variants is not adequate, and we point out that such magnetic geometries are unstable and therefore unphysical starting points for the study of normal modes of neutron stars. One simplifying feature of magnetic tangling is that for a tangle of magnetar-scale strength, the crust becomes dynamically unimportant. We conclude by briefly describing several directions of future research that we consider to be interesting and important. One issue is the prediction that the normal-mode spectrum is denser than the observed QPO spectrum. In general, any excitation mechanism will give preferential mode excitation. Determination of which modes are excited requires solution of the initial-value problem. Also, it is unknown at present whether the QPO emission is beamed or not. If the emission has a beaming fraction less than unity, only that fraction of modes would be potentially observable. We are currently studying the mode excitation problem; we find that energy deposition in a localized region of the star can excite separate groups of modes. Further development of the model into a quantitative tool will require inclusion of realistic stellar structure and treatment of the case of comparable energy densities in the magnetic tangle and the dipolar component. For SGR 1806, the energy densities in the tangle and the dipolar component appear to be comparable within our interpretation; see \S 4 of the supplementary materials \citep{lv16s}. In this case, the spectrum of normal modes could be very dense, with a mode spacing of about 2--7 Hz. If the spectrum is so dense that it begins to assume the qualities of a quasi-continuum, resonant absorption might be important, as has been considered in previous work with smooth field geometries. We will study this interesting problem in future work. The magnetic field that occurs in nature may not be a nearly isotropic tangle as we have assumed. It would be interesting to explore how spatial variations in the magnetic tangle affect the spectrum of normal modes. \begin{acknowledgments} We thank M. Gabler, D. Huppenkothen, Y. Levin, and A. Watts for very helpful discussions, and the anonymous referee for useful criticism. This work was supported by NSF Award AST-1211391 and NASA Award NNX12AF88G. \end{acknowledgments}
\section{\label{SectionJammed}What is {}``jammed''?} OSLN question whether the hard-sphere system is ``physical'' and therefore resort to studying particle systems with soft-sphere interactions to mimic hard-particle packings. The latter is inherently a geometrical problem. In fact, there is a simple and rigorous geometrical approach to jamming in hard-sphere systems that is not only well-defined, but, as we show below, is closely related to OSLN's jamming point $J$. Although the hard-sphere potential is an idealization, it is no less physical than any soft-sphere potential, especially in regard to jamming. Indeed, the singular nature of the hard-sphere potential is crucial because it enables one to be precise about the concept of ``jamming.'' Recently, three hierarchically ordered jamming categories have been introduced \cite{Torquato_jammed}: \emph{local}, \emph{collective} and \emph{strict} \emph{jamming}. Each successive category progressively relaxes the boundary conditions imposed on the particle displacements. These definitions are very intuitive and completely \emph{geometric}, and are closely linked to definitions of {}``rigid'' or {}``stable'' packings appearing in the mathematics literature \cite{Connelly_disks,Connelly_packings}. OSLN's definition of jamming simply states that the configuration of particles is at a stable (strict) energy minimum. Such a definition depends on the particular interparticle potential, and thus it obscures the relevant packing geometry (exclusion-volume effects). Furthermore, the distinction between different jamming categories is critical, especially if one is trying to determine the density of the MRJ state \cite{Jamming_LP_results}. Specifically, this density will generally be higher the more demanding the jamming category. Clearly, OSLN do not distinguish between different degrees or levels of jamming. We have recently demonstrated that the distinction between collective and strict jamming is important even for very large packings, especially in two dimensions \cite{Jamming_LP_results}. For OSLN, a jammed configuration is one where there are no zero-frequency modes of the Hessian matrix of the total potential energy with respect to the positions of the particles (the dynamical matrix), while keeping the periodic unit cell \emph{fixed}. Our definition of strict jamming relaxes this requirement and includes the lattice vectors as additional degrees of freedom \cite{Jamming_LP}. As explained in detail in Ref. \cite{Connelly_Tensegrities}, the Hessian consists of two parts, a negative-definite \emph{stress matrix} and a positive-semidefinite \emph{stiffness matrix}. OSLN's definition of jammed means simply that the Hessian is positive definite at the energy minimum. A precise phrase for this is \emph{a stable} or \emph{strict (local) energy minimum}, and we see no point in redefining this elementary concept. In fact, according to OSLN, any stable energy minimum represents a jammed configuration, and it is not possible to relate this idea to \emph{packing} concepts without numerous additional assumptions about the form of the pair potential and the interparticle distances at the energy minimum. Although OSLN point out themselves that our definitions of jamming and MRJ are for \emph{hard-sphere packings}, they claim to replace them with a {}``cleaner definition,'' which applies \emph{only} to systems of \emph{soft spheres}. The two definitions cannot directly be compared as they apply to different systems.
However, OSLN themselves clearly imply that their {}``jammed'' soft-sphere systems and {}``jammed'' hard-sphere packings are related, by referring to other works on hard-sphere systems. For example, they claim a direct relation between their special point $J$ and RCP of hard spheres in Section IID of Ref. \cite{Jamming_ZeroT}. The basic idea, as OSLN explain, is that one {}``can approach the hard sphere by making the potential harder and harder...{[}to{]} produce a limiting hard sphere value''. However, they question whether the hard-sphere limit is well-defined and {}``would argue that hard spheres are a singular limit and thus unphysical'' and that {}``One should therefore concentrate on softer potentials for which unambiguous definitions can be constructed.'' To demonstrate that the limit is well-defined, let us first define a \emph{collectively jammed sphere packing} to be \emph{any nonoverlapping configuration of hard spheres in which no subset of particles can continuously be displaced so that its members move out of contact with one another and with the remainder set} (while maintaining nonoverlap) \cite{Torquato_jammed}. The following theorem \cite{Connelly_Tensegrities} shows that near the {}``jamming threshold'' $\phi_{c}$, as defined in Section IIB of Ref. \cite{Jamming_ZeroT}, the jamming of particle systems as defined by OSLN is directly related to this definition of collective jamming in hard-sphere packings: \noindent\textbf{Theorem:} {\sl Consider an interparticle potential that is continuous and strictly monotonically decreasing around $r_{ij}=D$, and vanishes for $r_{ij}>D+\delta$. If, in a finite configuration of particles interacting with such a potential, all interacting (i.e., closer than $D+\delta$) particles are a distance $D$ apart, and the configuration is a stable local energy minimum, then the configuration corresponds to a collectively jammed packing of hard spheres with diameter $D$.} If one relaxes the condition that all interacting particles are exactly a distance $D$ apart and instead asks only that the minimum interparticle distance be $D$, then for a sufficiently small $\delta$ one can prove \cite{Connelly_Energy} that the above sphere packing is almost collectively jammed (i.e., it is trapped in a small neighborhood of the initial configuration \cite{Jamming_LP}). This theorem implies that the packings studied in Ref. \cite{Jamming_ZeroT} that are very slightly above the {}``jamming threshold'' $\phi_{c}$ are indeed closely related to collectively jammed ideal packings of spheres of diameter $D=\sigma$ (polydispersity is trivial to incorporate). All of these considerations call into question the value of a definition of jamming that hinges on eigenvalues of dynamical matrices. Finally, it is important to note that despite the fact that our definition of collective jamming above calls for virtually displacing (groups of) particles, one can in fact rigorously test for our hard-particle jamming categories using linear programming \cite{Jamming_LP}, without what OSLN call {}``shifting particles,'' even for very large disordered packings \cite{Jamming_LP_results}. We have in fact communicated to OSLN the results \cite{footnote_2} of our algorithm applied to several sample packings provided by them. In short, our algorithm verified that OSLN's systems near $\phi_{c}$ were indeed nearly collectively jammed (within a very small tolerance) when viewed as packings; a minimal sketch of the idea behind such a linear-programming test is given below.
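To make the geometric content of this test concrete, the following Python sketch poses the linearized unjamming test as a linear program. It illustrates only the idea; the actual algorithm of Ref. \cite{Jamming_LP} treats periodic boundary conditions, rigid-body motions and numerical tolerances with considerably more care, and all names here are hypothetical.

\begin{verbatim}
import numpy as np
from scipy.optimize import linprog

def unjamming_motion(pos, contacts, bound=1.0):
    # Simplified first-order test for collective jamming: look for a
    # bounded displacement field that opens the interparticle gaps.
    # pos: (n, d) array of sphere centers; contacts: pairs (i, j) of
    # touching spheres.  Rigid translations are removed by pinning
    # particle 0; there is no periodic cell in this sketch.
    n, d = pos.shape
    rows = []
    for i, j in contacts:
        nij = pos[j] - pos[i]
        nij /= np.linalg.norm(nij)
        row = np.zeros(n * d)
        row[i*d:(i+1)*d] = nij        # row . x = -(first-order gap change)
        row[j*d:(j+1)*d] = -nij
        rows.append(row)
    A = np.array(rows)                # A x <= 0 enforces all gaps >= 0
    c = A.sum(axis=0)                 # minimizing c . x maximizes total gap
    bounds = [(0.0, 0.0)] * d + [(-bound, bound)] * ((n - 1) * d)
    res = linprog(c, A_ub=A, b_ub=np.zeros(len(rows)), bounds=bounds)
    return -res.fun                   # ~0 if no unjamming motion is found
\end{verbatim}

A returned value above a small tolerance corresponds to an explicit displacement field that unjams the packing to first order; a value of essentially zero is evidence, at the linearized level, of collective jamming.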
They were, however, not strictly jammed, because OSLN keep the lattice vectors fixed during energy minimization. \section{\label{SectionRandom}What is {}``random''?} We agree that the maximum of an appropriate ``entropic'' metric would be a potentially useful way to characterize the randomness of a packing and therefore the MRJ state \cite{Anu_order_metrics}. However, as pointed out in Ref. \cite{Anu_order_metrics}, a substantial hurdle to be overcome is the necessity to generate all possible jammed states, or at least a representative sample of such states, in an unbiased fashion using a ``universal'' protocol in the large-system limit. Even if such a protocol could be developed, however, the issue of what weights to assign the resulting configurations remains. Moreover, there are other fundamental problems with entropic measures, as we discuss below, including their significance for two-dimensional monodisperse hard disk packings. According to OSLN, maximally random is defined by {}``where the entropy of initial states is maximum'', and they imply that this is a universal measure of disorder. It is not clear exactly what the authors mean by entropy and how (or whether) it can be measured for {}``initial states''. It is not obvious that one can relate the {}``randomness'' of the \emph{final} configurations (which is what OSLN are analyzing) to that of the \emph{initial} configurations. It appears OSLN's rationale is that their algorithm goes to the {}``nearest'' energy minimum from a given initial configuration. Does this process preserve {}``entropy'' or randomness? Clearly, if one used, for example, global energy minimization, one would obtain very different results. Furthermore, entropy is a concept inherently related to \emph{distributions} of configurations. However, one classifies \emph{particular} final configurations (packings) as random or disordered, and by considering a given configuration, one can devise a procedure for quantitatively measuring (using order metrics) how disordered or ordered it is. This distinction between distributions of configurations and particular configurations is an important one that OSLN do not make. The MRJ state is defined in \cite{Torquato_MRJ} as the jammed state which minimizes a given order metric $\psi$. OSLN suggest that their interpretation of maximally random is superior because using order metrics {}``will always be subject to uncertainty since one never knows if one has calculated the proper order parameter.'' Therefore, OSLN believe that they have identified the proper, unique, measure of order (related to entropy). We wish to stress the difference between \emph{well-defined} and \emph{unique}, as the two seem to be blurred in Ref. \cite{Jamming_ZeroT}. The MRJ state is well-defined in that for a particular choice of jamming category and order metric it can be identified unambiguously. For a finite system, it will consist of a discrete set (possibly one) of configurations, becoming more densely populated as the system becomes larger. At least for collective and strict jamming in three dimensions, a variety of sensible order metrics seem to produce an MRJ state near $\phi\approx0.64$ \cite{Anu_order_metrics}, the traditionally accepted density of the RCP state. However, the \emph{density} of the MRJ state should not be confused with the MRJ \emph{state} itself. It is possible to have a rather ordered packing at this very same density; for example, a jammed but vacancy-diluted FCC lattice packing \cite{Anu_order_metrics}.
This is why the two-parameter description of packings in terms of the density $\phi$ and order metric $\psi$, as in Ref. \cite{Torquato_MRJ}, is not only useful, but actually necessary. OSLN's description of order implies a direct relation between probability densities and randomness, i.e., that the \emph{most probable} \cite{footnote_3} configurations represent the most disordered state. In this sense, one expects that the density of jammed configurations, when viewed as a three-dimensional plot over the $\phi$--$\psi$ plane, will be very strongly peaked around the MRJ point for very large systems, just as the probability distribution curves in Fig. 6 in Ref. \cite{Jamming_ZeroT} are sharply peaked around $\phi\approx0.64$. As OSLN suggest themselves, this might explain why several different packing procedures yield similar hard-particle packings under appropriate conditions, historically designated as RCP. However, this is far from being a closed question \cite{footnote_4}. Consider two-dimensional monodisperse circular disk packings as an example. It is well-known that two-dimensional analogs of three-dimensional computational and experimental protocols that lead to putative RCP states result in disk packings that are highly crystalline, forming rather large triangular domains (grains) \cite{footnote_4a}. Because such highly ordered packings are the most probable for these \emph{protocols}, OSLN's entropic measure would identify them as the most disordered, a dubious proposition. An appropriate order metric, on the other hand, is capable of identifying a particular configuration (not an ensemble of configurations) of considerably lower density (e.g., a jammed diluted triangular lattice) that is consistent with our intuitive notions of maximal disorder. However, typical packing protocols would almost never generate such disordered disk configurations because of their inherent bias toward undiluted crystallization. This brings us to OSLN's claim that they have devised an unbiased universal protocol, to which we now turn our attention.
Most sensible algorithms will in fact produce a well-defined density in the limit of large systems given a choice of algorithmic parameters. For example, by changing the expansion rate in the Lubachevsky-Stillinger algorithm, one can achieve final densities for spheres anywhere in the range from $0.64$ (fast expansion) to $0.74$ (very slow expansion), as clearly illustrated in Fig. 2a in Ref. \cite{Torquato_MRJ}. Therefore, if we followed the logic of OSLN, we could claim that any number in that range represents a special point. In our opinion, a good packing algorithm should be capable of generating a variety of packings, in both density and the amount of order. How can one ascertain that the packings one produces are {}``most random'' if there are no other jammed packings to compare to? OSLN use two main procedures to generate final configurations. The first procedure is to choose a density and then use conjugate gradients to find a nearby energy minimum, starting from a randomly-generated initial configuration ($T=\infty$), as described in Section IIA in Ref. \cite{Jamming_ZeroT}. Using this procedure, OSLN sampled inherent structures \cite{Inherent_Structures} at fixed density to measure the fraction $f_j(\phi)$ of states that had nonzero bulk and shear moduli, and showed that $f_j$ has a strong system-size dependence with its derivative becoming a delta-function in the large system limit. It is important to note that this procedure as such has little or nothing to do with hard-sphere packings, especially for the kind of soft potentials ($\alpha\geq3/2)$ that OSLN study. Many stable energy minima will be completely unrelated to packings, and especially not to those designated as MRJ states. OSLN used a second procedure to study the mechanical and structural properties of systems near the onset of jamming $\phi_{c}$. In this procedure, a configuration is compressed (or decompressed) using very small steps in density until the bulk and shear moduli vanished (or nonzero moduli develop), as described in Section IIB in Ref. \cite{Jamming_ZeroT}. We now demonstrate that this procedure is closely related to Zinchenko's algorithm \cite{Zinchenko} for generating hard-sphere packings. Start at low density with a set of nonoverlapping spheres of diameter $\sigma$. Both algorithms then slowly grow the particles (OSLN in small increments, Zinchenko continuously) while moving the particles to avoid overlap \cite{footnote_7}. In the Zinchenko algorithm, one strictly maintains the contact between particles as soon as they touch, which requires solving a system of ODE's containing the rigidity matrix of the packing \cite{Jamming_LP} to find the necessary particle displacements. OSLN on the other hand, use conjugate gradients (CG) to reminimize the potential energy, which will simply push the particles just enough to keep them nonoverlapping, i.e., almost in contact. This procedure continues in both algorithms until no further densification is possible without inducing overlap. Accordingly, it is not surprising the packing configurations close to $\phi_{c}$ obtained in Ref. \cite{Jamming_ZeroT} closely resemble (in packing fraction, amorphous character, coordination, etc.) packings generated via a variety of bonafide hard-sphere algorithms (and experiments \cite{Experiment_RCP}). In particular, very similar packings are produced with the Lubachevsky-Stillinger (LS) algorithm \cite{LS_algorithm,LS_algorithm_3D} (with sufficiently high expansion rates) and the Zinchenko algorithm \cite{Zinchenko}. 
OSLN criticize the LS algorithm for changing the density in a dynamic fashion. The stated advantage of the OSLN protocol is that one can {}``quench the system to the final state within a fixed energy landscape'' since {}``the density is always held constant''. We are very puzzled by this last claim in light of their admission (in Section IIB of Ref. \cite{Jamming_ZeroT}) that they slowly change the density of the packing to find $\phi_{c}$. In fact, OSLN do not seem to clearly distinguish between the two rather different procedures they employ: the first for finding inherent structures (at a fixed density) and the second for generating packings at the jamming threshold (which searches in density). Fig. 6 of Ref. \cite{Jamming_ZeroT}, which supposedly represents the distributions of jamming thresholds $\phi_{c}$, \emph{defined} by the \emph{second} procedure, is obtained by differentiating the distribution generated with the \emph{first} procedure, with no clear justification. Most problematic of all is OSLN's claim that their results are universal. Despite the statement that {}``Starting with randomly generated $T=\infty$ states guarantees that we sample \emph{all} {[}emphasis added{]} phase space equally'', all that their first algorithm manages to explore is the space of energy minima for the \emph{particular} chosen interaction potential. By comparing three different exponents $\alpha$, OSLN conclude that the exact form of the potential is not important. However, a much more convincing picture would have been made if they instead tried \emph{qualitatively} different kinds of interaction potentials, rather then simply changing the curvature of the potential at the contact point. Otherwise, why focus on continuous interaction potentials at all? Since it is geometry (i.e., the nonoverlap condition on the spherical cores) that is crucial, the hard-sphere system offers a far {}``cleaner'' system to study when trying to understand the special point $J$.
{'timestamp': '2005-06-27T20:38:27', 'yymm': '0506', 'arxiv_id': 'cond-mat/0506707', 'language': 'en', 'url': 'https://arxiv.org/abs/cond-mat/0506707'}
\section{Introduction} Already in \cite{Fontana} and more recently and independently in \cite{Ebulfez 2} we have rediscovered the Hochster's inverse topology (see \cite[Prop. 8]{Hochster}) on the prime spectrum by a new and purely algebraic method. We call it the flat topology (it is worthy to mention that during the writing \cite{Ebulfez 2} we were not aware of \cite{Fontana} and Hochster's work). Hence the flat topology and Hochster's inverse topology are exactly the same things. The flat topology behaves as the dual of the Zariski topology. Roughly speaking, for a given ring $R$, then the collection of subsets $\V(I)=\{\mathfrak{p}\in\Spec(R) : I\subseteq\mathfrak{p}\}$ where $I$ runs through the set of f.g. ideals of $R$ formes a basis for the opens of the flat topology, see \cite[Theorem 3.2]{Ebulfez 2}. We use f.g. in place of ``finitely generated". Note that, in general, there are tremendous differences between the two topologies even if the base ring $R$ be a noetherian ring. In fact, the two topologies on $\Spec(R)$ are the same if and only if every prime ideal of $R$ is maximal, see Corollary \ref{coro 9}. \\ Recall that a topological space is called a \emph{separated} (or Hausdorff or $T_{2}$) space if every two distinct points admit disjoint open neighbourhoods. In this article, we are concerned primarily with the problem of when the Zariski topology is separated, and the dual question of when the flat topology is separated. The flat topology, as Zariski, is not necessarily separated, see \cite[Corollary 3.6]{Ebulfez 2}. Because of its importance and geometric applications, characterizing the separability of the Zariski topology became the most urgent task to the author at that time.\\ Note that characterizing the separability of the Zariski topology as well as the flat topology is not as easy to understand as one may think at first. This is because we are used to the topology of locally Hausdorff spaces, but the flat and Zariski topologies in general are not locally Hausdorff. By applying some sophisticated properties of the absolutely flat (von-Neumann regular) rings and flat epimorphisms the separability of these topologies are completely understood, see Theorem \ref{theorem 2}. In the literature one can also find another versions or parts of this characterization. Corollary \ref{coro 9} can be considered as interesting application of Theorem \ref{theorem 2}. \\ In this article, by an epimorphism $\phi:R\rightarrow S$ we mean it is an epimorphism in the category of commutative rings. The class of injective ring maps is precisely coincide to the class of monomorphisms of rings; but surjective ring maps are just special cases of epimorphisms. As a specific example, the canonical ring map $\mathbb{Z}\rightarrow\mathbb{Q}$ is an epimorphism while it is not surjective. For more details on epimorphisms of rings please consider \cite[\S2]{ebulfez}.\\ The titles of the sections should be sufficiently explanatory. Throughout the article, all of the rings which are discussed are commutative.\\ \section{Preliminaries-absolutely flat rings} Absolutely flat rings play a major role throughout this article. The authors's contributions in this section are including Corollaries \ref{epic-flat}, \ref{coro 6}, \ref{coro 700} and Proposition \ref{theorem 3}.\\ \begin{lemma}\label{tricky lemma} Let $R$ be a ring and let $I=\langle a_{1},...,a_{n}\rangle$ be a f.g. ideal of $R$. Suppose there is some $c_{i}\in R$ such that $a_{i}=a^{2}_{i}c_{i}$ for all $i$. 
Then $I$ can be generated by an idempotent element.\\ \end{lemma} {\bf Proof.} Clearly $e_{i}=a_{i}c_{i}$ is an idempotent element and $I=\langle e_{1},...,e_{n}\rangle$. If $n=1$ then there is nothing to prove. Suppose $n> 1$. Then, by the induction hypothesis, the ideal $\langle e_{1},...,e_{n-1}\rangle$ is generated by an idempotent element $e'\in R$. It follows that $I=\langle e\rangle$ where $e=e'+e_{n}-e'e_{n}$ is an idempotent element. $\Box$ \\ Recall that a ring $R$ is said to be absolutely flat if each $R-$module is flat.\\ \begin{proposition}\label{prop 3} Let $R$ be a ring. Then the following conditions are equivalent.\\ $\mathbf{(i)}$ The ring $R$ is absolutely flat. \\ $\mathbf{(ii)}$ Every ideal of $R$ is idempotent.\\ $\mathbf{(iii)}$ Each element $r\in R$ can be written as $r=r^{2}s$ for some $s\in R$.\\ $\mathbf{(iv)}$ Every f.g. ideal of $R$ is a direct summand of $R$.\\ \end{proposition} {\bf Proof.} $\mathbf{(i)}\Rightarrow\mathbf{(ii)}$: Let $I$ be an ideal of $ R$. The map $i\otimes1:I\otimes_{R}R/I\rightarrow R\otimes_{R}R/I$ induced by the canonical injection $i:I\rightarrow R$ is injective since $R/I$ is $R-$flat. But $\Ima(i\otimes1)=0$. Therefore $I=I^{2}$.\\ $\mathbf{(ii)}\Rightarrow\mathbf{(iii)}$: There is nothing to prove.\\ $\mathbf{(iii)}\Rightarrow\mathbf{(iv)}$: Let $I=\langle a_{1},...,a_{n}\rangle$ be a f.g. ideal of $R$. By Lemma \ref{tricky lemma}, there is an idempotent $e\in R$ such that $I=\langle e\rangle$. Let $J=\langle1-e\rangle$. Then clearly $I+J=R$ and $I\cap J=0$.\\ $\mathbf{(iv)}\Rightarrow\mathbf{(i)}$: Let $M$ be a $R-$module. By \cite[Theorem 7.7]{Matsumura1}, it suffices to show that for every f.g. ideal $I$ of $R$, the canonical map $I\otimes_{R}M\rightarrow M$ which maps each pure tensor $a\otimes m$ into $am$ is injective. By the hypothesis, there is an ideal $J$ of $R$ such that $R=I+J$ and $I\cap J=0$. It follows that the following sequence $\xymatrix{0\ar[r] & I \ar[r]^{i} & R\ar[r]^{p} &J \ar[r] & 0}$ is exact and split where $i$ is the canonical injection and $p$ is the projection map. Therefore the following sequence is exact and split $$\xymatrix{0\ar[r] & I\otimes_{R}M \ar[r]^{i\otimes1} & R\otimes_{R}M\ar[r]^{p\otimes1} &J\otimes_{R}M\ar[r] & 0}$$ because the exact and split sequences are left exact and split by an additive functor. $\Box$ \\ \begin{corollary}\label{epic-flat} Absolutely flat rings are stable under taking quotients and localizations. \\ \end{corollary} {\bf Proof.} Let $R$ be an absolutely flat ring, let $I$ be an ideal of $R$ and let $S$ be a multiplicative subset of $R$. Each ideal $K$ of $R/I$ is of the form $J/I$ where $J$ is an ideal of $R$ which contains $I$. By Proposition \ref{prop 3}, $J=J^{2}$. Thus $K^{2}=J^{2}+I/I=K$. Therefore, by Proposition \ref{prop 3}, $R/I$ is absolutely flat. Suppose $r/s\in S^{-1}R$ then, by Proposition \ref{prop 3}, $r=r^{2}r'$ for some $r'\in R$. Thus $r/s=(r/s)^{2}(r's/1)$. Therefore by Proposition \ref{prop 3}, $S^{-1}R$ is absolutely flat. $\Box$ \\ \begin{proposition}\label{prop 987} A ring $R$ is absolutely flat if and only if $R_{\mathfrak{m}}$ is absolutely flat for all maximal ideals $\mathfrak{m}$ of $R$.\\ \end{proposition} {\bf Proof.} The implication ``$\Rightarrow$" is an immediate consequence of Corollary \ref{epic-flat}. For the reverse implication, let $M$ be a $R-$module and let $\xymatrix{0 \ar[r] & N'\ar[r]^{f} & N}$ be an exact sequence of $R-$modules. Denote $K$ the kernel of the morphism $f\otimes 1:N'\otimes_{R}M\rightarrow N\otimes_{R}M$. 
Let $\mathfrak{m}$ be a maximal ideal of $R$. The sequence $\xymatrix{0 \ar[r] & N'_{\mathfrak{m}}\otimes_{_{R_{\mathfrak{m}}}}M_{\mathfrak{m}}\ar[r] & N_{\mathfrak{m}}}\otimes_{_{R_{\mathfrak{m}}}}M_{\mathfrak{m}}$ is exact since $R_{\mathfrak{m}}$ is absolutely flat. It follows that the map $(f\otimes 1)_{\mathfrak{m}}:(N'\otimes_{R}M)_{\mathfrak{m}}\rightarrow (N\otimes_{R}M)_{\mathfrak{m}}$ is injective. This means that $K_{\mathfrak{m}}=0$. Therefore $K=0$. $\Box$ \\ \begin{corollary}\label{coro 6} Let $\{R_{i}\}$ be a family of rings. Then the direct product ring $A=\prod\limits_{i}R_{i}$ is absolutely flat if and only if each $R_{i}$ is so.\\ \end{corollary} {\bf Proof.} The projection map $\pi_{i}:A\rightarrow R_{i}$ is surjective. Therefore, by Corollary \ref{epic-flat}, $R_{i}$ is absolutely flat. Conversely, pick $a=(r_{i})\in A$. For each $i$, by Proposition \ref{prop 3}, $r_{i}=r_{i}^{2}s_{i}$ for some $s_{i}\in R_{i}$. Thus $a=a^{2}b$ where $b=(s_{i})$. Therefore, by Proposition \ref{prop 3}, $A$ is absolutely flat. $\Box$ \\ \begin{corollary}\label{coro 700} Let $R$ be an absolutely flat ring. If one of the following conditions hold then $R$ is a field.\\ $\textbf{(i)}$ $R$ is local.\\ $\textbf{(ii)}$ $R$ is an integral domain.\\ $\textbf{(iii)}$ $R$ is a non-trivial noetherian ring with the trivial idempotents.\\ \end{corollary} {\bf Proof.} Let $\textbf{(i)}$. Let $\mathfrak{m}$ be the maximal ideal of $R$. For each $a\in\mathfrak{m}$, by Proposition \ref{prop 3}, there is some $b\in R$ such that $a(ab-1)=0$. But $ab-1$ is invertible in $R$. Therefore $a=0$. If $\textbf{(ii)}$ holds. Then, by Proposition \ref{prop 3}, every non-zero element of $R$ is invertible. Suppose $\textbf{(iii)}$ holds. Let $I$ be a non-zero ideal of $R$. Therefore, by Proposition \ref{prop 3} and Lemma \ref{tricky lemma}, $I=R$. This, in particular, implies that every non-zero element of $R$ is invertible. $\Box$ \\ \begin{proposition}\label{theorem 3} A ring $R$ is absolutely flat if and only if each $R-$algebra is $R-$flat.\\ \end{proposition} {\bf Proof.} The implication ``$\Rightarrow$" is clear. Conversely, let $M$ be a $R-$module. Consider the ring $S$ where the underlying set of this ring is the cartesian product $R\times M$ and its addition and multiplication are defined as $(r,m)+(r',m')=(r+r',m+m')$ and $(r,m).(r',m')=(rr', rm'+r'm)$, respectively. Clearly $S$ is a commutative ring whose identity element is $(1,0)$. Moreover the map $\phi: R\rightarrow S$ given by $r\rightsquigarrow(r,0)$ is a ring homomorphism. The $R-$module structure induced via $\phi$ on $S$ is the same as the usual $R-$module structure on the direct sum $R\bigoplus M$. By the hypothesis, $\phi$ is a flat morphism. It follows that $M$ is a flat $R-$module. $\Box$ \\ \section{Preliminaries-pointwise rings} In this section we develop the theory of pointwise rings which we need to it in the sequel. Here, the only author's contributions are Proposition \ref{prop 6}, part $\textbf{(iii)}$, Lemma \ref{lemma 453} and the proofs of Lemma \ref{lemma 9} and Proposition \ref{pointwise}. The remaining results are well-known and can be found in the S\'{e}minaire Samuel \cite{Samuel} also see \cite{Wiegand} and \cite{Wiegand 2}. \\ If $R$ is an absolutely flat ring then each element $a\in R$, by Proposition \ref{prop 3}, can be written as $a=a^{2}b$ for some $b\in R$. This leads us to the following definition:\\ \begin{definition} Let $R$ be a ring and let $a\in R$. 
If there is an element $b\in R$ such that $a=a^{2}b$ and $b=b^{2}a$, then $b$ is said to be a pointwise inverse of $a$. \\ \end{definition} \begin{lemma}\label{lemma 9} Let $a,b\in R$. Then $b$ is a pointwise inverse of $a$ if and only if $a\in Ra^{2}$. Moreover, if $b$ is a pointwise inverse of $a$ then there is an idempotent element $e\in R$ such that $(e+a)(e+b)=1$. Finally, the pointwise inverse, if it exists, is unique. \\ \end{lemma} {\bf Proof.} Suppose $a\in Ra^{2}$. We have $a=ra^{2}$ for some $r\in R$. Let $b=r^{2}a$. Then $b$ is a pointwise inverse of $a$. Clearly $e=1-ab$ is an idempotent element and $(e+a)(e+b)=1$. Let $c\in R$ be another pointwise inverse of $a$. We have $b=ab^{2}=(ac)(ab^{2})=a^{2}c^{2}b=ac^{2}=c$. $\Box$ \\ The pointwise inverse of $a\in R$, if it exists, is usually denoted by $a^{(-1)}$.\\ \begin{lemma} Let $\phi: R\rightarrow S$ be a ring map. Suppos $a,b\in R$ have pointwise inverses in R. Then the pointwise inverses of $\phi(a)$ and $ab$ exist. Moreover $\phi(a)^{(-1)}=\phi(a^{(-1)})$ and $(ab)^{(-1)}=a^{(-1)}b^{(-1)}$. \\ \end{lemma} {\bf Proof.} Easy. $\Box$ \\ The following result establishes the universal property of the poinwise rings.\\ \begin{proposition}\label{pointwise} Let $R$ be a ring and let $S$ be a subset of $R$. Then there exist a ring $S^{(-1)}R$ and a canonical ring map $\eta: R\rightarrow S^{(-1)}R$ such that for each $s\in S$, the pointwise inverse of $\eta(s)$ in $S^{(-1)}R$ exists and the pair $(S^{(-1)}R, \eta)$ satisfies in the following universal property: if there is a ring map $\phi: R\rightarrow R'$ such that for each $s\in S$ the pointwise inverse of $\phi(s)$ in $R'$ exists then there is a unique ring map $\psi:S^{(-1)}R\rightarrow R'$ such that $\phi=\psi\circ\eta$.\\ \end{proposition} {\bf Proof.} Consider the polynomial ring $A=R[x_{s} : s\in S]$ and let $S^{(-1)}R=A/I$ where the ideal $I$ is generated by elements of the form $sx_{s}^{2}-x_{s}$ and $s^{2}x_{s}-s$ with $s\in S$. Let $\eta: R\rightarrow S^{(-1)}R$ be the canonical ring map. For each $s\in S$, the element $x_{s}+I$ is the pointwise inverse of $\eta(s)=s+I$. Let $\phi:R\rightarrow R'$ be a ring map such that for each $s\in S$, the pointwise inverse of $\phi(s)$ exists in $R'$. By the universal property of the polynomial rings, there is a (unique) homomorphism of $R-$algebras $\widetilde{\phi}:R[x_{s} : s\in S]\rightarrow R'$ such that $x_{s}\rightsquigarrow\phi(s)^{(-1)}$ for all $s\in S$. We have $\widetilde{\phi}(I)=0$. Denote by $\psi:S^{(-1)}R\rightarrow R'$ the ring map induced by $\widetilde{\phi}$. Clearly $\psi$ is the unique ring homomorphism such that $\phi=\psi\circ\eta$. Because suppose there is another such ring map $\psi':S^{(-1)}R\rightarrow R'$. Then we have $\psi(x_{s}+I)=\widetilde{\phi}(x_{s})=\phi(s)^{(-1)}= \psi'\big(\eta(s)\big)^{(-1)}=\psi'\big(\eta(s)^{(-1)}\big)=\psi'(x_{s}+I)$ for all $s\in S$. Therefore $\psi=\psi'$. $\Box$ \\ We call $S^{(-1)}R$ the pointwise localization of $R$ with respect to $S$. \\ \begin{proposition}\label{prop 6} Let $R$ be a ring and let $S$ be a subset of $R$. Then the following are true. \\ $\mathbf{(i)}$ The canonical ring map $\eta: R\rightarrow S^{(-1)}R$ is an epimorphism.\\ $\mathbf{(ii)}$ The map $\eta^{\ast}:\Spec\big(S^{(-1)}R\big)\rightarrow\Spec(R)$ is bijective.\\ $\mathbf{(iii)}$ For each $s\in S$, $(\eta^{\ast})^{-1}\big(V(s)\big)$ is a clopen subset of $\Spec\big(S^{(-1)}R\big)$ with respect to the flat (resp. 
Zariski) topology.\\ $\mathbf{(iv)}$ The ring $S^{(-1)}R$ is nontrivial if and only if $R$ is so.\\ $\mathbf{(v)}$ $\Ker(\eta)\subseteq\mathfrak{N}$ where $\mathfrak{N}$ is the nil-radical of $R$.\\ \end{proposition} {\bf Proof.} $\mathbf{(i)}:$ This implies from the universal property of Proposition \ref{pointwise}.\\ $\mathbf{(ii)}:$ The map $\eta^{\ast}$ is injective since $\eta$ is an epimorphism, see \cite[Theorem 3.3]{ebulfez}. Let $\mathfrak{p}$ be a prime ideal of $R$ and consider the canonical ring map $\pi: R\rightarrow\kappa(\mathfrak{p})$. The image of every element of $R$ under $\pi$ has a pointwise inverse in $\kappa(\mathfrak{p})$. Thus, by Proposition \ref{pointwise}, there is a (unique) ring map $\psi:S^{(-1)}R\rightarrow\kappa(\mathfrak{p})$ such that $\pi=\psi\circ\eta$. Then $\mathfrak{p}=\eta^{\ast}(\mathfrak{q})$ where $\mathfrak{q}=\psi^{-1}(0)$.\\ $\mathbf{(iii)}:$ We have $(\eta^{\ast})^{-1}\big(V(s)\big)=V\big(\eta(s)\big)$. Moreover $V\big(\eta(s)\big)=D\big(1-\eta(s)\eta(s)^{(-1)}\big)$. Therefore, by \cite[Corollary 3.12]{Ebulfez 2}, $(\eta^{\ast})^{-1}\big(V(s)\big)$ is both open and closed. \\ $\mathbf{(iv)}$ and $\mathbf{(v)}$: These are immediate consequences of $\mathbf{(ii)}$. $\Box$ \\ \begin{lemma}\label{lem 1} Let $\phi: R\rightarrow S$ be an epimorphism of rings where $S$ is a nontrivial ring with the trivial idempotents. Suppose $\phi(r)$ has a pointwise inverse in $S$ for all $r\in R$. Then $A=\Ima(\phi)$ is an integral domain and $S$ is its field of fractions.\\ \end{lemma} {\bf Proof.} Suppose $\phi(r)\phi(r')=0$ for some elements $r,r'\in R$. If $\phi(r)\neq0$ then $\phi(r)\phi(r)^{(-1)}=1$ since $\phi(r)\phi(r)^{(-1)}$ is an idempotent element. Therefore $A$ is an integral domain. Let $K$ be the field of fractions of $A$. Since every non-zero element of $A$ is invertible in $S$ therefore by the universal property of the localization, there is a (unique) ring map $\psi:K\rightarrow S$ such that $i=\psi\circ j$ where $i:A\rightarrow S$ and $j:A\rightarrow K$ are the canonical injections. The map $\phi$ factors as $\phi=i\circ\phi'$ where $\phi':R\rightarrow A$ is the ring map induced by $\phi$. Since $\phi$ is an epimorphism thus $i$ and so $\psi$ are epimorphisms. By \cite[Corollary 2.3]{ebulfez}, $\psi$ is an isomorphism. $\Box$ \\ \begin{lemma}\label{lemma 453} Let $R$ be a ring. Then $\Spec\big(R^{(-1)}R\big)$ equipped with the Zariski (resp. flat) topology is separated.\\ \end{lemma} {\bf Proof.} Let $\mathfrak{q}$ and $\mathfrak{q'}$ be distinct prime ideals of $R^{(-1)}R$. The ideals $\mathfrak{p}=\eta^{\ast}(\mathfrak{q})$ and $\mathfrak{p'}=\eta^{\ast}(\mathfrak{q'})$ are distinct since by Proposition \ref{prop 6}, $\eta^{\ast}$ is injective. Choose $a\in\mathfrak{p}\setminus\mathfrak{p'}$. It follows that $\mathfrak{q}\in V\big(\eta(a)\big)$ and $\mathfrak{q'}\in D\big(\eta(a)\big)$. By Proposition \ref{prop 6}, $V\big(\eta(a)\big)$ is a clopen. $\Box$ \\ \begin{theorem}\label{theorem 1} Let $R$ be a ring and let $\eta:R\rightarrow R^{(-1)}R$ be the canonical ring map. Then the following are true. 
\\ $\mathbf{(i)}$ For each prime ideal $\mathfrak{q}$ of $R^{(-1)}R$, then $F=\big(R^{(-1)}R\big)_{\mathfrak{q}}$ is canonically isomorphic to $\kappa(\mathfrak{p})$ where $\mathfrak{p}=\eta^{\ast}(\mathfrak{q})$.\\ $\mathbf{(ii)}$ The ring $R^{(-1)}R$ is absolutely flat.\\ \end{theorem} {\bf Proof.} $\mathbf{(i)}:$ For each prime ideal $\mathfrak{q}$ of $R^{(-1)}R$, the composed map $$\xymatrix{R \ar[r]^{\eta} & R^{(-1)}R\ar[r] & F}$$ satisfies all of the hypotheses of Lemma \ref{lem 1}. Therefore $F$ is a field. Now consider the following commutative diagram $$\xymatrix{ R_{\mathfrak{p}} \ar[r]^{\eta_{\mathfrak{q}}=epic} \ar[d]^{} & F \ar[d]^{\simeq} \\ \kappa(\mathfrak{p})\ar[r]^{} & \kappa(\mathfrak{q}) } $$ where $\mathfrak{p}=\eta^{\ast}(\mathfrak{q})$. By \cite[Corollary 2.3]{ebulfez}, the map $\kappa(\mathfrak{p})\rightarrow\kappa(\mathfrak{q})$ is an isomorphism and we win.\\ $\mathbf{(ii)}:$ It is an immediate consequence of $\mathbf{(i)}$ and Proposition \ref{prop 987}. $\Box$ \\ By Proposition \ref{pointwise} and Theorem \ref{theorem 1}, the assignment $R\rightsquigarrow R^{(-1)}R$ is a covariant functor form the category of commutative rings into the category of absolutely flat rings. \\ \section{Main results} \begin{lemma}\label{remark 1} Let $\phi: R\rightarrow S$ be a ring map, let $M$ and $N$ be $S-$modules and consider the canonical map $\eta:M\otimes_{R}N\rightarrow M\otimes_{S}N$ which maps each pure tensor $m\otimes_{R}n$ into $m\otimes_{S}n$. Then $\Ker(\eta)=\langle sm\otimes_{R}n-m\otimes_{R}sn : s\in S\setminus\Ima(\phi), m\in M, n\in N\rangle$.\\ \end{lemma} {\bf Proof.} Let $K$ be the $R-$submodule of $M\otimes_{R}N$ generated by elements of the form $sm\otimes_{R}n-m\otimes_{R}sn$ with $s\in S\setminus\Ima(\phi)$, $m\in M$ and $n\in N$. Clearly $K\subseteq\Ker(\eta)$. Consider the map $\overline{\eta}:P=M\otimes_{R}N/K\rightarrow M\otimes_{S}N$ induced by $\eta$. We have $\Ker(\overline{\eta})=\Ker(\eta)/K$. The scalar multiplication $S\times P\rightarrow P$ which is defined on pure tensors by $s.(m\otimes_{R}n+K)=sm\otimes_{R}n+K$ is actually well-defined and puts a $S-$module structure over $P$. By the universal property of the tensor products, the $S-$bilinesr map $M\times N\rightarrow P$ defined by $(m,n)\rightsquigarrow m\otimes_{R}n+K$ induces a (unique) $S-$homomorphism $M\otimes_{S}N\rightarrow P$ which maps each pure tensor $m\otimes_{S}n$ into $m\otimes_{R}n+K$. This implies that $\overline{\eta}$ is bijective. Therefore $\Ker(\eta)=K$. $\Box$ \\ \begin{lemma}\label{lem 2} Let $\phi: R\rightarrow S$ be a flat ring map which has a factorization $\xymatrix{ R \ar[r]^{\psi} & A \ar[r]^{\phi'} & S}$ such that $\phi'$ is an injective ring map and $\psi$ is an epimorphism. Then $\phi'$ is a flat morphism.\\ \end{lemma} {\bf Proof.} For each $A-$module $M$, the canonical map $\eta_{M}:M\otimes_{R}S\rightarrow M\otimes_{A}S$ which maps each pure tensor $m\otimes_{R} s$ into $m\otimes_{A}s$ is injective because in $A\otimes_{R}A-$module $M\otimes_{R}S$ we have $am\otimes_{R}s=(a\otimes_{R}1_{A}).(m\otimes_{R}s)= (1_{A}\otimes_{R}a).(m\otimes_{R}s)=m\otimes_{R}a.s$ then apply Lemma \ref{remark 1}. In fact it is bijective. Now suppose $\xymatrix{0 \ar[r] & N \ar[r]^{f} & M}$ is an exact sequence of $A-$modules. 
The following diagram is commutative $$\xymatrix{ N\otimes_{R}S\ar[r]^{f\otimes_{R}1} \ar[d]^{\eta_{N}} & M\otimes_{R}S \ar[d]^{\eta_{M}} \\ N\otimes_{A}S \ar[r]^{f\otimes_{A}1} & M\otimes_{A}S} $$ and the map $f\otimes_{R}1$ is injective since $S$ is flat over $R$. Therefore $f\otimes_{A}1$ is injective as well. $\Box$ \\ \begin{lemma}\label{simplified lemma} Let $\phi: R\rightarrow S$ be a flat epimorphism of rings. Then for each prime $\mathfrak{p}$ of $R$ we have either $\mathfrak{p}S=S$ or that the canonical map $R_{\mathfrak{p}}\rightarrow T^{-1}S$ given by $r/s\rightsquigarrow\phi(r)/\phi(s)$ is bijective where $T=\phi(R\setminus\mathfrak{p})$. \\ \end{lemma} {\bf Proof.} Suppose $\mathfrak{p}S\neq S$ for some prime $\mathfrak{p}$. The canonical map $R_{\mathfrak{p}}\rightarrow T^{-1}S$ is a flat epimorphism because flat morphisms and epics are stable under base change and composition (recall that the ring $ T^{-1}S$ is canonically isomorphic to $S_{\mathfrak{p}}$). It is also faithfully flat since $\mathfrak{p}S\neq S$. Therefore, by \cite[Corollary 2.2]{ebulfez}, it is bijective. $\Box$ \\ It is worthy to mention that the converse of Lemma \ref{simplified lemma} also holds.\\ For a given ring $R$, the quotient ring $R/\mathfrak{N}$ is denoted by $R_{\mathrm{red}}$ where $\mathfrak{N}$ is the nil-radical of $R$. For any ring map $\phi:R\rightarrow S$ the induced map $R_{\mathrm{red}}\rightarrow S_{\mathrm{red}}$ is denoted by $\phi_{\mathrm{red}}$.\\ \begin{theorem}\label{outstanding lemma} Let $\phi: R\rightarrow S$ be a flat epimorphism of rings. If $\phi_{\mathrm{red}}$ is surjective then so is $\phi$.\\ \end{theorem} {\bf Proof.} The map $\phi$ factors as $\xymatrix{R \ar[r]^{\pi\:\:\:\:\:\:\:\:\:\:\:} & R/\Ker(\phi)\ar[r]^{\:\:\:\:\:\:\:\:\:\:\:\:\phi'} & S}$ where $\pi$ is the canonical ring map and $\phi'$ is induced by $\phi$. We have $\Ima(\phi)=\Ima(\phi')$, $\phi'$ is an epimorphism and $\phi'_{\mathrm{red}}$ is surjective. Moreover, by Lemma \ref{lem 2}, $\phi'$ is flat. Therefore, without loss of generality, we may assume that $\phi$ is injective. It follows that $\phi_{\mathrm{red}}$ is an isomorphism and so $\phi^{\ast}: \Spec(S)\rightarrow\Spec(R)$ is bijective. Therefore $\mathfrak{p}S\neq S$ for all primes $\mathfrak{p}$ of $R$ and so by Lemma \ref{simplified lemma}, the canonical map $R_{\mathfrak{p}}\rightarrow S_{\mathfrak{p}}$ is bijective. It follows that $S/\phi(R)\otimes_{R}R_{\mathfrak{p}}=0$ for all primes $\mathfrak{p}$. $\Box$ \\ \begin{theorem}\label{coro 3} Let $\phi: R\rightarrow S$ be an epimorphism of rings such that $R$ is absolutely flat. Then $\phi$ is surjective.\\ \end{theorem} {\bf Proof.} The map $\phi$ factors as $\xymatrix{R \ar[r]^{\pi\:\:\:\:\:\:\:\:\:\:\:} & R/\Ker(\phi)\ar[r]^{\:\:\:\:\:\:\:\:\:\:\:\:\phi'} & S}$ where $\pi$ is the canonical ring map and $\phi'$ is the injective ring map induced by $\phi$. The quotient ring $R/\Ker(\phi)$ is absolutely flat. Moreover, $\Ima(\phi)=\Ima(\phi')$ and yet $\phi'$ is an epimorphism. Hence, without loss of generality, we may assume that $\phi$ is injective. In this case, $\phi$ is a faithfully flat morphism. Because, suppose $S\otimes_{R}M=0$ for some $R-$module $M$. From the following short exact sequence of $R-$modules $$\xymatrix{0 \ar[r] & R\ar[r]^{\phi} & S\ar[r]^{\pi} & S/R \ar[r] & 0}$$ we obtain the following long exact sequence of $R-$modules $\xymatrix{... 
\ar[r] &}$\\$\xymatrix{\Tor^{R}_{1}(S/R, M) \ar[r] &R\otimes_{R}M\ar[r]^{\phi\otimes1_{M}} & S\otimes_{R}M \ar[r]^{\pi\otimes1_{M}} & S/R\otimes_{R}M \ar[r] & 0}$.\\ But $\Tor^{R}_{1}(S/R,M)=0$ since $S/R$ is $R-$flat, see \cite[Theorem 7.2]{Rotman}. Thus $M\simeq R\otimes_{R}M=0$. Therefore $\phi$ is a faithfully flat epimorphism and so by \cite[Corrollary 2.2]{ebulfez}, it is bijective. This means that, in our factorization $\phi=\phi'\circ\pi$, $\phi'$ is an isomorphism therefore the original $\phi$ is surjective. $\Box$ \\ \begin{lemma}\label{lemma 234} A ring $R$ is absolutely flat if and only if the canonical map $\eta:R\rightarrow R^{(-1)}R$ is bijective.\\ \end{lemma} {\bf Proof.} Suppose $R$ is absolutely flat. Then, by Theorem \ref{coro 3}, $\eta$ is surjective. Pick $a\in\Ker(\eta)$. By Proposition \ref{prop 3}, there exists some $b\in R$ such that $a=ba^{2}$. It follows that $a=b^{n-1}a^{n}$ for all $n\geq1$. But $a$ is a nilpotent element, see Proposition \ref{prop 6}. Therefore $a=0$. The converse implies from Theorem \ref{theorem 1}. $\Box$ \\ \begin{theorem}\label{theorem 2} Let $R$ be a ring. Then the following conditions are equivalent.\\ $\mathbf{(i)}$ The ring $R_{\mathrm{red}}=R/\mathfrak{N}$ is absolutely flat where $\mathfrak{N}$ is the nil-radical of $R$.\\ $\mathbf{(ii)}$ Every flat epimorphism $\phi:R\rightarrow S$ is surjective.\\ $\mathbf{(iii)}$ For each prime ideal $\mathfrak{p}$ of $R$, the canonical map $R\rightarrow R_{\mathfrak{p}}$ is surjective.\\ $\mathbf{(iv)}$ Every prime ideal of $R$ is maximal.\\ $\mathbf{(v)}$ The patch and Zariski topologies over $\Spec(R)$ are the same.\\ $\mathbf{(vi)}$ The set $\Spec(R)$ equipped with the Zariski topology is separated.\\ $\mathbf{(vii)}$ Every prime ideal of $R$ is minimal.\\ $\mathbf{(viii)}$ The set $\Spec(R)$ equipped with the flat topology is separated.\\ $\mathbf{(ix)}$ The patch and flat topologies over $\Spec(R)$ are the same.\\ \end{theorem} {\bf Proof.} $\mathbf{(i)}\Rightarrow\mathbf{(ii)}:$ By Theorem \ref{outstanding lemma}, it suffices to show that $\phi_{\mathrm{red}}:R_{\mathrm{red}}\rightarrow S_{\mathrm{red}}$ is surjective. The following diagram is commutative $$\xymatrix{ R \ar[r]^{\phi} \ar[d]^{} & S \ar[d]^{} \\ R_{\mathrm{red}}\ar[r]^{\phi_{\mathrm{red}}} & S_{\mathrm{red}}.}$$ It follows that $\phi_{\mathrm{red}}$ is an epimorphism. Therefore, by Theorem \ref{coro 3}, $\phi_{\mathrm{red}}$ is surjective and we win.\\ $\mathbf{(ii)}\Rightarrow\mathbf{(iii)}:$ For each prime ideal $\mathfrak{p}$ of $R$, the canonical map $R\rightarrow R_{\mathfrak{p}}$ is a flat epimorphism.\\ $\mathbf{(iii)}\Rightarrow\mathbf{(iv)}:$ The canonical map $R/\mathfrak{p}\rightarrow\kappa(\mathfrak{p})$ is surjective.\\ $\mathbf{(iv)}\Rightarrow\mathbf{(i)}:$ Let $\mathfrak{m}'=\mathfrak{m}/\mathfrak{N}$ be a maximal ideal of $R_{\mathrm{red}}$ where $\mathfrak{m}$ is a maximal ideal of $R$. The ring $(R_{\mathrm{red}})_{\mathfrak{m}'}$ is canonically isomorphic to $R_{\mathfrak{m}}/\mathfrak{N}R_{\mathfrak{m}}$. Moreover, $\mathfrak{N}R_{\mathfrak{m}}=\mathfrak{N'}$ where $\mathfrak{N'}$ denotes the nil-radical of $R_{\mathfrak{m}}$. But $\mathfrak{N'}=\mathfrak{m}R_{\mathfrak{m}}$ since every prime ideal of $R$ is maximal. 
Thus, by Proposition \ref{prop 987}, $R_{\mathrm{red}}$ is absolutely flat.\\ $\mathbf{(v)}\Rightarrow\mathbf{(vi)}:$ It is obvious.\\ $\mathbf{(vi)}\Rightarrow\mathbf{(v)}:$ Consider the identity map $i: \big(\Spec(R),\mathscr{J}_{p}\big)\rightarrow \big(\Spec(R),\mathscr{J}_{z}\big)$ where $\mathscr{J}_{p}$ and $\mathscr{J}_{z}$ denote the patch and Zariski topologies, respectively. By the hypothesis and \cite[Theorem 2.4]{Ebulfez 2}, $i$ is a homeomorphism, so $\mathscr{J}_{p}=\mathscr{J}_{z}$ .\\ $\mathbf{(vi)}\Rightarrow\mathbf{(iv)}:$ In every separated space the points are closed. \\ $\mathbf{(i)}\Rightarrow\mathbf{(vi)}$ and $\mathbf{(viii)}:$ The map $\pi^{\ast}:\Spec(R_{\mathrm{red}})\rightarrow\Spec(R)$ induced by the canonical map $\pi: R\rightarrow R_{\mathrm{red}}$ is a homeomorphism. By Lemma \ref{lemma 234}, $\Spec(R_{\mathrm{red}})$ is homeomorphic to $\Spec(A)$ where $A:=R_{\mathrm{red}}^{(-1)}R_{\mathrm{red}}$. The latter space, by Lemma \ref{lemma 453}, is separated.\\ The implications $\mathbf{(ix)}\Rightarrow\mathbf{(viii)}$ and $\mathbf{(vii)}\Leftrightarrow\mathbf{(iv)}$ are clear.\\ $\mathbf{(viii)}\Rightarrow\mathbf{(vii)}:$ See \cite[Corollary 3.6]{Ebulfez 2}.\\ $\mathbf{(viii)}\Rightarrow\mathbf{(ix)}:$ Using the similar argument as applied in the implication $(vi)\Rightarrow(v)$. $\Box$ \\ \begin{corollary}\label{coro 9} The flat and Zariski topologies on $\Spec(R)$ are the same if and only if every prime ideal of $R$ is maximal. \\ \end{corollary} {\bf Proof.} It is an immediate consequence of Theorem \ref{theorem 2}. $\Box$ \\
{'timestamp': '2018-02-27T02:09:55', 'yymm': '1608', 'arxiv_id': '1608.05835', 'language': 'en', 'url': 'https://arxiv.org/abs/1608.05835'}
\section{Introduction and Main Results}\label{Sec.Introduction} Deep learning provides various models and algorithms to process data as efficiently as biological nervous systems or neuronal responses in the human brain \cite{Lecun1998, Hinton2006, Krizhevsky2012, Goodfellow2016, LecunNature}. It is based on deep neural network architectures and those structures bring essential tools for obtaining data features and function representations in practical applications. A main concern about deep learning which has attracted much scientific attention and some criticism is its lack of theories supporting its practical efficiency caused by its network structures, though there have been some theoretical attempts from approximation theory viewpoints \cite{Mallat, Mallat2016, Poggio}. In particular, for deep CNNs having convolutional structures without fully connected layers, it is unknown which kinds of functions can be approximated. This paper provides a rigorous mathematical theory to answer this question and to illustrate the role of convolutions. \bigskip \noindent {\bf Notation and Concepts} \medskip \noindent {\bf Convolutional Filters and Matrices.} \medskip The deep CNNs considered in this paper have two essential ingredients: a rectified linear unit (ReLU) defined as a univariate nonlinear function $\sigma$ given by $$\sigma (u) = (u)_+ = \max\{u, 0\}, \qquad u\in {\mathbb R}$$ and a sequence of convolutional filter masks ${\bf w} =\{w^{(j)}\}_j$ inducing sparse convolutional structures. Here a filter mask $w=(w_k)_{k=-\infty}^{\infty}$ means a sequence of filter coefficients. We use a fixed integer filter length $s \geq 2$ to control the sparsity, and assume that $w^{(j)}_k \not=0$ only for $0\leq k \leq s$. The convolution of such a filter mask $w$ with another sequence $v =(v_0, \ldots,v_D)$ is a sequence $w{*}v$ given by $\left(w{*}v\right)_i = \sum_{k=0}^{D} w_{i-k} v_k$. This leads to a $(D+s) \times D$ Toeplitz type convolutional matrix $T$ which has constant diagonals: $$ T = \left[\begin{array}{llllllll} w_0 & 0 & 0 & 0 & \cdot & 0 \\ w_1 & w_0 & 0 & 0 & \cdot & 0 \\ \vdots & \ddots & \ddots & \ddots & \ddots & \vdots \\ w_{s} & w_{s-1} & \cdots & \hspace*{-3mm} w_0 & \hspace*{-3mm} 0 \cdots & 0 \\ 0 & w_s & \cdots & \hspace*{-3mm} w_1 & \hspace*{-3mm} w_0 \cdots \ & 0 \\ \vdots & \ddots & \ddots \ddots & \ddots & \ddots \ddots & \vdots \\ \cdots & \cdots & 0 & \hspace*{-4mm} w_{s} & \cdots & w_0 \\ \cdots & \cdots & \cdots & \hspace*{-4mm} 0 & \hspace*{-6mm} w_{s} \cdots & w_1 \\ \vdots & \ddots & \ddots & \ddots & \ddots & \vdots \\ 0 & \cdots & \cdots \cdots & \cdots 0 & \hspace*{6mm} w_{s} & w_{s-1} \\ 0 & \cdots & \cdots & \cdots & \hspace*{4mm} \cdots 0 & w_{s} \end{array}\right]. $$ Sparse matrices of this form induce deep CNNs which are essentially different from the classical neural networks involving full connection matrices. Note that the number of rows of $T$ is $s$ greater than that of columns. This leads us to take a sequence of linearly increasing widths $\{d_j = d + j s\}$ for the network, which enables the deep CNN to represent functions of richer structures. 
\medskip \noindent {\bf Convolutional Neural Networks.} \medskip Starting with the action of $T$ on the input data vector $x \in {\mathbb R}^d$, we can define a deep CNN of depth $J$ as a sequence of $J$ vectors $h^{(j)}(x)$ of functions on ${\mathbb R}^d$ given iteratively by $h^{(0)}(x)=x$ and $$ h^{(j)} (x) = \sigma \left(T^{(j)} h^{(j-1)}(x) - b^{(j)}\right), \qquad j=1, 2, \ldots, J, $$ where $T^{(j)} =\left(w^{(j)}_{i-k}\right)$ is a $d_j \times d_{j-1}$ convolutional matrix, $\sigma$ acts on vectors componentwise, and ${\bf b}$ is a sequence of bias vectors $b^{(j)}$. Except the last iteration, we take $b^{(j)}$ of the form $[b_1 \ \ldots b_{s} \ b_{s+1} \ b_{s+1} \ \ldots \ b_{s+1} \ b_{d_j-s+1} \ \ldots b_{d_j}]^T$ with the $d_j - 2s$ repeated components in the middle. The sparsity of $T^{(j)}$ and the special form of $b^{(j)}$ tell us that the $j$-th iteration of the deep CNN involves $3 s +2$ free parameters. So in addition to the $2 d_J + s +1$ free parameters for $b^{(J)}, c \in {\mathbb R}^{d_J}, w^{(J)}$, the total number of free parameters in the deep CNN is $(5 s +2) J + 2d -2 s -1$, much smaller than that in a classical fully connected multi-layer neural network with full connection matrices $T^{(j)}$ involving $d_j d_{j-1}$ free parameters. It demonstrates the computational efficiency of deep CNNs. \bigskip \noindent {\bf Mathematical Theory of Deep CNNs} \medskip The hypothesis space of a learning algorithm is the set of all possible functions that can be represented or produced by the algorithm. For the deep CNN of depth $J$ considered here, the hypothesis space is a set of functions defined by $${\mathcal H}^{{\bf w}, {\bf b}}_J =\left\{\sum_{k=1}^{d_{J}} c_k h^{(J)}_k (x): c \in {\mathbb R}^{d_J}\right\}.$$ This hypothesis space and its approximation ability depend completely on the sequence of convolutional filter masks ${\bf w} =\{w^{(j)}\}_{j=1}^J$ and the sequence of bias vectors ${\bf b}=\{b^{(j)}\}_{j=1}^J$. Observe that each function in the hypothesis space ${\mathcal H}^{{\bf w}, {\bf b}}_J$ is a continuous piecewise linear function (linear spline) on any compact subset $\Omega$ of ${\mathbb R}^d$. Our first main result verifies the universality of deep CNNs, asserting that any function $f\in C(\Omega)$, the space of continuous functions on $\Omega$ with norm $\|f\|_{C(\Omega)} =\sup_{x\in \Omega} |f(x)|$, can be approximated by ${\mathcal H}^{{\bf w}, {\bf b}}_J$ to an arbitrary accuracy when the depth $J$ is large enough. \medskip \noindent {\bf Theorem A.} Let $2 \leq s\leq d$. For any compact subset $\Omega$ of ${\mathbb R}^d$ and any $f\in C(\Omega)$, there exist sequences ${\bf w}$ of filter masks, ${\bf b}$ of bias vectors and $f^{{\bf w}, {\bf b}}_J \in {\mathcal H}^{{\bf w}, {\bf b}}_J$ such that $$ \lim_{J\to\infty} \|f-f^{{\bf w}, {\bf b}}_J \|_{C(\Omega)} = 0. $$ Our second main result presents rates of approximation by deep CNNs for functions in the Sobolev space $H^{r} (\Omega)$ with an integer index $r>2 + d/2$. Such a function $f$ is the restriction to $\Omega$ of a function $F$ from the Sobolev space $H^{r} ({\mathbb R}^d)$ on ${\mathbb R}^d$ meaning that $F$ and all its partial derivatives up to order $r$ are square integrable on ${\mathbb R}^d$. \medskip \noindent{\bf Theorem B.} Let $2 \leq s\leq d$ and $\Omega \subseteq [-1, 1]^d$. 
If $J \geq 2d/(s-1)$ and $f=F|_{\Omega}$ with $F \in H^{r} ({\mathbb R}^d)$ and an integer index $r>2 + d/2$, then there exist ${\bf w}$, ${\bf b}$ and $f^{{\bf w}, {\bf b}}_J \in {\mathcal H}^{{\bf w}, {\bf b}}_J$ such that $$\|f-f^{{\bf w}, {\bf b}}_J \|_{C(\Omega)} \leq c \left\|F\right\| \sqrt{\log J} \left(1/J\right)^{\frac{1}{2} + \frac{1}{d}},$$ where $c$ is an absolute constant and $\left\|F\right\|$ denotes the Sobolev norm of $F \in H^{r} ({\mathbb R}^d)$. \medskip According to Theorem B, if we take $s=\lceil 1+ d^\tau/2\rceil$ and $J= \lceil 4 d^{1-\tau}\rceil L$ with $0 \leq \tau \leq 1$ and $L \in {\mathbb N}$, where $\lceil u\rceil$ denotes the smallest integer not smaller than $u$, then we have $$ \|f-f^{{\bf w}, {\bf b}}_J \|_{C(\Omega)} \leq c \left\|F\right\| \sqrt{\frac{(1-\tau) \log d + \log L + \log 5}{4 d^{1-\tau} L}},$$ while the widths of the deep CNN are bounded by $12 L d$ and the total number of free parameters by $$(5 s +2) J + 2d -2 s -1 \leq (73 L +2) d. $$ We can even take $L=1$ and $\tau=1/2$ to get a bound for the relative error $$ \frac{\|f-f^{{\bf w}, {\bf b}}_J \|_{C(\Omega)}}{\left\|F\right\|} \leq \frac{c}{2} d^{-\frac{1}{4}} \sqrt{\log (5\sqrt{d})} $$ achieved by a deep CNN of depth $\lceil 4 \sqrt{d}\rceil$ and at most $75 d$ free parameters, which decreases as the dimension $d$ increases. This interesting observation is new for deep CNNs, and does not exist in the literature of fully connected neural networks. It explains the strong approximation ability of deep CNNs. A key contribution in our theory of deep CNNs is that an arbitrary pre-assigned sequence $W=(W_k)_{-\infty}^{\infty}$ supported in $\{0, \ldots, {\mathcal M}\}$ can be factorized into convolutions of a mask sequence $\{w^{(j)}\}_{j=1}^{J}$. It is proved by the same argument as in \cite{ZhouAA} for the case with the special restriction $W_0 \not=0$. Convolutions are closely related to translation-invariance in speeches and images \cite{Mallat, Daubechies, Strang}, and also in some learning algorithms \cite{SmaleZhou, FanHuWuZhou}. \medskip \noindent{\bf Theorem C.} Let $s \geq 2$ and $W=(W_k)_{-\infty}^{\infty}$ be a sequence supported in $\{0, \ldots, {\mathcal M}\}$ with ${\mathcal M} \geq 0$. Then there exists a finite sequence of filter masks $\{w^{(j)}\}_{j=1}^{J}$ supported in $\{0, \ldots, s\}$ with $J < \frac{{\mathcal M}}{s-1}+1$ such that the convolutional factorization $W = w^{(J)}{*}\ldots {*}w^{(2)}{*}w^{(1)}$ holds true. \bigskip \noindent{\bf Discussion} \medskip The classical shallow neural networks associated with an activation function $\sigma: {\mathbb R} \to {\mathbb R}$ produce functions of the form $$ f_N (x) =\sum_{k=1}^N c_k \sigma(\alpha_k \cdot x - b_k) $$ with $\alpha_k \in {\mathbb R}^d, b_k, c_k \in {\mathbb R}$. A mathematical theory for approximation of functions by shallow neural networks was well developed three decades ago \cite{Cybenko, Hornik, Barron, Mhaskar1993, Leshno1993, Pinkus1999} and was extended to fully connected multi-layer neural networks shortly afterwards \cite{Hornik, Mhaskar1993, Chui1996} . The first type of results obtained in the late 1980s are about universality, asserting that any continuous function $f$ on any compact subset $\Omega$ of ${\mathbb R}^d$ can be approximated by some $f_N$ to an arbitrary accuracy when the number of hidden neurons $N$ is large enough. 
Such results were given in \cite{Cybenko, Hornik, Barron} when $\sigma$ is a sigmoidal function, meaning that $\sigma$ is a continuous strictly increasing function satisfying $\lim_{u\to-\infty}\sigma(u)=0$ and $\lim_{u\to \infty}\sigma(u)=1$. A more general result with a locally bounded and piecewise continuous activation function $\sigma$ asserts \cite{Leshno1993, Pinkus1999} that universality holds if and only if $\sigma$ is not a polynomial. The second type of results obtained in the early 1990s are about rates of approximation. When $\sigma$ is a $C^\infty$ sigmoidal function and $f=F|_{[-1, 1]^d}$ for some $F\in L^2({\mathbb R}^d)$ with the Fourier transform $\hat F$ satisfying $|w| \hat F(w) \in L^1 ({\mathbb R}^d)$, rates of type $\|f_N - f\|_{L^2_{\mu} ([-1, 1]^d)} =O(1/\sqrt{N})$ were given in \cite{Barron} where $\mu$ is an arbitrary probability measure $\mu$. Analysis was conducted in \cite{Mhaskar1993} for shallow neural networks with more general continuous activation functions $\sigma$ satisfying a special condition with some $b\in{\mathbb R}$ that $\sigma^{(k)} (b)\not= 0$ for any nonnegative integer $k$ and a further assumption with some integer $\ell \not=1$ that $\lim_{u\to-\infty} \sigma(u)/|u|^\ell=0$ and $\lim_{u\to \infty} \sigma(u)/u^\ell=1$. The rates there are of type $\left\|f_N - f\right\|_{C([-1, 1]^d)} =O(N^{-r/d})$ for $f \in C^r ([-1, 1]^d)$. Note that the ReLU activation function considered in this paper does not satisfy the condition with $\sigma^{(k)} (b)\not= 0$ or the special assumption with $\ell\not= 1$. To achieve the approximation accuracy $\left\|f_N - f\right\|_{C([-1, 1]^d)} \leq \epsilon$, when $r= \lceil \frac{d+1}{2} +2\rceil$ with $d/r \approx 2$, the number of hidden neurons $N \geq \left(c_{f, d, \ell}/\epsilon\right)^{d/r}$ and the total number of free parameters is at least $\left(c_{f, d, \ell}/\epsilon\right)^{d/r} d$, where the constant $ c_{f, d, \ell}$ depends on the dimension $d$ and might be very large. To compare with our result, we take the filter length $s=\lceil 1+ d/2\rceil$ and depth $J= 4 L$ with $L \in {\mathbb N}$. We know from Theorem B that the same approximation accuracy $ \|f-f^{{\bf w}, {\bf b}}_J \|_{C(\Omega)} \leq \epsilon$ with $0< \epsilon \leq c \left\|F\right\|$ can be achieved by the deep CNN of depth $J= 4 \lceil \frac{1}{\epsilon^2} \log \frac{1}{\epsilon^2}\rceil$ having at most $ \lceil \frac{75}{\epsilon^2} \log \frac{1}{\epsilon^2}\rceil d$ free parameters, which does not depend on the dimension $d$. Though a logarithmic term is involved, this dimension independence gives evidence for the power of deep CNNs. A multi-layer neural network is a sequence of function vectors $h^{(j)}(x)$ satisfying an iterative relation $$ h^{(j)} (x) = \sigma \left(T^{(j)} h^{(j-1)}(x) - b^{(j)}\right), \qquad j=1, 2, \ldots, J. $$ Here $T^{(j)}$ is a full connection matrix without special structures. So a deep CNN is a special multi-layer neural network with sparse convolutional matrices. This sparsity gives difficulty in developing a mathematical theory for deep CNNs, since the techniques in the literature of fully connected shallow or multi-layer neural networks do not apply. Our novelty to overcome the difficulty is to factorize an arbitrary finitely supported sequence into convolutions of filter masks $\{w^{(j)}\}_{j=1}^J$ supported in $\{0, 1, \ldots, s\}$. Our method can be applied to distributed learning algorithms \cite{LGZ, GLZ}. 
Recently there have been quite a few papers \cite{Telgarsky2016, Eldan2016, Yarosky, Coifman, Grohs, Petersen} on approximation and representation of functions by deep neural networks and benefit of depth, but all these results are for fully connected networks without pre-specified structures, not for deep CNNs. In particular, it was shown in \cite{Grohs, Petersen} that the rate of approximaton of some function classes by multi-layer fully connected neural networks may be achieved by networks with sparse connection matrices $T^{(j)}$, but the locations of the sparse connections are unknown. This sparsity of unknown pattern is totally different from that of deep CNNs, the latter enables computing methods like stochastic gradient descent to learn values of the free parameters efficiently. Deep CNNs are often combined with pooling, a small number of fully connected layers, and some other techniques for improving the practical performance of deep learning. Our purpose to analyze purely convolutional networks is to demonstrate that convolution makes full use of shift-invariance properties of speeches and images for extracting data features efficiently. Also, for processing an image, convolutions based on the 2-D lattice ${\mathbb Z}^2$ are implemented by taking inner products of $(s+1) \times (s+1)$ filter matrices with shifted patches of the image. Though we do not consider such deep learning algorithms in this paper, some of our ideas can be used to establish mathematical theories for more general deep neural networks involving convolutions. \bigskip \noindent{\bf Methods} \medskip For approximation in $C(\Omega)$ we can only consider those Sobolev spaces which can be embedded into the space of continuous functions, that is, those spaces with the regularity index $r > \frac{d}{2}$. To establish rates of approximation we require $r>\frac{d}{2} +2$ in Theorem B. In this case, the set $H^r (\Omega)$ is dense in $C(\Omega)$, so Theorem A follows from Theorem B by scaling. \bigskip \noindent{\bf Proof of Theorem B.} Let $J \geq \frac{2d}{s-1}$ and $m$ be the integer part of $\frac{(s-1)J}{d} -1\geq 1$. In our assumption, $f =F|_{\Omega}$ for some function $F \in H^{r} ({\mathbb R}^d)$ with the Fourier transform $\widehat{F}(\omega)$ giving the norm $\left\|F\right\| =\left\|\left(1+ |\omega|^2\right)^{r/2} \widehat{F}(\omega)\right\|_{L^2}$. By the Schwarz inequality and the condition $r>\frac{d}{2} +2$, $v_{F, 2}:= \int_{{\mathbb R}^d} \|\omega\|_1^2 \left|\widehat{F}(\omega)\right| d\omega \leq c_{d, r} \left\|F\right\|$ where $c_{d, r}$ is the finite constant $\left\|\|\omega\|_1^2 \left(1+ |\omega|^2\right)^{-r/2}\right\|_{L^2}$. Then we apply a recent result from \cite{Klusowski2018} on ridge approximation to $F|_{[-1, 1]^d}$ and know that there exists a linear combination of ramp ridge functions of the form $$ F_m (x) = \beta_0 + \alpha_0 \cdot x + \frac{v}{m} \sum_{k=1}^m \beta_k \left(\alpha_k \cdot x - t_k\right)_+ $$ with $\beta_k \in [-1, 1], \|\alpha_k\|_1 =1, t_k \in [0, 1], \beta_0 = F(0), \alpha_0 = \nabla F(0)$ and $|v| \leq 2 v_{F, 2}$ such that $$ \left\|F - F_m\right\|_{C([-1, 1]^d)} \leq c_0 v_{F, 2} \max\left\{\sqrt{\log m}, \sqrt{d}\right\} m^{-\frac{1}{2} - \frac{1}{d}} $$ for some universal constant $c_0>0$. Now we turn to the key step of constructing the filter mask sequence ${\bf w}$. 
Define a sequence $W$ supported in $\{0, \ldots, (m+1) d-1\}$ by stacking the vectors $\alpha_0, \alpha_1, \ldots, \alpha_m$ (with components reversed) by $$\left[W_{(m+1) d -1} \ \ldots \ W_{1} \ W_{0}\right] = \left[\alpha_m^T \ \ldots \ \alpha_1^T \ \alpha_0^T\right].$$ We apply Theorem C to the sequence $W$ with support in $\{0, 1, \ldots, (m+1) d\}$ and find a sequence of filter masks ${\bf w}=\{w^{(j)}\}_{j=1}^{\hat{J}}$ supported in $\{0, 1, \ldots, s\}$ with $\hat{J} < \frac{(m+1) d}{s-1}+1$ such that $W = w^{(\hat{J})}{*}w^{(\hat{J}-1)}{*}\ldots {*}w^{(2)}{*}w^{(1)}.$ The choice of $m$ implies $\frac{(m+1) d}{s-1} \leq J$. So $\hat{J} \leq J$ and by taking $w^{(\hat{J}+1)} = \ldots = w^{(J)}$ to be the delta sequence, we have $W = w^{(J)}{*}w^{(J-1)}{*}\ldots {*}w^{(2)}{*}w^{(1)}.$ This tells us \cite{ZhouAA} that $$T^{(J)} \ldots T^{(1)}= T^{W}$$ where $T^{W}$ is the $d_J \times d$ matrix given by $[W_{\ell -k}]_{\ell=1, \ldots, d_J, k=1, \ldots, d}$. Observe from the definition of the sequence $W$ that for $k =0, 1, \ldots, m$, the $(k+1)d$-th row of $T^{W}$ is exactly the transpose of $\alpha_k$. Also, since $Js \geq (m+1) d$, we have $W_{Js}=0$ and the last row of $T^{W}$ is a zero row. Then we construct ${\bf b}$. Denote $\|w\|_1=\sum_{k=-\infty}^{\infty} |w_k|$, $B^{(0)} =\max_{x\in \Omega} \max_{k=1, \ldots, d} |x_k|$ and $B^{(j)} = \|w^{(j)}\|_1 \ldots \|w^{(1)}\|_1 B^{(0)}$ for $j\geq 1$. Then we have $$\left\|\left(T^{(j)} \ldots T^{(1)} x\right)_k\right\|_{C(\Omega)} \leq B^{(j)}, \qquad \forall k=1, \ldots, d_j.$$ Take $b^{(1)} =- B^{(1)} {\bf 1}_{d_1} :=- B^{(1)} (1, \ldots, 1)^T$, and $$b^{(j)} = B^{(j-1)} T^{(j)} {\bf 1}_{d_{j-1}} - B^{(j)} {\bf 1}_{d_j}, \qquad j=1, \ldots, J-1. $$ Then for $j=1, \ldots, J-1$, we have $$ h^{(j)} (x) = T^{(j)}\ldots T^{(1)} x + B^{(j)} {\bf 1}_{d_j}$$ and $b^{(j)}_{\ell} = B^{(j-1)} \sum_{k=0}^{s} w^{(j)}_{k} - B^{(j)}= b^{(j)}_{s+1}$ for $\ell=s+1, \ldots, d_j -s.$ Hence the bias vectors are of the required form. Finally, we take the bias vector $b^{(J)}$ by setting $b^{(J)}_{\ell}$ to be $$ \left\{\begin{array}{ll} B^{(J-1)} (T^{(J)} {\bf 1}_{d_{J-1}})_{\ell} - B^{(J)}, & \hbox{if} \ \ell =d, d + Js, \\ B^{(J-1)} (T^{(J)} {\bf 1}_{d_{J-1}})_{\ell} + t_k, & \hbox{if} \ \ell =(k+1)d, \ 1 \leq k \leq m, \\ B^{(J-1)}(T^{(J)} {\bf 1}_{d_{J-1}})_{\ell} + B^{(J)}, & \hbox{otherwise.} \end{array}\right. $$ Substituting this bias vector and the expression for $h^{(J-1)} (x)$ into the iterative relation of the deep CNN, we see from the identity $T^{(J)} \ldots T^{(1)}= T^{W}$ and the definition of the sequence $W$ that the $\ell$-th component $h^{(J)}_{\ell} (x)$ of $h^{(J)} (x)$ equals $$ \left\{\begin{array}{ll} \alpha_0 \cdot x + B^{(J)}, & \hbox{if} \ \ell =d, \\ B^{(J)}, & \hbox{if} \ \ell =d +Js, \\ \left(\alpha_k \cdot x - t_k\right)_+, & \hbox{if} \ \ell =(k+1)d, \ 1 \leq k \leq m, \\ 0, & \hbox{otherwise.} \end{array}\right. $$ Thus, we can take $f^{{\bf w}, {\bf b}}_J = F_m|_{\Omega} \in \hbox{span}\{h^{(J)}_{k} (x)\}_{k=1}^{d_J} ={\mathcal H}^{{\bf w}, {\bf b}}_J$ and know that the error $\|f-f^{{\bf w}, {\bf b}}_J \|_{C(\Omega)} \leq \left\|F - F_m\right\|_{C([-1, 1]^d)}$ can be bounded as $$\|f-f^{{\bf w}, {\bf b}}_J \|_{C(\Omega)} \leq c_0 v_{F, 2} \max\left\{\sqrt{\log m}, \sqrt{d}\right\} m^{-\frac{1}{2} - \frac{1}{d}}. $$ But $\frac{1}{2} (s-1)J \leq m d < (s-1)J$ and $2 r -d -4 \geq 1$. 
By a polar coordinate transformation, $c_{d, r} d^{1 + \frac{1}{d}} \leq \sqrt{\frac{d^6 \pi^{d/2}}{\Gamma (\frac{d}{2} +1)}} \left(1 + \frac{1}{\sqrt{2 r -d -4}}\right)$ which can be bounded by an absolute constant $c' :=\max_{\ell\in {\mathbb N}} 2\sqrt{\ell^6 \pi^{\ell/2}/\Gamma (\frac{\ell}{2} +1)}$. Therefore, $$\|f-f^{{\bf w}, {\bf b}}_J \|_{C(\Omega)} \leq 2 c_0 c' \left\|F\right\| \sqrt{\log J} J^{-\frac{1}{2} - \frac{1}{d}}. $$ This proves Theorem B by taking $c = 2 c_0 c'$. \bigskip Convolutional factorizations have been considered in our recent work \cite{ZhouAA} for sequences $W$ supported in $\{0, 1, \ldots, S\}$ with $S \geq d$ under the special restrictions $W_0>0$ and $W_{S}\not= 0$. Theorem C gives a more general result by improving the bound for $J$ in \cite{ZhouAA} and removing the special restrictions on $W_0$ and $W_S$. \bigskip \noindent{\bf Proof of Theorem C.} We apply a useful concept from the literature of wavelets \cite{Daubechies}, the symbol $\widetilde{w}$ of a sequence $w$ finitely supported in the set of nonnegative integers, defined as a polynomial on ${\mathbb C}$ by $\widetilde{w} (z) = \sum_{k=0}^\infty w_k z^k$. The symbol of the convoluted sequence $a{*}b$ is given by $\widetilde{a{*}b} (z) = \widetilde{a} (z)\widetilde{b} (z)$. Notice that the symbol $\widetilde{W}$ of the sequence $W$ supported in $\{0, \ldots, {\mathcal M}\}$ is a polynomial of degree $M$ with real coefficients for some $0 \leq M \leq {\mathcal M}$. So we know that complex roots $z_k =x_k + i y_k$ of $\widetilde{W}$ with $x_k \not=0$ appear in pairs and by $(z-z_k)(z- \overline{z_k}) =z^2 - 2 x_k z + \left(x_k^2 + y_k^2\right)$, the polynomial $\widetilde{W} (z)$ can be completely factorized as $$ \widetilde{W} (z) =W_{M} \Pi_{k=1}^K \left\{z^2 - 2 x_k z + \left(x_k^2 + y_k^2\right)\right\} \Pi_{k=2K+1}^{M} (z-x_k), $$ where $2K$ is the number of complex roots with multiplicity, and $M - 2 K$ is the number of real roots with multiplicity. By taking groups of up to $s/2$ quadratic factors (or $(s-1)/2$ quadratic factors with a linear factor) and $s$ linear factors in the above factorization, we get $\widetilde{W} (z) = \widetilde{w^{(J)}}(z) \ldots \widetilde{w^{(2)}}(z) \widetilde{w^{(1)}}(z)$, a factorization of $\widetilde{W}$ into polynomials of degree up to $s$, which yields a desired convolutional factorization $W = w^{(J)}{*}w^{(J-1)}{*}\ldots {*}w^{(2)}{*}w^{(1)}$ and proves Theorem C. \section*{Acknowledgments} The author would like to thank Gilbert Strang and Steve Smale for their detailed suggestions and encouragement. The work described in this paper is supported partially by the Research Grants Council of Hong Kong [Project No CityU 11306617] and by National Nature Science Foundation of China [Grant No 11461161006]. \bibliographystyle{abbrvnat}
\section{Introduction}

\subsection{Background}

Let $\mathbb{D} \subset \mathbb{C}$ denote the open unit disc and let \begin{equation*} H^2 = \Big\{ f \in \mathcal{O}(\mathbb{D}) : \sup_{0 \le r < 1} \int_{0}^{2 \pi} |f (r e^{ i t})|^2 \, dt < \infty \Big \} \end{equation*} be the classical Hardy space. It is well known that $H^2$ can be identified with the closed subspace of all functions in $L^2(\partial \mathbb{D})$ whose negative Fourier coefficients vanish. Correspondingly, subsets of $\partial \mathbb{D}$ of linear Lebesgue measure zero frequently play the role of small or negligible sets in the theory of $H^2$ and related spaces. For instance, a classical theorem of Fatou shows that every function in $H^2$ has radial limits outside of a subset of $\partial \mathbb{D}$ of Lebesgue measure zero; see for instance \cite[Chapter 3]{Hoffman62}. For the disc algebra \begin{equation*} A(\mathbb{D}) = \{ f \in C (\overline{\mathbb{D}}): f \big|_{\mathbb{D}} \in \mathcal{O}(\mathbb{D}) \}, \end{equation*} the Rudin--Carleson theorem shows that every compact set $E \subset \partial \mathbb{D}$ of Lebesgue measure zero is an interpolation set for $A(\mathbb{D})$, meaning that for each $g \in C(E)$, there exists $f \in A(\mathbb{D})$ with $f \big|_E = g$. In fact, one can achieve that $|f(z)| < \|g\|_\infty$ for $z \in \overline{\mathbb{D}} \setminus E$ (provided that $g$ is not identically zero); this is called peak interpolation. In particular, there exists $f \in A(\mathbb{D})$ with $f \big|_E = 1$ and $|f(z)| < 1$ for $z \in \overline{\mathbb{D}} \setminus E$, meaning that $E$ is a peak set for $A(\mathbb{D})$. Conversely, every peak set and every interpolation set has Lebesgue measure zero. For background on this material, see \cite[Chapter II]{Gamelin69}.
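As a simple illustration, for the singleton $E = \{1\}$ the function $f(z) = \frac{1}{2}(1+z)$ lies in $A(\mathbb{D})$ and satisfies $f(1) = 1$ and $|f(z)| < 1$ for all $z \in \overline{\mathbb{D}} \setminus \{1\}$, since equality in $|1 + z| \le 1 + |z| \le 2$ forces $z = 1$. Thus $\{1\}$ is a peak set for $A(\mathbb{D})$, in accordance with the Rudin--Carleson theorem.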
In the theory of the classical Dirichlet space \begin{equation*} \mathcal{D} = \Big\{ f \in \mathcal{O}(\mathbb{D}): \int_{\mathbb{D}} |f'|^2 \, dA < \infty \Big\}, \end{equation*} a frequently used notion of smallness of subsets of $\partial \mathbb{D}$ is that of having logarithmic capacity zero; see \cite[Chapter II]{EKM+14} for an introduction. This notion is particularly important in the potential theoretic approach to the Dirichlet space. We will review the definition in Section \ref{sec:prelim}. A theorem of Beurling shows that every function in $\mathcal{D}$ has radial limits outside of a subset of $\partial \mathbb{D}$ of (outer) logarithmic capacity zero; see \cite[Section 3.2]{EKM+14}. In the context of boundary interpolation, Peller and Khrushch\"{e}v \cite{PK82} showed that a compact set $E \subset \partial \mathbb{D}$ is an interpolation set for $A(\mathbb{D}) \cap \mathcal{D}$ if and only if $E$ has logarithmic capacity zero. Many of these considerations have been extended to standard weighted Dirichlet spaces and their associated capacities, and more generally to Hardy--Sobolev spaces on the Euclidean unit ball $\mathbb{B}_d$ of $\mathbb{C}^d$ by Cohn \cite{Cohn89} and by Cohn and Verbitsky \cite{CV95}.

The Hardy space $H^2$, the Dirichlet space $\mathcal{D}$ and more generally Hardy--Sobolev spaces on the ball belong to a large class of reproducing kernel Hilbert spaces of holomorphic functions on the ball, called regular unitarily invariant spaces. We will recall the precise definition in Section \ref{sec:prelim}.

In studying regular unitarily invariant spaces $\mathcal{H}$ and especially their multipliers, a functional analytic smallness condition of subsets of $\partial \mathbb{B}_d$ has proved to be very useful in recent years. This smallness condition has its roots in the study of the ball algebra \[ A(\mathbb B_d) = \{ f \in C( \overline{\mathbb B_d}) : f \big|_{\mathbb B_d} \in \mathcal O(\mathbb B_d) \} \] as explained in Rudin's book \cite[Chapter 10]{Rudin08}. We let \begin{equation*} \Mult(\mathcal{H}) = \{ \varphi: \mathbb{B}_d \to \mathbb{C}: \varphi \cdot f \in \mathcal{H} \text{ whenever } f \in \mathcal{H} \} \end{equation*} denote the multiplier algebra of $\mathcal{H}$. If $\varphi \in \Mult(\mathcal{H})$, its multiplier norm $\|\varphi\|_{\Mult(\mathcal{H})}$ is the norm of the multiplication operator $f \mapsto \varphi \cdot f$ on $\mathcal{H}$. A complex regular Borel measure $\mu$ on $\partial \mathbb{B}_d$ is said to be \emph{$\Mult(\mathcal{H})$-Henkin} if whenever $(p_n)$ is a sequence of polynomials satisfying $\|p_n\|_{\Mult(\mathcal{H})} \le 1$ for all $n \in \mathbb{N}$ and $\lim_{n \to \infty} p_n(z) = 0$ for all $z \in \mathbb{B}_d$, we have \begin{equation*} \lim_{n \to \infty} \int_{\partial \mathbb{B}_d} p_n \, d \mu = 0. \end{equation*} A Borel subset $E \subset \partial \mathbb{B}_d$ is said to be \emph{$\Mult(\mathcal{H})$-totally null} if $|\mu|(E) = 0$ for all $\Mult(\mathcal{H})$-Henkin measures $\mu$. The Henkin condition can be rephrased in terms of a weak-$*$ continuity property; see Section \ref{sec:prelim} for this reformulation and for more background. In the case of $H^2$, a measure is Henkin if and only if it is absolutely continuous with respect to Lebesgue measure on $\partial \mathbb{D}$. Hence the totally null sets are simply the sets of Lebesgue measure $0$. Beyond the ball algebra, these notions were first studied by Clou\^atre and Davidson for the Drury--Arveson space \cite{CD16}, and then for more general regular unitarily invariant spaces by Bickel, M\textsuperscript{c}Carthy and the second named author \cite{BHM17}. Just as in the case of the ball algebra, Henkin measures and totally null sets appear naturally when studying the dual space of algebras of multipliers \cite{CD16,DH20}, ideals of multipliers \cite{CD18,DH20}, functional calculi \cite{BHM17,CD16a}, and peak interpolation problems for multipliers \cite{CD16,DH20}.

\subsection{Main results}

In this article, we will compare the functional analytic notion of being totally null with the potential theoretic notion of having capacity zero. As was pointed out in \cite{DH20}, for the Dirichlet space $\mathcal{D}$, the energy characterization of logarithmic capacity easily implies that every compact subset of $\partial \mathbb{D}$ that is $\Mult(\mathcal{D})$-totally null necessarily has logarithmic capacity zero. We will show that for Hardy--Sobolev spaces on the ball, including the Dirichlet space on the disc, the two notions of smallness in fact agree. To state the result, let us recall the definition of Hardy--Sobolev spaces (also known as Besov--Sobolev spaces) on the ball. Let $\sigma$ denote the normalized surface measure on $\partial \bB_d$ and let \[ H^2(\mathbb B_d) = \Big\{f \in \cO(\bB_d) : \sup_{0 \le r < 1} \int_{\partial \bB_d} |f( r \zeta)|^2 d \sigma(\zeta) < \infty \Big\} \] be the Hardy space on the unit ball. Let $s \in \bR$.
If $f \in \mathcal{O}(\mathbb{B}_d)$ has a homogeneous decomposition $ f= \sum_{n=0}^\infty f_n$, we let \begin{equation*} \|f\|_s^2 = \sum_{n=0}^\infty (n+1)^{2 s} \|f_n\|^2_{H^2(\mathbb B_d)} \end{equation*} and define \begin{equation*} \mathcal{H}_s = \{ f \in \mathcal{O}(\mathbb{B}_d): \|f\|_s < \infty \}. \end{equation*} Thus, if $R^s f = \sum_{n=1}^\infty n^s f_n$ denotes the fractional radial derivative, then \begin{equation*} \mathcal{H}_s = \{f \in \mathcal{O}(\mathbb{B}_d): R^s f \in H^2(\mathbb B_d) \}. \end{equation*} There are also natural $L^p$-versions of these spaces, but we will exclusively work in the Hilbert space setting. If $s < 0$, then $\mathcal{H}_s$ is a weighted Bergman space on the ball, and clearly $\mathcal{H}_0 = H^2(\mathbb B_d)$. If $d=1$, then $\mathcal{H}_{1/2} = \mathcal{D}$, the classical Dirichlet space on the disc, and if $0 < s < \frac{1}{2}$, then the spaces $\mathcal{H}_s$ are the standard weighted Dirichlet spaces on the disc. For $d \ge 1$, the space $\mathcal{H}_{d/2}$ is sometimes called the Dirichlet space on the ball, and the spaces $\mathcal{H}_{s}$ for $ \frac{d-1}{2} < s < \frac{d}{2}$ are multivariable versions of the standard weighted Dirichlet spaces on the disc. If $s > \frac{d}{2}$, then every function in $\mathcal{H}_s$ extends to a continuous function on $\ol{\mathbb{B}_d}$. In the range $s \le \frac{d}{2}$, there is a different description of $\mathcal{H}_s$, which is also sometimes used as the definition. For $a > 0$, let \begin{equation*} K_a(z,w) = \frac{1}{(1 - \langle z,w \rangle)^a } \end{equation*} and \begin{equation*} K_0(z,w) = \log \Big( \frac{e}{1 - \langle z,w \rangle} \Big), \end{equation*} and let $\mathcal{D}_a$ denote the reproducing kernel Hilbert space on $\mathbb{B}_d$ with kernel $K_a$. It is well known that if $a = d - 2s$, then $\mathcal{D}_a = \mathcal{H}_s$, with equivalence of norms. (This follows by expanding $K_a$ in a power series and comparing the coefficients with $\|z_1^n\|_{\mathcal H_s}^{-2}$ with the help of Stirling's formula and the known formula for $\|z_1^n\|_{H^2(\mathbb B_d)}$; see \cite[Proposition 1.4.9]{Rudin08}.) The space $\mathcal{D}_1$ is usually called the \emph{Drury--Arveson space $H^2_d$} and plays a key role in multivariable operator theory \cite{Arveson98} and in the theory of complete Pick spaces \cite{AM02}. For more background on these spaces, we refer the reader to \cite{ZZ08}. For each of the spaces $\mathcal{H}_s$ for $\frac{d-1}{2} < s \le \frac{d}{2}$, there is a natural notion of (non-isotropic Bessel) capacity $C_{s,2}(\cdot)$, introduced by Ahern and Cohn \cite{AC89}. Equivalently, for the spaces $\mathcal D_a$, there is a notion of capacity that can be defined in terms of the reproducing kernel of $\mathcal D_a$. We will review these definitions and show their equivalence in Section \ref{sec:prelim}. Our main result concerning the Hardy--Sobolev spaces $\mathcal H_s$ is the following. \begin{thm} \label{thm:main_dirichlet} Let $d \in \mathbb{N}$ and let $\frac{d-1}{2} < s \le \frac{d}{2}$. A compact subset $E \subset \partial \mathbb{B}_d$ is $\Mult(\mathcal{H}_s)$-totally null if and only if $C_{s,2}(E) = 0$. \end{thm} In particular, taking $d=1$ and $s=\frac{1}{2}$, we see that in the context of the classical Dirichlet space $\mathcal D$, a compact subset $E \subset \partial \mathbb D$ is $\Mult(\mathcal D)$-totally null if and only if it has logarithmic capacity zero.
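To make the identification $\mathcal{H}_{1/2} = \mathcal{D}$ for $d = 1$ explicit, note that if $f = \sum_{n=0}^\infty a_n z^n \in \mathcal{O}(\mathbb{D})$, then $\|f_n\|_{H^2} = |a_n|$, so \begin{equation*} \|f\|_{1/2}^2 = \sum_{n=0}^\infty (n+1) |a_n|^2 = \sum_{n=0}^\infty |a_n|^2 + \sum_{n=1}^\infty n |a_n|^2 = \|f\|_{H^2}^2 + \frac{1}{\pi} \int_{\mathbb{D}} |f'|^2 \, dA, \end{equation*} which is comparable to the Dirichlet space norm.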
A direct proof of Theorem \ref{thm:main_dirichlet} will be provided in Section \ref{sec:Dirichlet}. Moreover, we will prove an abstract result about totally null sets, which, in combination with work on exceptional sets by Ahern and Cohn \cite{AC89} and by Cohn and Verbitsky \cite{CV95}, will yield a second proof of Theorem \ref{thm:main_dirichlet}. This result applies to some spaces that are not covered by Theorem \ref{thm:main_dirichlet}, such as the Drury--Arveson space. It is possible to interpret the capacity zero condition as a condition involving the reproducing kernel Hilbert space $\mathcal{H}$ (cf.\ Proposition \ref{prop:energy_fa} below), whereas the totally null condition is a condition on the multiplier algebra $\Mult(\mathcal{H})$. Complete Pick spaces form a class of spaces in which it is frequently possible to go back and forth between $\mathcal{H}$ and $\Mult(\mathcal{H})$; see the book \cite{AM02} and Section \ref{sec:prelim} of the present article for more background. For now, let us simply mention that the spaces $\mathcal{D}_a$ for $0 \le a \le 1$ are complete Pick spaces. (For $a=0$, one needs to pass to a suitable equivalent norm.) If $\cH$ is a reproducing kernel Hilbert space on $\mathbb B_d$, let us say that a compact subset $E \subset \partial \mathbb{B}_d$ is an \emph{unboundedness set for $\mathcal{H}$} if there exists $f \in \mathcal{H}$ so that $\lim_{r \nearrow 1} |f(r \zeta)| = \infty$ for all $\zeta \in E$. The following result covers the spaces in Theorem \ref{thm:main_dirichlet}, but it also applies, for example, to the Drury--Arveson space, which corresponds to the end point $s = \frac{d-1}{2}$. \begin{thm} \label{thm:main_CNP} Let $\mathcal{H}$ be a regular unitarily invariant complete Pick space on $\mathbb{B}_d$. A compact set $E \subset \partial \mathbb{B}_d$ is an unboundedness set for $\mathcal{H}$ if and only if $E$ is $\Mult(\mathcal{H})$-totally null. \end{thm} A refinement of this result will be proved in Section \ref{sec:main_CNP}. The results of Ahern and Cohn \cite{AC89} and of Cohn and Verbitsky \cite{CV95} on exceptional sets show that in the case of the spaces $\mathcal{H}_s$ for $\frac{d-1}{2} < s \le \frac{d}{2}$, a compact subset $E \subset \partial \mathbb B_d$ is an unboundedness set for $\mathcal H_s$ if and only if $C_{s,2}(E) = 0$. Indeed, the ``only if'' part follows from \cite[Theorem B]{AC89}, and the ``if'' part is contained in the construction on p.\ 443 of \cite{AC89}; see also \cite[p. 94]{CV95}. Thus, we obtain another proof of Theorem \ref{thm:main_dirichlet}.

\subsection{Applications}

We close the introduction by mentioning some applications of Theorem \ref{thm:main_dirichlet}. The first application concerns peak interpolation. Extending the work of Peller and Khrushch\"{e}v \cite{PK82} on boundary interpolation in the Dirichlet space, Cohn and Verbitsky \cite[Theorem 3]{CV95} showed that every compact subset $E \subset \partial \mathbb{B}_d$ with $C_{s,2}(E) = 0$ is a strong boundary interpolation set for $\mathcal{H}_s \cap A(\mathbb{B}_d)$. This means that for every $g \in C(E)$, there exists $f \in \mathcal{H}_s \cap A(\mathbb{B}_d)$ with $f \big|_E = g$ and $\max( \|f\|_{\mathcal{H}_s}, \|f\|_{A(\mathbb{B}_d)}) \le \|g\|_{C(E)}$. Combining Theorem \ref{thm:main_dirichlet} with a peak interpolation result for totally null sets of Davidson and the second named author \cite{DH20}, we can strengthen the result of Cohn and Verbitsky in two ways.
Firstly, we replace $\mathcal{H}_s \cap A(\mathbb{B}_d)$ with the smaller space $A(\mathcal{H}_s)$, which is defined to be the multiplier norm closure of the polynomials in $\Mult(\mathcal{H}_s)$. Thus, \begin{equation*} A(\mathcal{H}_s) \subset \Mult(\mathcal{H}_s) \cap A(\mathbb{B}_d) \subset \mathcal{H}_s \cap A(\mathbb{B}_d) \end{equation*} with contractive inclusions. Secondly, we obtain a strict pointwise inequality off of $E$. \begin{thm} \label{thm:peak_interpolation} Let $d \in \mathbb{N}$, let $\frac{d-1}{2} < s \le \frac{d}{2}$ and let $E \subset \partial \mathbb{B}_d$ be compact with $C_{s,2}(E) = 0$. Then for each $g \in C(E) \setminus \{0\}$, there exists $f \in A(\mathcal{H}_s)$ so that \begin{enumerate} \item $f \big|_E = g$, \item $|f(z)| < \|g\|_\infty$ for every $z \in \overline{\mathbb{B}_d} \setminus E$, and \item $\|f\|_{\Mult(\mathcal{H}_s)} = \|g\|_\infty$. \end{enumerate} \end{thm} \begin{proof} According to \cite[Theorem 1.4]{DH20}, the conclusion holds when $\mathcal{H}_s$ is replaced with any regular unitarily invariant space $\mathcal{H}$ and $E$ is $\Mult(\mathcal{H})$-totally null. Combined with Theorem \ref{thm:main_dirichlet}, the result follows. \end{proof} In fact, in the setting of Theorem \ref{thm:peak_interpolation}, there exists an isometric linear operator $L: C(E) \to A(\mathcal H_s)$ of peak interpolation; see \cite[Theorem 8.3]{DH20}. In a similar fashion, one can now apply other results of \cite{DH20} in the context of the spaces $\mathcal{H}_s$, replacing the totally null condition with the capacity zero condition. In particular, this yields a joint Pick and peak interpolation result (cf.\ \cite[Theorem 1.5]{DH20}) and a result about boundary interpolation in the context of interpolation sequences (cf.\ \cite[Theorem 6.6]{DH20}).

Our second application concerns cyclic functions. Recall that a function $f \in \mathcal{H}_s$ is said to be \emph{cyclic} if the space of polynomial multiples of $f$ is dense in $\mathcal{H}_s$. It is a theorem of Brown and Cohn \cite{BC85} that if $E \subset \partial \mathbb{D}$ has logarithmic capacity zero, then there exists a function $f \in \mathcal{D} \cap A(\mathbb{D})$ that is cyclic for $\mathcal{D}$ so that $f \big|_E = 0$; see also \cite{EL19} for an extension to other Dirichlet type spaces on the disc. The following result extends the theorem of Brown and Cohn to the spaces $\mathcal{H}_s$ on the ball, and moreover achieves that $f \in A(\mathcal{H}_s)$, so in particular, $f$ is a multiplier. \begin{cor} Let $d \in \mathbb{N}$, let $\frac{d-1}{2} < s \le \frac{d}{2}$ and let $E \subset \partial \mathbb{B}_d$ be compact with $C_{s,2}(E)= 0$. Then there exists $f \in A(\mathcal{H}_s)$ that is cyclic for $\mathcal H_s$ so that $E = \{z \in \overline{\mathbb{B}_d}: f(z) = 0 \}$. \end{cor} \begin{proof} Applying Theorem \ref{thm:peak_interpolation} to the constant function $g = 1$, we find $h \in A(\mathcal{H}_s)$ so that $h \big|_E = 1, |h(z)| < 1$ for $z \in \overline{\mathbb{B}_d} \setminus E$ and $\|h\|_{\Mult(\mathcal{H}_s)} = 1$. Set $f = 1 - h$. Clearly, $f$ vanishes precisely on $E$. The fact that $\|h\|_{\Mult(\mathcal{H}_s)} \le 1$ easily implies that $f$ is cyclic; see for instance \cite[Lemma 2.3]{AHM+17a} and its proof. \end{proof}

\section{Preliminaries} \label{sec:prelim}

\subsection{Regular unitarily invariant spaces and totally null sets}

Throughout, let $d \in \mathbb N$.
A \emph{regular unitarily invariant space} is a reproducing kernel Hilbert space $\mathcal{H}$ on $\mathbb{B}_d$ whose reproducing kernel is of the form \begin{equation} \label{eqn:unit_inv_kernel} K(z,w) = \sum_{n=0}^\infty a_n \langle z,w \rangle^n, \end{equation} where $a_0 = 1$, $a_n > 0$ for all $n \in \mathbb{N}$ and $\lim_{n \to \infty} \frac{a_n}{a_{n+1}} = 1$. We think of the last condition as a regularity condition, as it is natural to assume that the power series defining $K$ has radius of convergence $1$, since $\mathcal{H}$ is a space of functions on the ball of radius $1$. Under this assumption, the limit, if it exists, necessarily equals $1$. We recover $H^2$ and $\mathcal{D}$ by choosing $d=1$ and $a_n = 1$, respectively $a_n = \frac{1}{n+1}$, for all $n \in \mathbb{N}$. Expanding the reproducing kernels of $\cD_a$ into a power series, one easily checks that $\cD_a$ is a regular unitarily invariant space for all $a \ge 0$. While the class of regular unitarily invariant spaces is not stable under passing to an equivalent norm, one can also check that the spaces $\cH_s$ are regular unitarily invariant spaces for all $s \in \bR$. Indeed, each space $\mathcal H_s$ has a reproducing kernel as in \eqref{eqn:unit_inv_kernel}, where \begin{equation*} a_n = \|z_1^n\|_{\mathcal H_s}^{-2}. \end{equation*} More background on these spaces can be found in \cite{BHM17,DH20,GHX04}. Let $\mathcal{H}$ be a regular unitarily invariant space. We let $\Mult(\mathcal{H})$ denote the multiplier algebra of $\mathcal{H}$. Identifying a multiplier $\varphi$ with the corresponding multiplication operator on $\mathcal{H}$, we can regard $\Mult(\mathcal{H})$ as a WOT closed subalgebra of $\mathcal{B}(\mathcal{H})$, the algebra of all bounded linear operators on $\mathcal{H}$. By trace duality, $\Mult(\mathcal{H})$ becomes a dual space in this way, and hence is equipped with a weak-$*$ topology. The density of the linear span of kernel functions in $\mathcal{H}$ implies that on bounded subsets of $\Mult(\mathcal{H})$, the weak-$*$ topology agrees with the topology of pointwise convergence on $\mathbb{B}_d$. In a few places, we will use the following basic and well known fact, which we state as a lemma for easier reference. For a proof, see for instance \cite[Lemma 2.2]{DH20}. \begin{lem} \label{lem:dilations_convergence} Let $\mathcal{H}$ be a regular unitarily invariant space and let $\varphi \in \Mult(\mathcal{H})$. Let $\varphi_r(z) = \varphi(r z)$ for $0 \le r \le 1$ and $z \in \mathbb{B}_d$. Then $\|\varphi_r\|_{\Mult(\mathcal{H})} \le \|\varphi\|_{\Mult(\mathcal{H})}$ for all $0 \le r \le 1$ and $\lim_{r \nearrow 1} \varphi_r = \varphi$ in the weak-$*$ topology of $\Mult(\mathcal{H})$. \end{lem} Let $M(\partial \mathbb{B}_d)$ be the space of complex regular Borel measures on $\partial \mathbb{B}_d$. \begin{defn} \label{defn:Henkin_TN} Let $\mathcal{H}$ be a regular unitarily invariant space. \begin{enumerate}[label=\normalfont{(\alph*)}] \item A measure $\mu \in M( \partial \mathbb{B}_d)$ is said to be $\Mult(\mathcal{H})$-Henkin if the functional \begin{equation*} \Mult(\mathcal{H}) \supset \mathbb{C}[z_1,\ldots,z_d] \to \mathbb{C}, \quad p \mapsto \int_{\partial \mathbb{B}_d} p \, d \mu, \end{equation*} extends to a weak-$*$ continuous functional on $\Mult(\mathcal{H})$. \item A Borel subset $E \subset \partial \mathbb{B}_d$ is said to be $\Mult(\mathcal{H})$-totally null if $|\mu|(E) = 0$ for all $\Mult(\mathcal{H})$-Henkin measures $\mu$. 
\end{enumerate} \end{defn} By \cite[Lemma 3.1]{BHM17}, the definition of Henkin measure given here is equivalent to the one given in the introduction in terms of sequences of polynomials converging pointwise to zero. The set of $\Mult(\mathcal{H})$-Henkin measures forms a band (see \cite[Lemma 3.3]{BHM17}), meaning in particular that $\mu$ is $\Mult(\mathcal{H})$-Henkin if and only if $|\mu|$ is Henkin. This band property implies that a compact set $E$ is $\Mult(\mathcal{H})$-totally null if and only if $\mu(E) = 0$ for every positive $\Mult(\mathcal{H})$-Henkin measure $\mu$ that is supported on $E$; see \cite[Lemma 2.5]{DH20}. Finally, we require the notion of a \emph{complete Pick space}. Complete Pick spaces are reproducing kernel Hilbert spaces that are defined in terms of an interpolation condition for multipliers; see the book \cite{AM02} for more background. In the context of regular unitarily invariant spaces, there is a concrete characterization in terms of the reproducing kernel. If the reproducing kernel of $\mathcal{H}$ is $K(z,w) = \sum_{n=0}^\infty a_n \langle z,w \rangle^n$, then $\mathcal{H}$ is a complete Pick space if and only if the sequence $(b_n)_{n=1}^\infty$ defined by the power series identity \begin{equation*} \sum_{n=1}^\infty b_n t^n = 1 - \frac{1}{\sum_{n=0}^\infty a_n t^n} \end{equation*} satisfies $b_n \ge 0$ for all $n \in \mathbb{N}$ (this is a straightforward generalization of \cite[Theorem 7.33]{AM02}). In particular, the spaces $\mathcal{D}_a$ are complete Pick spaces in the range $0 \le a \le 1$; cf.\ \cite[Lemma 7.38]{AM02}. (For $a=0$, one needs to pass to the equivalent norm induced by the reproducing kernel $\frac{1}{\langle z, w \rangle} \log\big( \frac{1}{1 - \langle z,w \rangle} \big)$.)

\subsection{Capacity}

When defining capacity for (compact) sets $E \subset \partial \bB_d$ induced by the Hardy--Sobolev spaces $\cH_s$, there are at least two possible approaches. Each one can be viewed as natural depending on the perspective, and in fact we require both approaches in the proof of Theorem \ref{thm:main_dirichlet}, one for each implication. The two definitions turn out to be equivalent in the sense that the capacities defined are comparable with absolute constants. In particular, capacity zero sets coincide in both senses. We shall briefly discuss the equivalence of the two definitions. The first definition, introduced in \cite[p. 489]{AC89}, is motivated by the fact that the spaces $\cH_s$ can be understood as potential spaces and fits into the framework of the general potential theory of Adams and Hedberg \cite{Adams1996}. Let $M^+(\partial \mathbb{B}_d)$ denote the set of positive regular Borel measures on $\partial \mathbb{B}_d$. We let $\sigma$ be the normalized surface measure on $\partial \mathbb B_d$. If $E \subset \partial \mathbb B_d$ is compact, let $M^+(E)$ be the set of all measures in $M^+(\partial \mathbb B_d)$ that are supported on $E$. For $0 \le s < d$, consider the kernel \[ k_s(z,w) = \frac{1}{|1 - \langle z,w \rangle|^{d-s}} \quad (z,w \in \overline{\bB_d}) \] and also set \[ k_d(z,w) = \log \frac{e}{|1 - \langle z,w \rangle|} \quad (z,w \in \overline{\bB_d}). \]
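In the one-dimensional case these kernels are classical: for $d = 1$ and boundary points $z = e^{i\theta}$, $w = e^{i\varphi}$, one has $|1 - \langle z,w \rangle| = |1 - e^{i(\varphi - \theta)}| = 2 \big|\sin \frac{\varphi - \theta}{2}\big|$, so $k_1(z,w)$ differs from the logarithmic kernel $\log \frac{1}{|\theta - \varphi|}$ by a bounded amount for nearby points; this is the kernel underlying logarithmic capacity on $\partial \mathbb{D}$.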
\begin{defn} Let $0 \le s \le d$, let $\mu \in M^+(\partial \mathbb B_d)$ and let $E \subset \partial \mathbb B_d$ be compact. \begin{enumerate}[label=\normalfont{(\alph*)}] \item The \emph{non-isotropic Riesz potential} of $\mu$ is \[ \cI_s(\mu)(z) = \int_{\partial \bB_d} k_s(z,w) d \mu(w) \quad (z \in \partial \mathbb B_d).\] We extend the definition to non-negative measurable functions $f \in L^1(\partial \bB_d, d\sigma ) $ by letting $\cI_s(f) = \cI_s(f \, d\sigma )$. \item The \emph{non-isotropic Bessel capacity} of $E$ is defined by \[ C_{s,2}(E)= \inf \{ \norm f ^2_{L^2(\partial \bB_d , d\sigma )} : \cI_s(f) \geq 1 \,\,\, \text{on} \,\,\, E, f \geq 0 \}. \] \item The quantity $\|\cI_s(\mu)\|_{L^2(\partial \bB_d, d \sigma)}^2 \in [0,\infty]$ is called the \emph{energy} of $\mu$. \end{enumerate} \end{defn} By \cite[Theorem 2.5.1]{Adams1996}, we have the following ``dual'' expression for the capacity $C_{s,2}(\cdot)$: \begin{equation} \label{eqn:dual} C_{s,2}(E)^{1/2} = \sup \{ \mu(E): \mu \in M^+(E), \,\,\, \norm { \cI_s(\mu) }_{L^2(\partial \bB_d, d\sigma)} \leq 1 \}. \end{equation} In particular, $C_{s,2}(E) > 0$ if and only if $E$ supports a probability measure of finite energy. A different approach, which can be justified by regarding $\cH_s$ as a reproducing kernel Hilbert space, is the following; cf.\ \cite[Chapter 2]{EKM+14}. Recall that if $a=d-2s$, then $\mathcal H_s = \mathcal D_a$ with equivalent norms. Moreover, we have $k_{2s} = |K_a|$ if $a > 0$, while $k_{2s} \approx |K_0|$ if $a = 0$. \begin{defn} Let $\frac{d-1}{2} < s \le \frac{d}{2}$, let $a= d- 2 s$, let $\mu \in M^+(\partial \mathbb{B}_d)$ and let $E \subset \partial \mathbb{B}_d$ be compact. \begin{enumerate}[label=\normalfont{(\alph*)}] \item The $\mathcal D_a$-\emph{potential} of $\mu$ is \begin{equation*} \cI_{2s}(\mu)(z) = \int_{\partial \mathbb B_d} |K_a(z,w)| \, d \mu (w). \end{equation*} \item The $\mathcal D_a$-\emph{energy} of $\mu$ is defined by \begin{equation*} \mathcal{E}(\mu, \mathcal D_a) = \int_{\partial \bB_d} \int_{\partial \bB_d} |K_a(z,w)| d \mu(z) d \mu(w). \end{equation*} \item The $\mathcal D_a$-\emph{capacity} of $E$ is defined by \begin{equation*} \capac{E}{\mathcal D_a}^{1/2} = \sup \{ \mu(E): \mu \in M^+(E) , \,\, \mathcal{E}(\mu, \mathcal D_a) \leq 1 \}. \end{equation*} \end{enumerate} \end{defn} As for the Bessel capacity, $\capac{E}{\mathcal{D}_a}> 0$ if and only if $E$ supports a probability measure of finite energy. The formula $d(z,w) = | 1 - \langle z, w \rangle|^{1/2}$ defines a metric on $\partial \mathbb B_d$, called the \emph{Koranyi metric}; see \cite[Proposition 5.1.2]{Rudin08}. Thus, the capacities $\capac{\cdot}{\mathcal D_a}$ fit into the framework of capacities on compact metric spaces developed in \cite[Chapter 2]{EKM+14}. It appears to be well known to experts that the capacities $\capac{\cdot}{\mathcal D_a}$ and $C_{s,2}(\cdot)$ are equivalent if $a = d- 2s$, the point being that the corresponding energies are comparable. A proof in the case $d=1, s=\frac{1}{2}$ can be found in \cite[Lemma 2.2]{Cascante2012}. In the case $s \neq \frac{d}{2}$, the crucial estimate is stated in \cite[Remark 2.1]{CO95} without proof. A proof of the estimate in one direction in this case is contained in \cite[p.\ 442]{AC89}. For the sake of completeness, we provide an argument that applies to all cases under consideration. We adapt the proof in \cite[Lemma 2.2]{Cascante2012} to the non-isotropic geometry of $\partial \bB_d$. Throughout, we write $A \lesssim B$ to mean that there exists a constant $C \in (0,\infty)$ so that $A \le C B$, and $A \approx B$ to mean that $A \lesssim B$ and $B \lesssim A$.
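As a simple consistency check, every singleton $E = \{\zeta\} \subset \partial \mathbb{B}_d$ has $\capac{E}{\mathcal{D}_a} = 0$: since $\langle \zeta, \zeta \rangle = 1$, we have $|K_a(\zeta,\zeta)| = \infty$ for all $0 \le a < 1$, so any measure $\mu = c \delta_\zeta$ with $c > 0$ has $\mathcal{E}(\mu, \mathcal{D}_a) = c^2 |K_a(\zeta,\zeta)| = \infty$. Hence the only measure admissible in the supremum defining $\capac{E}{\mathcal{D}_a}$ is $\mu = 0$, and the capacity vanishes. By Theorem \ref{thm:main_dirichlet}, singletons are therefore also $\Mult(\mathcal{H}_s)$-totally null for $\frac{d-1}{2} < s \le \frac{d}{2}$.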
\begin{lem}\label{lem:equiv_riesz_pot} Let $\frac{d-1}{2} < s \leq \frac{d}{2}$ and $\mu \in M^+(\partial \bB_d ) $. Then \[ \cI_s(\cI_s(\mu)) \approx \cI_{2s}(\mu), \] where the implied constants only depend on $s$ and $d$. \end{lem} \begin{proof} We will show that \[ \int_{\partial \bB_d} k_s(z, \zeta) k_s(\zeta,w) d \sigma(\zeta) \approx k_{2s}(z,w) \quad (z,w \in \partial \bB_d). \] The statement then follows by integrating both sides with respect to $\mu$ and using Fubini's theorem. Let \[ d(z,w)=|1-\inner{z}{w}|^{\frac{1}{2}} \quad (z,w \in \partial \mathbb B_d) \] be the Koranyi metric; see \cite[Proposition 5.1.2]{Rudin08}. If $\delta>0$, we let $Q_\delta(z) = \{ \zeta \in \partial \bB_d: d(\zeta,z) \le \delta\}$ be the Koranyi ball centered at $z$ with radius $\delta$. We will repeatedly use the fact that $\sigma(Q_{\delta}(z)) \approx \delta^{2 d}$ for $0 \le \delta \le \sqrt{2}$; see \cite[Proposition 5.1.4]{Rudin08}.

Now let $z,w\in \partial \bB_d$ and set $\delta=\frac{d(z,w)}{2}$. Then, in order to estimate the kernel \[ \int_{\partial \bB_d} \frac{d\sigma(\zeta)}{|1-\inner{z}{\zeta}|^{d-s}|1-\inner{\zeta}{w}|^{d-s}} = \int_{\partial \bB_d} \frac{d \sigma(\zeta)}{d(z,\zeta)^{2(d-s)} d(\zeta,w)^{2(d-s)}}, \] we split the domain of integration $\partial \bB_d$ as follows: \[ \partial \bB_d = Q_\delta(z) \cup Q_\delta(w) \cup \big( \{ d(\zeta,z) \leq d(\zeta,w) \} \setminus Q_\delta(z) \big) \cup \big( \{ d(\zeta,w) \leq d(\zeta,z) \} \setminus Q_\delta(w) \big). \] We denote by \MakeUppercase{\romannumeral 1}, \MakeUppercase{\romannumeral 1}$'$, \MakeUppercase{\romannumeral 2}, \MakeUppercase{\romannumeral 2}$'$ the corresponding integrals. By the symmetry of the problem it suffices to estimate \MakeUppercase{\romannumeral 1} and \MakeUppercase{\romannumeral 2}. For \MakeUppercase{\romannumeral 1}, we note that if $\zeta \in Q_{\delta}(z)$, then $\delta \le d(\zeta,w) \le 3 \delta$ by the triangle inequality for $d$. Hence, integrating with the help of the distribution function, we find that \begin{align*} \text{\MakeUppercase{\romannumeral 1}} & \approx \frac{1}{\delta^{2(d-s)}} \int_{Q_\delta(z)}\frac{d\sigma(\zeta)}{d(z,\zeta)^{2(d-s)}} \\ &= \frac{1}{\delta^{2(d-s)}} \int_{0}^\infty \sigma( \{ \zeta \in Q_{\delta}(z) : d(z,\zeta) \le t^{\frac{-1}{2 (d-s)}} \}) dt\\ & = \frac{\sigma ( Q_\delta(z) )}{\delta^{4(d-s)}} + \frac{1}{\delta^{2(d-s)}} \int_{\delta^{-2(d-s)}}^\infty \sigma ( \{ \zeta \in \partial \mathbb B_d: d(z,\zeta) \leq t^{\frac{-1}{2(d-s)}} \} ) dt \\ & \approx \delta^{-2(d-2s)} + \frac{1}{\delta^{2(d-s)}} \int_{\delta^{-2(d-s)}}^{\infty} t^{\frac{-d}{d-s}}dt \\ & \approx \delta^{-2(d-2s)}. \end{align*} Next, using the fact that $d(z,\zeta) \le \sqrt{2}$ for all $z, \zeta \in \partial \bB_d$, we see that \begin{align*} \textup{II} &\le \int_{\partial \bB_d \setminus Q_{\delta}(z)} \frac{d \sigma(\zeta)}{d(z,\zeta)^{4 (d-s)}} \\ &= \int_{0}^\infty \sigma( \{ \zeta \in \partial \bB_d : \delta < d(z,\zeta) \le t^{\frac{-1}{4(d-s)}} \} ) dt\\ &\lesssim 1 + \int_{2^{-2(d-s)}}^{\delta^{-4 (d-s)}} t^{\frac{-d}{2(d-s)}} dt \\ &\lesssim \begin{cases} \delta^{-2(d-2s)}, & \text{ if } s < \frac{d}{2}, \\ \log( \delta^{-2}), & \text { if } s = \frac{d}{2}.
\end{cases} \end{align*} Combining the estimates for \textup{I} and \textup{II} and recalling the definition of $\delta$, we see that \[ \int_{\partial \bB_d} \frac{d\sigma(\zeta)}{|1-\inner{z}{\zeta}|^{d-s}|1-\inner{\zeta}{w}|^{d-s}} \lesssim \begin{cases} \frac{1}{|1 - \langle z,w \rangle |^{d - 2 s}}, & \text{ if } s < \frac{d}{2}, \\ \log\big( \frac{e}{|1 - \langle z,w \rangle|} \big), & \text{ if } s = \frac{d}{2}. \end{cases} \] To establish the lower bound, it suffices to consider $z,w \in \partial \bB_d$ for which $d(z,w)$ is small. In the case $s < \frac{d}{2}$, the lower bound follows from the treatment of the integral $\textup{I}$ above. Let $s = \frac{d}{2}$. Notice that in the region $ \cU_{z,w} = \{ \zeta \in \partial \bB_d : d(z,w) \leq d(w,\zeta) \} $, the triangle inequality yields $d(\zeta,z) \leq 2d(\zeta,w)$. Hence, integrating again with the distribution function and writing $\delta = d(z,w)$, we estimate \begin{align*} \int_{\partial \bB_d} \frac{d\sigma(\zeta)}{d(\zeta,w)^dd(\zeta,z)^d} & \gtrsim \int_{ \cU_{z,w} } \frac{d\sigma(\zeta)}{d(\zeta,w)^{2d}} \\ & = \int_0^{\delta^{-{2d}}} \sigma( \{ \zeta \in \partial \bB_d : \delta \leq d(\zeta,w) \leq t^{\frac{-1}{2d}} \} ) dt \\ & = \int_{0}^{\delta^{-{2d}}} \sigma ( Q_{t^{-\frac{1}{2d}}}(w) ) dt - \delta^{-{2d}} \sigma( Q_\delta(w) ) \\ & \geq c_0 \log ( \delta^{-1} ) - c_1, \end{align*} where $c_0,c_1 > 0$ are constants depending only on the dimension $d$. This shows the lower bound for small $\delta$, which concludes the proof. \end{proof} From this lemma the equivalence of the capacities $C_{s,2}(\cdot)$ and $\capac{\cdot}{\mathcal D_a}$ for $a = d - 2s$ follows easily. \begin{cor} \label{cor:energy_comparable} Let $\frac{d-1}{2} < s \leq \frac{d}{2}$, let $a = d - 2s$ and $\mu \in M^+(\partial \bB_d ) $. Then \[ \|\cI_s(\mu)\|^2_{L^2(\partial \bB_d, d \sigma)} \approx \cE(\mu,\mathcal D_a). \] Hence $C_{s,2}(E) \approx \capac{E}{\cD_a}$ for compact subsets $E \subset \partial \bB_d$. Here, all implied constants only depend on $d$ and $s$. \end{cor} \begin{proof} For a measure $\mu \in M^+(\partial \bB_d) $, we compute \begin{align*} \norm{\cI_s(\mu)}^2_{L^2(\partial \bB_d, d\sigma)} & = \int_{\partial \bB_d} \Big( \int_{\partial \bB_d} \frac{d\mu(z)}{|1-\inner{z}{\zeta}|^{d-s}} \Big)^2 d\sigma(\zeta) \\ & = \int_{\partial \bB_d} \int_{\partial \bB_d}\int_{\partial \bB_d} \frac{d\sigma(\zeta)}{|1-\inner{z}{\zeta}|^{d-s}|1-\inner{w}{\zeta}|^{d-s}} d\mu(z)d\mu(w) \\ & = \int_{\partial \bB_d} \cI_s(\cI_s(\mu))(w) \, d\mu(w). \end{align*} Thus, Lemma \ref{lem:equiv_riesz_pot} yields that \[ \norm{\cI_s(\mu)}^2_{L^2(\partial \bB_d, d\sigma)} \approx \int_{\partial \bB_d} \cI_{2s}(\mu)(w) d\mu(w) = \cE(\mu, \mathcal D_a). \] Since the energies involved are comparable, so are the capacities by \eqref{eqn:dual}. \end{proof}

\section{Direct proof of Theorem \ref{thm:main_dirichlet}} \label{sec:Dirichlet}

To prove Theorem \ref{thm:main_dirichlet}, we will make use of holomorphic potentials. Since several of our proofs involve reproducing kernel arguments, it is slightly more convenient to work with the spaces $\mathcal D_a$ rather than with $\mathcal H_s$. \begin{defn} Let $0 \le a < 1$ and let $\mu \in M^+(\partial \mathbb{B}_d)$. The holomorphic potential of $\mu$ is the function \begin{equation*} f_\mu: \mathbb{B}_d \to \mathbb{C}, \quad z \mapsto \int_{\partial \mathbb{B}_d} K_a(z,w) \, d\mu(w). \end{equation*} \end{defn}
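For instance, for $\mu = \sigma$, expanding $K_a$ into its power series $\sum_{n=0}^\infty c_n \langle z,w \rangle^n$ (with $c_0 = 1$) and using the rotation invariance of $\sigma$, which forces $\int_{\partial \mathbb{B}_d} \langle z,w \rangle^n \, d\sigma(w) = 0$ for every $n \ge 1$, we obtain $f_\sigma \equiv 1$. This is consistent with Proposition \ref{prop:energy_fa} below: $\sigma$ has finite energy for $0 \le a < 1$, and its potential is indeed an element of $\mathcal{D}_a$.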
Let $A(\mathbb{B}_d)$ denote the ball algebra. If $\mu \in M^+(\partial \mathbb{B}_d)$, let \begin{equation*} \rho_\mu: A(\mathbb{B}_d) \to \mathbb{C}, \quad f \mapsto \int_{\partial \mathbb{B}_d} f \, d \mu \end{equation*} denote the associated integration functional. The following functional analytic interpretation of the holomorphic potential and of capacity will show that every totally null set has capacity zero. In the case of the Dirichlet space on the disc, it is closely related to the energy formula for logarithmic capacity in terms of the Fourier coefficients of a measure; see for instance \cite[Theorem 2.4.4]{EKM+14}. \begin{prop} \label{prop:energy_fa} Let $\mu \in M^+(\partial \mathbb{B}_d)$ and let $0 \le a < 1$. The following assertions are equivalent: \begin{enumerate}[label=\normalfont{(\roman*)}] \item $\mathcal{E}(\mu,\mathcal D_a) < \infty$, \item the densely defined functional $\rho_\mu$ is bounded on $\mathcal{D}_a$, \item $f_\mu \in \mathcal{D}_a$. \end{enumerate} In this case, \begin{equation*} \mathcal{E}(\mu,\mathcal D_a) \approx \|\rho_\mu\|^2_{(\mathcal{D}_a)^*} = \| f_\mu\|_{\mathcal{D}_a}^2, \end{equation*} where the implied constants only depend on $a$ and $d$, and \begin{equation*} \rho_\mu(g) = \langle g, f_\mu \rangle_{\mathcal{D}_a} \end{equation*} for all $g \in \mathcal{D}_a$. \end{prop} \begin{proof} For ease of notation, we write $f = f_\mu$, $\rho = \rho_\mu$ and $k_w(z) = K_a(z,w)$. For $0 \le r < 1$, define $f_r(z) = f(rz)$ and $\rho_r(g) = \rho(g_r)$, where $g_r(z) = g(rz)$. Then each $f_r \in \mathcal{D}_a$ and each $\rho_r$ is a bounded functional on $\mathcal{D}_a$. First, we connect $f_r$ and $\rho_r$, which will be useful in all parts of the proof. By the reproducing property of the kernel, we find that \begin{equation*} \langle k_z, f_r \rangle = \overline{f(r z)} = \int_{\partial \mathbb{B}_d} k_{rz}(w) \, d \mu(w) = \int_{\partial \mathbb{B}_d} k_z(r w) \, d\mu (w) = \rho_r(k_z) \end{equation*} for all $z \in \mathbb{B}_d$. Since finite linear combinations of kernels are dense in $\mathcal{D}_a$, it follows that \begin{equation} \label{eqn:proof_energy_fa} \rho_r(g) = \langle g, f_r \rangle \end{equation} for all $g \in \mathcal{D}_a$ and hence $\|\rho_r\|_{(\mathcal{D}_a)^*} = \|f_r\|_{\mathcal{D}_a}$. Next, we show the equivalence of (ii) and (iii). If $f \in \mathcal{D}_a$, then $\lim_{r \nearrow 1} f_r = f$ in $\mathcal{D}_a$ and hence for all $g \in A(\mathbb{B}_d) \cap \mathcal{D}_a$, Equation \eqref{eqn:proof_energy_fa} shows that \begin{equation*} \rho(g) = \lim_{r \nearrow 1} \rho_r(g) = \lim_{r \nearrow 1} \langle g, f_r \rangle = \langle g,f \rangle, \end{equation*} so $\rho$ is bounded on $\mathcal{D}_a$. In this case, $\|\rho\|_{(\mathcal{D}_a)^*} = \|f\|_{\mathcal{D}_a}$, which establishes the final statement of the proposition as well. Conversely, if $\rho$ is bounded on $\mathcal{D}_a$, then since $\|\rho_r\|_{(\mathcal{D}_a)^*} \le \|\rho\|_{(\mathcal{D}_a)^*}$, it follows that $\sup_{0 \le r < 1} \|f_r\|_{\mathcal{D}_a} \le \|\rho\|_{(\mathcal{D}_a)^*}$, hence $f \in \mathcal{D}_a$. It remains to show the equivalence of (i) and (iii) and that $\mathcal{E}(\mu,\mathcal D_a) \approx \|f\|_{\mathcal{D}_a}^2$. With the help of Equation \eqref{eqn:proof_energy_fa}, we see that \begin{align*} \|f_r\|_{\mathcal{D}_a}^2 = \langle f_r, f_r \rangle = \rho_r(f_r) &= \int f(r^2 z) \, d \mu(z) \\ &= \iint K_a(rz,rw) \, d\mu(w) d \mu(z), \end{align*} where all integrals are taken over $\partial \mathbb{B}_d$.
Taking real parts and using the fact that $\Re K_a$ and $|K_a|$ are comparable, we find that \begin{equation*} \|f_r\|_{\mathcal{D}_a}^2 \approx \iint |K_a(r z, rw)| d \mu(z) d \mu(w). \end{equation*} Thus, if $f \in \mathcal{D}_a$, then Fatou's lemma shows that \begin{equation*} \mathcal{E}(\mu,\mathcal D_a) = \iint |K_a(z,w)| \, d \mu(z) d \mu(w) \lesssim \|f\|_{\mathcal{D}_a}^2. \end{equation*} Conversely, if $\mathcal{E}(\mu,\mathcal D_a) < \infty$, we use the basic inequality \begin{equation*} \Big| \frac{1}{1 - r^2 \langle z,w \rangle} \Big| \le 2 \Big| \frac{1}{ 1 - \langle z,w \rangle} \Big| \quad (z,w \in \mathbb{B}_d) \end{equation*} and the Lebesgue dominated convergence theorem to find that \begin{equation*} \lim_{r \nearrow 1} \|f_r\|_{\mathcal{D}_a}^2 \lesssim \mathcal{E}(\mu,\mathcal D_a), \end{equation*} so $f \in \mathcal{D}_a$ and $\|f\|_{\mathcal{D}_a}^2 \lesssim \mathcal{E}(\mu,\mathcal D_a)$. \end{proof} With this proposition in hand, we can prove the ``only if'' part of Theorem \ref{thm:main_dirichlet}, which we restate in equivalent form (see Corollary \ref{cor:energy_comparable} for the equivalence). The idea is the same as that in the proof of \cite[Proposition 2.6]{DH20}. \begin{prop} \label{prop:only_if} Let $0 \le a < 1$ and let $E \subset \partial \mathbb{B}_d$ be compact. If $E$ is $\Mult(\mathcal{D}_a)$-totally null, then $\capac{E}{\mathcal{D}_a} = 0$. \end{prop} \begin{proof} Suppose that $\capac{E}{\mathcal{D}_a} > 0$. Then $E$ supports a probability measure $\mu$ of finite energy $\mathcal E(\mu,\mathcal D_a)$. By Proposition \ref{prop:energy_fa}, we see that the integration functional $\rho_\mu$ is bounded on $\mathcal{D}_a$. In particular, it is weak-$*$ continuous on $\Mult(\mathcal{D}_a)$. Hence $E$ is not $\Mult(\mathcal{D}_a)$-totally null. \end{proof} To prove the converse, we require the following fundamental properties of the holomorphic potential of a capacitary extremal measure of a compact subset $E \subset \partial \mathbb B_d$, i.e.\ a measure for which the supremum in \eqref{eqn:dual} is achieved. If $a> 0$, these properties are contained in the proof of \cite[Theorem 2.10]{AC89}, see also \cite[Lemma 2.3]{Cascante2012} for a proof in the case $d=1$ and $a=0$. An argument that directly works with the capacity $\capac{\cdot}{\mathcal D_0}$ in the case $d=1$ and $a=0$ can be found on pp.\ 40--41 of \cite{EKM+14}. We briefly sketch the argument in general. \begin{lem} \label{lem:potential_basic} Let $E \subset \partial \mathbb{B}_d$ be a compact set with $\capac{E}{\mathcal{D}_a} > 0$. There exists a positive measure $\mu$ supported on $E$ so that the corresponding holomorphic potential $f_\mu$ satisfies \begin{enumerate}[label=\normalfont{(\alph*)}] \item $f_\mu \in \mathcal D_a$ with $\|f_\mu\|^2_{\cD_a} \lesssim \capac{E}{\cD_a}$, \item $\liminf_{r \nearrow 1} \Re f_\mu(r \zeta) \gtrsim 1$ for all $\zeta \in \operatorname{int}(E)$, and \item $|f_\mu(z)| \lesssim 1$ for all $z \in \mathbb{B}_d$. \end{enumerate} Here, the implied constants only depend on $a$ and $d$. \end{lem} \begin{proof} Let $s = \frac{d-a}{2}$, so that $\mathcal H_s = \mathcal D_a$ with equivalent norms.
The general theory of Bessel capacities (see \cite[Theorem 2.5.3]{Adams1996}), combined with the maximum principle for the capacity $C_{s,2}(\cdot)$ \cite[Lemma 1.15]{AC89}, implies that there exists a positive measure $\mu$ supported on $E$ so that \begin{enumerate}[label=\normalfont{(\arabic*)}] \item $\mu(E) = \|\mathcal I_s(\mu)\|^2_{L^2(\partial \mathbb B_d, d \sigma)} = C_{s,2}(E)$; \item $\mathcal{I}_s(\mathcal I_s(\mu)) \ge 1$ on $E \setminus F$, where $F$ is a countable union of compact sets of $C_{s,2}$-capacity zero; \item $\mathcal{I}_s(\mathcal I_s(\mu)) \lesssim 1$ on $\partial \mathbb B_d$. \end{enumerate} (See also \cite[Corollary 2.4.3]{EKM+14} for an approach using $\capac{\cdot}{\mathcal D_0}$ in the case $d=1$ and $a=0$.) Item (1) and Corollary \ref{cor:energy_comparable} show that $\mathcal E(\mu, \mathcal D_a) \approx \|\mathcal I_{s}(\mu) \|^2_{L^2(\partial \bB_d, \sigma)} \approx \capac{E}{\mathcal D_a}$, hence Proposition \ref{prop:energy_fa} yields that (a) holds. Lemma \ref{lem:equiv_riesz_pot} and Item (3) show that for $z \in \partial \mathbb B_d$, we have \[ \int_{\partial \mathbb B_d} |K_a(z,w)| d \mu(w) = \mathcal I_{2s}(\mu)(z) \approx \mathcal I_s(\mathcal I_s(\mu))(z) \lesssim 1, \] so in combination with the basic inequality $| \frac{1}{1 - r \langle z, w \rangle}| \le 2 | \frac{1}{1 - \langle z,w \rangle }|$ for $z,w \in \partial \mathbb B_d$ and $0 \le r <1$, we see that (c) holds. To establish (b), notice that (c) implies that $f_\mu \in H^\infty(\mathbb B_d)$, so $f_\mu$ has radial boundary limits $f_\mu^*$ almost everywhere with respect to $\sigma$, and $f_\mu = P[f_\mu^*]$, the Poisson integral of $f_\mu^*$. Fatou's lemma and the fact that $\Re K_a$ and $|K_a|$ are comparable show that for $\sigma$-almost every $z \in \partial \mathbb B_d$, we have \begin{align*} \Re f_{\mu}^*(z) = \lim_{r \nearrow 1} \int_{\partial \mathbb B_d} \Re K_a(r z, w) d \mu(w) &\gtrsim \int_{\partial \mathbb B_d} |K_a(z,w)| \, d \mu (w) \\ &= \mathcal I_{2s}(\mu)(z). \end{align*} Now $C_{s,2}(K) = 0$ implies that $\sigma(K) = 0$ for compact sets $K \subset \partial \bB_d$. (This is because $\sigma \big|_K$ has finite energy, which for instance follows from Proposition \ref{prop:energy_fa} since $\cD_a$ is continuously contained in $H^2(\mathbb B_d)$.) Therefore, Item (2) and Lemma \ref{lem:equiv_riesz_pot} imply that $\Re f_{\mu}^*(z) \gtrsim 1$ for $\sigma$-almost every $z \in E$. In combination with $\Re f_\mu = P[ \Re f_{\mu}^*]$, this easily implies (b). \end{proof} In \cite{Cascante2014}, Cascante, F\`{a}brega and Ortega showed that if $0 < a < 1$ and if the holomorphic potential $f_\mu$ is bounded in $\mathbb{B}_d$, then it is a multiplier of $\mathcal{D}_a$. They also proved an $L^p$-analogue of this statement. We will require an explicit estimate for the multiplier norm of $f_\mu$. It seems likely that the arguments in \cite{Cascante2014} could be used to obtain such an estimate. Instead, we will provide a different argument in the Hilbert space setting, based on the following result of Aleman, M\textsuperscript{c}Carthy, Richter and the second named author \cite{AHM+17c}. It also applies to the case $a=0$ without changes. The function $V_f$ below is called the \emph{Sarason function} of $f$. \begin{thm}[\cite{AHM+17c}] \label{thm:Sarason_function} Let $0 \le a < 1$, let $f \in \mathcal{D}_a$ and define \begin{equation*} V_f(z) = 2 \langle f, K_a(\cdot,z) f \rangle - \|f\|^2.
\end{equation*} If $\Re V_f$ is bounded in $\mathbb{B}_d$, then $f \in \Mult(\mathcal{D}_a)$ and \begin{equation*} \|f\|_{\Mult(\mathcal{D}_a)} \lesssim \| \Re V_f\|_{\infty}^{1/2}, \end{equation*} where the implied constant only depends on $a$ and $d$. \end{thm} \begin{proof} In \cite[Theorem 4.5]{AHM+17c}, it is shown that if $\mathcal{H}$ is a normalized complete Pick space that admits an equivalent norm which is given by an $L^2$-norm of derivatives of order at most $N$, then boundedness of $\Re V_f$ implies that $f \in \Mult(\mathcal{H})$, and \begin{equation*} \|f\|_{\Mult(\mathcal{H})} \lesssim (\|\Re V_f\|_\infty + 3)^{N+\frac{1}{2}}. \end{equation*} This applies in particular to the spaces $\mathcal{D}_a$. The improved bound on the multiplier norm of $f$ follows from the scaling properties of both sides of the inequality. Indeed, if $t > 0$, then $V_{t f} = t^2 V_f$, so applying the above inequality to the function $t f$, we find that \begin{equation*} \|f\|_{\Mult(\mathcal{D}_a)}^2 \lesssim \frac{1}{t^2} (t^2 \|\Re V_f\|_\infty +3)^{2 N + 1} \end{equation*} for all $t > 0$. If $\|\Re V_f\|_\infty = 0$, then taking $t \to \infty$ above yields $f=0$. If $\|\Re V_f\|_\infty \neq 0$, then choosing $t = \| \Re V_f\|_{\infty}^{-1/2}$, we obtain the desired estimate. (The choice of $t$ could be optimized to improve the implicit constants, but we will not do so here.) \end{proof} With the help of Theorem \ref{thm:Sarason_function}, we can establish the desired multiplier norm estimate of $f_\mu$. It can be regarded as a quantitative version of the result of Cascante, F\`abrega and Ortega \cite{Cascante2014} in the Hilbert space setting. \begin{prop} \label{prop:multiplier_norm} Let $0 \le a < 1$ and let $\mu \in M^+(\partial \mathbb{B}_d)$. If $f_\mu$ is bounded in $\mathbb{B}_d$, then $f_\mu$ is a multiplier of $\mathcal{D}_a$, and \begin{equation*} \|f_\mu\|_{\Mult(\mathcal{D}_a)} \approx \|f_\mu\|_\infty, \end{equation*} where the implied constants only depend on $a$ and $d$. \end{prop} \begin{proof} Since the multiplier norm dominates the supremum norm, we have to show the inequality ``$\lesssim$''. Let $f = f_\mu$ and \begin{equation} \label{eqn:f_r} f_r(z) = f(r z) = \int_{\partial \mathbb{B}_d} K_a(r z,w) \, d \mu(w) = \int_{\partial \mathbb{B}_d} K_a(z, rw) \, d \mu (w). \end{equation} We will show that $\|f_r\|_{\Mult} \lesssim \|f_r\|_{\infty}$ for all $0 < r <1$, where the implied constant is independent of $f$ and $r$. Since $\|f_r\|_\infty \le \|f\|_\infty$ and $f_r \to f$ pointwise on $\mathbb{B}_d$ as $r \nearrow 1$, this suffices: on bounded subsets of $\Mult(\mathcal{D}_a)$, the weak-$*$ topology agrees with the topology of pointwise convergence, so the uniform bound passes to the limit and yields $\|f\|_{\Mult(\mathcal{D}_a)} \lesssim \|f\|_\infty$. To simplify notation, write $k_w(z) = K_a(z,w)$. We will use Theorem \ref{thm:Sarason_function} and instead show that \begin{equation*} \sup_{z \in \mathbb{B}_d} \Re \langle f_r, k_z f_r \rangle \lesssim \|f_r\|_\infty^2. \end{equation*} Since the map \begin{equation*} \partial \mathbb{B}_d \to \mathcal{D}_a, \quad w \mapsto k_{r w}, \end{equation*} is continuous, the integral on the right-hand side of \eqref{eqn:f_r} converges in $\mathcal{D}_a$. Thus, by the reproducing property of the kernel, \begin{align*} \langle f_r, k_z f_r \rangle &= \iint \langle k_{r w}, k_z k_{r \zeta} \rangle d \mu(\zeta) d \mu(w) \\ &= \iint \overline{k_{r \zeta}(r w)} \overline{k_z(r w)} \, d \mu(\zeta) d \mu(w).
\end{align*} Equation \eqref{eqn:f_r} shows that $\int {k_{r \zeta}(r w)} \, d \mu(\zeta) = f_r(rw)$, hence \begin{equation*} \langle f_r, k_z f_r \rangle = \int \overline{f_r(r w)} \overline{ k_z(r w)} \, d \mu(w) \end{equation*} and so \begin{align*} \Re \langle f_r, k_z f_r \rangle \le \|f_r\|_\infty \int |k_z(r w)| \, d \mu (w) &\lesssim \|f_r\|_\infty \Re \int k_z(r w) \, d \mu(w) \\ &= \|f_r\|_\infty \Re f_r(z) \le \|f_r\|_\infty^2. \qedhere \end{align*} \end{proof} We are ready to provide the direct proof of Theorem \ref{thm:main_dirichlet}, which we restate in equivalent form. \begin{thm} Let $0 \le a < 1$ and let $E \subset \partial \mathbb{B}_d$ be compact. Then $E$ is $\Mult(\mathcal{D}_a)$-totally null if and only if $\capac{E}{\mathcal{D}_a} = 0$. \end{thm} \begin{proof} The ``only if'' part was already established in Proposition \ref{prop:only_if}. Conversely, suppose that $\capac{E}{\mathcal{D}_a} = 0$. By upper semi-continuity of capacity, there exists a decreasing sequence $(E_n)$ of compact neighborhoods of $E$ so that $ \lim_{n \to \infty} \capac{E_n}{\mathcal{D}_a} = 0$; see \cite[Theorem 2.1.6]{EKM+14}. Let $\mu_n$ be a positive measure supported on $E_n$ as in Lemma \ref{lem:potential_basic} and let $g^{(n)} = f_{\mu_n}$ be the corresponding holomorphic potential. We claim that \begin{enumerate} \item $\liminf_{r \nearrow 1} \Re g^{(n)}(r \zeta) \gtrsim 1$ for all $\zeta \in E$ and all $n \in \mathbb{N}$; \item the sequence $(g^{(n)})$ converges to $0$ in the weak-$*$ topology of $\Mult(\mathcal{D}_a)$. \end{enumerate} Indeed, Part (1) is immediate from Lemma \ref{lem:potential_basic}. To see (2), we first observe that Lemma \ref{lem:potential_basic} (c) and Proposition \ref{prop:multiplier_norm} imply that the sequence $(g^{(n)})$ is bounded in multiplier norm. Using Lemma \ref{lem:potential_basic} (a), we see that $\|g^{(n)}\|_{\cD_a}^2 \lesssim \capac{E_n}{\cD_a}$, so $(g^{(n)})$ converges to zero in the norm of $\cD_a$ and in particular pointwise on $\bB_d$, hence (2) holds. Let now $\nu$ be a positive $\Mult({\mathcal{D}_a})$-Henkin measure that is supported on $E$. We will finish the proof by showing that $\nu(E) = 0$; see the discussion following Definition \ref{defn:Henkin_TN}. Item (1) above and Fatou's lemma show that \begin{equation*} \nu(E) = \int_{E} 1 \, d \nu \lesssim \liminf_{r \nearrow 1} \int_{\partial \mathbb{B}_d} \Re g^{(n)}(r \zeta) \, d \nu(\zeta). \end{equation*} Since $\nu$ is $\Mult(\mathcal{D}_a)$-Henkin, the associated integration functional $\rho_\nu$ extends to a weak-$*$ continuous functional on $\Mult(\mathcal{D}_a)$, which we continue to denote by $\rho_\nu$. Since $\lim_{r \nearrow 1} g^{(n)}_r = g^{(n)}$ in the weak-$*$ topology of $\Mult(\mathcal{D}_a)$ by Lemma \ref{lem:dilations_convergence}, we find that for all $n \in \mathbb{N}$, \begin{equation*} \lim_{r \nearrow 1} \int_{\partial \mathbb{B}_d} \Re g^{(n)}(r \zeta) d \nu(\zeta) = \Re \rho_\nu(g^{(n)}). \end{equation*} Thus, \begin{equation*} \nu(E) \lesssim \Re \rho_\nu(g^{(n)}) \end{equation*} for all $n \in \mathbb{N}$. Taking the limit $n \to \infty$ and using Item (2), we see that $\nu(E) = 0$, as desired. \end{proof}

\section{Proof of Theorem \ref{thm:main_CNP}} \label{sec:main_CNP}

In this section, we prove a refined version of Theorem \ref{thm:main_CNP}. Let $\mathcal{H}$ be a regular unitarily invariant space on $\mathbb{B}_d$.
Recall that a compact set $E \subset \partial \mathbb{B}_d$ is said to be an \emph{unboundedness set for $\mathcal{H}$} if there exists $f \in \mathcal{H}$ with $\lim_{r \nearrow 1} |f(r \zeta)| = \infty$ for all $\zeta \in E$. We also say that $E$ is a \emph{weak unboundedness set for $\mathcal{H}$} if there exists a separable auxiliary Hilbert space $\mathcal{E}$ and $f \in \mathcal{H} \otimes \mathcal{E}$ so that $\lim_{r \nearrow 1} \|f(r \zeta)\| = \infty$ for all $\zeta \in E$. \begin{thm} Let $\mathcal{H}$ be a regular unitarily invariant complete Pick space on $\mathbb{B}_d$. The following assertions are equivalent for a compact set $E \subset \partial \mathbb{B}_d$. \begin{enumerate}[label=\normalfont{(\roman*)}] \item $E$ is $\Mult(\mathcal{H})$-totally null. \item $E$ is an unboundedness set for $\mathcal{H}$. \item $E$ is a weak unboundedness set for $\mathcal{H}$. \end{enumerate} \end{thm} \begin{proof} (i) $\Rightarrow$ (ii) Suppose that $E$ is totally null. In the first step, we will show that for each $M > 1$, there exists $f \in \mathcal{H} \cap A(\mathbb B_d)$ satisfying \begin{enumerate} \item $f \big|_E = M$, \item $\|f\|_\mathcal{H} \le 1$, and \item $\Re f \ge 0$ on $\overline{\mathbb{B}_d}$. \end{enumerate} Let $\varepsilon = 1/M$. Since $E$ is totally null, the simultaneous Pick and peak interpolation result \cite[Theorem 1.5]{DH20} shows that there exists $\eta \in A(\mathcal{H}) \subset \mathcal H \cap A(\mathbb B_d)$ satisfying $\eta \big|_E = (1 - \varepsilon^2)^{1/2}$, $\eta(0) = 0$ and $\|\eta\|_{\Mult(\mathcal{H})} \le 1$. It follows that the column multiplier \begin{equation*} \begin{bmatrix} \varepsilon \\ (1 - \varepsilon^2)^{1/2} \eta \end{bmatrix} \end{equation*} has multiplier norm at most one, so the implication (b) $\Rightarrow$ (a) of part (i) of \cite[Theorem 1.1]{AHM+17c} implies that the function $f$ defined by \begin{equation*} f = \frac{\varepsilon}{1 - (1 - \varepsilon^2)^{1/2} \eta} \end{equation*} belongs to the closed unit ball of $\mathcal{H}$. Moreover, since $\|\eta\|_{\Mult(\mathcal{H})} \le 1$, we find that $|\eta(z)| \le 1$ for all $z \in \overline{\mathbb{B}_d}$, from which it follows that $f \in A(\mathbb B_d)$ and $\Re f \ge 0$. Clearly, $f \big|_E = \frac{1}{\varepsilon} = M$. This observation finishes the construction of $f$.

The above construction yields, for each $n \ge 1$, a function $f_n \in \mathcal{H} \cap A(\mathbb B_d)$ satisfying $f_n \big|_E = 1$, $\|f_n\|_{\mathcal{H}} \le 2^{-n}$ and $\Re f_n \ge 0$. Define $f = \sum_{n=1}^\infty f_n \in \mathcal{H}$. Let $\zeta \in E$. Then for each $N \in \mathbb{N}$, we have that \begin{equation*} \liminf_{r \nearrow 1} \Re f(r \zeta) \ge \sum_{n=1}^N \Re f_n(\zeta) = N. \end{equation*} Thus, $\lim_{r \nearrow 1} |f(r \zeta)| = \infty$ for all $\zeta \in E$, so $E$ is an unboundedness set for $\mathcal{H}$.

(ii) $\Rightarrow$ (iii) is trivial.

(iii) $\Rightarrow$ (i) Suppose that $E$ is a weak unboundedness set for $\mathcal{H}$ and let $f \in \mathcal{H} \otimes \mathcal{E}$ satisfy $\|f\| \le 1$ and $\lim_{r \nearrow 1} \|f(r \zeta)\| = \infty$ for all $\zeta \in E$. By the implication (a) $\Rightarrow$ (b) of part (i) of \cite[Theorem 1.1]{AHM+17c}, we may write $f = \frac{\Phi}{1 - \psi}$, where $\Phi \in \Mult(\mathcal{H}, \mathcal{H} \otimes \mathcal{E}),\psi \in \Mult(\mathcal{H})$ have multiplier norm at most $1$ and $|\psi(z)| < 1$ for all $z \in \mathbb{B}_d$.
In particular, $\|\Phi(z)\| \le 1$ for all $z \in \mathbb{B}_d$, so that $\|f(r \zeta)\| \le \frac{1}{|1 - \psi(r \zeta)|}$; since $\lim_{r \nearrow 1} \|f(r \zeta)\| = \infty$, it follows that $\lim_{r \nearrow 1} \psi(r \zeta) = 1$ for all $\zeta \in E$. Let now $\mu$ be a positive $\Mult(\mathcal{H})$-Henkin measure that is supported on $E$ and let $\rho_\mu$ denote the associated weak-$*$ continuous integration functional on $\Mult(\mathcal{H})$. We have to show that $\mu(E) = 0$; see the discussion following Definition \ref{defn:Henkin_TN}. To this end, we write $\psi_r(z) = \psi(r z)$ and let $n \in \mathbb{N}$. Applying the dominated convergence theorem and the fact that $\lim_{r \nearrow 1} \psi_r^n = \psi^n$ in the weak-$*$ topology of $\Mult(\mathcal{H})$ (see Lemma \ref{lem:dilations_convergence}), we find that \begin{equation*} \mu(E) = \lim_{r \nearrow 1} \int_{E} \psi_r^n \, d \mu = \lim_{r \nearrow 1} \int_{\partial \mathbb{B}_d} \psi_r^n \, d \mu = \rho_\mu(\psi^n). \end{equation*} Since $\psi$ is a contractive multiplier satisfying $|\psi(z)| < 1$ for all $z \in \mathbb{B}_d$, it follows that $\psi^n$ tends to zero in the weak-$*$ topology of $\Mult(\mathcal{H})$. So taking the limit $n \to \infty$ above, we conclude that $\mu(E) = 0$, as desired. \end{proof}

Let us briefly compare the direct proof of the implication ``capacity $0$ implies totally null'' given in Section \ref{sec:Dirichlet} with the proof via Theorem \ref{thm:main_CNP}. If $E \subset \partial \mathbb B_d$ is a compact set with $C_{s,2}(E) = 0$, then the work of Ahern and Cohn \cite{AC89} and of Cohn and Verbitsky \cite{CV95} shows that $E$ is an unboundedness set for $\mathcal H_s$. To show this, they use holomorphic potentials and their fundamental properties (cf.\ Lemma \ref{lem:potential_basic}) to construct a function $f \in \mathcal H_s$ satisfying $\lim_{r \nearrow 1} |f(r \zeta)| = \infty$ for all $\zeta \in E$. Proceeding via Theorem \ref{thm:main_CNP}, one then applies the factorization result \cite[Theorem 1.1]{AHM+17c} to $f$ to obtain a multiplier $\psi$ of $\mathcal H$ of norm at most $1$ satisfying $\lim_{r \nearrow 1} \psi(r \zeta) = 1$ for all $\zeta \in E$, from which the totally null property of $E$ can be deduced. The direct proof given in Section \ref{sec:Dirichlet} uses holomorphic potentials as well, this time to construct a sequence of functions in $\mathcal H$, which, roughly speaking, have large radial limits on $E$ compared to their norm. It is then shown that the holomorphic potentials themselves form a bounded sequence of multipliers, from which the totally null property of $E$ can once again be deduced.

\bibliographystyle{amsplain}
\section{Introduction} Over the past decades, with the idea of AdS/CFT or the more general holographic principle~\cite{Maldacena:1997re,Gubser:1998bc,Witten:1998qj}, physicists have been building a bridge connecting gravity to other areas of modern theoretical physics, such as condensed matter theory (CMT)~\cite{Hartnoll:2009sz,Herzog:2009xv,McGreevy:2009xe,Horowitz:2010gk,Cai:2015cya}, QCD~\cite{Mateos:2007ay,Gubser:2009md,CasalderreySolana:2011us}, cosmology~\cite{Banks:2004eb}, and quantum information theory (QIT)~\cite{Swingle:2009bg,Swingle:2012wq,Qi:2013caa}. It is hoped that this bridge may help us gain insight both into strongly coupled problems on the quantum field theory (QFT) side and into the origin of spacetime on the gravity side. After decades of effort, several precise correspondences between the two sides have been proposed. Recently, Susskind and his collaborators conjectured that the complexity of the boundary QFT may be related to the interior geometry of a black hole on the gravity side~\cite{Susskind:2014rva}. In QFT (or QIT), the complexity of a target state is an important concept, defined as the minimum number of unitary operators (or gates) needed to prepare the state starting from some reference state (for example the vacuum). So far, this conjecture has been refined into two concrete proposals, namely the CV (complexity=volume) and CA (complexity=action) conjectures. In the CV conjecture, the complexity of a state living on a time slice $\Sigma$ of the boundary equals the extremal volume of a codimension-one hypersurface ${\cal B}$ in the bulk ending on $\Sigma$ at the boundary~\cite{Stanford:2014jda}, that is \begin{eqnarray}\label{CV} C_V (\Sigma) =\max_{\partial {\cal B} = \Sigma} \left(\frac{{\rm Vol} ({\cal B})}{G_N R}\right), \end{eqnarray} where $G_N$ is the gravitational constant and $R$ is some typical length scale of the bulk geometry, for example the AdS radius or the horizon radius. The CA conjecture states that the complexity of a state equals the on-shell gravitational action evaluated on the so-called Wheeler-DeWitt (WDW) patch of the bulk spacetime~\cite{Brown:2015bva,Brown:2015lvg}. Each conjecture has its own merits and demerits~\cite{Hashimoto:2018bmb}. Inspired by these ideas, a large amount of work has been devoted to studying the holographic complexity of various gravity models in order to check these proposals~\cite{Momeni:2016ekm,Cai:2016xho,Brown:2016wib,Couch:2016exn,Yang:2016awy,Chapman:2016hwi, Carmi:2016wjl,Pan:2016ecg,Brown:2017jil,Kim:2017lrw,Cai:2017sjv,Alishahiha:2017hwg,Bakhshaei:2017qud, Tao:2017fsy,Guo:2017rul,Zangeneh:2017tub,Alishahiha:2017cuk,Abad:2017cgl,Reynolds:2017lwq,Hashimoto:2017fga,Nagasaki:2017kqe,Miao:2017quj,Ge:2017rak, Ghodrati:2017roz,Qaemmaqami:2017lzs,Carmi:2017jqz,Kim:2017qrq,Cottrell:2017ayj,Sebastiani:2017rxr, Moosa:2017yvt,HosseiniMansoori:2017tsm,Zhang:2017nth,Reynolds:2017jfs,Chapman:2018dem,Chapman:2018lsv,Khan:2018rzm,Caputa:2018kdj,Feng:2018sqm,Liu:2019smx,Jiang:2019qea,Jiang:2019pgc,Jiang:2018sqj,Jiang:2018gft}. The above two conjectures are formulated for the whole boundary system; they were later extended to subsystems in Refs.~\cite{Alishahiha:2015rta} and~\cite{Carmi:2016wjl}, respectively, and the resulting quantities are now called holographic subregion complexity (HSC). The subregion CV proposal is a natural extension of the well-known Ryu-Takayanagi and Hubeny-Rangamani-Takayanagi (HRT) holographic entanglement entropy (HEE) conjectures~\cite{Ryu:2006bv,Hubeny:2007xt}.
Namely, the complexity of a subregion ${\cal A}$ of the boundary system equals the volume of the extremal codimension-one hypersurface $\Gamma_{\cal A}$ enclosed by ${\cal A}$ and the corresponding HRT surface $\gamma_{\cal A}$~\cite{Ryu:2006bv,Hubeny:2007xt}, that is \begin{eqnarray}\label{subCV} {\cal C} ({\cal A}) = \frac{{\rm Vol} (\Gamma_{{\cal A}})}{8 \pi G_{d+1} L}, \end{eqnarray} where $L$ is the AdS radius. Later studies suggest that it should be dual to the fidelity susceptibility in QIT~\cite{Alishahiha:2015rta,MIyaji:2015mia}. In the subregion CA proposal, on the other hand, the complexity of the subregion ${\cal A}$ is given by the on-shell gravitational action evaluated on the intersection between the WDW patch and the so-called entanglement wedge~\cite{Czech:2012bh,Headrick:2014cta}. A lot of work and effort has also been devoted to understanding holographic subregion complexity~\cite{Caputa2017,Caputa2017b,Czech1706,subBenAmi2016,subRoy2017, subBanerjee2017,subBakhshaei2017,subSarkar2017,subZangeneh2017,subMomeni2017,subRoy2017b,subCarmi2017,Chen:2018mcc,Ageev:2018nye,Ghodrati:2018hss,Zhang:2018qnt,Alishahiha:2018lfv,Alishahiha:2018tep}. On the other hand, in the topic of so-called ``holographic thermalization'', the AdS/CFT duality has been applied successfully to study the physics of non-equilibrium processes, especially the thermalization of hot QCD matter, which is strongly coupled and produced in heavy ion collisions at the Relativistic Heavy Ion Collider~\cite{Gelis:2011xw,Iancu:2012xa,Muller:2011ra,CasalderreySolana:2011us}. According to the AdS/CFT dictionary, the thermalization process in the boundary QFT system is dual to a black hole formation process in the bulk, which can be modelled simply by a Vaidya-like metric. There is already a lot of work on this topic and many interesting results have been obtained; for more details, please refer to the review~\cite{Balasubramanian:2011ur} and references therein. The complexity in the holographic thermalization process has also been studied, in order to investigate its time evolution under a thermal quench. In Refs.~\cite{Chapman:2018dem,Chapman:2018lsv}, by applying the CV and CA conjectures, it was found that the late time growth of holographic complexity in the Vaidya spacetime is the same as that found for an eternal black hole. In Ref.~\cite{Chen:2018mcc}, the time evolution of the subregion complexity in this process was studied with the subregion CV conjecture. The results show that the subregion complexity is not always a monotonically increasing function of time: it increases at early times, but after reaching a maximum it decreases quickly and finally saturates. For other related work, please see Refs.~\cite{Ageev:2018nye,Ageev:2019fxn,Ling:2018xpc,Fan:2018xwf,Jiang:2018tlu}. However, it should be noted that the boundary QFTs considered in the work mentioned above usually live on flat Minkowski spacetime. It would be interesting to generalize the discussion to more realistic situations where QFTs live on curved spacetimes, which may help us to understand extremely hot and dense physics such as that of the very early universe. Several holographic models of quantum field theory in curved spacetime (QFTCS) have already been proposed in de Sitter (dS) spacetime and other cosmological backgrounds (please refer to the review \cite{Marolf:2013ioa} for details).
Here we would like to mention the work done in Ref.~\cite{Marolf:2010tg}, where an interesting holographic model was built to relate QFTs living on a dS boundary to bulk Einstein gravity. Employing this model, the thermalization process of QFTs in dS spacetime was studied holographically in Ref.~\cite{Fischler:2013fba}. Using the holographic entanglement entropy as a probe, the whole thermalization was found to be similar to the flat boundary case~\cite{Liu:2013iza,Liu:2013qca} and can be divided into a sequence of processes. Moreover, the saturation time was found to depend almost only on the entanglement sphere radius. When the radius is small, the saturation time is almost a linear increasing function of the radius, coinciding, as expected, with the result of the flat boundary case in this regime~\cite{Balasubramanian:2011ur}. However, as the radius grows to approach the cosmological horizon, the saturation time blows up logarithmically. Later, the study was extended to include the effect of higher-derivative terms, such as the Gauss-Bonnet correction~\cite{Zhang:2014cga}, and it was found that increasing the Gauss-Bonnet coupling shortens the saturation time. Please also refer to Refs.~\cite{Fischler:2014ama,Fischler:2014tka,Nguyen:2017ggc} for other related work on AdS/CFT with a dS boundary. Given the deep connection between the holographic entanglement entropy (HEE) and the holographic subregion CV (HSCV), it would be interesting to study the time evolution of the subregion complexity in the thermalization process of QFTs living on dS spacetime within the above model. It is natural to ask the following questions: How does the existence of the cosmological horizon affect the behaviour of the HSCV? Can the time evolution of the HSCV be used to describe the whole thermalization process? Are there any differences between the behaviours of the HSCV and the HEE? The main goal of this work is to address these questions. The work is organized as follows. In the next section, we give a brief review of the holographic model of QFTs in dS spacetime proposed in Ref.~\cite{Marolf:2010tg}, including the Vaidya-like solution. Then in Sec. III, we study in detail the time evolution of the HSCV in the thermalization process. Due to the complexity of the equations to be solved, we rely mainly on numerical calculations. The final section is devoted to discussions and a summary. \section{Gravity solutions with dS boundary} In this section, following Refs.~\cite{Fischler:2013fba,Zhang:2014cga}, we briefly review the bulk solutions in Einstein gravity with a foliation such that the boundary metric corresponds to a de Sitter spacetime. Three relevant solutions will be presented: the AdS vacuum, a static AdS black hole and its Vaidya-like cousin. \subsection{Action} We consider the $(d+1)$-dimensional Einstein-Hilbert action \begin{eqnarray}\label{Action} S = \frac{1}{16 \pi G_N} \int d^{d+1} x \sqrt{-g} \left(R - 2\Lambda\right), \end{eqnarray} where $G_N$ is the Newton constant and $\Lambda$ is the negative cosmological constant. The action gives the following equations of motion \begin{eqnarray}\label{EoMs} R_{\mu\nu} - \frac{1}{2} g_{\mu\nu} R + \Lambda g_{\mu\nu}=0.
\end{eqnarray} For asymptotically AdS spacetimes, the metric can be written in the Fefferman-Graham form~\cite{Fefferman:1985} \begin{eqnarray}\label{FGForm} ds^2 = \frac{L^2}{z^2} \left(g_{\mu\nu} (z,x)dx^\mu dx^\nu + dz^2\right), \end{eqnarray} where $L$ is the AdS radius, related to the cosmological constant by $\Lambda = -\frac{d(d-1)}{2 L^2}$. The dual quantum field theory lives at the conformal boundary $z=0$ with the metric $ds^2_\Sigma = g_{\mu\nu} (0,x)dx^\mu dx^\nu$. In this paper, we are interested in cases where the boundary metric $ds^2_\Sigma$ corresponds to a dS spacetime in certain coordinates. \subsection{AdS vacuum solution} The equations of motion (\ref{EoMs}) admit an AdS vacuum solution \begin{eqnarray}\label{Vacuum} &&ds^2 = \frac{L^2}{z^2} \left(-f(r) g(z) dt^2 + f^{-1}(r) g(z) dr^2 + r^2 g(z) d\Omega_{d-2}^2 + dz^2\right),\nonumber\\ &&f(r) =1- H^2 r^2 ,\qquad g(z) = \left(1- \frac{H^2 z^2}{4}\right)^2. \end{eqnarray} The conformal boundary is located at $z=0$ with the conformally reduced metric \begin{eqnarray}\label{dS} ds^2_\Sigma = -f(r) dt^2 + f^{-1} (r) dr^2 + r^2 d\Omega_{d-2}^2, \end{eqnarray} which is just dS spacetime in the static patch, with a cosmological horizon at $r=1/H$, where $H$ denotes the Hubble constant. The AdS vacuum solution is dual to the vacuum state of the dual QFT, which can be taken to be the well-known Bunch-Davies or Euclidean vacuum. For a geodesic observer sitting at $r=0$, the Bunch-Davies vacuum appears to have the temperature $T_{dS} = H/2\pi$ naturally associated with the cosmological horizon. \subsection{AdS black hole solution} The equations of motion (\ref{EoMs}) also admit an AdS black hole solution with the dS boundary (\ref{dS}) \begin{eqnarray}\label{BH} &&ds^2 = \frac{L^2}{z^2} \left(-h(z) dt^2 + \frac{dz^2}{h(z)}+ \frac{H^2 L^2}{f(r)^2}dr^2 + \frac{H^2 L^2}{f(r)} r^2 d\Omega_{d-2}^2\right),\nonumber\\ &&h(z) = 1 - \frac{z^2}{L^2} - \frac{m z^d}{L^{2(d-1)}}. \end{eqnarray} The event horizon $z_+$ is given by the largest positive root of $h(z)$. The mass parameter $m$ can be written in terms of the horizon as \begin{eqnarray} m = \frac{L^{2(d-1)}}{z_+^d} \left(1-\frac{z_+^2}{L^2}\right). \end{eqnarray} The Hawking temperature of the black hole is \begin{eqnarray}\label{Temperature} T_H = \frac{L^2 d -(d-2) z_+^2}{4\pi L^2 z_+}. \end{eqnarray} It should be noted that the zero temperature limit of the black hole solution (\ref{BH}) is not the solution with $m=0$, which is isometric to the AdS vacuum solution (\ref{Vacuum}). Actually, the zero temperature limit of the solution has the smallest horizon radius and the most ``negative'' mass, \begin{eqnarray} z_+^{\textrm ext} = L \sqrt{\frac{d}{d-2}},\qquad m^{\textrm ext} = - \frac{2 L^{2(d-1)}}{(d-2) (z_+^{\textrm ext})^d}. \end{eqnarray} This means that when the mass is negative, in the range $0> m > m^{\textrm ext}$, the black hole still has a regular horizon and reasonable thermodynamics. This is typical behavior for topological black holes. Holographically, the black hole solution is dual to the QFT on the static patch of dS spacetime at the temperature given by Eq.~(\ref{Temperature}). Note that this temperature does not have to be the same as the dS temperature $T_{dS}$; for more discussion of this point, please refer to Ref.~\cite{Fischler:2013fba}. \subsection{Vaidya-like solution} Our aim is to study the holographic thermalization process of the dual QFT under quench. This process can be described holographically by a simple Vaidya-like geometry in the bulk.
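Before writing down the Vaidya solution, we pause to record a quick symbolic check of the static black hole's thermodynamics just described. The following is a minimal sketch of our own (not part of the original analysis); it assumes Python with sympy and fixes $d=4$ for concreteness, though any fixed $d>2$ works the same way.
\begin{verbatim}
import sympy as sp

z, zp, L, m = sp.symbols('z z_+ L m', positive=True)
d = 4  # fixed dimension for concreteness

h = 1 - z**2/L**2 - m*z**d/L**(2*(d - 1))

# mass parameter in terms of the horizon z_+, from h(z_+) = 0
m_zp = sp.solve(h.subs(z, zp), m)[0]

# Hawking temperature T_H = -h'(z_+)/(4 pi)
TH = sp.simplify(-sp.diff(h, z).subs({z: zp, m: m_zp}) / (4*sp.pi))
print(TH)  # expect (d L^2 - (d-2) z_+^2)/(4 pi L^2 z_+) at d = 4

zp_ext = L*sp.sqrt(sp.Rational(d, d - 2))        # z_+^ext = L sqrt(d/(d-2))
m_ext = -2*L**(2*(d - 1))/((d - 2)*zp_ext**d)    # m^ext
assert sp.simplify(TH.subs(zp, zp_ext)) == 0     # temperature vanishes there
assert sp.simplify(m_zp.subs(zp, zp_ext) - m_ext) == 0
print("extremal horizon and mass verified")
\end{verbatim}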
Going to Eddington-Finkelstein coordinates and introducing a time-dependent mass parameter, from the black hole solution one obtains its Vaidya-like cousin \begin{eqnarray}\label{Vaidya} &&ds^2 =\frac{L^2}{z^2} \left(-h(v,z) dv^2 -2 dv dz+ \frac{H^2 L^2}{f(r)^2}dr^2 + \frac{H^2 L^2}{f(r)} r^2 d\Omega_{d-2}^2\right),\nonumber\\ &&h(v,z) = 1 - \frac{z^2}{L^2} - \frac{m(v) z^d}{L^{2(d-1)}}. \end{eqnarray} An external source must be introduced to maintain the equations of motion \begin{eqnarray} R_{\mu\nu} - \frac{1}{2} g_{\mu\nu} R + \Lambda g_{\mu\nu} = 8\pi G_N T_{\mu\nu}^{\textrm ext},\nonumber\\ 8\pi G_N T_{\mu\nu}^{\textrm ext} = \frac{(d-1)z^{d-1}}{2 L^{2(d-1)}} \frac{d m(v)}{dv} \delta_{\mu v} \delta_{\nu v}, \end{eqnarray} which implies that the infalling shell is made of null dust. We take the mass function to be of the form \begin{eqnarray} m(v) = \frac{M}{2} \left[1 + \tanh\left(\frac{v}{v_0}\right)\right], \end{eqnarray} where $M>0$ is the total mass of the dust shell and $v_0$ its thickness. The solution then describes the collapse of the null dust shell from the boundary into the bulk to form a black hole. On the QFT side, it corresponds to a sudden global injection of energy into the system, which then evolves from the Bunch-Davies vacuum to a thermal state with $T>T_{dS}$. \section{Holographic entanglement entropy and subregion complexity} In this section, by applying the holographic subregion CV (HSCV) proposal (\ref{subCV}), we study the time evolution of the holographic subregion complexity in the thermalization process, which is described holographically by the Vaidya-like geometry. On the boundary at time $\tilde{t}$, taking into account the symmetry of the Vaidya-like metric (\ref{Vaidya}), it is convenient to choose the subregion ${\cal A}$ to be a $(d-1)$-dimensional sphere centred at $\tilde{r} =0$ ($\tilde{r} \equiv H r$) with radius $\tilde{R}$. According to the conjecture (\ref{subCV}), the holographic subregion complexity of ${\cal A}$ is given by the extremal volume of the codimension-one hypersurface $\Gamma_{\cal A}$ enclosed by ${\cal A}$ and its corresponding HRT surface $\gamma_{\cal A}$. So, to study the holographic subregion complexity, we should first find the HRT surface $\gamma_{\cal A}$, whose area gives the holographic entanglement entropy. \subsection{Holographic entanglement entropy} Considering the symmetry, the HRT surface $\gamma_{\cal A}$ in the bulk can be parameterized by functions $z(\tilde{r})$ and $v(\tilde{r})$, with the boundary conditions \begin{eqnarray} z(\tilde{R}) = \epsilon, \qquad v(\tilde{R}) = \tilde{t}, \end{eqnarray} where $\epsilon$ is a UV cutoff constant. At the tip of the HRT surface, taking into account the symmetry, we have \begin{eqnarray} z'(0) = v'(0)=0, \qquad z(0) = z_\ast, \qquad v(0)=v_\ast, \end{eqnarray} where $(z_\ast, v_\ast)$ are two parameters labelling the location of the tip and the prime denotes the derivative with respect to $\tilde{r}$. The induced metric on $\gamma_{\cal A}$ is \begin{eqnarray} ds^2 = \frac{L^2}{z^2} \left(\frac{L^2}{(1-\tilde{r}^2)^2} - h(v,z) v'^2 - 2 v' z'\right) d\tilde{r}^2 + \frac{L^4}{z^2} \frac{\tilde{r}^2}{1-\tilde{r}^2} d\Omega_{d-2}^2.
\end{eqnarray} The holographic entanglement entropy functional is given by the area of the HRT surface \begin{eqnarray}\label{HEE} &&{\cal S} = \frac{L^{2d-3}}{4 G_N} \Omega_{d-2} \int_0^{\tilde{R}} \frac{d\tilde{r}}{z^{d-1}} Q P^{d-2},\\ &&Q \equiv \sqrt{\frac{L^2}{(1-\tilde{r}^2)^2} - h(v, z) v'^2 - 2 v' z'},\qquad P \equiv \frac{\tilde{r}}{\sqrt{1-\tilde{r}^2}}.\nonumber \end{eqnarray} To find the extremal value of this functional, we need to solve the two equations of motion, which are obtained by varying the functional and are rather complicated: \begin{eqnarray} &&h^2 \left(\tilde{r}^2-1\right)^2 v'^3 \left((d-1) \left(\tilde{r}^2-1\right) \tilde{r} z'+(d-2) z\right)\nonumber\\ &&+v' \left(\left(\tilde{r}^2-1\right) z' \left(\left(\tilde{r}^2-1\right) \left(2 z' \left((d-1) \left(\tilde{r}^2-1\right) \tilde{r} z'+(d-2) z\right)-\tilde{r} \left(\tilde{r}^2-1\right) z z''\right)-(d-1) h L^2 \tilde{r}\right)\right.\nonumber\\ &&\qquad\left.+h L^2 z \left(-d+2 \tilde{r}^2+2\right)\right)\nonumber\\ &&+3 h \left(\tilde{r}^2-1\right)^2 v'^2 z' \left((d-1) \left(\tilde{r}^2-1\right) \tilde{r} z'+(d-2) z\right)-(d-1) L^2 \tilde{r} \left(\tilde{r}^2-1\right) z'^2\nonumber\\ &&+L^2 z \left(-d+2 \tilde{r}^2+2\right) z'+v'' \left(h L^2 \tilde{r} \left(\tilde{r}^2-1\right) z+\tilde{r} \left(\tilde{r}^2-1\right)^3 z z'^2\right)+L^2 \left(\tilde{r}^2-1\right) \tilde{r} z z''=0,\nonumber\\ \end{eqnarray} \begin{eqnarray} &&(d-1) h^2 \tilde{r} \left(\tilde{r}^2-1\right)^4 v'^4-2 (d-1) h L^2 \tilde{r} \left(\tilde{r}^2-1\right)^2 v'^2\nonumber\\ &&+z' \left(3 (d-1) h \tilde{r} \left(\tilde{r}^2-1\right)^4 \left(v'\right)^3-3 (d-1) L^2 \tilde{r} \left(\tilde{r}^2-1\right)^2 v'\right.\nonumber\\ &&\qquad\left.+z \left(\tilde{r} \left(\tilde{r}^2-1\right)^4 v' v''-2 (d-2) \left(\tilde{r}^2-1\right)^3 v'^2\right)\right)\nonumber\\ &&+z \left((2-d) h \left(\tilde{r}^2-1\right)^3 v'^3+L^2 \left(\tilde{r}^2-1\right) \left(d-2 \tilde{r}^2-2\right) v'-L^2 \tilde{r} \left(\tilde{r}^2-1\right)^2 v''\right)\nonumber\\ &&+(d-1) L^4 \tilde{r}+2 (d-1) \tilde{r} \left(\tilde{r}^2-1\right)^4 v'^2 z'^2-\tilde{r} \left(\tilde{r}^2-1\right)^4 z v'^2 z''=0.\nonumber\\ \end{eqnarray} To avoid symbol confusion, we denote the solution of the above two equations by $(v_0(\tilde{r}), z_0(\tilde{r}))$, which parameterizes the HRT surface. The relation between $v$ and $z$ on the HRT surface, denoted $v_0(z)$, can be obtained by eliminating the parameter $\tilde{r}$ from the two functions. Generally, the HEE (\ref{HEE}) is UV divergent. To remove the divergence, and for convenience, we define a normalised HEE as \begin{eqnarray}\label{RHEE} \hat{\cal S} \equiv \frac{4 G_N ({\cal S}_{Vaidya} - {\cal S}_{AdS})}{V_{\cal A}}, \end{eqnarray} where ${\cal S}_{AdS}$ is the HEE for the same subregion ${\cal A}$ in the pure AdS geometry, and $V_{\cal A} \equiv L^{d-1} \Omega_{d-2} \int_0^{\tilde{R}} \frac{\tilde{r}^{d-2}}{(1-\tilde{r}^2)^{d/2}} d\tilde{r} = L^{d-1} \Omega_{d-2} \frac{\tilde{R}^{d-1}}{d-1} \ _2 F_1\left(\frac{d-1}{2},\frac{d}{2},\frac{d+1}{2}, \tilde{R}^2\right)$ is the volume of the subregion ${\cal A}$ \footnote{Actually, it should be noted that $V_{\cal A}$ is divergent as $\tilde{R}$ approaches the cosmological horizon to cover the whole boundary space.}. So, $\hat{\cal S}$ can be seen as a normalised entanglement entropy density.
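As a small numerical aside (a sketch of our own, not part of the original computation; it assumes Python with numpy and scipy), the closed form for $V_{\cal A}$ quoted above is easy to check against direct quadrature of its defining integral:
\begin{verbatim}
import numpy as np
from scipy.special import hyp2f1
from scipy.integrate import quad

def VA_closed(R, d, L=1.0):
    # L^{d-1} R^{d-1}/(d-1) * 2F1((d-1)/2, d/2; (d+1)/2; R^2);
    # the common prefactor Omega_{d-2} is dropped from both expressions
    return L**(d - 1)*R**(d - 1)/(d - 1) \
        * hyp2f1((d - 1)/2, d/2, (d + 1)/2, R**2)

def VA_quad(R, d, L=1.0):
    # direct quadrature of int_0^R r^{d-2} (1 - r^2)^{-d/2} dr
    val, _ = quad(lambda r: r**(d - 2)/(1 - r**2)**(d/2), 0.0, R)
    return L**(d - 1)*val

for d in (2, 3, 4):
    for R in (0.3, 0.6, 0.9):
        assert np.isclose(VA_closed(R, d), VA_quad(R, d))
print("closed form agrees with direct quadrature")
\end{verbatim}
For $d=2$ both expressions reduce to $\tanh^{-1}(\tilde R)$, which also makes the divergence noted in the footnote explicit as $\tilde R \to 1$.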
\subsection{Holographic subregion complexity} Due to the spherical symmetry, the codimension-one extremal hypersurface $\Gamma_{\cal A}$, enclosed by ${\cal A}$ and the HRT surface $\gamma_{\cal A}$, can be parameterized by a function $v = v(z, \tilde{r})$. The induced metric on $\Gamma_{\cal A}$ is \begin{eqnarray} ds^2 = &&\frac{L^2}{z^2} \left[-\left(h \frac{\partial v}{\partial z} + 2\right) \frac{\partial v}{\partial z} d z^2 - 2 \left(h \frac{\partial v}{\partial z} + 1\right) \frac{\partial v}{\partial \tilde{r}} dz d\tilde{r} \right. \nonumber\\ &&+ \left.\left(\frac{L^2}{(1-\tilde{r}^2)^2} - h \left(\frac{\partial v}{\partial \tilde{r}}\right)^2 \right) d\tilde{r}^2 + \frac{L^2 \tilde{r}^2}{1-\tilde{r}^2} d\Omega_{d-2}^2\right]. \end{eqnarray} According to the HSCV proposal (\ref{subCV}), the holographic subregion complexity functional of $\Gamma_{\cal A}$ is \begin{eqnarray} \label{HSC} &&{\cal C} = \frac{L^{2d-2} \Omega_{d-2}}{8\pi G_N L} \int_0^{z_\ast} dz \int_0^{\tilde{r}(z)} d\tilde{r}\, \frac{N P^{d-2}}{z^{d}} ,\\ &&N \equiv \sqrt{-\left[\frac{L^2}{(1-\tilde{r}^2)^2} - h v_{\tilde{r}}^2\right](h v_z + 2) v_z - (h v_z + 1)^2 v_{\tilde{r}}^2},\qquad P \equiv \frac{\tilde{r}}{\sqrt{1-\tilde{r}^2}},\nonumber \end{eqnarray} where $v_z \equiv \frac{\partial v}{\partial z}$ and $v_{\tilde{r}} \equiv \frac{\partial v}{\partial \tilde{r}}$. To extremize the HSCV functional, we need to solve the equation of motion obtained by varying the functional with respect to $v(z, \tilde{r})$, \begin{eqnarray} &&L^2 \tilde{r} v_z \left(L^2 \left(v_z \left(2 d h \left(h v_z+3\right)-z v_z \left(h_v+h h_z\right)-3 z h_z\right)+4 d\right)-2 \left(\tilde{r}^2-1\right)^2 z v_{rr} \left(h v_z+2\right)\right)\nonumber\\ &&+2 L^2 \left(\tilde{r}^2-1\right) z v_r \left(\left(d-2 \left(\tilde{r}^2+1\right)\right) v_z \left(h v_z+2\right)+2 \left(\tilde{r}^2-1\right) \tilde{r} v_{zr} \left(h v_z+1\right)\right)\nonumber\\ &&+2 L^2 \left(\tilde{r}^2-1\right)^2 \tilde{r} v_r^2 \left(v_z \left(d h-z h_z\right)+d-h z v_{zz}\right)+2 (d-2) \left(\tilde{r}^2-1\right)^3 z v_r^3+2 L^4 \tilde{r} z v_{zz}=0.\nonumber\\ \end{eqnarray} At first glance, it seems difficult to solve this equation. However, it is interesting to note that $v(z,\tilde{r})=v_0(z)$, where $v_0(z)$ is precisely the function giving the relation between $v$ and $z$ on the HRT surface, is itself the solution, which can be checked directly by plugging it into the equation. This simply means that $\Gamma_{\cal A}$ is formed by dragging the HRT surface $\gamma_{\cal A}$ along the $\tilde{r}$ direction. A similar feature has already been observed in the flat boundary case with a strip subregion in Ref.~\cite{Chen:2018mcc}. Like the HEE, the HSC is also UV divergent, so we can likewise define a normalised HSC density as \begin{eqnarray} \hat{\cal C} \equiv \frac{8\pi G_N L ({\cal C}_{Vaidya} - {\cal C}_{AdS}) }{ V_{\cal A}}, \end{eqnarray} where ${\cal C}_{AdS}$ is the HSCV for ${\cal A}$ in the pure AdS geometry. \subsection{Numerical results} Having set up the general framework for the HEE and the HSCV, we are now ready to study the time evolution of the HSC in holographic thermalization. Due to the complexity of the equations to be solved, we rely on numerical methods. For convenience, we set the AdS radius $L=1$. \subsubsection{General behaviours} In Fig.~1, we plot the time evolution of the normalised HSC density $\hat{\cal C}$ for various $\tilde{R}$ with fixed spacetime dimension.
From the figure, one can see that $\hat{\cal C}$ is not a monotonically increasing function of time. Rather, its evolution can be divided into four stages: after the quench it first grows quickly and almost linearly; then the growth slows down; after reaching a maximal value $\hat{\cal C}_{max}$ it starts to drop down fast; and shortly afterwards the drop stops and it finally saturates to a constant value $\hat{\cal C}_{sat}$. Moreover, it is interesting to note that the final saturation constant may be negative, which means that the final value of the complexity may be smaller than its initial value. These behaviours are very different from those in the CV or CA conjectures, where the complexity is always a monotonically increasing function of time \cite{Chapman:2018dem,Chapman:2018lsv}. Similar behaviours have been observed in flat boundary cases with a strip subregion \cite{Chen:2018mcc,Ling:2018xpc}, indicating the universality of these behaviours. \begin{figure}[!htbp] \centering \includegraphics[width=0.32\textwidth]{CAdS3.pdf} \includegraphics[width=0.33\textwidth]{CAdS4.pdf} \includegraphics[width=0.32\textwidth]{CAdS5.pdf} \caption{(colour online) Time evolution of the normalised HSCV density $\hat{\cal C}$ for various $\tilde{R}$ with fixed $d$. The mass is fixed as $M=1$.} \end{figure} \begin{figure}[!htbp] \centering \includegraphics[width=0.45\textwidth]{CAdS3R05M.pdf} \includegraphics[width=0.45\textwidth]{CAdS5R05M.pdf} \caption{(colour online) Time evolution of the normalised HSCV density $\hat{\cal C}$ for various $M$ with fixed $d$. The subregion size is fixed as $\tilde{R}=0.5$.} \end{figure} \begin{figure}[!htbp] \centering \includegraphics[width=0.45\textwidth]{CR05.pdf} \includegraphics[width=0.45\textwidth]{CR09.pdf} \caption{(colour online) Time evolution of the normalised HSCV density $\hat{\cal C}$ for various $d$ with fixed $\tilde{R}$. The mass is fixed as $M=1$.} \end{figure} From Figs.~1-3, one can see that the maximal value $\hat{\cal C}_{max}$ depends on the subregion size $\tilde{R}$, the spacetime dimension $(d+1)$ and the mass parameter $M$. Increasing $\tilde{R}$ or $M$ yields a bigger $\hat{\cal C}_{max}$, while increasing the dimension, on the contrary, lowers the maximal value. Moreover, one can also see that the final saturation constant $\hat{\cal C}_{sat}$ also depends on $(\tilde{R}, d, M)$, but in a more complicated way. \subsubsection{Linear growth stage} Let us focus on the first stage, in which $\hat{\cal C}$ grows almost linearly in time, i.e., \begin{eqnarray} \frac{d\hat{\cal C}}{d \tilde{t}} \sim A, \end{eqnarray} where $A$ is a proportionality constant which may depend on $(\tilde{R}, d, M)$. From Figs.~1-3, one can see that $A$ is nearly independent of $\tilde{R}$ and $d$, while it strongly depends on $M$. By fitting the numerical data, it is found that $A \approx 0.4 M$. From Fig.~1, one can also see that the larger the subregion size $\tilde{R}$ is, the longer the linear growth stage lasts. It is expected that as $\tilde{R}$ approaches the cosmological horizon $\tilde{r} =1$ to cover the entire boundary space, the linear growth stage will last forever, which agrees well with the CV conjecture. We can see this point more clearly in Fig.~4, where we take the $d=2$ case as an example. We will give more evidence on this point later. \begin{figure}[!htbp] \centering \includegraphics[width=0.5\textwidth]{CAdS3R.pdf} \caption{(colour online) Time evolution of the normalised HSCV density $\hat{\cal C}$ for various $\tilde{R}$ with fixed $d$.
The mass is fixed as $M=1$.} \end{figure} \subsubsection{Saturation time} In Refs.~\cite{Fischler:2013fba,Zangeneh:2017tub}, the saturation time is defined as the time at which the HEE approaches a constant. Similarly, for the complexity we can define a saturation time $\tilde{t}_{sat}$ as the time at which $\hat{\cal C}$ reaches its saturation constant $\hat{\cal C}_{sat}$. In Fig.~5, we plot the time evolution of the two observables, $\hat{\cal S}$ and $\hat{\cal C}$, to make a comparison. From the figure, we can see the well-known fact that $\hat{\cal S}$ is always a monotonically increasing function of time. Moreover, from Fig.~5 and Fig.~2, one can see that $\hat{\cal S}$ and $\hat{\cal C}$ reach their saturation values at almost the same time, and their saturation time $\tilde{t}_{sat}$ is nearly independent of $d$ and $M$. \begin{figure}[!htbp] \centering \includegraphics[width=0.45\textwidth]{EECR05.pdf} \includegraphics[width=0.45\textwidth]{EECR09.pdf} \caption{(colour online) Time evolution of the HEE and $\hat{\cal C}$ for fixed $d$. The mass parameter is fixed as $M=1$.} \end{figure} In Fig.~6, the saturation time $\tilde{t}_{sat}$ for $\hat{\cal C}$ is plotted as a function of the subregion size $\tilde{R}$. The numerical results can be well fitted by the function $\tilde{t}_{sat} = \tanh^{-1} (\tilde{R})$, as for the HEE \cite{Fischler:2013fba}. It is interesting to note that $\tilde{t}_{sat}$ is just the time light takes to travel from the origin $\tilde{r}=0$ to the boundary of the subregion $\tilde{r} = \tilde{R}$: a radial null ray of the boundary metric (\ref{dS}) obeys $H\, dt = d\tilde{r}/(1-\tilde{r}^2)$, so this travel time, in units of $1/H$, is $\int_0^{\tilde{R}} d\tilde{r}/(1-\tilde{r}^2) = \tanh^{-1}(\tilde{R})$.\footnote{We thank J.F. Pedraza for pointing this out.} From the figure and the fitting, one can easily see that $\tilde{t}_{sat}$ is linear in $\tilde{R}$ when $\tilde{R}$ is small. However, as $\tilde{R}$ approaches the cosmological horizon $\tilde{r} =1$, the saturation time diverges logarithmically, and thus the linear growth stage also lasts forever, as we already mentioned above. \begin{figure}[!htbp] \centering \includegraphics[width=0.5\textwidth]{Ctsat.pdf} \caption{(colour online) Saturation time $\tilde{t}_{sat}$ as a function of the subregion size $\tilde{R}$. Red dots correspond to numerical results while the solid blue curve represents the fitting function $\tilde{t}_{sat} = \tanh^{-1} (\tilde{R})$. The space dimension and mass parameter are fixed as $d=2$ and $M=1$, respectively.} \end{figure} \section{Summary and Discussions} In this work, we have considered a holographic model of the thermalization process for QFTs in dS spacetime. By applying the holographic subregion CV conjecture, we studied the time evolution of the subregion complexity under quench. The subregion ${\cal A}$ is chosen to be a sphere on the boundary time slice. The dual extremal codimension-one hypersurface $\Gamma_{{\cal A}}$ in the bulk, whose volume gives the complexity of ${\cal A}$, is found to be simply swept out by the HRT surface along the $\tilde{r}$-direction. The whole time evolution of the subregion complexity can be divided into four stages: it first increases almost linearly; then its growth slows down; after reaching a maximum it starts to drop down quickly; and shortly afterwards the drop stops and it finally saturates. This picture is similar to that found in the flat boundary case with a strip subregion~\cite{Chapman:2018lsv,Ling:2018xpc}. This implies that these time evolution behaviours of the subregion complexity are very general, independent of the subregion shape and of the cosmological horizon.
The linear growth rate in the first stage is found to depend almost only on the mass parameter. As the subregion size approaches the cosmological horizon, this stage is expected to last forever and, as for the HEE, the saturation time diverges logarithmically. The saturation time is found to depend almost only on the subregion size $\tilde{R}$, and their relation can be well fitted by the function $\tilde{t}_{sat} = \tanh^{-1} (\tilde{R})$. It is interesting to note that $\tilde{t}_{sat}$ is just the time light takes to travel from the origin $\tilde{r}=0$ to the boundary of the subregion $\tilde{r} = \tilde{R}$. The underlying physical meaning of this fact needs further investigation. In this work, we have only considered the HSCV conjecture. It would be interesting to check whether the general behaviours of the subregion complexity still hold for other conjectures, for example the holographic subregion CA. In Ref.~\cite{Zhang:2014cga}, using the HEE as a probe, we showed that including the Gauss-Bonnet correction shortens the saturation time. It would also be interesting to see how higher-derivative terms affect the time evolution of the subregion complexity. We leave these questions for future investigation. \section*{Acknowledgement} This work was supported by the National Natural Science Foundation of China (Nos. 11605155 and 11675144). We thank J.F. Pedraza for helpful comments on this manuscript.
\section{Introduction} Strongly correlated many-body fermion problems are an exciting area of research today \cite{Subedi2008,Campbell2009}. The main theoretical challenge in the field is to compute physical quantities of interest from first principles. Most methods that are currently used involve approximations that can be justified only in some regions of the parameter space. For problems where none of these approximations can be justified, the computational challenge is daunting. The Monte Carlo method is the only method which may be reliable in such cases. Unfortunately, this method also suffers from sign problems that arise due to the quantum nature of the underlying system \cite{Zaanen2008,MihailoCubrovic2009}. The final answer usually depends on delicate cancellations between many different quantum amplitudes, which the Monte Carlo approach is unable to accomplish efficiently. The physics of nuclear matter and of strongly correlated electronic systems are classic examples where the sign problem has hindered progress. Attempts to circumvent or solve the sign problem continue to be an important area of research and are also the focus of the current work. While a general solution to sign problems may not exist \cite{Troyer:2004ge}, solutions have been found in specific cases when problems are reformulated using new variables. For example, while bosonic quantum field theories with a non-zero chemical potential suffer from a sign problem in the conventional formulation \cite{aarts:131601}, in the world line approach these sign problems disappear \cite{Endres:2006xu,Chandrasekharan:2008gp}. Even in fermionic quantum field theories, where the origin of the sign problem is the Pauli principle, new solutions are beginning to appear. In the conventional approach, fermions are integrated out and the partition function is written in terms of bosonic degrees of freedom with a Boltzmann weight equal to the determinant of a matrix \cite{PhysRevB.34.7911}. If this determinant is non-negative then the sign problem is absent, and today such problems can be solved using the popular hybrid Monte Carlo algorithm \cite{Duane:1987de} and its variants \cite{Luscher:2010ae}. On the other hand, in many interesting cases the determinant can be negative or even complex. In such cases the conventional approach offers little hope for further progress. Recent research has shown that world line formulations offer an alternative approach. Instead of integrating out the fermions at the beginning, considering their world lines and then resumming over only a limited class of these configurations leads to new solutions of the sign problems \cite{Karsch:1988zx,PhysRevLett.83.3116}. The idea of using the world line approach in two dimensional lattice field theories, which usually do not suffer from sign problems, has a long history \cite{Salmhofer:1991cc,Gattringer:2007em,Wolff:2007ip,Wolff:2008xa,PhysRevD.80.071503}. Recently these developments have been unified under the framework called the ``fermion bag'' approach, which shows that new solutions to fermion sign problems can emerge in any dimension \cite{PhysRevD.82.025007}. Basically, one identifies independent dynamical regions over which the fermions naturally hop. These dynamical regions, called fermion bags, behave like non-local degrees of freedom. The weight of a fermion bag is just the path integral inside the fermion bag. When field theories are written in terms of fermion bags, sign problems may be absent since the weights of the fermion bags can be non-negative.
The fermion bag approach allows us to solve some problems that seemed difficult or impossible in the conventional approach \cite{PhysRevD.82.025007}, thanks to new algorithms \cite{Adams:2003cc}. Since it is a relatively new idea, not many examples have been studied, and more work is necessary to understand the potential of the method. In this work we construct the fermion bag approach to four dimensional lattice QED with one flavor of Wilson fermions at strong gauge couplings. In a sense this is an extension of previous work in two \cite{Salmhofer:1991cc} and three dimensions \cite{PhysRevD.80.071503}. Wilson fermions contain a parameter called the hopping parameter, referred to here as $\kappa$. It is well known that the determinant of the one flavor Wilson Dirac operator in the background of a strongly fluctuating gauge field configuration can be negative for some values of $\kappa$. Hence the conventional approach suffers from a sign problem in this region. Recently it was shown that the sign problem is absent in three dimensions when the partition function is written in terms of fermion bags \cite{PhysRevD.80.071503}. Is this true in four dimensions? The current work was motivated by this question. At strong gauge couplings fermions are confined into bosons, and fermion bags are regions where these bosons hop around. The weight of a bag is then a sum over all paths the fermions can explore within the bag while remaining confined. Since the fermions are always paired, there is a possibility that the bags will have non-negative weights. However, we show here that this is {\em not} the case. Fermion bags with negative weight do exist, suggesting that the underlying bosonic model remains frustrated. Although the fermion bag approach does not solve the sign problem, we can learn about the nature of the sign problem, and some practical solutions, from it. First, we can analytically prove that at $\kappa = \infty$ only fermion bags with non-negative weights contribute. Thus, the fermion bag approach is able to solve the sign problem at this special point, while the conventional approach has a very severe sign problem there. Second, we find that at small $\kappa$ most bags that contribute have a positive weight. Negative weight bags begin to enter the partition function only for $\kappa > \kappa_c$, as in the conventional approach. Third, we find that large bags which are topologically simple (to be explained later) also almost always have positive weight. Large complex bags, on the other hand, have both positive and negative weights with almost equal probability. This creates a severe sign problem if they are allowed in the partition function. However, to a good approximation they seem to cancel each other, and the partition function is dominated only by simple bags. If one assumes this reasoning to be correct, one obtains a new model that seems to capture at least some of the interesting physics of the original model. This method of identifying new models, by focusing on a class of fermion bags which capture important physics while being practically solvable, may turn out to be one of the main advantages of the fermion bag approach. Our paper is organized as follows. In section \ref{signdet} we briefly review the sign problem in strongly coupled lattice QED with one flavor of Wilson fermions in the conventional approach. In section \ref{fbagrules} we develop the fermion bag approach and construct diagrammatic rules to compute the weight of a fermion bag.
In section \ref{signfbag} we classify bags as simple and complex and compute the weights of some small bags. We give examples of bags with negative weights. We also find the distribution of simple and some complex bags and use it to justify that complex bags do not contribute to the partition function. In section \ref{posfbags} we construct a model without a sign problem that most likely contains the physics of parity breaking. We also give an analytic proof that the weights of fermion bags at $\kappa = \infty$ are non-negative. Section \ref{conc} contains our conclusions. \section{Sign Problem in the Determinant Approach} \label{signdet} Let us briefly review the sign problem in the conventional approach to strongly coupled lattice QED with one flavor of Wilson fermions. The partition function is given by \begin{equation} Z = \int [d\overline{\psi} \ d\psi] [d\phi] \exp(-S[\overline{\psi},\psi,\phi]) \end{equation} where the Wilson fermion action is given by \begin{equation} S = - \sum_{x,\alpha}\ \Big(\overline{\psi}_x \Gamma^\alpha_+ \mathrm{e}^{i\phi_{x,\alpha}}\psi_{x+\alpha} \ +\ \overline{\psi}_{x+\alpha} \Gamma^\alpha_- \mathrm{e}^{-i\phi_{x,\alpha}}\psi_{x}\Big) \ +\ \frac{1}{\kappa} \sum_x \overline{\psi}_x\psi_x \end{equation} with the definition $\Gamma^\alpha_\pm = (1 \pm \gamma_\alpha)/2$. We denote the four Hermitian Dirac matrices as $\gamma_\alpha, \alpha = 1,2,3,4$. We also define $\gamma_5 = -\gamma_1\gamma_2\gamma_3\gamma_4$ for later convenience. For explicit calculations we use the chiral representation in which \begin{equation} \gamma_\alpha = \left(\begin{array}{cc} 0 & \tau_\alpha \cr \tau_\alpha^\dagger & 0 \end{array}\right),\ \ \gamma_5 = \left(\begin{array}{cc} I & 0 \cr 0 & -I \end{array}\right). \end{equation} The four $2\times 2$ matrices $\tau_\alpha$ are defined by $(i\vec{\sigma},I)$ in the four vector notation, where $\vec{\sigma}$ are the three Pauli matrices. The lattice fields $\psi_x$ and $\overline{\psi}_x$ represent the two independent Grassmann valued four component Dirac spinors on each hyper-cubic lattice site $x$, and $\phi_{x,\alpha}$ is the compact $U(1)$ lattice gauge field. In this work we choose open boundary conditions for convenience. Further note that our definition of $\kappa$ is two times the conventional definition of $\kappa$ \cite{PhysRevD.25.1130}. The conventional approach is to integrate out the fermions and express the partition function as an integral over the gauge fields alone. In this approach the Boltzmann weight of each gauge field configuration is simply the fermion determinant of the Wilson Dirac operator $D_W[\phi]$ in the background of that gauge field. More explicitly, \begin{equation} Z = \int [d\phi]\ \mathrm{Det}\Big(D_W[\phi]\Big), \label{detZ} \end{equation} where \begin{equation} (D_W[\phi])_{x,y} = -\sum_\alpha\ \Big(\delta_{x+\alpha,y}\Gamma^\alpha_+ \mathrm{e}^{i\phi_{x,\alpha}} \ +\ \delta_{x,y+\alpha} \Gamma^\alpha_- \mathrm{e}^{-i\phi_{y,\alpha}}\Big) \ +\ \frac{1}{\kappa} \delta_{x,y}. \end{equation} The Wilson Dirac operator satisfies the relation $D_W^\dagger \gamma_5 = \gamma_5 D_W$, which can be used to show that the eigenvalues of $D_W$ are either real or come in complex conjugate pairs. For $\kappa < 0.25$ all real eigenvalues can be shown to be positive. However, for larger values of $\kappa$ there can in principle be an odd number of negative eigenvalues. Hence the determinant can be negative.
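To make this concrete, the sign of $\mathrm{Det}(D_W)$ is easy to explore numerically. The following is a minimal sketch of our own (not from the original paper; it assumes Python with numpy): it builds $D_W$ with the conventions above on a tiny $2^4$ lattice with open boundaries, for one random $U(1)$ gauge configuration, and extracts the sign of the determinant. The larger-volume scans reported below would loop this over $\kappa$ and over many configurations.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(1)
L, d, kappa = 2, 4, 1.0                 # tiny 2^4 lattice, open boundaries
sites = list(np.ndindex(*(L,)*d))
idx = {s: i for i, s in enumerate(sites)}

# chiral representation: tau_alpha = (i sigma_1, i sigma_2, i sigma_3, I)
sig = [np.array([[0, 1], [1, 0]], complex),
       np.array([[0, -1j], [1j, 0]], complex),
       np.array([[1, 0], [0, -1]], complex)]
I2, Z2 = np.eye(2, dtype=complex), np.zeros((2, 2), complex)
tau = [1j*sig[0], 1j*sig[1], 1j*sig[2], I2]
gamma = [np.block([[Z2, t], [t.conj().T, Z2]]) for t in tau]
I4 = np.eye(4, dtype=complex)
Gp = [(I4 + g)/2 for g in gamma]        # Gamma^alpha_+
Gm = [(I4 - g)/2 for g in gamma]        # Gamma^alpha_-

n = 4*len(sites)
D = np.zeros((n, n), complex)
for s in sites:
    x = idx[s]
    D[4*x:4*x+4, 4*x:4*x+4] = I4/kappa  # mass term (1/kappa) delta_{x,y}
    for a in range(d):                  # hopping terms
        if s[a] + 1 < L:                # open boundary conditions
            y = idx[s[:a] + (s[a]+1,) + s[a+1:]]
            phi = rng.uniform(0.0, 2*np.pi)   # random U(1) link variable
            D[4*x:4*x+4, 4*y:4*y+4] = -Gp[a]*np.exp(1j*phi)
            D[4*y:4*y+4, 4*x:4*x+4] = -Gm[a]*np.exp(-1j*phi)

sign, logabsdet = np.linalg.slogdet(D)
# gamma_5-hermiticity makes Det(D_W) real, so the sign is (numerically) +-1
print(np.sign(sign.real), logabsdet)
\end{verbatim}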
The negative determinant is necessary to violate the Vafa-Witten theorem \cite{Vafa:1983tf} and allow for the spontaneous breaking of the parity symmetry that is expected to occur for $\kappa > \kappa_c$ \cite{Aoki:1983qi}. One expects $\kappa_c \sim 0.5$ at strong couplings. \FIGURE[h]{ \includegraphics[width=0.6\textwidth]{sign_det} \caption{Average value of the sign of $\mathrm{Det}(D_W)$ on $4^4$ and $6^4$ lattices as a function of $\kappa$, obtained using $1000$ random gauge field configurations. The mild sign problem on the $4^4$ lattice for $0.5 < \kappa < 1.4$ is just a finite size effect. \label{fig:sign_det}} } In order to exhibit the sign problem we compute the sign of the determinant of $D_W$ on $4^4$ and $6^4$ lattices in the background of $1000$ random $U(1)$ gauge field configurations. We plot the average value of this sign as a function of $\kappa$ in figure~\ref{fig:sign_det}. As expected, the determinant approach encounters a severe sign problem when $\kappa > \kappa_c \sim 0.5$. The sign problem continues to be severe even at $\kappa = \infty$. In this work we construct the fermion bag approach to this problem. \section{Fermion Bag Approach} \label{fbagrules} At strong coupling we can first perform the integral over the gauge field on the link connecting $x$ and $x+\alpha$ exactly, to obtain an expansion of the partition function in terms of powers of Grassmann variables on each bond. We get \begin{equation} \int \frac{d\phi}{2\pi} \exp( \overline{\psi}_x \Gamma^\alpha_+ \mathrm{e}^{i\phi}\psi_{x+\alpha} \ +\ \overline{\psi}_{x+\alpha} \Gamma^\alpha_-\mathrm{e}^{-i\phi}\psi_{x}) = \sum_{k=0}^4\frac{(\overline{\psi}_x \Gamma^\alpha_+\psi_{x+\alpha} \overline{\psi}_{x+\alpha} \Gamma^\alpha_-\psi_{x})^k}{(k!)^2}. \label{link} \end{equation} We can also expand the exponential of the mass term on each site in terms of powers of Grassmann variables \begin{equation} \mathrm{e}^{-\overline{\psi}\psi/\kappa} = \sum_{n=0}^4 \Big(\frac{1}{\kappa}\Big)^n \frac{[- \overline{\psi}_x \psi_x]^n}{n!}. \end{equation} Collecting all the Grassmann variables on each site and performing the integration over them using the identity \begin{equation} \int [d\psi][d\overline{\psi}] (\overline{\psi})_{i_1}\psi_{j_1} (\overline{\psi})_{i_2}\psi_{j_2} (\overline{\psi})_{i_3}\psi_{j_3} (\overline{\psi})_{i_4}\psi_{j_4} = \varepsilon_{i_1 i_2 i_3 i_4} \varepsilon_{j_1 j_2 j_3 j_4} \end{equation} we can rewrite the partition function as a sum over bond variables $k_{x,\alpha}=0,1,2,3,4$ and site variables $n_x=0,1,2,3,4$. We refer to $n_x$ as the number of monomers on the site $x$ and to $k_{x,\alpha}$ as the number of dimers on the bond connecting the sites $x$ and $x+\hat{\alpha}$. Thus, in the monomer-dimer representation the partition function is given by \begin{equation} Z = \sum_{[n,k]} \ \prod_B(\omega_B[n,k]) \end{equation} where $\omega_B$ is the weight of a ``fermion bag'' $B$, which is simply a set of sites connected by bonds with $k_{x,\alpha} \neq 0$. Note that every site belongs to a unique bag and fermions only hop within the sites of the bag. The Boltzmann weight of a fermion bag, $\omega_B$, is the sum over a well defined set of fermion hoppings within the bag. Of course there is no need for $\omega_B$ to be positive. However, in this work we prove that $\omega_B$ is indeed positive when $\kappa = 0$ and $|\kappa|=\infty$.
We also find evidence that, to a good approximation, a certain class of fermion bags, which almost always have positive weights, dominates the partition function for a range of values of $\kappa$. Let us now construct the rules for calculating $\omega_B$. For this purpose we define \begin{equation} S_{+,\alpha} = \frac{1}{\sqrt{2}}\left(\begin{array}{cc} I & +\tau_\alpha \cr 0 & 0 \end{array}\right) \ \ \ \ S_{-,\alpha} = \frac{1}{\sqrt{2}}\left(\begin{array}{cc} I & -\tau_\alpha \cr 0 & 0 \end{array}\right) \end{equation} It is easy to show that $\Gamma^\alpha_+ = S^\dagger_{+,\alpha} S_{+,\alpha}$ and $\Gamma^\alpha_- = S^\dagger_{-,\alpha} S_{-,\alpha}$ for every $\alpha$. Further, \begin{equation} S_{-s_1,\alpha_1} S^\dagger_{s_2,\alpha_2} = \frac{1}{2} \left(\begin{array}{cc} (I - s_1 s_2\ \tau_{\alpha_1}\tau^{\dagger}_{\alpha_2}) & 0 \cr 0 & 0 \end{array}\right) = \left(\begin{array}{cc} R^{s_1s_2}_{\alpha_1,\alpha_2} & 0 \cr 0 & 0 \end{array}\right) \label{sprod} \end{equation} where $R^{s_1 s_2}_{\alpha_1,\alpha_2} \equiv (I-s_1s_2\tau_{\alpha_1}\tau^\dagger_{\alpha_2})/2$ is a $2\times 2$ matrix which can be parametrized as $F^{s_1 s_2}_{\alpha_1,\alpha_2} \exp(i\mathbf{n}_{\alpha_1,\alpha_2}^{s_1s_2}\cdot\vec{\sigma}(\pi/4))$. Table~\ref{tab1} lists the values of $F$ and $\mathbf{n}$ for the various possibilities. \TABLE[h]{ \begin{tabular}{c|c|c|c} \hline \hline $\alpha_1$ & $\alpha_2$ & $F^{s_1s_2}_{\alpha_1,\alpha_2}$ & $\mathbf{n}^{s_1s_2}_{\alpha_1,\alpha_2}$ \\ \hline $\alpha$ & $\alpha$ & $\frac{1}{2}(1-s_1s_2)$ & $0$ \\ 4 & $i=1,2,3$ & $\frac{1}{\sqrt{2}}$ & $n_k=s_1s_2\delta_{ik}$ \\ $i=1,2,3$ & 4 & $\frac{1}{\sqrt{2}}$ & $n_k=-s_1s_2\delta_{ik}$ \\ $i=1,2,3$ & $\substack{j=1,2,3\\j\neq i}$ & $\frac{1}{\sqrt{2}}$ & $n_k = -s_1s_2\epsilon_{ijk}$ \\ \hline \end{tabular} \caption{\label{tab1} Values of $F^{s_1s_2}_{\alpha_1,\alpha_2}$ and $\mathbf{n}^{s_1s_2}_{\alpha_1,\alpha_2}$ that enter the definition of $R^{s_1 s_2}_{\alpha_1,\alpha_2}$.} } Note that $R^+_{\alpha,\alpha} = 0$ and $R^-_{\alpha,\alpha} = I$, while for all other values of $\alpha_1$ and $\alpha_2$ the matrix $R^{s_1s_2}_{\alpha_1\alpha_2}$ is $(1/\sqrt{2})$ times a $(1/2,0)$ representation of an $O(4)$ rotation matrix. Using these relations we can write \begin{eqnarray} \overline{\psi}_x\Gamma^\alpha_+\psi_{x+\alpha}\overline{\psi}_{x+\alpha} \Gamma^\alpha_-\psi_x &=& i\ \Big[\sum_{k,l}\ (S_{-,\alpha})_{i k}\ (\psi_x)_k (\overline{\psi}_x)_{l} \ (S_{+,\alpha}^\dagger)_{lj}\Big] \nonumber \\ && i\ \Big[\sum_{m,n}\ (S_{+,\alpha})_{j m} \ (\psi_{x+\alpha})_{m} \ (\overline{\psi}_{x+\alpha})_{n} \ (S_{-,\alpha}^\dagger)_{ni} \Big] \nonumber \\ \end{eqnarray} The integration over the Grassmann variables then leads to specific rules that help to compute the weight $\omega_B$ of a bag. The weight of a bag turns out to be the trace of a product of Dirac tensors associated with each site. The explicit forms of these Dirac tensors are discussed below. But first it is useful to remember the constraint that every site in the bag must satisfy, \begin{equation} n_x + \sum_\alpha \ \left(k_{x,\alpha} + k_{x,-\alpha}\right) = 4. \end{equation} Here we have defined $k_{x, -\alpha} = k_{x-\alpha,\alpha}$. Based on the allowed values of $n_x$, each site in the bag can be one of seven types, as shown in Table~\ref{tab2}. We call these type-0, 1, 2, 3, 4a, 4b and 4c, depending on the number of dimers attached to the site. Note that there are three types of sites with four dimers attached to them.
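Before turning to the vertex weights, we note that the entries of Table~\ref{tab1} are straightforward to verify numerically. The following minimal sketch (our own illustration, assuming Python with numpy) constructs $R^{s_1 s_2}_{\alpha_1,\alpha_2}$ directly from the $\tau_\alpha$ and compares it with the parametrization $F \exp(i \mathbf{n}\cdot\vec{\sigma}\,\pi/4)$ for all $64$ choices of $(s_1, s_2, \alpha_1, \alpha_2)$:
\begin{verbatim}
import numpy as np
from itertools import product

sig = [np.array([[0, 1], [1, 0]], complex),
       np.array([[0, -1j], [1j, 0]], complex),
       np.array([[1, 0], [0, -1]], complex)]
I2 = np.eye(2, dtype=complex)
tau = [1j*sig[0], 1j*sig[1], 1j*sig[2], I2]   # tau_alpha = (i sigma, I)

eps = np.zeros((3, 3, 3))                     # Levi-Civita symbol
for i, j, k in [(0, 1, 2), (1, 2, 0), (2, 0, 1)]:
    eps[i, j, k], eps[j, i, k] = 1.0, -1.0

def table_Fn(s, a1, a2):
    """F and n from Table 1; s = s1*s2, directions a1, a2 in {0,..,3}."""
    if a1 == a2:
        return (1 - s)/2, np.zeros(3)
    if a1 == 3:                     # alpha_1 = 4, alpha_2 = i
        n = np.zeros(3); n[a2] = s
    elif a2 == 3:                   # alpha_1 = i, alpha_2 = 4
        n = np.zeros(3); n[a1] = -s
    else:                           # alpha_1 = i, alpha_2 = j, j != i
        n = -s*eps[a1, a2]
    return 1/np.sqrt(2), n

def F_exp(F, n):
    """F * exp(i n.sigma pi/4); n is zero or a unit vector in Table 1."""
    if not n.any():
        return F*I2
    ndots = sum(n[k]*sig[k] for k in range(3))
    return F*(np.cos(np.pi/4)*I2 + 1j*np.sin(np.pi/4)*ndots)

for s1, s2, a1, a2 in product((1, -1), (1, -1), range(4), range(4)):
    R = (I2 - s1*s2*tau[a1] @ tau[a2].conj().T)/2
    assert np.allclose(R, F_exp(*table_Fn(s1*s2, a1, a2)))
print("all 64 entries of Table 1 verified")
\end{verbatim}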
The three type-4 sites are distinguished because the rules to compute their weights are slightly different for each of them. We also use two types of diagrammatic representation for each vertex: a detailed diagram and a minimal diagram. The detailed diagram shows each fermion line and is helpful in the actual computation, while the minimal diagram just shows the dimers (or a single monomer when no dimers exist on the site). Given the minimal diagram, the detailed diagram can be uniquely obtained. \TABLE[h]{ \begin{tabular}{c|c|c||c|c|c} \hline Detailed & Minimal & Dirac & Detailed & Minimal & Dirac \\ Diagram & Diagram & Tensor & Diagram & Diagram & Tensor \\ \hline & & & & & \\ \begin{minipage}[c]{0.15\textwidth} \begin{center} \includegraphics[width=0.6\textwidth]{fig1.eps} \end{center} \end{minipage} & \begin{minipage}[c]{0.15\textwidth} \begin{center} \includegraphics[width=0.7\textwidth]{fig1a.eps} \end{center} \end{minipage} & $W_0$ & \begin{minipage}[c]{0.15\textwidth} \begin{center} \includegraphics[width=0.7\textwidth]{fig2.eps} \end{center} \end{minipage} & \begin{minipage}[c]{0.15\textwidth} \begin{center} \includegraphics[width=0.7\textwidth]{fig2a.eps} \end{center} \end{minipage} & $W_1$ \\ & & & & & \\ \hline & & & & & \\ \begin{minipage}[c]{0.15\textwidth} \begin{center} \includegraphics[width=0.7\textwidth]{fig3.eps} \end{center} \end{minipage} & \begin{minipage}[c]{0.15\textwidth} \begin{center} \includegraphics[width=0.7\textwidth]{fig3a.eps} \end{center} \end{minipage} & $W_2$ & \begin{minipage}[c]{0.15\textwidth} \begin{center} \includegraphics[width=0.7\textwidth]{fig4.eps} \end{center} \end{minipage} & \begin{minipage}[c]{0.15\textwidth} \begin{center} \includegraphics[width=0.7\textwidth]{fig4a.eps} \end{center} \end{minipage} & $W_3$ \\ & & & & & \\ \hline & & & & & \\ \begin{minipage}[c]{0.15\textwidth} \begin{center} \includegraphics[width=0.75\textwidth]{fig5.eps} \end{center} \end{minipage} & \begin{minipage}[c]{0.15\textwidth} \begin{center} \includegraphics[width=0.75\textwidth]{fig5a.eps} \end{center} \end{minipage} & $W^a_4$ & \begin{minipage}[c]{0.15\textwidth} \begin{center} \includegraphics[width=0.75\textwidth]{fig6.eps} \end{center} \end{minipage} & \begin{minipage}[c]{0.15\textwidth} \begin{center} \includegraphics[width=0.75\textwidth]{fig6a.eps} \end{center} \end{minipage} & $W^b_4$ \\ & & & & & \\ \hline & & & & & \\ \begin{minipage}[c]{0.15\textwidth} \begin{center} \includegraphics[width=0.75\textwidth]{fig7.eps} \end{center} \end{minipage} & \begin{minipage}[c]{0.1\textwidth} \begin{center} \includegraphics[width=0.7\textwidth]{fig7a.eps} \end{center} \end{minipage} & $W^c_4$ & & & \\ & & & & & \\ \hline \end{tabular} \caption{\label{tab2} Types of vertices in a fermion bag. The weights are given in Eqs.~(\ref{weights1})-(\ref{weights7}).} } The simplest site is the type-0 site, where $n_x = 4$. Such a site forms its own bag since it is not connected to any dimers. It has the weight \begin{subequations} \begin{equation} \omega_B = W_0 = \kappa^{-4}. \label{weights1} \end{equation} Next consider the type-1 site, with $n_x = 3$ and $k_{x,s_1\alpha_1}=1$, where $s_1=\pm 1$ and $\alpha_1$ is one of the four possible positive directions.
The contribution to the weight of the fermion bag due to such a site takes the form of a Dirac tensor $(W_1)^{s_1\alpha_1}_{i_1;j_1}$, given by \begin{eqnarray} (W_1)^{s_1\alpha_1}_{i_1;j_1} &=& i \frac{\kappa^{-3}}{3!} (S_{-s_1,\alpha_1})_{i_1 k_1} \varepsilon_{k_1 k_2 k_3 k_4} \varepsilon_{l_1 k_2 k_3 k_4} (S_{s_1,\alpha_1}^\dagger)_{l_1 j_1} \nonumber \\ &=& i \kappa^{-3} \Big(S_{-s_1,\alpha_1}S^\dagger_{s_1,\alpha_1}\Big)_{i_1j_1} = 0. \end{eqnarray} Hence a fermion bag cannot contain a type-1 vertex. Next consider the type-2 vertex, where $n_x=2$ and $k_{x,s_1\alpha_1} = k_{x,s_2\alpha_2}=1$. In this case the Dirac tensor $(W_2)^{s_1\alpha_1,s_2\alpha_2}_{i_1 i_2;j_1 j_2}$ that contributes to $\omega_B$ is given by \begin{eqnarray} (W_2)^{s_1\alpha_1,s_2\alpha_2}_{i_1,i_2;j_1,j_2} &=& - \frac{\kappa^{-2}}{2!} (S_{-s_1,\alpha_1})_{i_1 k_1} (S_{-s_2,\alpha_2})_{i_2 k_2} \varepsilon_{k_1 k_2 k_3 k_4} \varepsilon_{l_1 l_2 k_3 k_4} (S_{s_1,\alpha_1}^\dagger)_{l_1 j_1}(S_{s_2,\alpha_2}^\dagger)_{l_2 j_2} \nonumber \\ &=& \kappa^{-2} \Big(S_{-s_1,\alpha_1}S^\dagger_{s_2,\alpha_2}\Big)_{i_1j_2} \Big(S_{-s_2,\alpha_2}S^\dagger_{s_1,\alpha_1}\Big)_{i_2j_1} = \kappa^{-2} (R^{s_1s_2}_{\alpha_1\alpha_2})_{i_1j_2}(R^{s_2s_1}_{\alpha_2\alpha_1})_{i_2j_1} \nonumber \\ \end{eqnarray} Note that if $\alpha_1 = \alpha_2$ and $s_1 = s_2$ the tensor is zero. Next consider the type-3 site, with $n_x=1$ and $k_{x,s_1\alpha_1} = k_{x,s_2\alpha_2} = k_{x,s_3\alpha_3} = 1$. Then \begin{eqnarray} (W_3)^{s_1\alpha_1,s_2\alpha_2,s_3\alpha_3}_{i_1,i_2,i_3;j_1,j_2,j_3} &=& -i \kappa^{-1} (S_{-s_1,\alpha_1})_{i_1 k_1} (S_{-s_2,\alpha_2})_{i_2 k_2} (S_{-s_3,\alpha_3})_{i_3 k_3} \varepsilon_{k_1 k_2 k_3 k_4} \nonumber \\ && \ \ \ \ \varepsilon_{l_1 l_2 l_3 k_4} (S_{s_1,\alpha_1}^\dagger)_{l_1 j_1} (S_{s_2,\alpha_2}^\dagger)_{l_2 j_2}(S_{s_3,\alpha_3}^\dagger)_{l_3 j_3} \nonumber \\ &=& - i \kappa^{-1} \Bigg\{ \Big(S_{-s_1,\alpha_1}S^\dagger_{s_2,\alpha_2}\Big)_{i_1j_2} \Big(S_{-s_2,\alpha_2}S^\dagger_{s_3,\alpha_3}\Big)_{i_2j_3} \Big(S_{-s_3,\alpha_3}S^\dagger_{s_1,\alpha_1}\Big)_{i_3j_1} \nonumber \\ &&\ \ \ \ \ \ + \Big(S_{-s_1,\alpha_1}S^\dagger_{s_3,\alpha_3}\Big)_{i_1j_3} \Big(S_{-s_3,\alpha_3}S^\dagger_{s_2,\alpha_2}\Big)_{i_3j_2} \Big(S_{-s_2,\alpha_2}S^\dagger_{s_1,\alpha_1}\Big)_{i_2j_1}\Bigg\} \nonumber \\ &=& - i \kappa^{-1} \Big( (R^{s_1s_2}_{\alpha_1\alpha_2})_{i_1j_2}(R^{s_2s_3}_{\alpha_2\alpha_3})_{i_2j_3} (R^{s_3s_1}_{\alpha_3\alpha_1})_{i_3j_1} \nonumber \\ && \hskip1in + (R^{s_1s_3}_{\alpha_1\alpha_3})_{i_1j_3}(R^{s_3s_2}_{\alpha_3\alpha_2})_{i_3j_2} (R^{s_2s_1}_{\alpha_2\alpha_1})_{i_2j_1} \Big) \end{eqnarray} Note that again all the dimers must be in different directions, otherwise the site weight is zero. Finally we can have a type-4 site, with $n_x = 0$. In this case we have four dimers, given by $k_{x,s_1\alpha_1} = k_{x,s_2\alpha_2} = k_{x,s_3\alpha_3} = k_{x,s_4\alpha_4} = 1$.
Now there are three possibilities: Type-4a is the one in which all four dimers are in different directions, and then we get \begin{eqnarray} (W^a_4)^{s_1\alpha_1,s_2\alpha_2,s_3\alpha_3,s_4\alpha_4}_{i_1,i_2,i_3,i_4;j_1,j_2,j_3,j_4} &=& (S_{-s_1,\alpha_1})_{i_1 k_1} (S_{-s_2,\alpha_2})_{i_2 k_2} (S_{-s_3,\alpha_3})_{i_3 k_3} (S_{-s_4,\alpha_4})_{i_4 k_4} \varepsilon_{k_1 k_2 k_3 k_4} \nonumber \\ && \ \ \ \ \varepsilon_{l_1 l_2 l_3 l_4} (S_{s_1,\alpha_1}^\dagger)_{l_1 j_1} (S_{s_2,\alpha_2}^\dagger)_{l_2 j_2} (S_{s_3,\alpha_3}^\dagger)_{l_3 j_3} (S_{s_4,\alpha_4}^\dagger)_{l_4 j_4} \nonumber \\ &=& \Big( (R^{s_1s_2}_{\alpha_1\alpha_2})_{i_1j_2} (R^{s_2s_1}_{\alpha_2\alpha_1})_{i_2j_1} (R^{s_3s_4}_{\alpha_3\alpha_4})_{i_3j_4} (R^{s_4s_3}_{\alpha_4\alpha_3})_{i_4j_3} \nonumber \\ && - (R^{s_1s_2}_{\alpha_1\alpha_2})_{i_1j_2} (R^{s_2s_3}_{\alpha_2\alpha_3})_{i_2j_3} (R^{s_3s_4}_{\alpha_3\alpha_4})_{i_3j_4} (R^{s_4s_1}_{\alpha_4\alpha_1})_{i_4j_1} \nonumber \\ && -(R^{s_1s_2}_{\alpha_1\alpha_2})_{i_1j_2} (R^{s_2s_4}_{\alpha_2\alpha_4})_{i_2j_4} (R^{s_4s_3}_{\alpha_4\alpha_3})_{i_4j_3} (R^{s_3s_1}_{\alpha_3\alpha_1})_{i_3j_1} \nonumber \\ && + (R^{s_1s_3}_{\alpha_1\alpha_3})_{i_1j_3} (R^{s_3s_1}_{\alpha_3\alpha_1})_{i_3j_1} (R^{s_2s_4}_{\alpha_2\alpha_4})_{i_2j_4} (R^{s_4s_2}_{\alpha_4\alpha_2})_{i_4j_2} \nonumber \\ && - (R^{s_1s_3}_{\alpha_1\alpha_3})_{i_1j_3} (R^{s_3s_4}_{\alpha_3\alpha_4})_{i_3j_4} (R^{s_4s_2}_{\alpha_4\alpha_2})_{i_4j_2} (R^{s_2s_1}_{\alpha_2\alpha_1})_{i_2j_1} \nonumber \\ && -(R^{s_1s_3}_{\alpha_1\alpha_3})_{i_1j_3} (R^{s_3s_2}_{\alpha_3\alpha_2})_{i_3j_2} (R^{s_2s_4}_{\alpha_2\alpha_4})_{i_2j_4} (R^{s_4s_1}_{\alpha_4\alpha_1})_{i_4j_1} \nonumber \\ && + (R^{s_1s_4}_{\alpha_1\alpha_4})_{i_1j_4} (R^{s_4s_1}_{\alpha_4\alpha_1})_{i_4j_1} (R^{s_2s_3}_{\alpha_2\alpha_3})_{i_2j_3} (R^{s_3s_2}_{\alpha_3\alpha_2})_{i_3j_2} \nonumber \\ && - (R^{s_1s_4}_{\alpha_1\alpha_4})_{i_1j_4} (R^{s_4s_3}_{\alpha_4\alpha_3})_{i_4j_3} (R^{s_3s_2}_{\alpha_3\alpha_2})_{i_3j_2} (R^{s_2s_1}_{\alpha_2\alpha_1})_{i_2j_1} \nonumber \\ && -(R^{s_1s_4}_{\alpha_1\alpha_4})_{i_1j_4} (R^{s_4s_2}_{\alpha_4\alpha_2})_{i_4j_2} (R^{s_2s_3}_{\alpha_2\alpha_3})_{i_2j_3} (R^{s_3s_1}_{\alpha_3\alpha_1})_{i_3j_1} \Big) \label{weights5} \end{eqnarray} Type-4b is the site where $k_{x,s_1\alpha_1} = 2$ and $k_{x,s_3\alpha_3} = k_{x,s_4\alpha_4}=1$. The above expression then simplifies to \begin{eqnarray} (W^b_4)^{s_1\alpha_1,s_3\alpha_3,s_4\alpha_4}_{i_1,i_2,i_3,i_4;j_1,j_2,j_3,j_4} &=& \frac{1}{2}\Big( (R^{s_1s_3}_{\alpha_1\alpha_3})_{i_1j_3} (R^{s_1s_4}_{\alpha_1\alpha_4})_{i_2j_4} - (R^{s_1s_3}_{\alpha_1\alpha_3})_{i_2j_3} (R^{s_1s_4}_{\alpha_1\alpha_4})_{i_1j_4} \Big) \nonumber \\ && \Big( (R^{s_3s_1}_{\alpha_3\alpha_1})_{i_3j_1} (R^{s_4s_1}_{\alpha_4\alpha_1})_{i_4j_2} - (R^{s_3s_1}_{\alpha_3\alpha_1})_{i_3j_2} (R^{s_4s_1}_{\alpha_4\alpha_1})_{i_4j_1}\Big) \nonumber \\ &=& \frac{1}{2}(\tau_2)_{i_1 i_2}\ (\tau_2)_{j_1j_2} [(R^{s_1s_3}_{\alpha_1\alpha_3})^T (\tau_2) (R^{s_1s_4}_{\alpha_1\alpha_4})]_{j_3j_4} [(R^{s_3s_1}_{\alpha_3\alpha_1}) (\tau_2) (R^{s_4s_1}_{\alpha_4\alpha_1})^T]_{i_3i_4}. \nonumber \\ \label{weights6} \end{eqnarray} Here the extra factor of $1/2$ is due to the fact that there are two dimers on one of the bonds, which leads to an extra factor of $1/(2!)^2$ in Eq.~(\ref{link}). This extra factor can be divided equally between the two vertices that the double dimer connects. A type-4c site is obtained if $k_{x,s_1\alpha_1} = k_{x,s_3\alpha_3}=2$.
In this case we get \begin{eqnarray} (W^c_4)^{s_1\alpha_1,s_3\alpha_3}_{i_1,i_2,i_3,i_4;j_1,j_2,j_3,j_4} &=& \frac{1}{4} (F^{s_1 s_3}_{\alpha_1\alpha_3})^4(\tau_2)_{i_1 i_2}\ (\tau_2)_{j_1j_2}(\tau_2)_{i_3i_4}(\tau_2)_{j_3j_4} \label{weights7} \end{eqnarray} \end{subequations} Again, the extra factor of $1/4$ is due to the two double dimers, and the factor $F^{s_1 s_3}_{\alpha_1\alpha_3}$ comes from the $R$ terms. This completes the classification of all the vertices. \TABLE[t]{ \begin{tabular}{c|c|c|c} \hline Bag & $\omega_B$ & dimer representation & bag type\\ \hline & & &\\ \begin{minipage}[c]{0.2\textwidth} \begin{center} \includegraphics[width=0.8\textwidth]{fig8a.eps} \end{center} \end{minipage} & $\frac{1}{64}\kappa^{-4}$ & \begin{minipage}[c]{0.5\textwidth} $\begin{array}{ccc} (0,0,0,0;1,1) & (0,0,0,0;2,1) & (1,0,0,0;1,2) \cr (1,0,0,0;2,1) & (0,1,0,0;1,1) & (2,0,0,0;2,2) \cr (1,1,0,0;1,2) & & \end{array}$ \end{minipage} & $(0,0)$ \\ & & &\\ \hline & & &\\ \begin{minipage}[c]{0.2\textwidth} \begin{center} \includegraphics[width=0.65\textwidth]{fig8b.eps} \end{center} \end{minipage} & $\frac{21}{128}\kappa^{-12}$ & $\begin{array}{ccc} (0,0,0,0;1,1) & (0,0,0,0;2,1) & (1,0,0,0;1,1) \cr (1,0,0,0;2,1) & (0,1,0,0;1,1) & (0,1,0,0;2,1) \cr (2,0,0,0;2,1) & (1,1,0,0;1,1) & (1,1,0,0;2,1) \cr (0,2,0,0;1,1) & (2,1,0,0;2,1) & (1,2,0,0;1,1) \end{array}$ & $(4,1)$ \\ & & &\\ \hline & & &\\ \begin{minipage}[c]{0.2\textwidth} \begin{center} \includegraphics[width=0.65\textwidth]{fig8c.eps} \end{center} \end{minipage} & $\frac{3}{32}\kappa^{-8}$ & $\begin{array}{ccc} (0,0,0,0;1,1) & (0,0,0,0;2,1) & (0,0,0,0;3,1) \cr (1,0,0,0;2,1) & (1,0,0,0;3,1) & (0,1,0,0;1,1) \cr (0,1,0,0;3,1) & (0,0,1,0;1,1) & (0,0,1,0;2,1) \cr (1,1,0,0;3,1) & (1,0,1,0;2,1) & (0,1,1,0;1,1) \end{array}$ & $(8,0)$ \\ & & &\\ \hline & & &\\ \begin{minipage}[c]{0.2\textwidth} \begin{center} \includegraphics[width=0.8\textwidth]{fig8e.eps} \end{center} \end{minipage} & $\frac{5}{2048}\kappa^{-14}$ & $\begin{array}{ccc} (0,0,0,0;1,1) & (0,0,0,0;2,1) & (0,0,0,0;4,1) \cr (1,0,0,0;2,1) & (0,1,0,0;1,1) & (0,1,0,0;3,1) \cr (0,1,0,0;4,1) & (0,0,0,1;2,1) & (1,0,3,0;2,1) \cr (1,0,3,0;3,1) & (1,1,0,0;3,1) & (0,1,1,0;1,1) \cr (1,1,3,0;3,1) & & \end{array}$ & $(2,2)$ \\ & & &\\ \hline \end{tabular} \caption{\label{bagexamples} Some small fermion bags and their weights.} } \TABLE[t]{ \begin{tabular}{c} \hline Bag 1 \\ \hline $\begin{array}{cccccc} (0,0,0,0;1,1) & (0,0,0,0;2,1) & (1,0,0,0;2,1) & (1,0,0,0;3,2) & (0,1,0,0;1,1) & (1,0,1,0;4,1) \cr (0,0,1,0;1,1) & (0,0,1,0;4,1) & (1,0,1,1;4,1) & (0,0,1,1;2,1) & (0,1,1,1;3,1) & (0,0,1,2;1,1) \cr (0,0,1,2;3,1) & (0,0,2,1;2,1) & (0,0,2,1;4,1) & & &\cr \end{array}$ \\ \hline Bag 2 \\ \hline $\begin{array}{cccccc} (0,0,0,0;2,1) & (0,0,0,0;3,1) & (0,1,0,0;2,1) & (0,1,0,0;3,1) & (0,0,1,0;1,1) & (0,0,1,0;2,1)\cr (0,2,0,0;3,1) & (0,1,1,0;1,1) & (0,1,1,0;2,1) & (1,1,1,0;2,1) & (1,0,0,0;2,1) & (1,0,0,0;3,1)\cr (1,1,0,0;4,1) & (1,2,0,0;1,1) & (1,2,0,0;3,1) & (1,1,0,1;4,1) & (2,2,0,0;3,1) & (1,1,0,2;1,1)\cr (1,0,0,2;1,1) & (1,0,0,2;2,1) & (2,1,1,0;2,1) & (2,1,1,0;4,1) & (2,0,0,2;2,1) & (2,1,0,1;3,1)\cr (2,1,0,1;4,1) & & & & &\cr \end{array}$ \\ \hline Bag 3 \\ \hline $\begin{array}{cccccc} (1,0,1,0;4,1) & (0,0,1,0;1,1) & (0,0,1,0;4,1) & (1,0,1,1;4,1) & (0,0,1,1;2,1) & (0,1,1,1;3,1) \cr (0,0,1,2;1,1) & (0,0,1,2;3,1) & (0,0,2,1;2,1) & (0,0,2,1;4,1) & &\cr \end{array}$ \\ \hline \end{tabular} \caption{\label{negbags} Examples of bags with negative weight (Bag 1, Bag 2) and zero weight (Bag 3).
Bag 1 has a weight of $-1.220703125000 \times 10^{-4}$ and Bag 2 has a weight of $-1.430511474609\times 10^{-6}$ at $\kappa = 1$.} } \section{Sign Problem with Fermion Bags} \label{signfbag} Using the rules of the previous section, it is possible to compute the weights of fermion bags numerically. However, it is exponentially difficult to compute the weight when the bag contains many type-3 and type-4a sites. In order to make progress, we label the bags with the number of type-3 and type-4a vertices they contain. Thus, a bag of type $(n_3,n_4)$ contains $n_3$ type-3 sites and $n_4$ type-4a sites. The $(0,0)$ bags contain no type-3 or type-4a sites and will be referred to as {\em simple} bags. Bags in which either $n_3$ or $n_4$ is non-zero will be called {\em complex} bags. Below we will argue that this classification in terms of $(n_3,n_4)$ helps in understanding the origin of the sign problem. By now it should be clear that every fermion bag can be uniquely represented through the dimers of the bag. We represent these dimers using the notation $(x_1,x_2,x_3,x_4;\alpha,k)$, where $(x_1,x_2,x_3,x_4)$ are the four-dimensional coordinates of the site inside the bag from which $k$ dimers emerge in the positive direction $\alpha$. Some examples of fermion bags, their dimer representations, and their weights are given in table~\ref{bagexamples}. Although all the bags shown in the table have positive weights, we do also find bags with zero or negative weights. However, these bags are more complicated. Two examples of negative weight bags and one example of a zero weight bag are given in table~\ref{negbags}, along with their weights: Bag 1 is a simple bag which contains twelve type-2 and two type-4b vertices. Bag 2 is a complex bag made up of one type-4a, seventeen type-2, and four type-3 vertices. Bag 3 is a simple loop bag with zero weight. In order to understand the sign problem, we have generated fermion bags of a fixed type at random on an $L^4$ lattice using a worm algorithm. In the case of simple bags, we exclude single-site bags and plaquette bags for convenience. We then analyze the probability distribution of bags of a given type using the bag action density defined by \begin{equation} s_B = -\frac{1}{N_B} \log(|\omega_B|), \end{equation} where $N_B$ is the number of sites in the bag. In figure~\ref{fig9} we plot the distribution of $(0,0)$, $(2,0)$ and $(2,1)$ bags on the $2^4$ lattice with open boundary conditions as a function of $s_B$. For each type of bag we have generated $10^4$ bags. The left panel contains the distribution of positive weight bags, while the right panel shows that of the negative weight bags. We find that all simple bags (or $(0,0)$ type bags) turn out to have positive weights. On the other hand, complex bags ($(2,0)$ and $(2,1)$ type bags) do include bags with negative weights. In the $(2,0)$ case we find $6987$ positive and $3013$ negative weight bags, while in the $(2,1)$ case we find $5983$ positive and $4017$ negative weight bags. We have repeated a similar analysis on a $5^4$ lattice, where we have generated more than $3\times 10^4$ bags. These results are plotted in figure~\ref{fig10}. In this case, a small number of simple bags do have negative weights. But the positive and negative weight complex bag distributions are almost identical for both $(2,0)$ and $(2,1)$ bags, as can be seen from the figure.
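In practice, this analysis is simple to set up once the bag weights have been sampled. The following short sketch (in Python; the tooling, the variable names, and the synthetic stand-in data are our own illustrative assumptions, since the actual weights and sizes come from the worm algorithm) shows how sampled pairs $(\omega_B,N_B)$ are binned into the distributions of the action density $s_B$ for positive and negative weight bags:
\begin{verbatim}
import numpy as np

# Synthetic stand-ins: in an actual run, one (weight, size) pair per
# sampled fermion bag of a fixed type (n3, n4) comes from the worm algorithm.
rng = np.random.default_rng(0)
weights = rng.normal(size=10000) * np.exp(-rng.uniform(5.0, 15.0, size=10000))
sizes = rng.integers(10, 200, size=10000)  # N_B, number of sites in each bag

# Bag action density s_B = -log|omega_B| / N_B, split by the sign of the weight.
s_B = -np.log(np.abs(weights)) / sizes
pos, neg = s_B[weights > 0], s_B[weights < 0]

print("positive-weight bags:", pos.size, " negative-weight bags:", neg.size)
hist_pos, edges = np.histogram(pos, bins=50)
hist_neg, _ = np.histogram(neg, bins=edges)
# Nearly identical histograms signal a vanishing average sign, and hence
# a severe sign problem for this bag type.
\end{verbatim}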
Based on figures~\ref{fig9} and \ref{fig10}, we conclude that as $n_3$ and $n_4$ increase (in other words, as the bags become more complex) the distributions of positive and negative weight bags become more and more similar, and hence the sign problem becomes severe. On the other hand, simple bags are overwhelmingly of positive weight. Thus, we believe that, to a very good approximation, complex bags will cancel each other and the partition function is dominated by simple bags. Assuming this to be true, an interesting effective model of strongly coupled QED emerges in which the partition function only contains simple bags. This model may share some of the physics of the original model. On the other hand, it may be studied in its own right, since it will have a much milder sign problem. We postpone this study to a future publication. \FIGURE[h]{ \includegraphics[width=0.7\textwidth]{fig9.eps} \caption{\label{fig9} Distribution of positive weight bags (left panel) and negative weight bags (right panel) as a function of the action density $s_B$ on a $2^4$ hyper-cubic lattice with open boundary conditions. Three types of bags are shown: $(0,0)$-type (top), $(2,0)$-type (center), and $(2,1)$-type (bottom). See text for more details.} } \FIGURE[h]{ \includegraphics[width=0.7\textwidth]{fig10.eps} \caption{\label{fig10} Distribution of positive weight bags (left panel) and negative weight bags (right panel) as a function of the action density $s_B$ on a $5^4$ hyper-cubic lattice with open boundary conditions. Three types of bags are shown: $(0,0)$-type (top), $(2,0)$-type (center), and $(2,1)$-type (bottom). See text for more details.} } \section{Fermion Bags with Non-negative Weights} \label{posfbags} Can we construct a model of strongly coupled QED with Wilson fermions which is completely free of the sign problem in the fermion bag approach? In order to answer this question, we identify fermion bags with non-negative weights. We find that there are three classes of fermion bags for which we can prove analytically that the Boltzmann weights are always non-negative. The first is the {\em trivial} bag consisting of a single site, for which the weight is $W_0=\kappa^{-4}$. The second class consists of {\em loop} bags, which have a loop topology. These bags only contain sites of type-2. Since they are closed loops of confined fermion and anti-fermion world lines, their weight is the square of a trace of an $SU(2)$ matrix and hence real and non-negative. Interestingly, if we modify the original action to \begin{equation} S = - \sum_{x,\alpha}\ \Big( \overline{\psi}_x\psi_x\ \overline{\psi}_x \Gamma^\alpha_+ \mathrm{e}^{i\phi_{x,\alpha}}\psi_{x+\alpha} \ +\ \overline{\psi}_{x+\alpha}\psi_{x+\alpha}\ \overline{\psi}_{x+\alpha} \Gamma^\alpha_- \mathrm{e}^{-i\phi_{x,\alpha}}\psi_{x}\Big) \ +\ \frac{1}{\kappa} \sum_x \overline{\psi}_x\psi_x \label{simplemodel} \end{equation} it is easy to argue that only trivial bags and loop bags are produced, and the sign problem is completely solved. Since at small values of $\kappa$ type-3 and type-4 sites are naturally suppressed, this model may be a good approximation to the original model at small and intermediate values of $\kappa$. It most likely contains the parity-breaking phase transition of the original model \cite{Aoki:1983qi}. The heuristic argument is as follows: at small values of $\kappa$ the loops are small, while at large values the loops proliferate across the entire lattice and hence are naturally large.
It is easy to show that on a finite lattice $\langle \overline{\psi} \gamma_5 \psi\rangle = 0$ due to the parity symmetry. On the other hand, the two-point correlation function $\langle \overline{\psi}_x \gamma_5 \psi_x \ \overline{\psi}_y \gamma_5 \psi_y\rangle$ will be non-zero. This two-point correlation function receives contributions from configurations containing an open loop with end points at $x$ and $y$. Intuitively, at small values of $\kappa$, since the loops will be small, the correlation function decays exponentially to zero for large separations. On the other hand, at large values of $\kappa$, when the loops are large, the correlation function will decay as a power law, thus signaling the spontaneous breaking of parity. This phase transition can be studied efficiently using a worm-type algorithm. We postpone this study to future work. It would be interesting to understand the nature of this transition. Note that the above model would suffer from a severe sign problem in the conventional approach, since auxiliary fields, in addition to the usual gauge field, would have to be introduced to convert the action into a fermion bilinear. This is yet another example of a model which is solvable in the fermion bag approach but not in the conventional approach. The third class of bags with non-negative weights consists only of type-4 sites. These bags arise naturally when $\kappa=\infty$. The proof that the Boltzmann weight is non-negative is a bit more involved and relies on the bi-partite nature of the lattice. Let us briefly sketch the proof here. From Eq.~(\ref{weights5}) we know that the contribution to the weight from each site within the bag comes from the tensor \begin{eqnarray} (W^a_4)^{s_1\alpha_1,s_2\alpha_2,s_3\alpha_3,s_4\alpha_4}_{i_1,i_2,i_3,i_4;j_1,j_2,j_3,j_4} &=& (S_{-s_1,\alpha_1})_{i_1 k_1} (S_{-s_2,\alpha_2})_{i_2 k_2} (S_{-s_3,\alpha_3})_{i_3 k_3} (S_{-s_4,\alpha_4})_{i_4 k_4} \varepsilon_{k_1 k_2 k_3 k_4} \nonumber \\ && \ \ \ \ \Big[(S_{s_1,\alpha_1})_{j_1 l_1} (S_{s_2,\alpha_2})_{j_2 l_2} (S_{s_3,\alpha_3})_{j_3 l_3} (S_{s_4,\alpha_4})_{j_4 l_4}\varepsilon_{l_1 l_2 l_3 l_4} \Big]^* \end{eqnarray} If we define \begin{equation} T^{s_1\alpha_1,s_2\alpha_2,s_3\alpha_3,s_4\alpha_4}_{i_1,i_2,i_3,i_4} = (S_{-s_1,\alpha_1})_{i_1 k_1} (S_{-s_2,\alpha_2})_{i_2 k_2} (S_{-s_3,\alpha_3})_{i_3 k_3} (S_{-s_4,\alpha_4})_{i_4 k_4} \varepsilon_{k_1 k_2 k_3 k_4} \end{equation} we see that \begin{equation} (W^a_4)^{s_1\alpha_1,s_2\alpha_2,s_3\alpha_3,s_4\alpha_4}_{i_1,i_2,i_3,i_4;j_1,j_2,j_3,j_4} = T^{s_1\alpha_1,s_2\alpha_2,s_3\alpha_3,s_4\alpha_4}_{i_1,i_2,i_3,i_4} \ \Big(T^{s_1\alpha_1,s_2\alpha_2,s_3\alpha_3,s_4\alpha_4}_{j_1,j_2,j_3,j_4}\Big)^* \end{equation} This structure of $W_4$ shows that, on a bi-partite lattice, the Boltzmann weight of the bag will be the square of the magnitude of a complex number obtained by tracing over the product of the $T$'s on the sites of the bag. Although the above argument proves that all fermion bags consisting only of type-4 vertices have non-negative weights, as far as we know a practical Monte Carlo algorithm seems impossible, since it is exponentially difficult to compute the Boltzmann weight of large fermion bags. In a sense, the sign problem may still be hidden in this computational difficulty. \section{Conclusions} \label{conc} In this work we have constructed the fermion bag approach to strongly coupled lattice QED with one flavor of Wilson fermions in four dimensions. We found that at $\kappa=\infty$ all fermion bags have non-negative weights.
On the other hand, fermion bags with negative weights do exist and create a severe sign problem at intermediate values of $\kappa$. By classifying bags as simple and complex, we could show that complex bags almost cancel each other in the partition function, while simple bags are almost always positive and hence give the dominant contribution to the partition function. This suggests a simple solution to the sign problem: we simply approximate the partition function as the sum of contributions from simple bags. This approximate solution to the sign problem is similar in spirit to the meron cluster approach \cite{PhysRevLett.83.3116}. There, special clusters called meron clusters appeared with equal weight but opposite sign in the partition function. Allowing meron clusters in the partition function would create a very severe sign problem. However, since they come with exactly equal weights and opposite signs, they cancel exactly, and thus the sign problem was solved completely. In the current situation, the cancellation of complex bags is only approximate and suggestive. So, while we cannot justify rigorously that it is correct to ignore them in the partition function, we believe it to be correct. In the future it would be interesting to study the partition function generated by simple bags alone. Finally, we have also constructed a simpler model (Eq.~(\ref{simplemodel})) that contains only trivial and loop bags and does not suffer from the sign problem. Simple arguments suggest that this model contains two phases: a parity-symmetric phase at small values of $\kappa$, and a phase where parity is spontaneously broken at large values of $\kappa$. It would be interesting to study the nature of this phase transition in three dimensions. \acknowledgments We would like to thank Urs Wenger for discussions about the solution to the sign problem with Wilson fermions in two and three dimensions. This work was supported in part by the Department of Energy grant DE-FG02-05ER41368. S.C. wishes to acknowledge the Aspen Center for Physics, where part of this work was accomplished, for support. A. Li would like to thank Ming Gong for useful discussions.
\section*{Introduction} Gr\"obner bases are the most widely applicable computational tool available in the context of commutative algebra and algebraic geometry. However, they are also an important theoretical tool, as they can be used to establish properties such as primality, normality, and Cohen-Macaulayness, and to give formulas for the height of an ideal. Liaison theory, or linkage, on the other hand, is mostly regarded as a classification tool. In fact, much effort has been devoted in recent years to the study of liaison classes, in particular to deciding which ideals belong to the G-liaison class of a complete intersection. However, a clear understanding of the liaison pattern of an ideal often allows us to recursively compute invariants such as its Hilbert function and graded Betti numbers. In this paper we introduce liaison-theoretic methods as a tool in the theory of Gr\"obner bases. More precisely, we deduce that a certain set ${\mathcal G}$ of polynomials is a Gr\"obner basis for the ideal $I$ that it generates by understanding the linkage pattern of the ideal $I$ and of the monomial ideal generated by the initial terms of the elements of ${\mathcal G}$. Concretely, we apply this reasoning to ideals generated by minors or pfaffians, whose liaison pattern we understand. Ideals generated by minors or pfaffians have been studied extensively by both commutative algebraists and algebraic geometers. The study of determinantal rings and varieties is an active area of research per se, but it has also been instrumental in the development of new techniques, which have become part of the commonly used tools in commutative algebra. Ideals generated by minors and pfaffians are studied in invariant theory and combinatorics, and are relevant in algebraic geometry. In fact, many classical varieties such as the Veronese and the Segre varieties are cut out by minors or pfaffians. Degeneracy loci of morphisms between direct sums of line bundles over projective space have a determinantal description, as do Schubert varieties, Fitting schemes, and some toric varieties. Ideals generated by minors or pfaffians are often investigated by commutative algebraists by means of Gr\"obner basis techniques (see, e.g., \cite{br88}, \cite{st90}, \cite{he92}). Using such tools, many families of ideals generated by minors or pfaffians have been shown to enjoy properties such as primality, normality, and Cohen-Macaulayness. A different approach to the study of ideals generated by minors is via flags and degeneracy loci, and was initiated in~\cite{fu91}. Such an approach allows one to establish that these ideals are normal and Cohen-Macaulay, and that they have rational singularities (see, e.g., \cite{kn05}, \cite{kn09}, \cite{kn10}). In recent years, much progress has been made towards understanding determinantal ideals and varieties also from the point of view of liaison theory. A central open question in liaison theory asks whether every arithmetically Cohen-Macaulay scheme is glicci, i.e., whether it belongs to the G-liaison class of a complete intersection. In~\cite{ga53}, \cite{kl01}, \cite{go07a}, \cite{go07b}, \cite{go08}, \cite{de09}, and~\cite{go10}, several families of ideals generated by minors or pfaffians are shown to be glicci. More precisely, it is shown that they can be obtained from an ideal generated by linear forms via a sequence of ascending elementary G-biliaisons. Moreover, each of the elementary G-biliaisons takes place between two ideals which both belong to the family in question.
Since the linkage steps are described very explicitly, in theory it is possible to use the linkage results to recursively compute invariants or establish properties of these ideals. This has been done, e.g., in~\cite{co09} using the linkage results from~\cite{go07b}. Rather than contributing to the theory of liaison (see for instance~\cite{mi98}), in this paper we give a new method of using liaison as a tool. More precisely, we consider large families of ideals generated by minors or pfaffians in a matrix or a ladder, namely pfaffian ideals of ladders, mixed ladder determinantal ideals, and symmetric mixed ladder determinantal ideals. Combining the liaison results from~\cite{go07b}, \cite{de09}, and~\cite{go10} with a Hilbert function computation, we are able to prove that the pfaffians or the minors are a reduced Gr\"obner basis for the ideal that they generate, with respect to any anti-diagonal or diagonal term-order. Moreover, we show that the simplicial complex corresponding to the initial ideal of any ideal in the families that we consider is vertex decomposable. Vertex decomposability is a strong property, which in particular implies shellability of the complex and Cohen-Macaulayness of the associated initial ideal. In Section~\ref{mainlemma}, we prove a lemma which will be central to the subsequent arguments (Lemma~\ref{inid}). The lemma gives a sufficient criterion for a monomial ideal to be the initial ideal of a given ideal $J$. Both the ideal $J$ and the ``candidate'' initial ideal are constructed via Basic Double Linkage or elementary biliaison. In Section~\ref{ex_sect} we use Lemma~\ref{inid} to prove that the maximal minors of a matrix of indeterminates are a Gr\"obner basis of the ideal that they generate with respect to any diagonal (or anti-diagonal) term-order. Although the result is well-known, we wish to illustrate our method by showing how it applies to this simple example. In Sections~\ref{pfaff_sect}, \ref{symm_sect}, and \ref{ladd_sect}, we apply our technique to ideals generated by: pfaffians of mixed size in a ladder of a skew-symmetric matrix of indeterminates, minors of mixed size in a ladder of a symmetric matrix of indeterminates, and minors of mixed size in a ladder of a matrix of indeterminates. We prove that the natural generators of these ideals are a Gr\"obner basis with respect to any diagonal (in the case of minors of a symmetric matrix) or anti-diagonal (in the case of pfaffians or minors in a generic matrix) term-order. We also prove that the corresponding initial ideals are Cohen-Macaulay, and that the associated simplicial complexes are vertex decomposable. While Sections~\ref{mainlemma} and~\ref{ex_sect} are meant to be read first, Sections~\ref{pfaff_sect}, \ref{symm_sect}, and~\ref{ladd_sect} can be read independently of each other, and in any order. In the appendix, we indicate how our liaison-theoretic approach can be made self-contained, in order to also derive all the classical Gr\"obner basis results about ladder determinantal ideals of one-sided ladders. \section{Linkage and Gr\"obner bases}\label{mainlemma} Let $K$ be an arbitrary field and let $R$ be a standard graded polynomial ring in finitely many indeterminates over $K$. In this section, we give a sufficient condition for a set ${\mathcal G}$ of polynomials to be a Gr\"obner basis, with respect to a given term-order, for the ideal $I$ that it generates. Our criterion depends on the linkage pattern of the ideal $I$ and of the monomial ideal generated by the initial terms of the elements of ${\mathcal G}$.
In order to use geometric language, we need to consider the algebraic closure of the field $K$. Notice however that restricting the field of coefficients does not affect the property of being a Gr\"obner basis, as long as the polynomials are defined over the smaller field. More precisely, if $I=(g_1,\ldots,g_s)\subset K[x_0,\ldots,x_n]$ and $g_1,\ldots,g_s$ have coefficients in a subfield $k$ of $K$, then $g_1,\ldots,g_s$ are a Gr\"obner basis of $I\subseteq K[x_0,\ldots,x_n]$ if and only if they are a Gr\"obner basis of $I\cap k[x_0,\ldots,x_n]$. In this sense, the property of being a Gr\"obner basis does not depend on the field of definition. Therefore, while proving that a set ${\mathcal G}$ of polynomials is a Gr\"obner basis with respect to a given term-order for the ideal $I$ that it generates, we may pass to the algebraic closure without loss of generality. We shall therefore assume from now on that the field $K$ is algebraically closed. \begin{notat} Fix a term-order $\sigma$. Let $I\subset R$ be an ideal and let ${\mathcal G}$ be a set of polynomials in $R$. We denote by $in(I)$ the initial ideal of $I$ with respect to $\sigma$, and by $in({\mathcal G})$ the set of initial terms of the elements of ${\mathcal G}$ with respect to $\sigma$. \end{notat} For the convenience of the reader, we recall the definition of diagonal and anti-diagonal term-order. \begin{defn} Let $X$ be a matrix (resp. a skew-symmetric or a symmetric matrix) of indeterminates. Let $\sigma$ be a term-order on the set of terms of $K[X]$. The term-order $\sigma$ is {\bf diagonal} if the leading term with respect to $\sigma$ of the determinant of a submatrix of $X$ is the product of the indeterminates on its diagonal. It is {\bf anti-diagonal} if the leading term with respect to $\sigma$ of the determinant of a submatrix of $X$ is the product of the indeterminates on its anti-diagonal. \end{defn} \begin{notat} Let $A$ be a finitely generated, graded $R$-module. We denote by $H_A(d)$ the {\bf Hilbert function} of $A$ in degree $d$, i.e., the dimension of $A_d$ as a $K$-vector space. \end{notat} In this paper we study large families of ideals generated by minors or pfaffians in a matrix or a ladder, where the size of the minors or pfaffians is allowed to vary in different regions of the matrix or the ladder. We study their initial ideals with respect to a diagonal or anti-diagonal term-order, and we prove that the associated simplicial complexes are vertex decomposable. In particular, the initial ideals in question are Cohen-Macaulay. For the convenience of the reader, we now recall the main definitions. \begin{defn} A {\bf simplicial complex} $\Delta$ on $n+1$ vertices is a collection of subsets of $\{0,\ldots,n\}$ such that for any $F\in\Delta$, if $G\subseteq F$, then $G\in\Delta$. An $F\in\Delta$ is called a {\bf face} of $\Delta$. The dimension of a face $F$ is $\dim F=|F|-1$, and the {\bf dimension} of the complex is $$\dim\Delta=\max\{\dim F\mid F\in\Delta\}.$$ The complex $\Delta=2^{\{0,\ldots,n\}}$ is called a {\bf simplex}. The {\bf vertices} of $\Delta$ are the subsets of $\{0,\ldots,n\}$ of cardinality one. The faces of $\Delta$ which are maximal with respect to inclusion are called {\bf facets}. A complex is {\bf pure} if all its facets have dimension equal to the dimension of the complex.
\end{defn} \begin{notat} To each face $F\in\Delta$ we associate the following two simplicial subcomplexes of $\Delta$: the {\bf link} of $F$ $$\lk_F(\Delta)=\{G\in\Delta\mid F\cup G\in\Delta, F\cap G=\emptyset\}$$ and the {\bf deletion} $$\Delta-F=\{G\in\Delta\mid F\cap G=\emptyset\}.$$ If $F=\{k\}$ is a vertex, we denote the link of $F$ and the deletion by $\lk_k(\Delta)$ and $\Delta-k$, respectively. \end{notat} \begin{defn} A simplicial complex $\Delta$ is {\bf vertex decomposable} if it is a simplex, or it is the empty set, or there exists a vertex $k$ such that $\lk_k(\Delta)$ and $\Delta-k$ are both pure and vertex decomposable, and $$\dim\Delta=\dim(\Delta-k)=\dim\lk_k(\Delta)+1.$$ \end{defn} In this article, we show that the simplicial complexes associated to the initial ideals of the families of ideals that we consider are vertex decomposable. \begin{defn} The {\bf Stanley-Reisner ideal} associated to a complex $\Delta$ on $n+1$ vertices is the squarefree monomial ideal $$I_{\Delta}=(x_{i_1}\cdots x_{i_s}\mid \{i_1,\ldots,i_s\}\not\in\Delta)\subset K[x_0,\ldots,x_n].$$ Conversely, to every squarefree monomial ideal $I\subseteq K[x_0,\ldots,x_n]$ one can associate the unique simplicial complex $\Delta(I)$ on $n+1$ vertices, such that $I_{\Delta(I)}=I.$ \end{defn} \begin{rmk} \begin{enumerate} \item A vertex $\{k\}\in\Delta$ is called a {\bf cone point} if for every face $F\in\Delta$, $F\cup\{k\}\in\Delta$. If $\Delta$ has a cone point $\{k\}$, then $$I_{\Delta}=I_{\Delta-k}K[x_0,\ldots,x_n].$$ Moreover, $\Delta$ is vertex decomposable if and only if $\Delta-k$ is. Therefore, {\em we will not distinguish between a complex and a cone over it}. \item Notice that, if $\Delta$ is a complex on $n+1$ vertices, then both $\lk_k(\Delta)$ and $\Delta-k$ are complexes on $n$ (or fewer) vertices. However, since we do not distinguish between a complex and a cone over it, we will regard them as complexes on $n+1$ vertices. \item On the side of the associated Stanley-Reisner ideals, let $I$ and $J$ be squarefree monomial ideals such that the generators of $I$ involve fewer variables than the generators of $J$. Then we may associate to $I$ and $J$ simplicial complexes $\Delta(I)$ and $\Delta(J)$ on the same number of vertices. This amounts to regarding $I$ and $J$ as ideals in the same polynomial ring. \end{enumerate} \end{rmk} We now recall some definitions from liaison theory that will be fundamental throughout the paper. \begin{defn}\label{g0} Let $J\subset R$ be a homogeneous, saturated ideal. We say that $J$ is {\bf Gorenstein in codimension $\leq$ c} if the localization $(R/J)_P$ is a Gorenstein ring for any prime ideal $P$ of $R/J$ of height smaller than or equal to $c$. We often say that $J$ is $G_c$. We call {\bf generically Gorenstein}, or $G_0$, an ideal $J$ which is Gorenstein in codimension 0. \end{defn} \begin{defn}\label{bdl} Let $A\subset B\subset R$ be homogeneous ideals such that $\hgt A=\hgt B- 1$ and $R/A$ is Cohen-Macaulay. Let $f\in R_d$ be a homogeneous element of degree $d$ such that $A:f=A$. The ideal $C:=A+fB$ is called a {\bf Basic Double Link} of degree $d$ of $B$ on $A$. If moreover $A$ is $G_0$ and $B$ is unmixed, then $C$ is a {\bf Basic Double G-Link} of $B$ on $A$. \end{defn} \begin{defn}\label{bilid} Let $I,J\subset R$ be homogeneous, saturated, unmixed ideals, such that $\hgt(I)=\hgt(J)=c$.
We say that $J$ is obtained by an {\bf elementary biliaison} of height $\ell$ from $I$ if there exists a Cohen-Macaulay ideal $N$ in $R$ of height $c-1$ such that $N\subseteq I\cap J$ and $J/N\cong [I/N](-\ell)$ as $R/N$-modules. If in addition the ideal $N$ is $G_0$, then $J$ is obtained from $I$ via an {\bf elementary G-biliaison}. If $\ell>0$ we have an {\bf ascending} elementary G-biliaison. \end{defn} We refer to~\cite{mi98}, \cite{kl01}, and~\cite{ha07} for the basic properties of Basic Double Linkage and elementary biliaison. Notice in particular that, if $C$ is a Basic Double Link of $B$ on $A$, then it is not known in general whether $B$ and $C$ belong to the same G-liaison class. On the other hand, if $C$ is a Basic Double G-Link of $B$ on $A$, then $B$ and $C$ can be G-linked in two steps. We are now ready to state a sufficient condition for a set of polynomials to be a Gr\"obner basis (with respect to a given term-order) for the ideal that they generate. \begin{lemma}\label{inid} Let $I,J,N\subset R$ be homogeneous, saturated, unmixed ideals, such that $N\subseteq I\cap J$ and $\hgt(I)=\hgt(J)=\hgt(N)+1$. Assume that $N$ is Cohen-Macaulay. Let $A,B,C\subset R$ be monomial ideals such that $C\subseteq in(J)$, $A=in(N)$ and $B=in(I)$ with respect to some term-order $\sigma$. Assume that $A$ is Cohen-Macaulay and that $\hgt(B)=\hgt(A)+1$. Suppose that $J$ is obtained from $I$ via an elementary biliaison of height $\ell$ on $N$, and that $C$ is a Basic Double Link of degree $\ell$ of $B$ on $A$. Then $C=in(J)$. \end{lemma} \begin{proof} Since $C\subseteq in(J)$, it suffices to show that $H_C(d)=H_J(d)$ for all $d\in{\mathbb Z}$. Since $C$ is a Basic Double Link of degree $\ell$ of $B$ on $A$, we have $H_C(d)=H_B(d-\ell)+H_A(d)-H_A(d-\ell)$; similarly, the elementary biliaison gives $H_J(d)=H_I(d-\ell)+H_N(d)-H_N(d-\ell)$. Moreover, an ideal and its initial ideal have the same Hilbert function, so $H_A=H_N$ and $H_B=H_I$. Therefore $$H_C(d)=H_B(d-\ell)+H_A(d)-H_A(d-\ell)=H_I(d-\ell)+H_N(d)-H_N(d-\ell)=H_J(d).$$ \end{proof} \begin{rmks} \begin{enumerate} \item Notice that, if $J$ is obtained from $I$ via an elementary biliaison on $N$, we do not know in general whether they belong to the same G-liaison class. However, if in addition $N$ is generically Gorenstein, then $J$ is obtained from $I$ via an elementary G-biliaison on $N$. In particular, it can be obtained from $I$ via two Gorenstein links on $N$. \item If in addition $A$ is generically Gorenstein, then $C$ is a Basic Double G-Link of $B$ on $A$. In particular, it can be obtained from $B$ via two Gorenstein links on $A$. \item The concepts of Basic Double Linkage and biliaison are interchangeable in the statement of Lemma~\ref{inid}. More precisely, Basic Double Linkage is a special case of biliaison. Moreover, it can be shown that if $J$ is obtained from $I$ via an elementary biliaison of height $\ell$ on $N$, then there exist an ideal $H$ and a $d\in{\mathbb Z}$ such that $H$ is a Basic Double Link of degree $d+\ell$ of $I$ on $N$ and also a Basic Double Link of degree $d$ of $J$ on $N$. Then it is easy to verify that the lemma holds under the weaker assumption that $C$ is obtained from $B$ via an elementary biliaison of height $\ell$ on $A$. \end{enumerate} \end{rmks} In the next section, we use the lemma to prove that the maximal minors of a matrix of indeterminates are a Gr\"obner basis of the ideal that they generate with respect to any diagonal term-order. Although the result is well-known, we wish to illustrate our method by showing how it applies to this simple example.
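Before moving on, we note that the conclusion of this simple example can also be verified by direct computation. The following sketch, which assumes the availability of the Python library sympy and uses variable names of our own choosing, checks that the three maximal minors of a generic $2\times 3$ matrix are already a reduced Gr\"obner basis with respect to a diagonal (here lex) term-order:
\begin{verbatim}
from sympy import symbols, groebner

x11, x12, x13, x21, x22, x23 = symbols('x11 x12 x13 x21 x22 x23')

# The three maximal minors of the generic 2x3 matrix (x_ij).
minors = [x11*x22 - x12*x21,   # columns {1,2}
          x11*x23 - x13*x21,   # columns {1,3}
          x12*x23 - x13*x22]   # columns {2,3}

# lex with x11 > x12 > x13 > x21 > x22 > x23 selects the diagonal term
# of every minor, i.e., it is a diagonal term-order in this example.
G = groebner(minors, x11, x12, x13, x21, x22, x23, order='lex')
assert set(G.exprs) == set(minors)  # the minors are a reduced GB
\end{verbatim}
Here the initial ideal $(x_{11}x_{22},\, x_{11}x_{23},\, x_{12}x_{23})$ is the Basic Double Link $A+x_{23}B$ of $B=(x_{11},x_{12})$ on $A=(x_{11}x_{22})$, matching the inductive step carried out in the proof below.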
In Sections~\ref{pfaff_sect}, \ref{symm_sect}, and \ref{ladd_sect}, we apply the lemma to ideals generated by: pfaffians of mixed size in a ladder of a skew-symmetric matrix of indeterminates, minors of mixed size in a ladder of a symmetric matrix of indeterminates, and minors of mixed size in a one-sided ladder of a matrix of indeterminates. We prove that the natural generators of these ideals are a Gr\"obner basis with respect to any diagonal (in the case of minors of a symmetric matrix) or anti-diagonal (in the case of pfaffians or minors in a generic matrix) term-order. We also prove that their initial ideals are squarefree and that they can be obtained from an ideal generated by indeterminates via a sequence of Basic Double G-links of degree 1, which only involve squarefree monomial ideals. In particular, they are glicci (i.e., they can be obtained from a complete intersection via a sequence of G-links), hence they are Cohen-Macaulay. Moreover, we prove that the simplicial complexes associated to their initial ideals are vertex decomposable. Notice that, if we knew a priori that the simplicial complexes associated to the initial ideals are vertex decomposable, then we could deduce that the corresponding squarefree monomial ideals are glicci by the following result of Nagel and R\"omer. However, we cannot directly apply their result in our situation, since we need to first produce the Basic Double G-links on the squarefree monomial ideals, in order to deduce that the associated simplicial complexes are vertex decomposable. \begin{thm}[\cite{na08}, Theorem~3.3] Let $\Delta$ be a simplicial complex on $n+1$ vertices and let $I_{\Delta}\subset K[x_0,\ldots,x_n]$ be the Stanley-Reisner ideal of $\Delta$. Assume that $\Delta$ is (weakly) vertex decomposable. Then $I_{\Delta}$ can be obtained from an ideal generated by indeterminates via a sequence of Basic Double G-links of degree 1, which only involve squarefree monomial ideals. \end{thm} Notice that, although the statement above is slightly stronger than Theorem~3.3 in~\cite{na08}, it follows from the proof given there. \section{A simple example: ideals of maximal minors}\label{ex_sect} This section is meant to illustrate the idea and the method of our proof on a simple example. We prove that the maximal minors of a matrix of indeterminates are a Gr\"obner basis of the ideal that they generate with respect to any diagonal term-order. Notice that for the case of minors in a matrix, diagonal term-orders are the same as anti-diagonal ones, up to reversing the order of the columns of the matrix. \begin{thm}\label{maxmin} Let $X=(x_{ij})$ be an $m\times n$ matrix whose entries are distinct indeterminates, $m\leq n$. Let $K[X]=K[x_{ij} \;|\; 1\leq i\leq m,\; 1\leq j\leq n ]$ be the polynomial ring associated to $X$. Let ${\mathcal G}_m(X)$ be the set of maximal minors of $X$ and let $I_m(X)\subset K[X]$ be the ideal generated by ${\mathcal G}_m(X)$. Let $\sigma$ be a diagonal term-order and let $in(I_m(X))$ be the initial ideal of $I_m(X)$ with respect to $\sigma$. Then ${\mathcal G}_m(X)$ is a reduced Gr\"obner basis of $I_m(X)$ with respect to $\sigma$ and $in(I_m(X))$ is a squarefree, Cohen-Macaulay ideal. Moreover, the simplicial complex $\Delta_X$ associated to $in(I_m(X))$ is vertex decomposable. \end{thm} \begin{proof} We proceed by induction on $mn=|X|$. If $|X|=1$, then the ideal $I_1(X)$ is generated by one indeterminate.
Hence ${\mathcal G}_1(X)$ is a reduced Gr\"obner basis of $I_1(X)$ with respect to any term-order, and $I_1(X)=in(I_1(X))$ is generated by indeterminates. The associated simplicial complex $\Delta_X$ is the empty set, hence it is vertex decomposable. In order to prove the thesis for a matrix with $m$ rows and $n$ columns, we may assume by induction that it holds for any matrix with fewer than $mn$ entries. If $m=1$, then ${\mathcal G}_1(X)$ consists of indeterminates, hence it is a reduced Gr\"obner basis of $I_1(X)$ with respect to any term-order. Moreover, $I_1(X)=in(I_1(X))$ is generated by indeterminates. The associated simplicial complex $\Delta_X$ is the empty set, hence it is vertex decomposable. If $m\geq 2$, let $C\subseteq in(I_m(X))$ be the ideal generated by the initial terms of ${\mathcal G}_m(X)$. We claim that $C=in(I_m(X))$. In fact, let $Z$ be the $m\times(n-1)$ matrix obtained from $X$ by deleting the last column, and let $Y$ be the $(m-1)\times(n-1)$ matrix obtained from $Z$ by deleting the last row. Let $A=(in({\mathcal G}_m(Z)))$ be the ideal generated by the initial terms of the elements of ${\mathcal G}_m(Z)$, and let $B=(in({\mathcal G}_{m-1}(Y)))$ be the ideal generated by the initial terms of the elements of ${\mathcal G}_{m-1}(Y)$. By the induction hypothesis, ${\mathcal G}_m(Z)$ is a Gr\"obner basis for $I_m(Z)$ and ${\mathcal G}_{m-1}(Y)$ is a Gr\"obner basis for $I_{m-1}(Y)$. In other words, $A=in(I_m(Z))$ and $B=in(I_{m-1}(Y))$. Notice that $$in({\mathcal G}_m(X))=in({\mathcal G}_m(Z))\cup x_{mn}in({\mathcal G}_{m-1}(Y))$$ where $x_{mn}{\mathcal G}$ denotes the set of products $x_{mn}g$ for $g\in{\mathcal G}$. Since $x_{mn}$ does not appear in $in({\mathcal G}_m(Z))$, we have $A:x_{mn}=A$. Therefore, $$A+x_{mn}B=C\subseteq in(I_m(X))$$ and $C$ is a Basic Double G-Link of degree 1 of $B$ on $A$. $A$ and $B$ are squarefree and glicci by induction hypothesis, therefore $C$ is squarefree and glicci. It follows from~\cite[Theorem 3.6]{kl01} that $I_m(X)$ is obtained from $I_{m-1}(Y)$ via an elementary G-biliaison of height 1 on $I_m(Z)$. By Lemma~\ref{inid} the maximal minors of $X$ are a Gr\"obner basis of $I_m(X)$ with respect to $\sigma$, and $C=in(I_m(X))$. Since the initial term of a maximal minor divides no other term of any maximal minor, the Gr\"obner basis is moreover reduced. Finally, let $\Delta_Z,\Delta_Y,\Delta_X$ be the simplicial complexes associated to $A,B,C$, respectively. Since $A+x_{mn}B=C$, $$\Delta_Z=\Delta_X-(m,n)\;\;\;\mbox{and}\;\;\; \Delta_Y=\lk_{(m,n)}(\Delta_X).$$ Since $\Delta_Y$ and $\Delta_Z$ are vertex decomposable by induction hypothesis, so is $\Delta_X$. \end{proof} \begin{rmk} Theorem~\ref{maxmin} gives in particular a new proof of the fact that the maximal minors of a generic matrix are a Gr\"obner basis for the ideal that they generate, with respect to a diagonal or anti-diagonal term-order. This is a classical result. While previous proofs have a combinatorial flavor, our proof is completely algebraic, and independent of all the previous Gr\"obner basis results. \end{rmk} \section{Pfaffian ideals of ladders}\label{pfaff_sect} In this section, we study Gr\"obner bases, with respect to an anti-diagonal term-order, of ideals generated by pfaffians. We always consider pfaffians in a skew-symmetric matrix whose entries are distinct indeterminates. Pfaffians of size $2t$ in a skew-symmetric matrix are known to be a Gr\"obner basis for the ideal that they generate, as shown by Herzog and Trung in~\cite{he92} and independently by Kurano in~\cite{ku91}. In~\cite{de98}, De Negri generalized this result to pfaffians of size $2t$ in a symmetric ladder.
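For later reference, we recall the smallest non-trivial case, which is easily checked by hand: the $4$-pfaffian of the skew-symmetric submatrix of $X$ with row and column indices $i<j<k<l$ is $$x_{ij}x_{kl}-x_{ik}x_{jl}+x_{il}x_{jk},$$ and its initial term with respect to any anti-diagonal term-order is the anti-diagonal product $x_{il}x_{jk}$.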
In this section, we extend these results to pfaffians of mixed size in a symmetric ladder. In other words, we consider ideals generated by pfaffians, whose size is allowed to vary in different regions of the ladder (see Definition~\ref{idealpf}). In Theorem~\ref{gbpf} we prove that the pfaffians are a reduced Gr\"obner basis, with respect to any anti-diagonal term-order, for the ideal that they generate, and that the corresponding initial ideal is Cohen-Macaulay and squarefree. Moreover, the associated simplicial complex is vertex decomposable. The proof that we give is not a generalization of the earlier ones. Instead, we use our liaison-theoretic approach and the linkage results of~\cite{de09}. In the recent paper~\cite{de09u}, De Negri and Sbarra consider a different family of ideals generated by pfaffians of mixed size in a skew-symmetric matrix, namely cogenerated ideals. They are able to show that the pfaffians are almost never a Gr\"obner basis of the ideal that they generate with respect to an anti-diagonal term-order. The family of ideals that they study and the family that we consider in this article have a small overlap, which consists of ideals of pfaffians of size $2t$ in a symmetric ladder, and of ideals generated by $2t$-pfaffians in the first $m$ rows and columns of the matrix and $(2t+2)$-pfaffians in the whole matrix. For the ideals in the overlap, the pfaffians are a Gr\"obner basis for the ideal that they generate. This follows from Theorem~2.8 of~\cite{de09u}, as well as from our Theorem~\ref{gbpf}. The results in~\cite{de09u} and those in this article were obtained independently, following completely different approaches. Nevertheless, we feel that they complement each other nicely, giving a more complete picture of the behavior of Gr\"obner bases of pfaffian ideals and of their intrinsic complexity. Pfaffian ideals of ladders were introduced and studied by De Negri and the first author in~\cite{de09}. From the point of view of liaison theory, this is a very natural family to consider. In this section, we prove that pfaffians of mixed size in a ladder of a skew-symmetric matrix are a Gr\"obner basis with respect to any anti-diagonal term-order for the ideal that they generate. We start by introducing the relevant definitions and notation. Let $X=(x_{ij})$ be an $n\times n$ skew-symmetric matrix of indeterminates. In other words, the entries $x_{ij}$ with $i<j$ are indeterminates, $x_{ij}=-x_{ji}$ for $i>j$, and $x_{ii}=0$ for all $i=1,\ldots,n$. Let $K[X]=K[x_{ij} \;|\; 1\leq i<j\leq n ]$ be the polynomial ring associated to $X$. \begin{defn}\label{laddpf} A {\bf symmetric ladder} $\mathcal L$ of $X$ is a subset of the set ${\mathcal X}=\{(i,j)\in{\mathbb N}^2 \;|\; 1\le i,j\le n\}$ with the following properties: \begin{enumerate} \item if $(i,j)\in {\mathcal L}$ then $(j,i)\in {\mathcal L}$, \item if $i<h,j>k$ and $(i,j),(h,k)\in\mathcal L$, then also $(i,k),(i,h),(h,j),(j,k)\in\mathcal L$. \end{enumerate} We do not assume that the ladder ${\mathcal L}$ is connected, nor that $X$ is the smallest skew-symmetric matrix having ${\mathcal L}$ as a ladder. It is easy to see that any symmetric ladder can be decomposed as a union of square subladders \begin{equation}\label{decomppf} {\mathcal L}={\mathcal X}_1\cup\ldots\cup {\mathcal X}_s \end{equation} where $${\mathcal X}_k=\{(i,j)\;|\; a_k\le i,j \le b_k\},$$ for some integers $1\leq a_1\leq\ldots\leq a_s\leq n$ and $1\leq b_1\leq\ldots\leq b_s\leq n$ such that $a_k<b_k$ for all $k$.
We say that ${\mathcal L}$ is the ladder with {\bf upper corners} $(a_1,b_1),\ldots,(a_s,b_s)$, and that ${\mathcal X}_k$ is the square subladder of ${\mathcal L}$ with upper outside corner $(a_k,b_k)$. See Figure~\ref{decomp_fig}. \begin{figure}[h!] \input{figs/pfaff1a.pstex_t} \caption{An example of a symmetric ladder with its decomposition as a union of skew-symmetric matrices and the corresponding upper corners.} \label{decomp_fig} \end{figure} We allow two upper corners to have the same first or second coordinate; however, we assume that no two upper corners coincide. We assume moreover that all upper corners belong to the border of the ladder, i.e., $(a_k-1,b_k+1)\not\in{\mathcal L}$. Notice that with these conventions a ladder does not have a unique decomposition of the form (\ref{decomppf}); in other words, different sets of upper corners $(a_1,b_1),\ldots,(a_s,b_s)$ may define the same symmetric ladder. However, any set of upper corners determines a unique symmetric ladder via (\ref{decomppf}). Moreover, the upper corners of $\mathcal L$ determine the submatrices ${\mathcal X}_k$. We assume that every symmetric ladder comes with its set of upper corners and the corresponding decomposition as a union of square submatrices as in (\ref{decomppf}). Notice that the set of upper corners as given in our definition contains all the usual upper outside corners, and may contain some of the usual upper inside corners, as well as other elements of the ladder which are not corners of the ladder in the usual sense. Given a ladder $\mathcal L$ we set $L=\{x_{ij}\in X\;|\; (i,j)\in {\mathcal L},\; i<j\}$. If $p$ is a positive integer, we let $I_{2p}(L)$ denote the ideal generated by the set of the $2p$-pfaffians of $X$ which involve only indeterminates of $L$. In particular $I_{2p}(X)$ is the ideal of $K[X]$ generated by the $2p$-pfaffians of $X$. \end{defn} \begin{defn}\label{idealpf} Let ${\mathcal L}={\mathcal X}_1\cup\ldots\cup {\mathcal X}_s$ be a symmetric ladder. Let $X_k=\{x_{i,j}\;|\; (i,j)\in{\mathcal X}_k,\; i<j\}$ for $k=1,\dots,s$. Fix a vector $t=(t_1,\ldots,t_s)$, $t\in{\mathbb Z}_+^s$. The {\bf ladder pfaffian ideal} $I_{2t}(L)$ is by definition the sum of pfaffian ideals $I_{2t_1}(X_1)+\ldots+I_{2t_s}(X_s)$. We also refer to these ideals as {\bf pfaffian ideals of ladders}. For ease of notation, we regard all ladder pfaffian ideals as ideals in $K[X]$. \end{defn} This family of ideals was introduced and studied in~\cite{de09}. From the point of view of G-biliaison, this appears to be the right family to consider. Notice that it does not coincide with the family of cogenerated pfaffian ideals as defined, e.g., in~\cite{de95}. \begin{notat}\label{genspf} Denote by ${\mathcal G}_{2t_k}(X_k)$ the set of the $2t_k$-pfaffians of $X$ which involve only indeterminates of $X_k$ and let $${\mathcal G}_{2t}(L)={\mathcal G}_{2t_1}(X_1)\cup\ldots\cup{\mathcal G}_{2t_s}(X_s).$$ The elements of ${\mathcal G}_{2t}(L)$ are a minimal system of generators of $I_{2t}(L)$. We sometimes refer to them as ``natural generators''. \end{notat} \begin{notat}\label{ladderheight} For a symmetric ladder ${\mathcal L}$ with upper corners $(a_1,b_1),\ldots,(a_s,b_s)$ and $t=(t_1,\ldots,t_s)$, we denote by $\tilde{{\mathcal L}}$ the symmetric ladder with upper corners $(a_1+t_1-1,b_1-t_1+1),\ldots,(a_s+t_s-1,b_s-t_s+1)$. See Figure~\ref{ladd_hgt_fig}. \begin{figure}[h!] \input{figs/pfaff2.pstex_t} \caption{An example of a ladder ${\mathcal L}$ with five upper corners and $t=(2,3,4,2,3)$.
The corresponding $\tilde{{\mathcal L}}$ is shaded.} \label{ladd_hgt_fig} \end{figure} \end{notat} The ladder $\tilde{{\mathcal L}}$ computes the height of the ideal $I_{2t}(L)$ as follows. \begin{prop}[\cite{de09}, Proposition~1.10] Let ${\mathcal L}$ be the symmetric ladder with upper corners $(a_1,b_1), \ldots,$ $(a_s,b_s)$ and $t=(t_1,\ldots,t_s)$. Let $\tilde{{\mathcal L}}$ be as in Notation~\ref{ladderheight}. Then $\tilde{{\mathcal L}}$ is a symmetric ladder and the height of $I_{2t}(L)$ is equal to the cardinality of $\{(i,j)\in\tilde{{\mathcal L}} \;|\; i<j\}$. \end{prop} The following is the main result of~\cite{de09}. Its proof consists of an explicit description of the G-biliaison steps, which will be used in the proof of Theorem~\ref{gbpf}. \begin{thm}[\cite{de09}, Theorem~2.3]\label{biliaisonpf} Any pfaffian ideal of ladders can be obtained from an ideal generated by indeterminates by a finite sequence of ascending elementary G-biliaisons. \end{thm} By combining Lemma~\ref{inid} and Theorem~\ref{biliaisonpf}, we prove that the pfaffians are a Gr\"obner basis of the ideal that they generate with respect to any anti-diagonal term-order. \begin{thm}\label{gbpf} Let $X=(x_{ij})$ be an $n\times n$ skew-symmetric matrix of indeterminates. Let ${\mathcal L}={\mathcal X}_1\cup\ldots\cup{\mathcal X}_s$ and $t=(t_1,\ldots,t_s)$. Let $I_{2t}(L)\subset K[X]$ be the corresponding ladder pfaffian ideal and let ${\mathcal G}_{2t}(L)$ be the set of pfaffians that generate it. Let $\sigma$ be any anti-diagonal term-order. Then ${\mathcal G}_{2t}(L)$ is a reduced Gr\"obner basis of $I_{2t}(L)$ with respect to $\sigma$. Moreover, the initial ideal of $I_{2t}(L)$ with respect to $\sigma$ is squarefree, and the associated simplicial complex is vertex decomposable. In particular, the initial ideal of $I_{2t}(L)$ is Cohen-Macaulay. \end{thm} \begin{proof} Let $$I_{2t}(L)=I_{2t_1}(X_1)+\cdots+I_{2t_s}(X_s)\subset K[X]$$ be the pfaffian ideal of the ladder ${\mathcal L}$ with $t=(t_1,\ldots,t_s)$ and upper corners $(a_1,b_1),\ldots,$ $(a_s,b_s)$. Let ${\mathcal G}_{2t}(L)$ be the set of pfaffians that generate $I_{2t}(L)$. We proceed by induction on $\ell=|{\mathcal L}|$. If $\ell=|{\mathcal L}|=1$, then ${\mathcal L}=\tilde{{\mathcal L}}$ and $t=1$. In this case ${\mathcal G}_{2t}(L)$ consists of a single indeterminate; in particular, it is a reduced Gr\"obner basis, with respect to any term-order $\sigma$, of the ideal that it generates. Since $in(I_2(L))=I_2(L)$ is generated by indeterminates, it is squarefree and Cohen-Macaulay, and the associated simplicial complex is the empty set. We assume that the thesis holds for ideals associated to ladders ${\mathcal N}$ with $|{\mathcal N}|<\ell$ and we prove it for an ideal $I_{2t}(L)$ associated to a ladder ${\mathcal L}$ with $|{\mathcal L}|=\ell$. If $t_1=\ldots=t_s=1$, then ${\mathcal G}_{2t}(L)$ consists only of indeterminates. In particular, it is a reduced Gr\"obner basis of the ideal that it generates, with respect to any term-order $\sigma$. Moreover, $in(I_2(L))=I_2(L)$ is generated by indeterminates, hence it is squarefree and Cohen-Macaulay. The associated simplicial complex is the empty set. Otherwise, let $k\in\{1,\ldots,s\}$ be such that $t_k=\max\{t_1,\ldots,t_s\}\geq 2$. Let $\mathcal L'$ be the ladder with upper corners $$(a_1,b_1),\ldots,(a_{k-1},b_{k-1}),(a_k+1,b_k-1), (a_{k+1},b_{k+1}),\ldots,(a_s,b_s)$$ and let $t'=(t_1,\ldots,t_{k-1},t_k-1,t_{k+1},\ldots,t_s)$. Let $I_{2t'}(L')\subset K[X]$ be the associated ladder pfaffian ideal.
Let ${\mathcal G}_{2t'}(L')$ be the set of pfaffians which minimally generate $I_{2t'}(L')$. Since $|{\mathcal L}'|<\ell$, by induction hypothesis ${\mathcal G}_{2t'}(L')$ is a reduced Gr\"obner basis of $I_{2t'}(L')$ with respect to any anti-diagonal term-order. Hence $$in(I_{2t'}(L'))=(in({\mathcal G}_{2t'}(L'))).$$ Let $\mathcal M$ be the ladder obtained from ${\mathcal L}$ by removing $(a_k,b_k)$ and $(b_k,a_k)$. ${\mathcal M}$ has upper corners $$(a_1,b_1),\ldots,(a_{k-1},b_{k-1}),(a_k,b_k-1), (a_k+1,b_k),(a_{k+1},b_{k+1}),\ldots,(a_s,b_s)$$ and we set $u=(t_1,\dots,t_{k-1},t_k,t_k,t_{k+1},\dots,t_s)$. Let $I_{2u}(M)\subset K[X]$ be the associated ladder pfaffian ideal. Let ${\mathcal G}_{2u}(M)$ be the set of pfaffians which minimally generate $I_{2u}(M)$. Since $|{\mathcal M}|=\ell-1$, by induction hypothesis ${\mathcal G}_{2u}(M)$ is a reduced Gr\"obner basis of $I_{2u}(M)$ with respect to any anti-diagonal term-order, and $$in(I_{2u}(M))=(in({\mathcal G}_{2u}(M))).$$ It follows from~\cite[Theorem~2.3]{de09} that $I_{2t}(L)$ is obtained from $I_{2t'}(L')$ via an ascending elementary G-biliaison of height $1$ on $I_{2u}(M)$. The ideals $in(I_{2t'}(L'))$ and $in(I_{2u}(M))$ are Cohen-Macaulay by induction hypothesis. Moreover $$in({\mathcal G}_{2t}(L))=in({\mathcal G}_{2u}(M))\cup x_{a_k,b_k}in({\mathcal G}_{2t'}(L')),$$ where $x_{a,b}{\mathcal G}$ denotes the set of products $x_{a,b}g$ for $g\in{\mathcal G}$. Since $x_{a_k,b_k}$ does not appear in $in({\mathcal G}_{2u}(M))$, it is a non-zerodivisor modulo the ideal $in(I_{2u}(M))$. Therefore, \begin{equation}\label{cpx_pfaff} I:=(in({\mathcal G}_{2t}(L)))=in(I_{2u}(M))+x_{a_k,b_k}in(I_{2t'}(L')) \subseteq in(I_{2t}(L)) \end{equation} and $I$ is a Basic Double G-Link of degree 1 of $in(I_{2t'}(L'))$ on $in(I_{2u}(M))$. Therefore $I$ is a squarefree Cohen-Macaulay ideal. By Lemma~\ref{inid} $I=in(I_{2t}(L))$, hence ${\mathcal G}_{2t}(L)$ is a Gr\"obner basis of $I_{2t}(L)$ with respect to any anti-diagonal term-order. Let $\Delta$ be the simplicial complex associated to $in(I_{2t}(L))$. By (\ref{cpx_pfaff}) the simplicial complexes associated to $in(I_{2t'}(L'))$ and $in(I_{2u}(M))$ are $\lk_{(a_k,b_k)}(\Delta)$ and $\Delta-(a_k,b_k)$, respectively. $\Delta$ is vertex decomposable, since $\lk_{(a_k,b_k)}(\Delta)$ and $\Delta-(a_k,b_k)$ are by induction hypothesis. \end{proof} \begin{rmks}\label{blahblah} \begin{enumerate} \item From the proof of the theorem it also follows that $in(I_{2t}(L))$ is obtained from an ideal generated by indeterminates via a sequence of degree 1 Basic Double G-links, which only involve squarefree monomial ideals. Hence in particular it is glicci. Since any vertex decomposable complex is shellable, it also follows that the associated simplicial complex is shellable (see Section~5 of~\cite{na08} for a summary of the implications among different properties of simplicial complexes, such as vertex decomposability, shellability, Cohen-Macaulayness, etc.). \item The proof of Theorem~\ref{gbpf} given above does not constitute a new proof of the fact that the $2t$-pfaffians in a matrix or in a symmetric ladder are a Gr\"obner basis with respect to any anti-diagonal term-order for the ideal that they generate. In fact, our proof is based on Theorem~2.3 in~\cite{de09}, which in turn relies on the fact that pfaffians, all of the same size, in a ladder of a skew-symmetric matrix generate a prime ideal. Primality of the ideal is classically deduced from the fact that the pfaffians are a Gr\"obner basis.
So we are extending (and not reproving) the results in~\cite{he92}, \cite{ku91}, and~\cite{de98}. \end{enumerate} \end{rmks} \section{Symmetric mixed ladder determinantal ideals}\label{symm_sect} In this section, we study ideals generated by minors contained in a ladder of a generic symmetric matrix. We show that the minors are Gr\"obner bases for the ideals that they generate, with respect to any diagonal term-order. We also show that the corresponding initial ideal is glicci (hence Cohen-Macaulay) and squarefree, and that the associated simplicial complex is vertex decomposable. Cogenerated ideals of minors in a symmetric matrix of indeterminates or a symmetric ladder thereof were studied by Conca in~\cite{co94c} and~\cite{co94a}. We refer to~\cite{co94c} and~\cite{co94a} for the definition of cogenerated determinantal ideals in a symmetric matrix. In those articles, Conca proved, among other things, that the natural generators of cogenerated ideals of ladders of a symmetric matrix are a Gr\"obner basis with respect to any diagonal term-order. In this section, we study the family of symmetric mixed ladder determinantal ideals. This family strictly contains the family of cogenerated ideals. Symmetric mixed ladder determinantal ideals were introduced and studied by the first author in~\cite{go10}. This is a very natural family to study, from the point of view of liaison theory. In this paper we extend the result of Conca and prove that the natural generators of symmetric mixed ladder determinantal ideals are a Gr\"obner basis with respect to any diagonal term-order. Let $X=(x_{ij})$ be an $n\times n$ symmetric matrix of indeterminates. In other words, the entries $x_{ij}$ with $i\leq j$ are distinct indeterminates, and $x_{ij}=x_{ji}$ for $i>j$. Let $K[X]=K[x_{ij}\mid 1\leq i\leq j\leq n]$ be the polynomial ring associated to the matrix $X$. In the sequel, we study ideals generated by the minors contained in a ladder of a generic symmetric matrix. Throughout the section, we let $${\mathcal X}=\{(i,j)\mid 1\leq i,j\leq n\}.$$ We let ${\mathcal L}$ be a symmetric ladder (see Definition~\ref{laddpf}). We can restrict ourselves to symmetric ladders without loss of generality, since the ideal generated by the minors in a ladder of a symmetric matrix coincides with the ideal generated by the minors in the smallest symmetric ladder containing it. We do not assume that ${\mathcal L}$ is connected, nor that $X$ is the smallest symmetric matrix having ${\mathcal L}$ as a ladder. Let $${\mathcal X}^+=\{(i,j)\in{\mathcal X}\mid 1\leq i\leq j\leq n\}\;\;\;\mbox{and}\;\;\; {\mathcal L}^+={\mathcal L}\cap{\mathcal X}^+.$$ Since ${\mathcal L}$ is symmetric, ${\mathcal L}^+$ determines ${\mathcal L}$ and vice versa. We will abuse terminology and call ${\mathcal L}^+$ a ladder. Observe that ${\mathcal L}^+$ can be written as $${\mathcal L}^+=\{(i,j)\in{\mathcal X}^+\mid i\leq c_l \mbox{ or } j\leq d_l \mbox{ for } l=1,\ldots,r\; \mbox{ and }$$ $$ i\geq a_l \mbox{ or } j\geq b_l \mbox{ for } l=1,\ldots,u \}$$ for some integers $1\leq a_1<\ldots<a_u\leq n$, $n\geq b_1>\ldots>b_u\geq 1$, $1\leq c_1<\ldots<c_r\leq n$, and $n\geq d_1>\ldots>d_r\geq 1$, with $a_l\leq b_l$ for $l=1,\ldots,u$ and $c_l\leq d_l$ for $l=1,\ldots,r$.
The points $(a_1,b_2),\ldots,(a_{u-1},b_u)$ are the {\bf upper outside corners} of the ladder, $(a_1,b_1),\ldots,(a_u,b_u)$ are the {\bf upper inside corners}, $(c_2,d_1),\ldots,(c_r,d_{r-1})$ the {\bf lower outside corners}, and $(c_1,d_1),\ldots,(c_r,d_r)$ the {\bf lower inside corners}. If $a_u\neq b_u$, then $(a_u,a_u)$ is an upper outside corner and we set $b_{u+1}=a_u$. Similarly, if $c_r\neq d_r$ then $(d_r,d_r)$ is a lower outside corner, and we set $c_{r+1}=d_r$. A ladder has at least one upper and one lower outside corner. Moreover, $(a_1,b_1)=(c_1,d_1)$ is both an upper and a lower inside corner. See Figure~\ref{corners}. \begin{figure}[h!] \begin{center} \input{figs/symm1.pstex_t} \caption{An example of a ladder with tagged lower and upper corners.} \label{corners} \end{center} \end{figure} The {\bf lower border} of ${\mathcal L}^+$ consists of the elements $(c,d)$ of ${\mathcal L}^+$ such that either $c_l\leq c\leq c_{l+1}$ and $d=d_l$, or $c=c_l$ and $d_l\leq d\leq d_{l-1}$ for some $l$. See Figure~\ref{lowerborder}. \begin{figure}[h!] \begin{center} \input{figs/symm2.pstex_t} \caption{The lower border of the same ladder.} \label{lowerborder} \end{center} \end{figure} All the corners belong to ${\mathcal L}^+$. In fact, the ladder ${\mathcal L}^+$ corresponds to its set of lower and upper outside (or equivalently lower and upper inside) corners. The lower corners of a ladder belong to its lower border. Given a ladder ${\mathcal L}$ we set $L=\{x_{ij}\in X\mid (i,j)\in{\mathcal L}^+\}$. For $t$ a positive integer we let $I_t(L)$ denote the ideal generated by the set of the $t$-minors of $X$ which involve only indeterminates of $L$. In particular $I_t(X)$ is the ideal of $K[X]$ generated by the minors of $X$ of size $t\times t$. \begin{notat}\label{decompsym} Let ${\mathcal L}^+$ be a ladder. For $(v,w)\in{\mathcal L}^+$ let $${\mathcal L}^+_{(v,w)}=\{(i,j)\in{\mathcal L}^+\mid i\leq v,\; j\leq w\}, \;\;\;\;\; L_{(v,w)}=\{x_{ij}\in X\mid (i,j)\in{\mathcal L}^+_{(v,w)}\}.$$ Notice that ${\mathcal L}^+_{(v,w)}$ is a ladder and $${\mathcal L}^+=\bigcup_{(v,w)\in{\mathcal U}}{\mathcal L}^+_{(v,w)}$$ where ${\mathcal U}$ denotes the set of lower outside corners of ${\mathcal L}^+$. \begin{figure}[h!] \begin{center} \input{figs/symm3.pstex_t} \caption{The ladder ${\mathcal L}^+$ with a shaded subladder ${\mathcal L}^+_{(v,w)}$.} \label{region} \end{center} \end{figure} \end{notat} \begin{defn}\label{ldi} Let $\{(v_1,w_1),\ldots,(v_s,w_s)\}$ be a subset of the lower border of ${\mathcal L}^+$ which contains all the lower outside corners. We order them so that $1\leq v_1\leq\ldots\leq v_s\leq n$ and $n\geq w_1\geq\ldots\geq w_s\geq 1$. Let $t=(t_1,\ldots,t_s)\in{\mathbb Z}_+^s$. Denote ${\mathcal L}^+_{(v_k,w_k)}$ by ${\mathcal L}^+_k$, and $L_{(v_k,w_k)}$ by $L_k$. The ideal $$I_t(L)=I_{t_1}(L_1)+\ldots+I_{t_s}(L_s)\subset K[X]$$ is a {\bf symmetric mixed ladder determinantal ideal}. Denote $I_{(t,\ldots,t)}(L)$ by $I_t(L)$. We call $(v_1,w_1),\ldots,(v_s,w_s)$ {\bf distinguished points} of ${\mathcal L}^+$. \end{defn} If $t=(t,\ldots,t)$, then $I_t(L)$ is the ideal generated by the $t$-minors of $X$ that involve only indeterminates from $L$. These ideals have been classically studied (see, e.g., \cite{co94c}, \cite{co94a}, \cite{co94b}). It is not hard to show (see~\cite{go10}, Examples~1.5) that the family of symmetric mixed ladder determinantal ideals contains the family of cogenerated ideals in a ladder of a symmetric matrix, as defined in~\cite{co94a}.
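As a concrete illustration (this small example is ours and is not taken from~\cite{co94c} or~\cite{go10}): let $n=3$, ${\mathcal L}^+={\mathcal X}^+$ with the single distinguished point $(v_1,w_1)=(3,3)$, and $t=(2)$, so that $I_t(L)=I_2(X)$ is the classical ideal of $2$-minors of the generic symmetric matrix $X$. The natural generators are $$x_{11}x_{22}-x_{12}^2,\quad x_{11}x_{23}-x_{12}x_{13},\quad x_{12}x_{23}-x_{13}x_{22},$$ $$x_{11}x_{33}-x_{13}^2,\quad x_{12}x_{33}-x_{13}x_{23},\quad x_{22}x_{33}-x_{23}^2,$$ and with respect to any diagonal term-order their initial terms are the diagonal products $x_{11}x_{22}$, $x_{11}x_{23}$, $x_{12}x_{23}$, $x_{11}x_{33}$, $x_{12}x_{33}$, $x_{22}x_{33}$, which generate the squarefree monomial ideal $in(I_2(X))$.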
\begin{notat}\label{genssym} Denote by ${\mathcal G}_{t_k}(L_k)$ the set of the $t_k$-minors of $X$ which involve only indeterminates of $L_k$ and let $${\mathcal G}_t(L)={\mathcal G}_{t_1}(L_1)\cup\ldots\cup{\mathcal G}_{t_s}(L_s).$$ The elements of ${\mathcal G}_t(L)$ are a minimal system of generators of $I_t(L)$. We sometimes refer to them as ``natural generators''. \end{notat} \begin{notat} Let ${\mathcal L}$ be a ladder with distinguished points $(v_1,w_1),\ldots,(v_s,w_s)$. We denote by $$\tilde{{\mathcal L}}^+=\{(i,j)\in{\mathcal L}^+\mid i\leq v_{k-1}-t_{k-1}+1 \mbox{ or } j\leq w_k-t_k+1 \mbox{ for } k=2,\ldots,s,$$ $$j\leq w_1-t_1+1,\; i\leq v_s-t_s+1\}$$ and by $$\tilde{{\mathcal L}}=\tilde{{\mathcal L}}^+\cup\{(j,i)\mid (i,j)\in\tilde{{\mathcal L}}^+\}.$$ See Figure~\ref{lplus}. \begin{figure}[h!] \begin{center} \input{figs/symm4.pstex_t} \caption{An example of ${\mathcal L}^+$ with three distinguished points and $t=(3,6,4)$. The corresponding $\tilde{{\mathcal L}}^+$ is shaded.} \label{lplus} \end{center} \end{figure} \end{notat} The ladder $\tilde{{\mathcal L}}$ determines the height of $I_t(L)$ as follows. \begin{prop}[\cite{go10}, Proposition~1.8] Let ${\mathcal L}$ be a ladder with distinguished points $(v_1,w_1),\ldots,(v_s,w_s)$ and let $\tilde{{\mathcal L}}$ and $\tilde{{\mathcal L}}^+$ be as above. Then $\tilde{{\mathcal L}}$ is a symmetric ladder and $$\hgt I_t(L)=|\tilde{{\mathcal L}}^+|.$$ \end{prop} The result about Gr\"obner bases will follow by combining the next theorem with Lemma~\ref{inid}. \begin{thm}[\cite{go10}, Theorem~2.4]\label{biliaisonsym} Any symmetric mixed ladder determinantal ideal can be obtained from an ideal generated by indeterminates by a finite sequence of ascending elementary G-biliaisons. \end{thm} The following is the main result of this section. We prove that the natural generators of symmetric mixed ladder determinantal ideals are a Gr\"obner basis with respect to any diagonal term-order, and that the simplicial complexes associated to their initial ideals are vertex decomposable. In particular, the initial ideals are Cohen-Macaulay. \begin{thm}\label{gbsym} Let $X=(x_{ij})$ be an $n\times n$ symmetric matrix of indeterminates. Let ${\mathcal L}^+={\mathcal L}^+_1\cup\ldots\cup{\mathcal L}^+_s$ and $t=(t_1,\ldots,t_s)$. Let $I_t(L)\subset K[X]$ be the corresponding symmetric mixed ladder determinantal ideal and let ${\mathcal G}_t(L)$ be the set of minors that generate it. Let $\sigma$ be any diagonal term-order. Then ${\mathcal G}_t(L)$ is a reduced Gr\"obner basis of $I_t(L)$ with respect to $\sigma$. Moreover, the initial ideal of $I_t(L)$ with respect to $\sigma$ is squarefree and Cohen-Macaulay, and the associated simplicial complex is vertex decomposable. \end{thm} \begin{proof} Let $$I_t(L)=I_{t_1}(L_1)+\cdots+I_{t_s}(L_s)\subset K[X]$$ be the symmetric mixed ladder determinantal ideal with ladder ${\mathcal L}$, $t=(t_1,\ldots,t_s)$ and distinguished points $(v_1,w_1),\ldots,(v_s,w_s)$. Let ${\mathcal G}_t(L)$ be the set of natural generators of $I_t(L)$. We proceed by induction on $\ell=|{\mathcal L}^+|$. If $\ell=1$, then ${\mathcal G}_t(L)$ consists of one indeterminate; in particular, it is a Gr\"obner basis of the ideal that it generates with respect to any term-order $\sigma$. Moreover, $in(I_1(L))=I_1(L)$ is generated by indeterminates, and the simplicial complex associated to it is the empty set.
We now assume that the statement holds for ladders ${\mathcal N}$ with $|{\mathcal N}^+|<\ell$ and prove it for a ladder ${\mathcal L}$ with $|{\mathcal L}^+|=\ell$. If $t_1=\ldots=t_s=1$, then ${\mathcal G}_t(L)$ consists only of indeterminates. In particular it is a Gr\"obner basis of the ideal that it generates with respect to any term-order $\sigma$. Moreover, $in(I_1(L))=I_1(L)$ is generated by indeterminates, and the simplicial complex associated to it is the empty set. Otherwise, let $k\in\{1,\ldots,s\}$ be such that $t_k=\max\{t_1,\ldots,t_s\}\geq 2$. Let ${\mathcal L}'$ be the ladder with distinguished points $$(v_1,w_1),\ldots,(v_{k-1},w_{k-1}),(v_k+1,w_k-1), (v_{k+1},w_{k+1}),\ldots,(v_s,w_s)$$ and let $t'=(t_1,\ldots,t_{k-1},t_k-1,t_{k+1},\ldots,t_s)$. Let $I_{t'}(L')\subset K[X]$ be the associated symmetric mixed ladder determinantal ideal. Let ${\mathcal G}_{t'}(L')$ be the set of minors which minimally generate $I_{t'}(L')$. Since $|{\mathcal L}'^+|<\ell$, by induction hypothesis ${\mathcal G}_{t'}(L')$ is a reduced Gr\"obner basis of $I_{t'}(L')$ with respect to $\sigma$. Hence $$in(I_{t'}(L'))=(in({\mathcal G}_{t'}(L'))).$$ Let $\mathcal M$ be the ladder obtained from ${\mathcal L}$ by removing $(v_k,w_k)$ and $(w_k,v_k)$. Let $$(v_1,w_1),\ldots,(v_{k-1},w_{k-1}),(v_k,w_k-1),(v_k+1,w_k), (v_{k+1},w_{k+1}),\ldots,(v_s,w_s)$$ be the distinguished points of $\mathcal M$ and let $u=(t_1,\dots,t_{k-1},t_k,t_k,t_{k+1},\dots,t_s)$. Let $I_u(M)\subset K[X]$ be the associated symmetric mixed ladder determinantal ideal. Let ${\mathcal G}_u(M)$ be the set of minors which minimally generate $I_u(M)$. Since $|{\mathcal M}^+|=\ell-1<\ell$, by induction hypothesis ${\mathcal G}_u(M)$ is a reduced Gr\"obner basis of $I_u(M)$ with respect to any diagonal term-order. Hence $$in(I_u(M))=(in({\mathcal G}_u(M))).$$ It follows from~\cite{go10}, Theorem~2.4 that $I_t(L)$ is obtained from $I_{t'}(L')$ via an ascending elementary G-biliaison of height $1$ on $I_u(M)$. The ideals $in(I_{t'}(L'))$ and $in(I_u(M))$ are squarefree and Cohen-Macaulay by induction hypothesis. Moreover $$in({\mathcal G}_t(L))=in({\mathcal G}_u(M))\cup x_{v_k,w_k}in({\mathcal G}_{t'}(L')),$$ where $x_{v,w}{\mathcal G}$ denotes the set of products $x_{v,w}g$ for $g\in{\mathcal G}$. Since $x_{v_k,w_k}$ does not appear in $in({\mathcal G}_u(M))$, it is not a zerodivisor modulo the ideal $in(I_u(M))$. Therefore, \begin{equation}\label{cpx_symm} I:=(in({\mathcal G}_t(L)))=in(I_u(M))+x_{v_k,w_k}in(I_{t'}(L'))\subseteq in(I_t(L)) \end{equation} and $I$ is a Basic Double G-Link of degree 1 of $in(I_{t'}(L'))$ on $in(I_u(M))$. Hence $I$ is a squarefree Cohen-Macaulay ideal. By Lemma~\ref{inid} $I=in(I_t(L))$, hence ${\mathcal G}_t(L)$ is a Gr\"obner basis of $I_t(L)$ with respect to any diagonal term-order. Let $\Delta$ be the simplicial complex associated to $in(I_t(L))$. By (\ref{cpx_symm}) the simplicial complexes associated to $in(I_{t'}(L'))$ and $in(I_{u}(M))$ are $\lk_{(v_k,w_k)}(\Delta)$ and $\Delta-(v_k,w_k)$, respectively. $\Delta$ is vertex decomposable, since $\lk_{(v_k,w_k)}(\Delta)$ and $\Delta-(v_k,w_k)$ are by induction hypothesis. \end{proof} \begin{rmks} \begin{enumerate} \item From the proof of the previous theorem it also follows that $in(I_t(L))$ is obtained from an ideal generated by indeterminates via a sequence of degree 1 Basic Double G-links which only involve squarefree monomial ideals. In particular, it is glicci. Moreover, the associated simplicial complex is shellable.
\item The proof of Theorem~\ref{gbsym} given above does not constitute a new proof of the fact that the $t$-minors in a symmetric matrix or in a symmetric ladder are a Gr\"obner basis with respect to any diagonal term-order for the ideal that they generate. In fact, our proof is based on Theorem~2.4 in~\cite{go10}, which in turn relies on the fact that minors all of the same size in a ladder of a symmetric matrix generate a prime ideal. Primality of this ideal is classically deduced from the fact that the minors are a Gr\"obner basis. So we are extending (and not providing a new proof of) the results in~\cite{co94c}. \item Our argument, however, gives a new proof of the fact that the minors generating a cogenerated ideal in a symmetric matrix or in a ladder thereof are a Gr\"obner basis with respect to a diagonal term-order, given the known fact that minors all of the same size in a ladder of a symmetric matrix are a Gr\"obner basis of the ideal that they generate. \end{enumerate} \end{rmks} \section{Mixed ladder determinantal ideals}\label{ladd_sect} In this section, we prove that minors of mixed size in one-sided ladders are Gr\"obner bases for the ideals that they generate, with respect to any anti-diagonal term-order. Moreover, the associated simplicial complex is vertex decomposable. These results are already known, and were established at different levels of generality in~\cite{na86}, \cite{co93}, \cite{co95}, \cite{go00}, \cite{kn05}, \cite{go07b}, \cite{kn09} and~\cite{kn09p}. The papers~\cite{kn05}, \cite{kn09} and~\cite{kn09p} follow a different approach from the others. The family that they treat strictly contains that of one-sided mixed ladder determinantal ideals. The paper~\cite{go07b} follows essentially the same approach as the first four papers, extending it to the family of two-sided mixed ladder determinantal ideals. The proof we give here is different and independent of all the previous ones: we use the result that we established in Section~\ref{mainlemma} and the liaison results which were established in~\cite{go07b}. In~\cite{go07b} the first author approached the study of ladder determinantal ideals from the opposite point of view: she first proved that the minors were Gr\"obner bases of the ideals that they generated. From the Gr\"obner basis result, she deduced that the ideals are prime and Cohen-Macaulay, and computed their height. Finally, she proved the liaison result. Here we wish to take the opposite approach: namely, deduce the fact that the minors are a Gr\"obner basis for the ideal that they generate from the liaison result. In order to do that, we need to show how to obtain the liaison result independently of the computation of a Gr\"obner basis. We do this in Appendix~\ref{ladd_app} following the approach of~\cite{de09} and~\cite{go10}. In this section, we deduce the result about Gr\"obner bases from the G-biliaison result. We start by introducing the relevant notation. Let $X=(x_{ij})$ be an $m\times n$ matrix whose entries are distinct indeterminates, $m\leq n$. \begin{defn}\label{ladd} A {\bf one-sided ladder} ${\mathcal L}$ of $X$ is a subset of the set ${\mathcal X}=\{(i,j)\in{\mathbb N}^2 \mid 1\le i\le m,\ 1\le j\le n \}$ with the properties: \begin{enumerate} \item $(1,n)\in{\mathcal L}$, \item if $i<h,j>k$ and $(i,j),(h,k)\in{\mathcal L}$, then $(i,k),(h,j)\in{\mathcal L}$. \end{enumerate} \end{defn} We do not make any connectedness assumption on the ladder ${\mathcal L}$.
For ease of notation, we also do not assume that $X$ is the smallest matrix having ${\mathcal L}$ as a ladder. Observe that ${\mathcal L}$ can be written as $${\mathcal L}=\bigcup_{k=1}^u \{(i,j)\in{\mathcal X}\mid i\leq c_k \mbox{ and } j\geq d_k\}$$ for some integers $1\leq c_1<\ldots<c_u\leq m$, $1\leq d_1<\ldots<d_u\leq n$. We call $(c_1,d_1),\ldots,(c_u,d_u)$ {\bf lower outside corners} and $(c_1,d_2),\ldots,(c_{u-1},d_u)$ {\bf lower inside corners} of the ladder ${\mathcal L}$. A one-sided ladder has at least one lower outside corner. A one-sided ladder which has exactly one lower outside corner is a matrix. All the corners belong to ${\mathcal L}$, and the ladder ${\mathcal L}$ corresponds to its set of lower outside (or equivalently, lower inside) corners. The {\bf lower border} of ${\mathcal L}$ consists of the elements $(c,d)$ of ${\mathcal L}$ such that $(c+1,d-1)\not\in{\mathcal L}$. See Figure~\ref{lb}. Notice that the lower corners of a ladder belong to its lower border. \begin{figure}[h!] \begin{center} \input{figs/ladd1.pstex_t} \caption{An example of a ladder with shaded lower border.} \label{lb} \end{center} \end{figure} Given a ladder ${\mathcal L}$ we set $L=\{x_{ij}\in X\mid (i,j)\in{\mathcal L}\}$. We denote by $|{\mathcal L}|$ the cardinality of the ladder. We let $I_t(L)$ denote the ideal generated by the set of the $t$-minors of $X$ which involve only indeterminates of $L$. In particular, $I_t(X)$ is the ideal of $K[X]$ generated by the $t\times t$-minors of $X$. \begin{defn} Let $\{(a_1,b_1),\ldots,(a_s,b_s)\}$ be a subset of the lower border of ${\mathcal L}$ which contains all the lower outside corners. We order them so that $1\leq a_1\leq\ldots\leq a_s\leq m$ and $1\leq b_1\leq\ldots\leq b_s\leq n$. Let $t=(t_1,\ldots,t_s)$ be a vector of positive integers. For $k=1,\ldots,s$, denote by $${\mathcal L}_k=\{ (i,j)\in{\mathcal X}\mid i\leq a_k \mbox{ and } j\geq b_k\}\;\; \mbox{ and }\;\; L_k=\{x_{i,j}\mid (i,j)\in{\mathcal L}_k\}.$$ Notice that ${\mathcal L}_k\subseteq{\mathcal L}$ and $L_k\subseteq L$. Moreover, ${\mathcal L}=\cup_{k=1}^s{\mathcal L}_k.$ The ideal $$I_t(L)=I_{t_1}(L_1)+\ldots+I_{t_s}(L_s)$$ is a {\bf mixed ladder determinantal ideal}. We denote $I_{(t,\ldots,t)}(L)$ by $I_t(L)$. We call $(a_1,b_1),\ldots,(a_s,b_s)$ {\bf distinguished points} of ${\mathcal L}$. Notice that a ladder is uniquely determined by the set of its distinguished points, but it does not determine them. \end{defn} \begin{notat}\label{gensladd} Denote by ${\mathcal G}_{t_k}(L_k)$ the set of the $t_k$-minors of $X$ which involve only indeterminates of $L_k$ and let $${\mathcal G}_t(L)={\mathcal G}_{t_1}(L_1)\cup\ldots\cup{\mathcal G}_{t_s}(L_s).$$ The elements of ${\mathcal G}_t(L)$ are a minimal system of generators of $I_t(L)$. We sometimes refer to them as ``natural generators''. \end{notat} We will need the following result. See the appendix for a self-contained proof. \begin{thm}[\cite{go07b}, Theorem~2.1]\label{biliaisonladd} Any mixed ladder determinantal ideal can be obtained from an ideal generated by indeterminates by a finite sequence of ascending elementary G-biliaisons. \end{thm} We now prove that the natural generators of a mixed ladder determinantal ideal are a Gr\"obner basis with respect to any anti-diagonal term-order. \begin{thm}\label{gbladd} Let $X=(x_{ij})$ be an $m\times n$ matrix whose entries are distinct indeterminates, $m\leq n$, and let ${\mathcal L}$ be a one-sided ladder of $X$.
Let $t\in{\mathbb N}^s$ and let $(a_1,b_1),\ldots,(a_s,b_s)$ be the distinguished points of the ladder. Let $I_t(L)$ be the corresponding ladder determinantal ideal. Denote by ${\mathcal G}_t(L)$ the set of minors that generate $I_t(L)$. Then ${\mathcal G}_t(L)$ is a reduced Gr\"obner basis of $I_t(L)$ with respect to any anti-diagonal term ordering. Moreover, the initial ideal $in(I_t(L))$ is squarefree and Cohen-Macaulay, and the associated simplicial complex is vertex decomposable. \end{thm} \begin{proof} We proceed by induction on $\ell=|{\mathcal L}|$. If $\ell=1$, then ${\mathcal G}_1(L)$ consists of one indeterminate. Hence ${\mathcal G}_1(L)$ is a reduced Gr\"obner basis for $I_1(L)$ with respect to any term-order. Moreover, $I_1(L)=in(I_1(L))$ is generated by indeterminates, hence the associated simplicial complex is the empty set. We now assume by induction that the statement holds for ladders ${\mathcal H}$ with $|{\mathcal H}|<\ell$, and we prove the statement for a ladder ${\mathcal L}$ with $|{\mathcal L}|=\ell$. If $t=(1,\ldots,1)$, then ${\mathcal G}_1(L)$ consists of indeterminates. Hence ${\mathcal G}_1(L)$ is a reduced Gr\"obner basis for $I_1(L)$ with respect to any term-order. Moreover, $I_1(L)=in(I_1(L))$ is generated by indeterminates, hence the associated simplicial complex is the empty set. Otherwise, let $C\subseteq in(I_t(L))$ be the ideal generated by the initial terms of ${\mathcal G}_t(L)$. It suffices to show that $C=in(I_t(L))$ and that $C$ is Cohen-Macaulay. Let $(a_1,b_1),\ldots,(a_s,b_s)$ be the distinguished points of the ladder and choose $k\in\{1,\ldots,s\}$ so that $t_k=\max\{t_1,\ldots,t_s\}\geq 2$. Let ${\mathcal M}={\mathcal L}\setminus\{(a_k,b_k)\}$, and let $$(a_1,b_1),\ldots,(a_{k-1},b_{k-1}),(a_k-1,b_k),(a_k,b_k+1), (a_{k+1},b_{k+1}),\ldots,(a_s,b_s)$$ be the distinguished points of ${\mathcal M}$. Let $p=(t_1,\ldots,t_{k-1},t_k,t_k,t_{k+1},\ldots,t_s)\in {\mathbb N}^{s+1}$. Let ${\mathcal N}$ be the ladder with distinguished points $$(a_1,b_1),\ldots,(a_{k-1},b_{k-1}),(a_k-1,b_k+1),(a_{k+1},b_{k+1}), \ldots,(a_s,b_s)$$ and let $q=(t_1,\ldots,t_{k-1},t_k-1, t_{k+1},\ldots,t_s)\in{\mathbb N}^s$. As shown in the proof of Theorem~\ref{biliaisonladd}, $I_t(L)$ is obtained from $I_q(N)$ via an elementary G-biliaison of height 1 on $I_p(M)$ (a small worked example is given at the end of this section). Let $A=(in({\mathcal G}_p(M)))$ and let $B=(in({\mathcal G}_q(N)))$. The induction hypothesis applies to both $I_p(M)$ and $I_q(N)$, hence ${\mathcal G}_p(M)$ is a Gr\"obner basis for $I_p(M)$, ${\mathcal G}_q(N)$ is a Gr\"obner basis for $I_q(N)$ and $A=in(I_p(M)),B=in(I_q(N))$ are squarefree and Cohen-Macaulay ideals. Notice that $$in({\mathcal G}_t(L))=in({\mathcal G}_p(M))\cup x_{a_k,b_k}in({\mathcal G}_q(N))$$ where $x_{a_k,b_k}{\mathcal G}$ denotes the set of products $x_{a_k,b_k}g$ for $g\in{\mathcal G}$. Since $x_{a_k,b_k}$ does not appear in $in({\mathcal G}_p(M))$, it is not a zerodivisor modulo the ideal $A=in(I_p(M))$. Therefore, \begin{equation}\label{cpx_ladd} in(I_p(M))+x_{a_k,b_k}in(I_q(N))=C\subseteq in(I_t(L)) \end{equation} and $C$ is a Basic Double G-Link of degree 1 of $in(I_q(N))$ on $in(I_p(M))$. $C$ is Cohen-Macaulay and squarefree, since $A$ and $B$ are. By Lemma~\ref{inid} we conclude that $C=in(I_t(L))$ and that ${\mathcal G}_t(L)$ is a Gr\"obner basis of $I_t(L)$ with respect to any anti-diagonal term-order. Let $\Delta$ be the simplicial complex associated to $in(I_{t}(L))$.
By (\ref{cpx_ladd}) the simplicial complexes associated to $in(I_{q}(N))$ and $in(I_{p}(M))$ are $\lk_{(a_k,b_k)}(\Delta)$ and $\Delta-(a_k,b_k)$, respectively. $\Delta$ is vertex decomposable, since $\lk_{(a_k,b_k)}(\Delta)$ and $\Delta-(a_k,b_k)$ are by induction hypothesis. \end{proof} \begin{rmks} \begin{enumerate} \item From the proof of the theorem it also follows that the ideal $in(I_t(L))$ is obtained from an ideal generated by indeterminates via a sequence of degree 1 Basic Double G-links, which only involve squarefree monomial ideals. Hence in particular it is glicci. Moreover, the associated simplicial complex is shellable. \item Notice that, in contrast to Theorem~\ref{gbpf} and Theorem~\ref{gbsym}, Theorem~\ref{gbladd} does constitute a new proof of the fact that $t$-minors in a generic matrix or in a one-sided ladder are a Gr\"obner basis with respect to any anti-diagonal term-order for the ideal that they generate. In fact, in Theorem~\ref{laddall} we give a proof of primality for mixed ladder determinantal ideals which is independent of any previous Gr\"obner basis results. \end{enumerate} \end{rmks} By following the same approach as in the previous sections and using the result of Narasimhan from~\cite{na86}, we can prove that the natural generators of mixed ladder determinantal ideals from two-sided ladders are a Gr\"obner basis for the ideal that they generate with respect to any anti-diagonal term-order (see~\cite{go07b} for the relevant definitions). This is, to our knowledge, the largest family of ideals generated by minors in a ladder for which the minors are a Gr\"obner basis for the ideal that they generate. Notice, e.g., that cogenerated ladder determinantal ideals all belong to this family. The result was already established by the first author in~\cite{go07b}, but a different proof can be given using the techniques discussed in this paper. Notice moreover that we also show that the simplicial complex associated to the initial ideal is vertex decomposable. In particular, it is shellable. Since the proof is completely analogous to the previous ones, we omit it. \begin{thm} Let $X=(x_{ij})$ be an $m\times n$ matrix whose entries are distinct indeterminates, $m\leq n$, and let ${\mathcal L}$ be a ladder of $X$. Let $t\in{\mathbb N}^s$ and let $(a_1,b_1),\ldots,(a_s,b_s)$ be the distinguished points of the ladder. Let $I_t(L)$ be the corresponding ladder determinantal ideal. Denote by ${\mathcal G}_t(L)$ the set of minors that generate $I_t(L)$. Then ${\mathcal G}_t(L)$ is a reduced Gr\"obner basis of $I_t(L)$ with respect to any anti-diagonal term ordering, and the initial ideal $in(I_t(L))$ is squarefree and Cohen-Macaulay. Moreover, the associated simplicial complex is vertex decomposable. \end{thm}
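To conclude this section, we illustrate the induction step in the proof of Theorem~\ref{gbladd} on a small example of our own. Let $X$ be a generic $2\times 3$ matrix, ${\mathcal L}={\mathcal X}$, $s=1$, with distinguished point $(a_1,b_1)=(2,1)$ and $t=(2)$, so that $I_t(L)=I_2(X)$ is generated by the three maximal minors $\Delta_{jk}=x_{1j}x_{2k}-x_{1k}x_{2j}$ for $1\leq j<k\leq 3$. Removing $(2,1)$ produces ${\mathcal M}$ with distinguished points $(1,1),(2,2)$ and $p=(2,2)$, so that $I_p(M)=(\Delta_{23})$, while ${\mathcal N}$ has distinguished point $(1,2)$ and $q=(1)$, so that $I_q(N)=(x_{12},x_{13})$. With respect to an anti-diagonal term-order, $in(\Delta_{12})=x_{12}x_{21}$, $in(\Delta_{13})=x_{13}x_{21}$ and $in(\Delta_{23})=x_{13}x_{22}$, hence $$in({\mathcal G}_t(L))=\{x_{13}x_{22}\}\cup x_{21}\{x_{12},x_{13}\} =in({\mathcal G}_p(M))\cup x_{a_1,b_1}\,in({\mathcal G}_q(N)),$$ as claimed in the proof.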
\section{INTRODUCTION} Materials exhibiting very high values of the dielectric permittivity ($\varepsilon' > 10^3$) are often termed ``colossal dielectric constant'' (CDC) materials\cite{Lunkenheimer2010, Lunkenheimer2002}. They bear enormous potential for enhancing the capacitance, e.g. in multilayer ceramic or low-temperature co-fired capacitors \cite{Kishi2003, C9TC02921D, Li2020}. Typically, proper ferroelectric materials\cite{Toledano}, such as BaTiO$_3$\cite{Merz1949, Devon1949} or Pb(Zr$_x$,Ti$_{1-x}$)O$_3$\cite{Shirane1952}, are used, as their dielectric permittivity exceeds $10^3$ at ambient temperatures and the loss tangent -- indicating the dielectric loss -- is rather low (tan$\delta < 10^{-2}$)\cite{C7TA05392D, C9TC02921D,Elissalde2001}. The stability of these parameters with respect to frequency, temperature, and applied voltage is a key requirement for technical applicability. Another approach towards CDCs is to use thin layers with reduced conductivity -- so-called barrier-layer capacitances (BLC) -- in bulk ceramics and single crystals. These BLCs can be internal layers, like insulating grain boundaries in polycrystalline ceramics\cite{Adams2002,Zhao2004,Frey1998}, or surface layers formed, e.g., due to the depletion zone of Schottky diodes arising at metal-semiconductor contacts\cite{Krohns2007,Krohns2009}. Both mechanisms are sensitive to variations in preparation (i.e., size of grains and conductivity of grain boundaries\cite{Zhao2004}) or in the contact area of the Schottky barrier. BLCs appear as a step-like decrease in $\varepsilon'$ accompanied by a peak in $\varepsilon''$ in a frequency-dependent representation, mimicking a classical `Debye-like' relaxation process (see \textcite{Lunkenheimer2010} and references therein for more details). This electrical heterogeneity is responsible for the relaxation-like feature in the dielectric properties, called Maxwell-Wagner relaxation. Recently, a reduced conductivity at the domain walls (DWs) and a related BLC effect was observed in h-YMnO$_3$\cite{Ruff2017ConductivityYMnO3}, suggesting a high dielectric constant $\varepsilon' > 200$. However, a systematic analysis that can confirm the connection between the high dielectric constant and DW-driven BLCs remains elusive. Here, we provide a dielectric analysis of an h-ErMnO$_3$ single crystal for which the polarisation is parallel to the applied contact electrodes (in-plane). Two distinct relaxations are observed in this sample: the first leads to a high dielectric constant in the order of 300, and the second to a CDC of more than $5\times 10^3$. So far, mainly the dielectric properties of out-of-plane polarized samples were investigated, for which often only one relaxation, leading to a CDC, was reported\cite{Holstad2018, Ruff2018APL, Schaab2018ElectricalWalls}. To disentangle the contribution of various BLCs arising in the sample and at the surface of the sample, we simulate the dielectric spectra by an equivalent circuit model, analogous to previous studies on the CDC prime example CaCu$_3$Ti$_4$O$_{12}$\cite{Krohns2008,Lunkenheimer2010,Lunkenheimer2002}. This approach, in combination with a distinct modification of the electrode contact area and the thickness of the sample, allows us to distinguish BLCs arising from internal and surface effects.
Furthermore, we use local probe analysis by piezoresponse force microscopy (PFM) and conductive atomic force microscopy (cAFM) to determine electronic DW properties at the sample surface and estimate the volume fraction of insulating DWs\cite{Meier2012,Schoenherr2018}. Our systematic analysis provides new insight into the dielectric properties of hexagonal manganites, corroborating that insulating DWs act as BLCs, playing a key role in the high or even colossal dielectric constants observed in this class of materials. \section{EXPERIMENT} High-quality hexagonal h-ErMnO$_3$ single crystals were grown by the pressurized floating zone technique\cite{Yan2015}. The sample was cut into a disc (area = 2.38\,mm$^2$, thickness = 0.61\,mm) with the polar axis lying parallel to the surface, i.e. (110)-oriented. For dielectric spectroscopy we used a plate capacitor geometry, coating both top and bottom surfaces either with silver paint or sputtered gold. We performed the measurements using an Alpha Analyzer (Novocontrol, Montabaur, Germany), which covers the frequency range of 1\,Hz to 1\,MHz. This analysis was conducted in a closed-cycle refrigerator between 150\,K and 300\,K. The microscopic data was recorded on the same sample at room temperature using an NT-MDT (NTEGRA, Apeldoorn, Netherlands) atomic force microscope (AFM), using diamond tips (DDESP-10, Bruker, Billerica, MA, USA). The sample was lapped with a 9\,\textmu m-grained Al$_2$O$_3$ water suspension and polished using silica slurry (Ultra-Sol® 2EX, Eminess Technologies, Scottsdale, AZ, USA) to produce a flat surface with a root mean square roughness (RMS) of about 1.65\,nm (determined over a $25\times25\,$\textmu m$^2$ area). For domain imaging by PFM, an ac-excitation voltage of 10\,V was applied to the back electrode at a frequency of 61\,kHz while the tip was grounded. Local transport data was obtained by cAFM with a dc-voltage of 2\,V applied to the back electrode. \section{RESULTS \& DISCUSSION} \subsection{Two relaxation-like dielectric features} Figure \ref{fig1} shows the temperature-dependent dielectric constant $\varepsilon'$ \textbf{a} and loss tangent tan$\delta$ \textbf{b} for various frequencies from 1\,Hz to 1\,MHz. The electrodes were made with silver paint. For almost all frequencies $\varepsilon'$ exhibits a distinct two-step increase from about 30 to 200 -- 400 and further to $\varepsilon' > 5\times 10^3$. These steps in $\varepsilon'$ are accompanied by a peak in tan$\delta$, e.g., for the 4\,kHz curve at about 185\,K and 255\,K, respectively. This behaviour corresponds to a relaxation process in the temperature-dependent representation. Such prominent relaxation-like features are well known in oxide materials\cite{Lunkenheimer2010}, often originating from BLCs, e.g., Schottky-diodes forming a depletion-zone that acts as a thin insulating layer. Further, the rather high values of the loss tangent (tan$\delta > 0.01$) corroborate the framework of BLC mechanisms responsible for the dielectric features. \begin{figure}[htb] \centering \includegraphics[width=\linewidth]{Figures/Fig1.pdf} \caption{Temperature-dependence of $\varepsilon'$ \textbf{a} and tan$\delta$ \textbf{b} of h-ErMnO$_{3}$ with in-plane polarisation at selected frequencies from 1\,Hz to 1\,MHz.
The inset in \textbf{a} compares $\Delta\varepsilon' = \varepsilon'/\varepsilon_{\infty}$ of the in-plane (open symbols) and out-of-plane (lines) polarisation of samples from the same batch for two different frequencies (colours refer to the respective frequencies in \textbf{a}). The data for the out-of-plane sample were taken from \textcite{Ruff2018APL}. Numbers (\textcircled{\footnotesize{1}} and \textcircled{\footnotesize{2}}) indicate the relaxation-like features (dashed lines serve as guides to the eye for the temperature dependence of these features). The inset in \textbf{b} compares the temperature-dependent conductivity $\sigma'$ for these samples for two frequencies (again, colours refer to the respective frequencies in \textbf{a}). } \label{fig1} \end{figure} Recently, it was reported that in h-YMnO$_3$ and h-ErMnO$_3$ the Schottky-barriers give rise to CDCs\cite{Schaab2018ElectricalWalls,Ruff2017ConductivityYMnO3,Holstad2018,Ruff2018APL}. Similar to this work (Fig.\,\ref{fig1}), two distinct relaxation-like features were measured for h-YMnO$_3$\cite{Ruff2017ConductivityYMnO3}; one attributed to an internal BLC mechanism, possibly originating from insulating DWs. In contrast to the previously published data obtained on samples with out-of-plane polarisation, the measurements presented in Fig.\,\ref{fig1} show well-separated features, facilitating a more detailed analysis. We illustrate this difference with the inset in Fig.\,\ref{fig1}\,\textbf{a}, comparing the change in dielectric constant $\Delta\varepsilon'$ for the present in-plane oriented sample to an out-of-plane oriented one, published in \textcite{Ruff2018APL}. The CDC feature \textcircled{\footnotesize{2}} is the same, while the high dielectric constant feature \textcircled{\footnotesize{1}} appears as a distinct increase only for the in-plane sample. It is important to note that for this comparison of $\varepsilon'$ we used different frequencies for the in-plane (259\,Hz and 4\,kHz) and out-of-plane (4\,kHz and 67\,kHz) orientation. For BLC-driven mechanisms the bulk dc-conductivity has a strong impact on the temperature and frequency range in which this feature dominates the dielectric properties\cite{Lunkenheimer2010}. The inset of Fig.\,\ref{fig1}\,\textbf{b} shows $\sigma'$ for two representative frequencies, indicating a significant decrease of the conductivity of the out-of-plane oriented sample, corroborating the above-mentioned shift of the BLC-driven feature. This is confirmed by the frequency-dependent dielectric analysis of h-(Er$_{0.99}$Ca$_{0.01}$)MnO$_3$ shown in Fig.\,S1. This is a first hint of an anisotropic BLC feature, which may be based either on the change in dc-conductivity or on differences in DW density or conductance. For the latter, it is already well established via local probe measurements that the conductance of the DWs strongly depends on the orientation of the polarisation\cite{Meier2012,Mosberg2019}. \subsection{Quantifying the DW density} We use PFM and cAFM to estimate the density of DWs, which provide a possible origin for the observed high dielectric constant feature. To approximate the density of these DWs in our h-ErMnO$_3$ crystal, we map the domain distribution at the sample surface using PFM (in-plane contrast) as presented in Fig.\,\ref{fig2}\,\textbf{a}. The PFM scan reveals the typical domain distribution, characteristic of hexagonal manganites\cite{Choi2010,Jungk2010,Safrankova1967}.
To determine the domain / DW density, we evaluate multiple test lines (one line is shown in Fig.\,\ref{fig2}\,\textbf{b} and further lines in Fig.\,S3), applying the procedure outlined in \textcite{Hubert1998}. Measurements by cAFM (Fig.\,\ref{fig2}\,\textbf{c}) confirm the presence of DWs with enhanced (tail-to-tail) and reduced (head-to-head) conductance\cite{Meier2012}. From this analysis we find $1 \pm 0.1$ DWs per \textmu m with enhanced or reduced conductance in comparison to the bulk. The same evaluation was also performed for h-(Er$_{0.99}$Ca$_{0.01}$)MnO$_3$ depicted in Fig.\,S2, providing a similar domain / DW fraction. \begin{figure}[htb] \centering \includegraphics[width=\linewidth]{Figures/Fig2.png} \caption{\textbf{a} Calibrated in-plane PFM scan where the yellow (blue) represents ferroelectric domains pointing to the left (right). These form the characteristic domain pattern present in hexagonal manganite single crystals, where the ferroelectric $180^{\circ}$ domains come together at sixfold vertex points. \textbf{b} A representative line-profile from \textbf{a} where high values are from domains with \textit{$\leftarrow$P} and low values are from \textit{$\rightarrow$P} domains, used to get an approximate density of DWs per unit length. \textbf{c} A cAFM image from the area marked by the red box in \textbf{a}. Light colors indicate areas of enhanced conductance (tail-to-tail walls) while dark areas indicate areas of lower conductance (head-to-head walls).} \label{fig2} \end{figure} \subsection{Dielectric features due to barrier layers} To disentangle surface and internal contributions, a frequency-dependent analysis of the dielectric response is required, which is shown in Fig.\,\ref{fig3}. The frequency-dependent dielectric permittivity (Fig.\,\ref{fig3}\,\textbf{a}) exhibits two distinct relaxations for selected temperatures varying from 170\,K to 294\,K. The first one in the 210\,K curve evolves at about 1\,kHz, indicated by a step-like increase in $\varepsilon'$ from $\sim20$ to $\sim300$. The lower $\varepsilon'$-value at high frequencies reflects the contribution of ionic and electronic polarizability to the so-called intrinsic $\varepsilon_{\infty}$, confirming literature values in the order of $20 - 40$\cite{Holstad2018,Ruff2018APL}. The upper plateau of the second step for the 210\,K curve begins at $\nu < 100$\,Hz and settles at an $\varepsilon'$ value in the order of $5 \times 10^3$. Both relaxations are accompanied by steps in $\sigma' (\nu)$. The plateaus in $\sigma'$ roughly indicate the dc-conductivity of the BLCs and the bulk. However, the dc-conductivity of step two -- most likely the Schottky barrier -- is almost shifted out of the measured frequency range. The curvature of $\sigma'$(294\,K) for $\nu < 10$\,Hz indicates the onset of this dc-plateau of approximately $\sigma_{dc} \approx 3\times 10^{-9}$\,$\Omega^{-1}$cm$^{-1}$. The $\sigma_{dc}$-plateau of the first BLC feature emerges, e.g., for the 210\,K curve between 100 and 1000\,Hz at $2\times 10^{-8} $\,$\Omega^{-1}$cm$^{-1}$. Finally, at higher frequencies, e.g., $\nu > 10$\,kHz for the 210\,K curve, $\sigma_{dc}$ of the bulk evolves, which at low temperatures is superimposed by the contribution of a universal dielectric response (UDR)\cite{Jonscher1977}, giving rise to a frequency-dependent increase in the overall conductivity.
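Before turning to the fits, we briefly recall the standard relations used below (a generic textbook description, not specific to the present samples): in the UDR picture, the real part of the conductivity follows the power law $\sigma'(\nu) = \sigma_{dc} + \sigma_0\nu^s$ with an exponent $s < 1$\cite{Jonscher1977}, while in the Maxwell-Wagner picture each region of the sample is represented by a resistor $R_k$ in parallel with a capacitor $C_k$, such that the impedance of the series connection $$Z(\omega) = \sum_{k}\frac{R_k}{1 + i\omega R_k C_k}$$ determines the apparent dielectric constant via $\hat{\varepsilon}(\omega) = 1/[i\omega Z(\omega) C_0]$, with $C_0 = \varepsilon_0 A/d$ the empty-cell capacitance for electrode area $A$ and sample thickness $d$.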
\begin{figure}[htb] \centering \includegraphics[width=\linewidth]{Figures/Fig3.pdf} \caption{Frequency-dependence of $\varepsilon'$ \textbf{a} and $\sigma'$ \textbf{b} of h-ErMnO$_{3}$ with in-plane polarisation at various temperatures (again, numbers \textcircled{\footnotesize{1}} and \textcircled{\footnotesize{2}} as well as the black dashed lines indicate the relaxation-like features). The coloured dashed lines represent fits to the measured data, obtained by considering the equivalent circuit model sketched in the inset of \textbf{b}. The equivalent circuit model consists of three $RC$-circuits connected in series.} \label{fig3} \end{figure} In oxide materials there exist different descriptions for these observed dielectric relaxation-like features: (i) hopping conductivity, (ii) displacements of DWs in excitation fields far below the coercive field, (iii) charged DWs acting as conductive inclusions\cite{Esin2017}, (iv) Maxwell-Wagner type electrical inhomogeneities or (v) Schottky-barriers at the electrode contacts. Hopping conductivity in disordered systems can be responsible for a continuous increase in $\varepsilon'$ at low frequencies (typically $\nu < 1$\,Hz)\cite{Lunkenheimer2010}. In the present study, we observe distinct relaxation-like features strongly shifting with temperature, excluding mechanism (i). Further, the DWs are strongly pinned by the vortex structure of the hexagonal manganites and the coercive fields for polarisation reversal in h-ErMnO$_3$ are typically $> 3$\,kV/mm\cite{Choi2010,Ruff2018APL}, which is orders of magnitude higher than the applied field of 1\,V/mm used for the dielectric spectroscopy. Therefore, irreversible DW motion does not contribute to the dielectric permittivity\cite{Lewis1959,Hall1999}. In addition, the dielectric relaxation related to reversible domain wall motion typically occurs in the GHz range\cite{Kittel1951} and is temperature independent\cite{Arlt1994}. The strong temperature-dependence of the dielectric relaxation observed in our samples (Fig.\,\ref{fig1}) also allows us to exclude reversible domain wall motion as an origin. Due to this robust domain structure, mechanism (ii) seems to be very unlikely. Interestingly, explanation model (iii) requires charged conductive DWs, which have been discussed to enhance the dielectric permittivity in (K,Na)NbO$_3$-based ferroelectrics\cite{Esin2017}. More specifically, a higher density of DWs leads to an increase in dielectric permittivity. Charged DWs with enhanced conductivity are also present in our samples (cf. Fig.\,\ref{fig2}\,\textbf{c}). However, in contrast to the above-mentioned ferroelectrics, we find the opposite relation between DW density and dielectric permittivity: here, $\varepsilon'$ decreases with increasing DW density\cite{Ruff2017ConductivityYMnO3}, excluding charged conductive DWs as the origin of the observed BLC feature. Thus, we focus in the following only on the latter two mechanisms of internal (iv) and surface (v) BLCs. For the present study we investigated a single crystal, for which we can further exclude a BLC mechanism due to insulating grain boundaries, as observed in oxide ceramics\cite{Adams2002,Lunkenheimer2010,Moulson}. The dashed lines in Fig.\,\ref{fig3} represent fits using the equivalent circuit model depicted in the inset of Fig.\,\ref{fig3}\,\textbf{b}.
This model uses the approach of Maxwell and Wagner\cite{Wagner1914,Maxwell1873}, in which volume fractions of the overall sample, each with a certain capacitance and conductivity, are described by discrete $RC$-circuits connected in series. In a nutshell, we deploy $RC$-circuits for step two (surface BLC) and step one (internal BLC) in series with the bulk. For the latter $RC$-circuit (bulk) we use an additional frequency-dependent resistor accounting for the power-law contribution of UDR to $\sigma'$\cite{Jonscher1977}. From these fits, the dielectric constants and dc-conductivities ($\varepsilon_{\infty}$, $\varepsilon_{1}$, $\varepsilon_{2}$, and temperature-dependent values of $\sigma_{dc}$, $\sigma_1$ and $\sigma_2$) are obtained, as listed in Table~S1. The fitting parameters confirm the temperature-dependent evolution of the dielectric properties of the BLC contributions and the semiconducting behaviour of the bulk. While fitting with an equivalent circuit model allows for analyzing and disentangling single BLC contributions, it cannot provide information about the underlying mechanism. The formation of a Schottky barrier at the interface of the metal electrode and the semiconducting bulk leads to a depletion layer that acts as a thin capacitor at the sample surface. A proven approach\cite{Krohns2007,Lunkenheimer2010} to identify such Schottky BLCs is to measure the dielectric properties using electrodes of different wetting, e.g. painted \textit{vs}. sputtered electrodes: for Schottky BLCs an increased wetting will provide an enhanced CDC. Internal BLCs and the intrinsic dielectric properties, however, should not be affected by this change. Another way to prove a surface BLC is to reduce the sample thickness: the intrinsic $\varepsilon'$ and $\sigma'$ are geometry-independent quantities, so the bulk and internal BLC contributions should not be affected, whereas a surface-related contribution changes with thickness. \begin{table*}[htb] \caption{Calculated and measured contribution of the DWs to the dielectric response for h-\textit{R}MnO$_3$ (\textit{R} = Y, Er) with in-plane (IP) and out-of-plane (OOP) polarisation. The values are estimated from the dielectric spectroscopy (Figs. \ref{fig3} \& S1) and local-probe analyses (Figs. \ref{fig2}, S2 \& S3).} \label{tab1} \begin{tabular}{l|c|c|c|c|c|c} sample & dir. of P & $\varepsilon_{\infty}$ & $n_{DW}$ [$1/$\textmu\text{m}] & $V_{DW}$ [\%] & estim. $\varepsilon_{1}$ & meas. $\varepsilon_{1}$ \\ \hline h-ErMnO$_3$ (Figs. \ref{fig2}, \ref{fig3} \& S3) & IP & 32 $(\pm 3)$ & 0.5 $(\pm 0.05)$ & 7.5 $(\pm 0.75)$ & 400 $(\pm 60)$ & 250 $(\pm 50)$ \\ h-(Er$_{0.99}$,Ca$_{0.01}$)MnO$_3$ (Figs. S1 \& S2) & IP & 18 & 0.46 & 7 & $260$ & $220$ \\ \hline h-YMnO$_3$ sample 1 (data from \textcite{Ruff2017ConductivityYMnO3}) & OOP & 20 & 0.17 & 2.5 & $780$ & $670$ \\ h-YMnO$_3$ sample 2 (data from \textcite{Ruff2017ConductivityYMnO3}) & OOP & 20 & 2.5 & 37.5 & 54 & 40 \\ \end{tabular} \end{table*} Figure\,\ref{fig4} shows frequency-dependent $\varepsilon'$ \textbf{a} and $\sigma'$ \textbf{b} for both strategies indicated by different symbols (open symbols $\rightarrow$ silver paint \& $d_{\text{thick}}$, closed symbols $\rightarrow$ silver paint \& $d_{\text{thin}}$, crosses $\rightarrow$ sputtered gold \& $d_{\text{thick}}$). Analogous to Fig.\,\ref{fig3}, both relaxations appear for the different surface treatments.
Importantly, we find that the intrinsic bulk properties \textcircled{\footnotesize{3}}, as well as the first relaxation step \textcircled{\footnotesize{1}}, are -- as expected for the bulk properties and an internal BLC -- independent of the electrode material and the thickness of the sample. In contrast, the enhanced wetting of the sputtered electrode increases the upper plateau of $\varepsilon'$ by a factor of about two. Furthermore, reducing the sample thickness gives rise to an increase in the CDC feature. This leads us to the conclusion that the second relaxation \textcircled{\footnotesize{2}} is due to the formation of an insulating depletion layer caused by a Schottky barrier at the sample surface. Crucially, and in contrast, the first relaxation is not affected by these changes to the surface, corroborating the hypothesis of an internal BLC mechanism related to insulating DWs\cite{Ruff2017ConductivityYMnO3}. \subsection{Insulating DW barrier layer capacitors} From the local probe analysis of the sample surface (Fig.\,\ref{fig2}), in combination with bulk dielectric measurements, we calculate an approximate volume fraction (in \%) of the insulating DWs in our sample: $V_{DW} = n_{DW} d_{DW}$, where $n_{DW}$ denotes the number of insulating DWs per \textmu m and $d_{DW}$ the electronic thickness of the DWs, referred to as electrical dressing in \textcite{Meier2012}. Note that this electronic width is much larger than the structural width of about 5\,Å\cite{Holtz2017}, reaching values in the order of 100 to 150\,nm, which was related to a delocalisation of charge carriers\cite{Meier2012}. As the contributions of the ionic and electronic polarizability to $\varepsilon_{\infty}$ for both bulk and DWs are almost the same\cite{Schaab2018ElectricalWalls}, we can estimate the value of the dielectric constant $\varepsilon_{1}$. Within the framework of the Maxwell-Wagner model, $\varepsilon_{1}$ is given by the relation: $\varepsilon_{1} \approx \varepsilon_{\infty}/V_{DW}$. For the sample studied here, $n_{DW}\approx 0.5$ insulating DWs per \textmu m and $d_{DW}\approx 150$\,nm yield $V_{DW}\approx 7.5\%$ and thus $\varepsilon_{1}\approx 32/0.075 \approx 4\times 10^{2}$, consistent with the values in Table~\ref{tab1}. \begin{figure}[htb] \centering \includegraphics[width=\linewidth]{Figures/Fig4.pdf} \caption{Frequency-dependence of $\varepsilon'$ \textbf{a} and $\sigma'$ \textbf{b} for two representative temperatures. In this figure the measurement data for sputtered gold and silver paint as contacting material are shown and, for the latter, also the thickness dependence ($d_{\text{thick}}$ = 1\,mm and $d_{\text{thin}}$ = 0.6\,mm). Numbers indicate the dielectric features: \textcircled{\footnotesize{1}}$\rightarrow$ BLC one (internal), \textcircled{\footnotesize{2}}$\rightarrow$ BLC two (surface), \textcircled{\footnotesize{3}}$\rightarrow$ bulk properties.} \label{fig4} \end{figure} In Table~\ref{tab1}, the calculated values for the dielectric response of the different samples are compared. The estimated and measured values for all samples listed in Table~\ref{tab1} agree within the same order of magnitude, providing a strong indication of a DW-based BLC mechanism. However, we note that a smeared-out relaxation due to a distribution of relaxation times of the BLC, an inhomogeneous polar pattern, as well as possible variations of the intrinsic dielectric constant cannot be excluded, adding to the uncertainty of the presented values in Table~\ref{tab1}. Finally, we address the question why in-plane polarized samples seem better suited than out-of-plane polarized specimens for detecting BLC-DW features.
This can be explained employing the Maxwell-Wagner model: The relaxation-time $\tau_{BLC-DW}$ of the BLC-DW $RC$-circuit connected in series with the bulk is: $\tau_{BLC-DW} \propto \varepsilon'_{1}/\sigma_{dc}$. Thus, $\varepsilon'$ -- mainly based on the number of insulating DWs and their effective thickness -- and the temperature-dependent bulk $\sigma_{dc}$ strongly affect the frequency range in which the relaxation occurs. In particular, for the samples h-ErMnO$_3$ and h-(Er$_{0.99}$,Ca$_{0.01}$)MnO$_3$, we measured a strongly anisotropic dc-conductivity, which is enhanced for the samples with in-plane polarisation by up to a factor of 100 -- depending on the temperature -- compared to the samples with out-of-plane polarisation (Figs.\,S1 \& S2). An enhanced dc-conductivity results in a decrease in $\tau_{BLC-DW}$ and thus in an increase in the corresponding frequency $\nu_{BLC-DW} = 1/(2\pi\tau_{BLC-DW})$. According to this, the anisotropy in the bulk $\sigma_{dc}$ is the most likely reason why the contributions of surface and internal BLCs can be disentangled, especially for h-ErMnO$_3$ samples with in-plane polarisation. \section{SUMMARY} In this study, we investigate the contribution of insulating DWs in h-ErMnO$_3$ to the overall dielectric response up to the MHz regime. Depending on the temperature range of the dielectric measurements, two distinct relaxation-like processes are revealed, especially for samples with in-plane polarisation. One of these relaxation-like features originates from a Schottky barrier, which we prove by targeted manipulation of the sample-electrode interface. For the other feature we conclude, corroborated by PFM and cAFM measurements, that an internal barrier layer is formed by insulating DWs. To test this hypothesis, we first use an equivalent-circuit model to quantify the bulk dielectric properties of this internal barrier layer. Second, we compare these values to those of a barrier layer calculated from the density and the electronic thickness of insulating DWs measured by PFM and cAFM. Based on these data, we confirm that internal barrier layer capacitors are formed at insulating DWs, which is corroborated by the comparison of different h-YMnO$_3$ and h-ErMnO$_3$ samples. As both the density\cite{PhysRevX.7.041014, PhysRevX.2.041022} and electrical characteristics of the insulating DWs\cite{Holstad2018, Hassanpour_2016} can be tuned, engineering the macroscopic dielectric response is feasible. This may pave the way to generate high and even colossal dielectric constants by robust internal barrier layers in h-ErMnO$_3$ and h-YMnO$_3$, which makes these improper ferroelectrics promising for use as dielectrics in multilayer ceramic capacitors. \section*{Author contributions} S.K. initiated and coordinated the project. L.P. and A.S. conducted the dielectric experiments, and L.P., I.K. and S.K. analyzed the data. J.S., L.P., M.A. and D.E. performed and analyzed the local-probe experiments supervised by D.M. The single crystals were prepared by E.B. and Z.Y. All authors participated in the discussion and interpretation of the results. D.E., L.P., J.S., D.M. and S.K. wrote the manuscript. \section*{Acknowledgments} J.S. acknowledges the support from the Alexander von Humboldt Foundation through the Feodor-Lynen fellowship. D.M.
acknowledges support by NTNU through the Onsager Fellowship Program and the Outstanding Academic Fellows Program, and funding from the European Research Council (ERC) under the European Union's Horizon 2020 Research and Innovation Programme (Grant Agreement No. 86691). S.K., M.A., A.S., L.P., and I.K. acknowledge the funding of the German Science Foundation via the Collaborative Research Center TRR80. \section*{Data Availability} The data that support the findings of this study are available from the corresponding author upon reasonable request. \section*{References} \section*{Supplemental Material} \subsection{Parameters of equivalent circuit analysis} Table I lists the fit parameters of the equivalent-circuit analysis performed for the data partially presented in Fig.\,3. The equivalent circuit, depicted as an inset in Fig.\,3\,\textbf{b}, consists of three $RC$-elements, one for the bulk and two for barrier-layer contributions. The bulk $RC$-circuit also takes into account contributions of a frequency-dependent conductivity, the UDR behaviour\cite{Jonscher1977}. Details of the fitting routine can be found in \textcite{Lunkenheimer2010} and \textcite{Ruff2017ConductivityYMnO3}. Here, we list only the values for the dielectric properties. The additional quantities within the fitting model, i.e., the broadening parameters $\alpha$, which describe the distribution of relaxation times, and the UDR parameters, remain almost constant for the selected temperatures. \begin{table*}[htb] \caption{Fit parameters of the equivalent circuit analysis of the frequency-dependent dielectric properties of h-ErMnO$_3$} \centering \begin{tabular}{c|c|c|c|c|c|c} Temperature (K) & $\varepsilon_{\infty}$ & $\varepsilon_1$ & $\varepsilon_2$ & $\sigma_{dc}$ ($\Omega^{-1}\text{cm}^{-1}$) & $\sigma_{1}$ ($\Omega^{-1}\text{cm}^{-1}$) & $\sigma_{2}$ ($\Omega^{-1}\text{cm}^{-1}$)\\ \hline 290 & 35 & 200 & 14000 & $1.45\cdot10^{-4}$ & $1.56\cdot10^{-5}$ & $7.29\cdot10^{-9}$\\ 270 & 35 & 230 & 11941 & $2.90\cdot10^{-5}$ & $5.50\cdot10^{-6}$ & $3.3\cdot10^{-9}$\\ 250 & 35 & 231 & 11585 & $7.10\cdot10^{-6}$ & $1.14\cdot10^{-6}$ & $6.13\cdot10^{-10}$\\ 230 & 35 & 200 & 13000 & $2.37\cdot10^{-6}$ & $2.22\cdot10^{-7}$ & $6.21\cdot10^{-11}$\\ 210 & 35 & 200 & 13000 & $2.34\cdot10^{-7}$ & $3.26\cdot10^{-8}$ & $2.66\cdot10^{-11}$\\ 190 & 35.3 & 300 & 13000 & $9.84\cdot10^{-9}$ & $2.85\cdot10^{-9}$ & $2.22\cdot10^{-14}$\\ 170 & 35.8 & 300 & 13000 & $7.53\cdot10^{-10}$ & $2.15\cdot10^{-10}$ & $3.66\cdot10^{-15}$\\ \hline \end{tabular} \end{table*} \subsection{Dielectric and local probe measurements of h-(Er$_{0.99}$,Ca$_{0.01}$)MnO$_3$} Besides the undoped h-ErMnO$_3$, we performed the bulk dielectric and local-probe analyses on two 1\% calcium-doped ErMnO$_3$ samples of the same batch with out-of-plane and in-plane polarisation. Details of sample preparation can be found in \textcite{Hassanpour_2016}. Figure S\ref{SFigure1} shows the frequency-dependent dielectric properties for samples with out-of-plane (closed symbols) and in-plane (open symbols) polarisation. Again, in analogy to the discussion of Fig.\,3, for the sample with in-plane polarisation two distinct relaxation-like features show up, a smaller one reaching $\varepsilon_1$ of about 220 and a more prominent one reaching a CDC for $\varepsilon_2$ of about $2\times 10^3$. Interestingly, for both samples a further increase at low frequencies is revealed, which could originate from hopping conductivity\cite{Lunkenheimer2010, Jonscher1977}.
\begin{figure}[htb] \centering \includegraphics[width=0.7\linewidth]{Figures/SFig2.pdf} \caption{Frequency-dependent dielectric constant $\varepsilon'$ \textbf{a} and conductivity $\sigma'$ \textbf{b} of the h-(Er$_{0.99}$,Ca$_{0.01}$)MnO$_3$ samples with out-of-plane (closed symbols) and in-plane (open symbols) polarisation at various temperatures.} \label{SFigure1} \end{figure} Figure S\ref{SFigure2} provides the characterization of the ferroelectric domain pattern measured by PFM \textbf{a} and cAFM \textbf{c}. The density of the domain walls with enhanced and reduced conductance in comparison to the bulk is measured by several line-scans. For the DW BLC model we only take into account the number of DWs with reduced conductance, which is 0.46 per \textmu m for \textbf{b}. \begin{figure}[htb] \centering \includegraphics[width=0.7\linewidth]{Figures/SFig4.png} \caption{\textbf{a} Calibrated in-plane PFM scan on h-(Er$_{0.99}$,Ca$_{0.01}$)MnO$_3$ where the yellow (blue) represents ferroelectric domains pointing to the left (right). These form the characteristic domain pattern present in hexagonal manganite single crystals, where the ferroelectric $180^{\circ}$ domains come together at sixfold vertex points. \textbf{b} A representative line-profile from \textbf{a} where high values are from domains with \textit{$\leftarrow$P} and low values are from \textit{$\rightarrow$P} domains, used to get an approximate density of DWs per unit length. \textbf{c} A cAFM image from the area marked by the red box in \textbf{a}. Light colors indicate areas of enhanced conductance (tail-to-tail walls) while dark areas indicate areas of lower conductance (head-to-head walls).} \label{SFigure2} \end{figure} \subsection{Quantifying the DW density of ErMnO$_3$} Figure S\ref{SFigure3} shows the PFM measurement of h-ErMnO$_3$ as well as the corresponding lines for the evaluation of the DW density. \begin{figure}[htb] \centering \includegraphics[width=0.7\linewidth]{Figures/SFig5.png} \caption{\textbf{a} In-plane PFM scan on h-ErMnO$_3$ with in-plane polarisation (cf. Fig.\,2). \textbf{b} Line-scans (coloured lines in \textbf{a}) are extracted, from which we deduce an average number of domain walls with reduced conductance of 10$\pm$1 per 20\,\textmu m. High values are from domains with $\leftarrow$\textit{P} and low values are from $\rightarrow$\textit{P} domains.} \label{SFigure3} \end{figure}
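\subsection{Numerical sketch of the equivalent-circuit response} As a cross-check of the parameters listed in Table I, the response of three $RC$ elements in series can be evaluated numerically. The following minimal Python sketch is our own simplified reconstruction of the model: it omits the UDR term and the broadening parameters $\alpha$ of the actual fitting routine and therefore reproduces the measured spectra only qualitatively. It computes $\varepsilon'(\nu)$ and $\sigma'(\nu)$ from the 210\,K parameters of Table I.
\begin{verbatim}
import numpy as np

eps0 = 8.854e-14  # vacuum permittivity in F/cm

# effective (geometry-normalized) 210 K fit parameters from Table I
eps = [35.0, 200.0, 1.3e4]           # eps_inf (bulk), eps_1, eps_2
sig = [2.34e-7, 3.26e-8, 2.66e-11]   # sigma_dc, sigma_1, sigma_2 in 1/(Ohm cm)

nu = np.logspace(0, 6, 301)          # 1 Hz ... 1 MHz
omega = 2.0 * np.pi * nu

# total impedance of three parallel RC elements connected in series,
# evaluated for a normalized geometry d/A = 1/cm
Z = np.zeros_like(omega, dtype=complex)
for e, s in zip(eps, sig):
    R = 1.0 / s                      # resistance of the element
    C = e * eps0                     # capacitance of the element
    Z += R / (1.0 + 1j * omega * R * C)

eps_prime = (1.0 / (1j * omega * Z * eps0)).real  # dielectric constant
sig_prime = (1.0 / Z).real                        # conductivity in 1/(Ohm cm)
# eps_prime exhibits the two plateaus (about 2e2 and 1e4) seen in the
# 210 K curve of Fig. 3a; sig_prime shows the corresponding steps
\end{verbatim}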
\section{Introduction} Solid state lighting offers the promise of significant savings in electricity consumption for general lighting purposes \cite{Humphreys2008}. For this vision to come true, further improvements in the efficiency of light emitting diodes (LEDs) and, more importantly, drastic reductions in fabrication costs are required. In this context, LEDs based on group-III-N nanowires (NWs) have recently attracted more and more interest because such devices exhibit several conceptual advantages over conventional planar structures \cite{Kikuchi2004, Kim2004, Hersee2009, Bavencove2010, Guo2010, Armitage2010, Lin2010, Nguyen2011, Bavencove2011, Hahn2011, Kunert2011, Riechert2011, Waag2011, Kishino2012, Li2012}. First, in the NW geometry, strain induced by lattice mismatch can elastically relax at the free sidewalls \cite{Bjork2002}, and thus GaN NWs can be grown in excellent crystal quality on cost-effective Si substrates \cite{Geelhaar2011}. Second, for the same reason the strain in axial (In,Ga)N/GaN quantum wells (QWs) is reduced, which in turn decreases the quantum confined Stark effect (QCSE) and enhances the internal quantum efficiency \cite{Renard2009, Lahnemann2011}. Third, by growing radial core-shell QWs on the NW sidewalls, the area of the active region is increased \cite{Waag2011, Li2012}. Fourth, light extraction is expected to be higher from a NW ensemble than from a planar layer \cite{Henneghien2011, Kim2004}. For general lighting purposes, NW-LEDs are only relevant if they are based on ensembles. Such LEDs have been grown by metal-organic hydride vapor phase epitaxy \cite{Kim2004}, by hybrid chemical vapor deposition \cite{Hahn2011}, by metal-organic vapor phase epitaxy \cite{Hersee2009}, and by molecular beam epitaxy (MBE) \cite{Kikuchi2004, Bavencove2010, Guo2010, Armitage2010, Nguyen2011, Bavencove2011, Lin2010, Kunert2011}. Naturally, for devices based on ensembles of nanostructures the question of homogeneity arises. Both variations in electroluminescence (EL) wavelength and intensity have been reported \cite{Bavencove2011, Lin2010}. The wavelength variation has been associated with fluctuations in In content and is also observed in photoluminescence (PL) wavelength \cite{Lin2010}. With respect to EL intensity, Bavencove \textit{et al.} \cite{Bavencove2011} observed a very dramatic effect in that only about 1\% of all NWs exhibited EL at all. Obviously, the number of electroluminescent NWs has to be drastically increased for applications. However, the analysis of this observation is not straightforward because the EL intensity is not only influenced by the active region and its radiative recombination efficiency but also by the current that actually flows through the active region. Thus, NWs exhibiting very bright PL may not emit EL at all if current is not injected into them, be it due to failed contacting or significant NW-to-NW variations in series resistance. To disentangle both of these aspects is a challenging task. Here, we analyze the current path in such LEDs by combining several characterization techniques. We show that even if all the NWs of the ensemble are capable of emitting light and are contacted, fluctuations in the series resistances of the individual NWs prevent many NWs from emitting EL. Therefore, the homogeneity of the current path is even more crucial than the homogeneity of the individual active regions. 
\section{LED fabrication} \begin{figure}[t] \centering \includegraphics{fig1} \caption{(a) Schematic view of the NW LED structure and (b) SEM bird's-eye view image of the NW ensemble after MBE growth and prior to processing. (c) CL spectral linescan acquired at room temperature along the center of a single dispersed NW. The position indicates the distance to the NW base end. The CL intensity is color-coded on a logarithmic scale and increases from blue via green and yellow to red.}\label{fig1} \end{figure} NW LEDs were fabricated by embedding heterostructures that consist of (In,Ga)N multiple QWs (MQWs) in the intrinsic region of GaN NWs with an n-i-p diode doping profile. The NWs were grown on Si(111) substrates by plasma-assisted MBE in a self-induced way, i.\,e. without any external catalysts or pre-patterning \cite{Sanchez-Garcia1998, Debnath2007, Geelhaar2011, Consonni2011b}. In figure~\ref{fig1}(a), a schematic view of the NW heterostructure is presented along with a scanning electron microscope (SEM) image of the as-grown NW ensemble in figure~\ref{fig1}(b). Special care was taken so that the NW morphology was not affected by the supply of Si during the growth of the Si-doped GaN base. The doping of NWs is a general challenge, as the complex growth mechanisms can be significantly altered by the addition of dopants \cite{Furtmayr2008, Furtmayr2008a, Richter2008, Jeganathan2009, Limbach2010, Limbach2012, Stoica2011}. In separate growth experiments, we determined that the maximum Si flux for which the nanowire morphology is not significantly changed corresponds to a nominal Si concentration of $3 \times 10^{19}$\,cm$^{-3}$ \cite{SIMS}. This Si flux was used for the growth of the NW base. After the growth of the GaN:Si base at 780\,$^\circ$C, the substrate temperature was lowered to 604\,$^\circ$C in order to enable the incorporation of In for the formation of QWs. The MQW structure consists of four wells with an In content of ($20\pm10$)\% and a width of ($3\pm1$)\,nm, determined by x-ray diffraction on reference NW samples \cite{Wolz2011,Kaganer2011}. The barriers were designed to have a thickness of 8\,nm. The last QW was immediately followed by a Mg-doped Al$_{0.15}$Ga$_{0.85}$N electron blocking layer with a nominal thickness of 13\,nm. Finally, a Mg-doped GaN cap layer was grown. Again, precautions were taken to prevent the doping from destroying the NW morphology. A very low growth rate of only 0.8\,$\textrm{nm}/\textrm{min}$ was employed to limit coalescence of the NWs as far as possible. For the same reason, the growth temperature was gently raised to 744\,$^\circ$C at 2\,$^\circ\textrm{C}/\textrm{min}$ during the initial phase of the cap growth. Nevertheless, the SEM image in figure~\ref{fig1}(b) indicates that some coalescence took place, and it seems rather challenging to incorporate Mg without any effect on the NW morphology \cite{Limbach2010, Furtmayr2008, Furtmayr2008a, Stoica2011}. In order to verify the growth of the designed LED structure, spectrally and spatially resolved cathodoluminescence (CL) measurements were performed. Figure~\ref{fig1}(c) shows a CL spectral linescan acquired at room temperature (RT) along a single dispersed NW with a spectrometer resolution of $\approx3$\,nm. All three parts of the LED structure can be identified. The bottom part with a height of 500\,nm is dominated by the near band edge (NBE) luminescence of GaN at 364\,nm, as expected for n-type GaN \cite{Reshchikov2005}.
This segment is followed by the MQW structure and its broad luminescence band around 560\,nm \cite{Limbach2011,Lahnemann2011, Wolz2012}. Finally, the Mg-doped GaN cap with a length of about 150\,nm can be identified by a redshift of the luminescence compared to the n-type region \cite{Limbach2010, Furtmayr2008a, Reshchikov2005}. This redshift indicates donor-acceptor pair transitions caused by the incorporated Mg. Overall, the CL measurements confirm that the NW LED structure was grown as designed. \begin{figure}[t] \centering \includegraphics{fig2} \caption{(a) Sketches of the sample geometry during processing, i.e.\ as grown, planarized, after etching, and with semi-transparent front contact and contact pads. (b) Cross-sectional TEM image of the processed NW LED.}\label{fig2} \end{figure} The different stages of the LED processing are sketched in figure~\ref{fig2}(a). The NW ensemble was planarized by spin coating a solution of hydrogen silsesquioxane (HSQ), which was subsequently transformed into amorphous SiO$_{\textrm{x}}$. The SiO$_{\textrm{x}}$ acts as an insulator between the NWs. In order to uncover the Mg-doped GaN NW tips, a back-etching step is necessary. Next, the Mg-doped NW tips were contacted by i) a semi-transparent front metallization (Ni/Au, 5\,nm\,/\,5\,nm) and ii) contact pads for bonding and current spreading (Ti/Au, 10\,nm\,/\,90\,nm). Finally, the backside of the Si substrate was metallized with Al/Au (50\,nm\,/\,50\,nm) to form the n-type contact. A particular challenge for the processing of such an LED based on a NW ensemble lies in the fluctuation of the NW height. Thus, the metal layer on top may not be continuous, and short NWs may actually not be contacted. In order to investigate the NW top contact, the LEDs were analyzed by cross-sectional transmission electron microscopy (TEM). In the image presented in figure~\ref{fig2}(b), the semi-transparent contact is visible as the dark edge. This image demonstrates, in particular, that the two contact layers of Ni and Au are well connected from NW to NW. Also, Ni is in direct contact with the NW tips, as desired. Both goals were achieved for the majority of the NWs (the image shown is representative of several areas investigated by TEM). \section{Basic LED characterization} \subsection{Current-voltage characteristics} \begin{figure}[t] \centering \includegraphics{fig3} \caption{(a) Current-voltage characteristics of a processed LED with a contact area of 0.2\,mm$^2$ on a linear and (inset) logarithmic scale. (b) Current-voltage characteristics for small currents; the dashed line is the linear fit used to determine $R_P$. In the inset, the product of the current $I$ and the derivative of the voltage $U$ with respect to $I$ is plotted against $I$. The dashed line in the inset is the linear fit from which $R_S$ is determined using only data points for which $U \gg k_B T / e$.}\label{fig4} \end{figure} Current-voltage measurements for a contact pad of 0.2\,mm$^2$ are presented in figure~\ref{fig4}(a) on a linear and logarithmic (inset) scale. The turn-on voltage of the device is approximately 5.2\,V. The exact determination of this value is not possible as each individual NW in the contacted ensemble constitutes an LED with its own turn-on voltage (see also section 4.2). Therefore, the value of 5.2\,V is only an average of the turn-on voltages of the NWs active at 8\,V.
The high value of the turn-on voltage is in line with earlier reports on similar devices \cite{Guo2010, Armitage2010, Bavencove2011} and can in part be explained by the amorphous Si$_{\textrm{x}}$N$_{\textrm{y}}$ layer between the NWs and the Si substrate \cite{Geelhaar2011, Calleja2007, Stoica2008}. In addition, since the processing is not as mature as for conventional LEDs, the contacting might also contribute significantly to the high turn-on voltage. In contrast, the leakage current of 1.5\,$\times$\,10$^{-6}\,\textrm{A}/\textrm{mm}^{2}$ at $-$8\,V indicates a very good insulation of the individual NWs by the SiO$_{\textrm{x}}$ and is two orders of magnitude lower than what has been published previously \cite{Kikuchi2004, Hersee2009, Guo2010, Kunert2011, Bavencove2011}. Plotting the small-current range of the characteristics on an appropriate scale as shown in figure \ref{fig4}(b) reveals a non-vanishing slope in the reverse bias regime and therefore indicates the presence of a parallel current path with resistance $R_P$. Its value can be determined by a linear fit to the reverse bias regime \cite{Schubert2006}. Such a fit yields a parallel resistance of $R_P= (27.3 \pm 0.1)\,$M$\Omega$, which is an excellent value compared to the data published so far \cite{Kikuchi2004, Hersee2009, Guo2010, Kunert2011, Bavencove2011}. In the forward bias regime, the current-voltage characteristics deviate from the ideal behaviour described by the Shockley equation as well. This deviation is in part caused by a series resistance $R_S$ in the current path \cite{Schubert2006} due to both the Si$_{\textrm{x}}$N$_{\textrm{y}}$ interlayer present at the bottom of the GaN NWs \cite{Calleja2007, Stoica2008} and the contacts at the top. The inset of figure~\ref{fig4}(b) depicts a plot of $I (dU/dI)$ versus $I$, and the value of $R_S$ can be determined from its slope \cite{Schubert2006}. However, the slope is not perfectly linear due to the tunneling resistance of the Si$_{\textrm{x}}$N$_{\textrm{y}}$ interlayer, which decreases at high bias voltages. Therefore, a large error for $R_S$ is associated with this approach. The linear fit to the slope results in a series resistance of $R_S = (220 \pm 40)\,\Omega$. This rather large value contributes to the high turn-on voltage that presently seems to be typical for this kind of LED \cite{Guo2010, Armitage2010, Bavencove2011} and indicates room for improvement in the device processing.
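For illustration, the extraction of $R_P$ and $R_S$ described above is straightforward to reproduce numerically. The following is a minimal sketch (our addition, with hypothetical array names \texttt{U} and \texttt{I\_meas} for the measured voltage and current; it is not the analysis code actually used for this work):

\begin{verbatim}
# Sketch: extracting R_P and R_S from measured I-V data.
import numpy as np

def parallel_resistance(U, I_meas, U_max=-1.0):
    """Linear fit I = U/R_P in the reverse-bias regime (U < U_max)."""
    mask = U < U_max
    slope = np.polyfit(U[mask], I_meas[mask], 1)[0]   # dI/dU ~ 1/R_P
    return 1.0 / slope

def series_resistance(U, I_meas, kT_e=0.0259):
    """Fit I*(dU/dI) = R_S*I + n*kT/e for U >> kT/e; slope gives R_S."""
    mask = U > 10 * kT_e
    dU_dI = np.gradient(U[mask], I_meas[mask])
    return np.polyfit(I_meas[mask], I_meas[mask] * dU_dI, 1)[0]
\end{verbatim}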
\subsection{Electroluminescence} \begin{figure}[t] \centering \includegraphics{fig4} \caption{EL spectra of the NW LED at room temperature for different forward currents. The inset shows a photograph of such a device with a diameter of 1\,mm under 8\,V forward bias.}\label{fig5} \end{figure} For preliminary EL measurements, the devices were contacted on a probe station. The inset of figure~\ref{fig5} shows a photograph of a NW LED with a diameter of 1\,mm in operation. This picture demonstrates the efficient current spreading through the semi-transparent contacts for this device. The EL can be seen with the naked eye at around 4\,V. The main part of figure~\ref{fig5} shows the current-dependent EL spectra of the ensemble NW LED at room temperature. No emission from GaN is visible (wavelength region not shown), which indicates a very good overlap of the p-n-junction with the active zone. The EL emission is centered around 540\,nm, in agreement with the CL emission from the (In,Ga)N QW region shown in figure \ref{fig1}. The full width at half maximum of the EL is 68\,nm at 8\,mA, which is typical for this kind of LED \cite{Kikuchi2004, Guo2010, Bavencove2011, Lin2010}. The rather large value suggests inhomogeneity in the emission wavelength of the individual NW LEDs, since the presented ensemble measurement integrates all spectra of the individual NWs \cite{Lahnemann2011}. The slight shift in the peak position with increasing current is consistent with the occurrence of the QCSE, which will be addressed in section 5. In order to clarify this inhomogeneity, EL maps were acquired at 200$\times$ and 500$\times$ magnification, as presented in figures~\ref{fig6}(a) and (b). In agreement with previous findings \cite{Bavencove2011}, the NW LED exhibits a very spotty emission pattern. One can find emission spots with different colors ranging from blue all the way to red as demonstrated in the close-up images in figure~\ref{fig6}(c), while the vast majority of spots exhibits green emission. The diameter of the emission spots is diffraction limited, so that it is plausible that each spot corresponds to a single NW emitting light. The total number density of these electroluminescent spots is around $1\times 10^{7}\,\textrm{cm}^{-2}$ at 10\,V, i.\,e. roughly two orders of magnitude lower than the NW density on the sample after growth \cite{Limbach2012, Calarco2007, Consonni2011a}. Similar observations of such a low density of electroluminescent spots have been reported recently, suggesting that this phenomenon is typical for LEDs based on NW ensembles \cite{Kishino2007, Lin2010,Bavencove2011}. Moreover, isolated electroluminescent spots as seen in the inset of figure~\ref{fig6}(b) would not be observed if a significant fraction of the NWs emitted light. Obviously, the fact that only a small fraction of NWs emits EL is unacceptable for applications. The origin of this phenomenon is elucidated in the next section. \begin{figure}[t] \centering \includegraphics{fig5} \caption{EL of a NW-LED with an area of 0.2\,mm$^2$ under 10\,V forward bias imaged through a microscope at (a) 200$\times$ and (b) 500$\times$ magnification. The inset in (b) shows an enlarged part of the image. The lower part (c) shows close-ups of individual luminescence spots taken from the image with 500$\times$ magnification.}\label{fig6} \end{figure} \section{Current path and quantum efficiency} The large number of dark NWs may be caused by a negligible current injected into the majority of NWs or by a very low internal quantum efficiency (IQE). A low IQE could be explained by growth-induced defects whose concentration fluctuates between individual NWs. An inefficient current injection could, in principle, result from several factors: First, the height distribution of the NWs and the planarization of the ensemble may lead to the burial of some NWs that are significantly shorter than the mean. Second, for NWs significantly longer than the mean, the metallization may contact the MQW structure or even the n-type base. Third, a locally higher thickness of the Si$_{\textrm{x}}$N$_{\textrm{y}}$ layer may significantly increase the series resistance for some NWs. Fourth, variations in the metallization may also result in locally higher contact resistance. The latter two of these factors relate to fluctuations in the series resistances, and such fluctuations were indeed already implied by the observation that the ensemble LED does not exhibit a well-defined turn-on voltage (cf. section 3.1).
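To make plausible how strongly such series-resistance fluctuations can concentrate the current in a few NWs, consider the following toy model of an ensemble of parallel single-NW diodes (our addition; all parameter values are hypothetical and chosen for illustration only):

\begin{verbatim}
# Sketch: toy ensemble of parallel single-NW LEDs with log-normally
# distributed series resistances; at a fixed bias, a small fraction
# of the wires carries most of the current.
import numpy as np
from scipy.optimize import brentq

kT_e, I0, n_ideal = 0.0259, 1e-15, 2.0   # V, A, ideality factor

def nw_current(U, R_s):
    """Solve I = I0*(exp((U - I*R_s)/(n*kT/e)) - 1) for I."""
    f = lambda I: I0 * (np.exp((U - I * R_s) / (n_ideal * kT_e)) - 1) - I
    return brentq(f, 0.0, U / R_s)

rng = np.random.default_rng(1)
R_s = rng.lognormal(mean=np.log(1e6), sigma=1.5, size=1000)   # Ohm
I = np.array([nw_current(5.0, r) for r in R_s])
top = np.sort(I)[::-1].cumsum()
print("fraction of NWs carrying 90% of the total current:",
      (np.searchsorted(top, 0.9 * I.sum()) + 1) / len(I))
\end{verbatim}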
We investigated this phenomenon in more detail by recording EL maps for various forward biases as discussed in section 4.1. The first two of the above factors seem unlikely to be important in the present case since TEM indicates a rather homogeneous height distribution and continuous metallization [figure \ref{fig2}(b)]. Nevertheless, a merely structural characterization does not prove the realization of an electrical contact. Thus, in section 4.2 we describe experiments that assess directly the contacting of the NWs. In order to determine whether there is indeed a significant variation in IQE between the NWs, we measured their luminescence independently of the current path, and these results are presented in section 4.3. \subsection{Bias-dependence of spot density} In order to analyze variations in the series resistances of the NWs in more detail, we acquired EL maps for various forward biases. The extracted number densities of electroluminescent spots are plotted in figure~\ref{fig7}, and the corresponding images have been converted into a movie (supplementary). Already at 3.4\,V, i.\,e. well below the turn-on voltage found for the device as a whole, a significant number of NWs emit light, and the number density of NWs showing electroluminescence increases monotonically up to 10\,V. Both observations indicate that there is indeed a significant inhomogeneity of the series resistances: The LED has to be considered as an ensemble of many individual NW-LEDs contacted in parallel rather than a uniform device. Each of the NWs exhibits an individual $R_S$ and turn-on voltage, as also discussed by Bavencove \textit{et al.} \cite{Bavencove2011}. In addition, the series of EL maps reveals that the increase in ensemble luminescence intensity with increasing current seen in figure~\ref{fig5} is caused by an increase both in the number density of actually luminescent NWs and in emission intensity from individual NWs. For the interpretation of these measurements, one has to take into consideration that they reveal information only about those NWs that eventually do emit light. However, these data do not account for the large number of NWs that remain dark, so it is still mandatory to assess contacting and IQE. \begin{figure}[t] \centering \includegraphics{fig6} \caption{Number density of electroluminescence spots as a function of the applied forward bias. The inset is a screen shot of a movie showing the EL during a ramp from 3\,V to 10\,V forward bias. The data were extracted from the central region of the individual frames. The movie is provided as supplementary data.}\label{fig7} \end{figure}
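The spot densities shown in figure~\ref{fig7} can be extracted from the EL maps by straightforward image analysis. A minimal sketch (our addition; the threshold and pixel scale are hypothetical parameters, and this is not necessarily the procedure used to produce figure~\ref{fig7}):

\begin{verbatim}
# Sketch: counting diffraction-limited EL spots in a grayscale map
# and converting the count into an areal density.
from scipy import ndimage

def spot_density(image, threshold, um_per_px):
    """Return the number of bright spots per cm^2 in a 2D map."""
    labeled, n_spots = ndimage.label(image > threshold)
    area_cm2 = image.size * (um_per_px * 1e-4) ** 2   # um^2 -> cm^2
    return n_spots / area_cm2
\end{verbatim}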
\subsection{Electrical contacts} Conclusive evidence that most NWs were successfully contacted can be derived from electron-beam-induced current (EBIC) measurements. To this end, a processed LED structure was cleaved and contacted in order to perform measurements on a cross-section of the device. Our set-up allows the simultaneous acquisition of secondary-electron (SE), CL and EBIC signals. Such a set of measurements is depicted in figure~\ref{fig3}. The monochromatic CL of the (In,Ga)N band around 560~nm is superimposed on the corresponding SE image in figure~\ref{fig3}(a), which visualizes the position of the QWs along the NW cross-section and shows that the top contact is well above the QW region, despite the variation in NW length. Note that most NWs exhibit CL, although the intensity varies significantly. The latter two aspects will be analyzed in more detail in subsection 4.3 with the help of top-view CL images. The EBIC map associated with the same SE image is depicted in figure~\ref{fig3}(b). In EBIC, electron-hole pairs created in or diffusing to the depletion region of the p-n-junction are separated by the electric field. The resulting short-circuit current can be detected through an external current amplifier while the electron beam is scanned across the sample. In figure~\ref{fig3}(b), the bright stripe in the middle of the NWs indicates the position of the p-n junction. An additional, but weaker, EBIC signal can be detected from the region of the top contact, implying the presence of a slight band bending at the semiconductor-metal interface. The strong EBIC signal related to the p-n junction directly shows which NWs are contacted. In fact, those NWs not contributing to the EBIC signal in figure~\ref{fig3}(b) are broken and thus not connected to the back contact (yellow arrows). A reverse bias of $-3.5$\,V was applied since the EBIC signal is too weak at 0\,V. The reverse bias increases the width of the depletion region of the diode and thus reduces the number of carriers recombining in the QWs, while increasing the fraction contributing to the EBIC signal. At an acceleration voltage of 8~kV for the electron beam, most of the signal originates from the first row of NWs. Therefore, the cross-sectional EBIC map directly visualizes that the majority of these NWs is contacted. \begin{figure}[t] \centering \includegraphics[width=13.5cm]{fig7} \caption{(a) Monochromatic room temperature CL image recorded around 560\,nm with a bandpass of approximately 30 nm superimposed on an SEM image and (b) corresponding EBIC map with color-coded intensity (increasing from black via blue, green, red, and yellow to white). These measurements on the sample cross section were performed under a reverse bias of $-$3.5~V. The dashed horizontal lines indicate the substrate-NW (white) and NW-top-contact (red) interface, respectively. The yellow arrows highlight broken NWs that do not contribute to the EBIC signal. (c) and (d) CL images of the same region acquired unbiased and under reverse bias, respectively.} \label{fig3} \end{figure} Carrier trapping by the QW and carrier separation by the p-n-junction are competing processes after excitation by the electron beam. Under reverse bias, the drift of carriers to the contacts induced by the increased electric field dominates over the diffusion to the (In,Ga)N QWs. Therefore, only carriers excited directly at the QWs contribute to the CL signal, and the spatial resolution of the CL is thus improved, as can be seen in the comparison of the CL images in figures~\ref{fig3}(c) and (d) that were acquired at a bias of 0\,V and $-3.5$\,V, respectively. In the first case, electrons and holes can diffuse to the MQW and recombine radiatively even if they are excited outside the active region. In contrast, under reverse bias the CL signal is only recorded when the electron beam directly excites the QWs. Therefore, figure~\ref{fig3}(a) precisely reflects the position of the QWs in the NWs. This position coincides with the upper part of the depletion region visualized in the EBIC map. \subsection{Internal and external quantum efficiencies} To assess the homogeneity of the luminescence independently of the current path, a top-view CL image of the as-grown NW ensemble has been recorded with a wide spectral bandpass of about $50$\,nm as shown in figure~\ref{fig8}(a).
This false-color image is dominated by luminescent spots with a diameter of 200--500\,nm. The spots indicate luminescence centers which collect carriers excited by the electron beam. The number density of these spots of about $1\times 10^{8}\,\textrm{cm}^{-2}$ is one order of magnitude higher than observed in EL, but still one order of magnitude lower than the NW density. The size of the spots is partly a result of the electron beam interaction volume at an acceleration voltage of 8\,kV, chosen to penetrate the cap layer and excite the QWs. However, the major cause of this spatial broadening is carrier diffusion within the partially coalesced p-type cap segment [cf.\ figure~\ref{fig1}(b)], i.\,e. between neighboring NWs. Carriers excited in the cap may diffuse along local minima in the potential landscape to QWs even in neighboring NWs and recombine radiatively there rather than at the position of the electron beam during their excitation. Recombination centers emitting at lower energies (green and red) typically collect carriers from a larger area, and thus the spots have a larger diameter than for higher energies (blue). Hence, the CL image presented in figure~\ref{fig8}(a) is largely affected by the relaxation of carriers into potential minima and does not reflect the capability of the individual NWs to emit light. \begin{figure}[t] \centering \includegraphics{fig8} \caption{Top-view false-color CL images taken at room temperature of (a) the NW-LED sample (unprocessed) and (b) an undoped reference sample. To cover the broad (In,Ga)N contribution, a spectral bandpass of approximately $50$\,nm was chosen and three monochromatic images were superimposed which were recorded at the peak wavelength (green), and on the short (blue) and long (red) wavelength flanks. Additional colors result from an overlap of the monochromatic images. The bandpass regions were slightly adjusted for the two samples to accommodate minor differences in the MQWs. No SE image is superimposed.} \label{fig8} \end{figure} The coalescence leading to the carrier diffusion between neighboring NWs is a result of the Mg doping. Therefore, an undoped reference sample with similar QWs emitting at a slightly shorter wavelength, but with a similar PL intensity under resonant excitation, has also been investigated. As shown in figure~\ref{fig8}(b), the spot diameter decreases with the reduced degree of coalescence, and the total number of luminescence spots agrees fairly well with the NW number density. This result shows conclusively that the low percentage of emitting NWs in the EL image of figure~\ref{fig6} is not caused by a significantly reduced IQE for the majority of the NWs. At the same time, the emission intensity of a few NWs in the reference sample is significantly higher than the mean, and their number density is similar to the one observed in figure~\ref{fig8}(a), which may be attributed to differences in the IQE of individual NWs. Of course, such differences in the IQE of individual NWs will occur in a self-induced NW ensemble, in which fluctuations of the QW thickness from NW to NW are essentially inevitable. However, an effect just as inevitable for a random array of NWs is the fluctuation of the extraction efficiency.
Whether we view the NW ensemble as a disordered photonic crystal in which multiple light scattering contributes to light extraction \cite{Long2008,Yang2008} or as an inhomogeneous effective medium in the limit of very small NW dimensions and distances \cite{Asatryan1999} does not change the result: the spatially random arrangement of dielectric cylinders results in areas of incidentally enhanced extraction efficiency. A closely related subject is the random lasing observed upon optical pumping for GaN NW ensembles in which spatial light localization occurs by chance \cite{Sakai2010}. \subsection{Discussion} Taking all the results of this section into account, we can now analyze the current path in LEDs based on NW ensembles. The TEM characterization [figure~\ref{fig2}(b)] and the EBIC investigation [figure~\ref{fig3}(b)] demonstrate that the vast majority of NWs is electrically contacted. However, the electrical characteristics (figure~\ref{fig4}) and the voltage-dependent EL maps (figure~\ref{fig7}) show that the turn-on behaviour of the individual NWs varies significantly. Moreover, the low number density of isolated, well-defined emission spots on EL images [figure~\ref{fig6}(b)] implies that many NWs do not emit EL at all. At the same time, our CL investigation [figure~\ref{fig3}(a) and figure~\ref{fig8}(b)] indicates that, independently of the current path, essentially all the NWs emit light, yet they exhibit fluctuations in intensity. For EL emission, an individual NW has to have a sufficiently high IQE for light emission and has to carry current, so both types of inhomogeneities have to be considered. The IQE of a given NW is not affected by the IQEs of the neighboring NWs. In contrast, the current densities in the individual NWs are not independent: since all NWs are operated in parallel, NWs carrying a high current density are necessarily accompanied by NWs carrying a low one. Hence, the consequences of fluctuations in individual series resistance and thus local current density are much more severe than those of variations in IQE, in agreement with our experimental results. Therefore, we conclude that the widely differing current injection into the individual NW LEDs determines the EL pattern of an LED based on a NW ensemble. \section{NW polarity and QCSE} Processed NW LEDs offer the possibility of studying the PL of the NWs while applying a bias. Such an investigation enables us to obtain additional important insight into the (In,Ga)N/GaN NWs, as we will describe in this section. Figure~\ref{fig9}(a) shows the $\mu$-PL spectra recorded at room temperature under resonant excitation with the 413.1\,nm line of a Kr$^{+}$ laser, a spot size of about $1$\,$\mu$m and a total beam power of 8\,mW. The NWs show a broad (In,Ga)N-QW emission centered around 530\,nm under biased as well as unbiased conditions, as also observed in EL (see figure~\ref{fig5}). For positive voltages the EL, which contributes only 8\,\% to the total signal at 10\,V and is slightly redshifted compared to the bare PL signal, was measured independently and subtracted from all $\mu$-PL spectra. Applying a forward bias to the LED leads to a moderate but steady increase of the PL intensity and no significant shift in transition energy up to $+10$\,V. In the reverse direction, the PL intensity is quenched significantly while the spectrum shifts abruptly to lower transition energies. The fact that the PL intensity is significantly influenced by the applied bias confirms that most of the NWs are contacted electrically, since all (In,Ga)N-QWs are excited simultaneously in this experiment.
To investigate the possibility of carrier escape from the QWs under reverse bias, we also recorded the photocurrent during the biased PL experiment. At a reverse bias of $-2$\,V, where the PL signal is already significantly quenched, a photocurrent of only $-22$\,$\mu$A is measured. At $-10$\,V, the photocurrent reaches $-100$\,$\mu$A. Comparing these values to the current under forward bias in figure~\ref{fig4}(a), it becomes immediately clear that carrier escape from the QWs cannot be responsible for the quenching of the PL under reverse bias. \begin{figure}[b] \centering \includegraphics{fig9} \caption{(a) $\mu$-PL spectra of the processed NW LED recorded at room temperature for an applied voltage $U$ in forward (blue) and reverse (red) direction. For forward voltages, the EL spectra of the LED were measured separately and subtracted from the PL spectra. The inset shows a schematic representation of the band profile of an N-polar (In,Ga)N/GaN QW under the influence of an external voltage $U$. (b) Out-of-plane strain within the NW as obtained by finite element simulations.}\label{fig9} \end{figure} The observed behavior of the PL intensity reflects the fact that the external bias changes the strength of the electric field within the QWs. An increase of this field, for example, reduces the electron-hole overlap and thus the internal quantum efficiency, which in turn leads to a lower PL intensity. The electric field results from the superposition of the built-in depletion field of the p-i-n-junction, the internal electrostatic field primarily induced by the piezoelectric polarization of the (In,Ga)N QWs, and the applied voltage. For a comparison of these contributions to the total electric field, we utilize finite element simulations of the strain in the (In,Ga)N QWs embedded in the GaN NW. Figure~\ref{fig9}(b) shows the component $\varepsilon_{zz}$ of the elastic strain tensor. The out-of-plane strain reaches a value of 1\% in the center part of the QW, which is close to the value expected for a laterally infinite planar (In,Ga)N QW (1.2\%). This strain within the QWs results in a piezoelectric polarization $P_z = 0.02$~C/m$^2$. Self-consistent Schrödinger-Poisson calculations of the band profile of the structure along the NW axis show that this polarization induces an internal electrostatic field of 2~MV/cm, which is significantly stronger than the built-in depletion field of the p-i-n-junction. Assuming electron and hole concentrations in the n- and p-type segments of $10^{18}$ and $10^{17}$~cm$^{-3}$, respectively, this built-in field within the intrinsic segment of the NW amounts to 300~kV/cm. This estimate of the internal fields allows us to draw an important conclusion about the polarity of the NWs. The dependence of the PL intensity upon the applied bias implies that the total field in the QWs is reduced under forward but increased under reverse bias. Together with the fact that the field due to the internal piezoelectric polarization significantly exceeds the built-in field of the p-i-n-junction, this specific dependence of the total field on bias is only possible if the NWs exhibit N polarity, i.~e., are grown along the $[000\overline{1}]$ direction. The resulting band profile is shown schematically in the inset of figure~\ref{fig9}(a). Note that the field within the QW would change its sign for Ga-polar NWs, and the dependence of the electron-hole overlap and thus of the PL intensity on the external field would be reversed.
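The magnitude of the polarization field quoted above can be checked with a simple estimate (our simplification; the value reported in the text stems from the self-consistent calculation). For an unscreened QW, $F \approx P_z / (\varepsilon_0 \varepsilon_r)$, and taking $\varepsilon_r \approx 10$ for GaN,
\begin{equation*}
F \approx \frac{0.02\,\mathrm{C/m^2}}{8.85 \times 10^{-12}\,\mathrm{F/m} \times 10} \approx 2.3 \times 10^{8}\,\mathrm{V/m} \approx 2\,\mathrm{MV/cm} \, ,
\end{equation*}
consistent with the result of the Schrödinger-Poisson calculation.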
It is well known that self-induced GaN NWs grown by MBE are oriented along the c-axis \cite{Geelhaar2011}, but conflicting results have been published with respect to the NW polarity \cite{Hestroffer2011, Brubaker2011, Cherns2008, Geelhaar2011, Kong2011, Mata2012}. Our experiments support the recent and comprehensive report by Hestroffer \textit{et al.} \cite{Hestroffer2011}, which indicates that these NWs are N-polar. Finally, we address the QCSE. It has been argued that (In,Ga)N QWs in such NWs are completely strain-free and thus not subject to the QCSE \cite{Nguyen2011, Armitage2008, Guo2011}, but there are also contradicting publications \cite{Lahnemann2011, Bavencove2011}. Here, we find in agreement with our previous study \cite{Lahnemann2011} that the (In,Ga)N QWs still experience the QCSE. \section{Summary and conclusions} The EL of LEDs based on NW ensembles is characterized by a spotty emission pattern because such devices have to be considered as arrays of many tiny LEDs operated in parallel. In such LEDs, frequently only a small fraction of the NWs emits light under typical biases. We have shown that this drawback is caused mainly by spatial inhomogeneities in the current path. While there are also variations in the IQE of the individual NWs, these variations are far less pronounced. The NW-to-NW fluctuations in series resistances must be significant; they lead to fluctuations in current density between the single-NW LEDs, and only those NWs with high current density emit EL. In addition, our PL measurements under bias support the recent findings that self-induced GaN NWs on Si grow in the N-polar direction \cite{Hestroffer2011} and that the (In,Ga)N QWs in such NWs still experience the QCSE \cite{Lahnemann2011}. For the future, a more homogeneous NW ensemble is desirable in order to reduce the inter-NW differences in series resistance. This may be achieved by using selective-area-grown NWs for LED applications, since it is expected that the NW-to-NW fluctuations can be reduced significantly by this approach \cite{Sekiguchi2010,Bertness2010,Gotschke2011,Schumann2011a, Li2012}. While we analyzed the specific case of an LED based on an (In,Ga)N/GaN NW ensemble grown by MBE, our conclusion about the need for homogeneity in the current path holds true for all devices based on NW ensembles and can therefore be transferred to other growth techniques, other material systems, and even other types of devices. \newpage \bibliographystyle{iopart-num}
\section{Introduction} In this paper, we establish a connection between analogical reasoning and kernel-based machine learning, which are two important subfields of artificial intelligence. Essentially, this becomes possible thanks to the observation that a specific formalization of analogical relationships, so-called \emph{analogical proportions} \cite{Miclet2009,prade17}, defines a kernel function on pairs of objects. This relationship is established by means of generalized (fuzzy) equivalence relations as a bridging concept. Analogical reasoning has a long tradition in artificial intelligence research, and various attempts at formalizing analogy-based inference can be found in the literature. In this regard, the aforementioned concept of analogical proportion is an especially appealing approach, which has already been used successfully in different problem domains, including classification \cite{bounhas14}, recommendation \cite{hug16}, preference completion \cite{pirlot16}, decision making \cite{billingsley17}, and solving IQ tests \cite{beltran16}. In spite of its popularity in AI in general, analogical reasoning has not been considered very much in machine learning so far. Yet, analogical proportions have recently been used in the context of preference learning \cite{ahmadi_huellermeier_aaai18}, a branch of machine learning that has received increasing attention in recent years \cite{mpub218}. Roughly speaking, the goal in preference learning is to induce preference models from observational (or experimental) data that reveal information about the preferences of an individual or a group of individuals in a direct or indirect way; such models typically serve the purpose of predictive modeling, i.e., they are then used to predict the preferences in a new situation. Frequently, the predicted preference relation is required to form a total order, in which case we also speak of a \emph{ranking problem}. In fact, among the problems in the realm of preference learning, the task of ``learning to rank'' has probably received the most attention in the literature so far, and a number of different ranking problems have already been introduced. Based on the type of training data and the required predictions, F\"urnkranz and H\"ullermeier \cite{mpub218} distinguish between the problems of object ranking \cite{Cohen99,kami_as10}, label ranking \cite{Har-Peled2002,Cheng2009,Vembu2011}, and instance ranking \cite{mpub191}. Building on \cite{ahmadi_huellermeier_aaai18}, the focus of this paper is on the problem of object ranking. Given training data in the form of a set of exemplary rankings of subsets of objects, the goal in object ranking is to learn a ranking function that is able to predict the ranking of any new set of objects. Our contribution is a novel approach to this problem, namely a kernel-based implementation of analogy-based object ranking. The rest of the paper is organized as follows. In the next section, we recall the setting of object ranking and formalize the corresponding learning problem. Section~3 outlines existing methods for the object ranking task, followed by Section~4, in which the connection between analogical reasoning and kernel-based learning is established. In Section~5, we introduce kernel-based analogical reasoning for the object ranking problem. Finally, we present an experimental evaluation of this approach in Section~6, prior to concluding the paper with a summary and an outline of future work.
\section{Problem Formulation} Consider a reference set of objects, items, or choice alternatives $\mathcal{X}$, and assume each item $\boldsymbol{x} \in \mathcal{X}$ to be described in terms of a feature vector; thus, an item is a vector $\boldsymbol{x} = (x_1, \ldots , x_d) \in \mathbb{R}^d$ and $\mathcal{X} \subseteq \mathbb{R}^d$. The goal in object ranking is to learn a \emph{ranking function} $\rho$ that accepts any (query) subset $$ Q = \{ \boldsymbol{x}_1, \ldots , \boldsymbol{x}_n \} \subseteq \mathcal{X} $$ of $n = |Q|$ items as input. As output, the function produces a ranking $\pi \in \mathbb{S}_n$ of these items, where $\mathbb{S}_n$ denotes the set of all permutations of length $n$, i.e., all mappings $[n] \longrightarrow [n]$ (symmetric group of degree $n$); $\pi$ represents the total order \begin{equation}\label{eq:r} \boldsymbol{x}_{\pi^{-1}(1)} \succ \boldsymbol{x}_{\pi^{-1}(2)} \succ \ldots \succ \boldsymbol{x}_{\pi^{-1}(n)} \enspace , \end{equation} i.e., $\pi^{-1}(k)$ is the index of the item on position $k$, while $\pi(k)$ is the position of the $k$th item $\boldsymbol{x}_k$ ($\pi$ is often called a \emph{ranking} and $\pi^{-1}$ an \emph{ordering}). Formally, a ranking function is thus a mapping \begin{equation}\label{eq:map} \rho: \, \mathcal{Q} \longrightarrow \mathcal{R} \enspace , \end{equation} where $\mathcal{Q} = 2^\mathcal{X} \setminus \{ \emptyset \}$ is the \emph{query space} and $\mathcal{R} = \bigcup_{n \in \mathbb{N}} \mathbb{S}_n$ the \emph{ranking space}. The order relation ``$\succ$'' is typically (though not necessarily) interpreted in terms of preferences, i.e., $\boldsymbol{x} \succ \boldsymbol{y}$ suggests that $\boldsymbol{x}$ is preferred to $\boldsymbol{y}$. A ranking function $\rho$ is learned on a set of training data that consists of a set of rankings \begin{equation}\label{eq:td} \mathcal{D} = \big\{ (Q_1, \pi_1) , \ldots , (Q_M, \pi_M) \big\} \, , \end{equation} where each ranking $\pi_\ell$ defines a total order of the set of objects $Q_\ell$. Once a ranking function has been learned, it can be used for making predictions for new query sets $Q$. Such predictions are evaluated in terms of a suitable loss function or performance metric. A common choice is the (normalized) \emph{ranking loss}, which counts the number of inversions between two rankings $\pi$ and $\pi'$: \begin{equation*} \label{eq:rankloss} d_{RL}(\pi, \pi') = \frac{ \sum_{1 \leq i , j \leq n} \llbracket {\pi(i) < \pi(j)} \rrbracket \llbracket {\pi'(i) > \pi'(j)} \rrbracket }{n(n-1)/2} \, , \end{equation*} where $\llbracket \cdot \rrbracket$ is the indicator function. The ranking function (\ref{eq:map}) sought in object ranking is a complex mapping from the query to the ranking space. An important question, therefore, is how to represent a ``ranking-valued'' function of that kind, and how it can be learned efficiently.
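For concreteness, the ranking loss just defined can be computed directly from two position vectors; a minimal sketch (our addition, using 0-based Python indexing):

\begin{verbatim}
# Sketch: normalized ranking loss d_RL between two rankings, each
# given as a vector of positions (pi[i] = position of item i).
def ranking_loss(pi, pi_prime):
    n = len(pi)
    inversions = sum(1 for i in range(n) for j in range(n)
                     if pi[i] < pi[j] and pi_prime[i] > pi_prime[j])
    return inversions / (n * (n - 1) / 2)

# Identical rankings give 0, completely reversed rankings give 1.
assert ranking_loss([1, 2, 3], [3, 2, 1]) == 1.0
\end{verbatim}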
\section{Previous Work} \label{baselines} Quite a number of approaches to object ranking and related learning-to-rank problems have already been proposed in the literature. In this section, we give a brief overview of some important state-of-the-art methods that will also be used as baselines in our experiments later on. In this regard, we distinguish between the more traditional utility-based approach, in which a ranking function is represented via an underlying (latent) utility function, and the analogy-based approach recently put forward by Ahmadi Fahandar and H{\"u}llermeier \cite{ahmadi_huellermeier_aaai18}. \subsection{Utility-based Approach} Most commonly, a ranking function is represented by means of an underlying scoring function $$ U:\, \mathcal{X} \longrightarrow \mathbb{R} \, , $$ so that $\boldsymbol{x} \succ \boldsymbol{x}'$ if $U(\boldsymbol{x})> U(\boldsymbol{x}')$. In other words, a ranking-valued function is represented through a real-valued function. Obviously, $U$ can be considered as a kind of utility function, and $U(\boldsymbol{x})$ as a latent utility degree assigned to an item $\boldsymbol{x}$. Seen from this point of view, the goal in object ranking is to learn a latent utility function on a reference set $\mathcal{X}$. Once such a function is established, a predicted ordering of items is obtained by sorting them according to their estimated (latent) utilities. The representation of a ranking function in terms of a real-valued (utility) function also suggests natural approaches to learning. In particular, two such approaches are prevalent in the literature. \subsubsection{Pointwise Approach} The first approach focuses on individual objects or ``points'' $\boldsymbol{x}$ in the instance space $\mathcal{X}$, and essentially reduces the ranking problem to a regression problem. Correspondingly, it requires information about target values in the form of preference degrees for individual objects. As a representative of this category, we will include \emph{expected rank regression} (ERR) in our experimental study \cite{Kamishima2006,kami_as10}. ERR reduces the problem to standard linear regression. To this end, every training example $(Q, \pi)$ is replaced by a set of data points $(\boldsymbol{x}_i , y_i) \in \mathcal{X} \times \mathbb{R}$. Here, the target $y_i$ assigned to object $\boldsymbol{x}_i \in Q$ is given by $$ y_i = \frac{\pi(i)}{|Q| + 1} \ . $$ This is justified by taking an expectation over all (complete) rankings of $\mathcal{X}$ (which is assumed to be finite) and assuming a uniform distribution. In spite of this apparently oversimplified assumption, and the questionable transformation of ordinal ranks into numerical scores, ERR has shown quite competitive performance in empirical studies, especially when all rankings in the training data (\ref{eq:td}) are of approximately the same length \cite{tfml}. \subsubsection{Pairwise Approach} The second idea is to reduce the problem to binary classification; here, the focus is on pairs of items, which is the reason why the approach is called the pairwise approach. As a first representative of this category, we include a simple reduction to binary classification with linear models. Given a ranking (\ref{eq:r}) as training information, this approach extracts all pairwise preferences $\boldsymbol{x}_{\pi^{-1}(i)} \succ \boldsymbol{x}_{\pi^{-1}(j)}$, $1 \leq i < j \leq n$, and considers these preferences as examples for a binary classification task. This is especially simple if $U$ is a linear function of the form $U(\boldsymbol{x}) = \boldsymbol{w}^\top \boldsymbol{x}$. In this case, $U(\boldsymbol{x}) > U(\boldsymbol{x}')$ if $\boldsymbol{w}^\top \boldsymbol{x} > \boldsymbol{w}^\top \boldsymbol{x}'$, which is equivalent to $\boldsymbol{w}^\top \boldsymbol{z} > 0$ for $\boldsymbol{z} = \boldsymbol{x} - \boldsymbol{x}' \in \mathbb{R}^d$. Thus, from the point of view of binary classification (with a linear threshold model), $\boldsymbol{z}$ can be considered as a positive and $-\boldsymbol{z}$ as a negative example.
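Both reductions are easy to make concrete in code. A minimal sketch of the respective data constructions (our addition, with hypothetical names; \texttt{X} is an $(n, d)$ array of item features and \texttt{pi[i]} the position of item $i$):

\begin{verbatim}
# Sketch: data construction for ERR (pointwise) and for the
# pairwise reduction to binary classification.
import numpy as np

def err_targets(pi):
    """ERR targets y_i = pi(i) / (|Q| + 1)."""
    pi = np.asarray(pi, dtype=float)
    return pi / (len(pi) + 1)

def pairwise_examples(X, pi):
    """Each preference x_i > x_j yields z = x_i - x_j with label +1
    and -z with label -1."""
    order = np.argsort(pi)            # item indices, best to worst
    Z, y = [], []
    for a in range(len(order)):
        for b in range(a + 1, len(order)):
            z = X[order[a]] - X[order[b]]
            Z.append(z)
            y.append(+1)
            Z.append(-z)
            y.append(-1)
    return np.array(Z), np.array(y)
\end{verbatim}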
In principle, any binary classification algorithm can be applied to learn the weight vector $\boldsymbol{w}$ from the set of examples produced in this way. As a representative of this class of methods, we will use support vector machines in our experiments; more specifically, we include Ranking SVM \cite{joachims02} as a state-of-the-art baseline to compare with. As a representative nonlinear method, we use a well-known model called RankNet \cite{Burges2005}. It represents a utility function in the form of a feedforward neural network. Given a pair of items $\boldsymbol{x}_i, \boldsymbol{x}_j$, it computes scores $s_i=f(\boldsymbol{x}_i)$ and $s_j=f(\boldsymbol{x}_j)$, and predicts the probability \[ \mathbf{P}( \boldsymbol{x}_i \succ \boldsymbol{x}_j) = \frac{ 1 }{ 1 + \exp( -\sigma(s_i - s_j) ) } \, . \] For an observed preference $\boldsymbol{x}_i \succ \boldsymbol{x}_j$ or $\boldsymbol{x}_j \succ \boldsymbol{x}_i$, it adapts the network weights using (stochastic) gradient descent, with the logistic (cross-entropy) loss as the cost function. \subsection{Analogy-based Approach} A new approach to object ranking was recently proposed on the basis of analogical reasoning \cite{ahmadi_huellermeier_aaai18}. This approach essentially builds on the following inference pattern: If object $\boldsymbol{a}$ relates to object $\boldsymbol{b}$ as $\boldsymbol{c}$ relates to $\boldsymbol{d}$, and knowing that $\boldsymbol{a}$ is preferred to $\boldsymbol{b}$, we (hypothetically) infer that $\boldsymbol{c}$ is preferred to $\boldsymbol{d}$. This principle is formalized using the concept of analogical proportion \cite{Miclet2009}. For every quadruple of objects $\boldsymbol{a},\boldsymbol{b},\boldsymbol{c},\boldsymbol{d}$, the latter provides a numerical degree to which these objects are in analogical relation to each other. To this end, such a degree is first determined for each attribute value (feature) separately, and these degrees are then combined into an overall degree of analogy. Consider four values $a, b, c, d$ from an attribute domain $\mathbb{X}$. The quadruple $(a,b,c,d)$ is said to be in analogical proportion, denoted by $a:b::c:d$, if ``$a$ relates to $b$ as $c$ relates to $d$''. A bit more formally, the degree of proportion can be expressed as \begin{equation}\label{eq:ap} E \big( \mathcal{R}(a,b) , \mathcal{R}(c,d) \big) \, , \end{equation} where the relation $E$ denotes the ``as'' part of the informal description. $\mathcal{R}$ can be instantiated in different ways, depending on the underlying domain $\mathbb{X}$. In the case of Boolean variables, where $\mathbb{X} = \{0,1\}$, there are $2^4=16$ instantiations of the pattern $a:b::c:d$, of which only the following 6 satisfy a set of axioms required to hold for analogical proportions: \begin{center} \begin{tabular}{cccc} \hline $a$ & $b$ & $c$ & $d$ \\ \hline 0 & 0 & 0 & 0 \\ 0 & 0 & 1 & 1 \\ 0 & 1 & 0 & 1 \\ 1 & 0 & 1 & 0 \\ 1 & 1 & 0 & 0 \\ 1 & 1 & 1 & 1 \\ \hline \end{tabular} \end{center} This formalization captures the idea that $a$ differs from $b$ (in the sense of being ``equally true'', ``more true'', or ``less true'', if the values 0 and 1 are interpreted as truth degrees) exactly as $c$ differs from $d$, and vice versa. In the numerical case, assuming all attributes to be normalized to the unit interval $[0,1]$, the concept of analogical proportion can be generalized on the basis of generalized logical operators \cite{BOUNHAS201736,dubois16}.
In this case, the analogical proportion becomes a matter of degree, i.e., a quadruple $(a,b,c,d)$ can be in analogical proportion \emph{to some degree} between 0 and 1. An example of such a proportion, with $\mathcal{R}$ the arithmetic difference $\mathcal{R}(a,b)=a-b$, is the following: \begin{equation}\label{eq:v_a} v(a,b,c,d) = \begin{cases} 1- | (a-b) - (c-d)| & \text{if } \operatorname{sign}(a-b) = \operatorname{sign}(c-d)\\ 0 & \text{otherwise.} \end{cases} \end{equation} Note that this formalization indeed generalizes the Boolean case (where $a,b,c,d \in \{0,1 \}$). To extend analogical proportions from individual values to complete feature vectors, the individual degrees of proportion can be combined using any suitable aggregation function, for example the arithmetic mean: \begin{equation*}\label{eq:agg} v(\boldsymbol{a}, \boldsymbol{b} , \boldsymbol{c} , \boldsymbol{d}) = \frac{1}{d} \sum_{i=1}^d v(a_i , b_i , c_i , d_i) \, . \end{equation*} With a measure of analogical proportion at hand, the object ranking task is tackled as follows: Consider any pair of query objects $\boldsymbol{x}_i , \boldsymbol{x}_j \in Q$. Every preference $\boldsymbol{z} \succ \boldsymbol{z}'$ observed in the training data $\mathcal{D}$, such that $(\boldsymbol{z}, \boldsymbol{z}', \boldsymbol{x}_i , \boldsymbol{x}_j)$ are in analogical proportion, suggests that $\boldsymbol{x}_i \succ \boldsymbol{x}_j$. This principle is referred to as \emph{analogical transfer} of preferences, because the observed preference for $\boldsymbol{z} , \boldsymbol{z}'$ is (hypothetically) transferred to $\boldsymbol{x}_i, \boldsymbol{x}_j$. Accumulating all pieces of evidence that can be collected in favor of $\boldsymbol{x}_i \succ \boldsymbol{x}_j$ and, vice versa, the opposite preference $\boldsymbol{x}_j \succ \boldsymbol{x}_i$, an overall degree $p_{i,j}$ is derived for this pair of objects. The same is done for all other pairs in the query. Eventually, all these degrees are combined into an overall consensus ranking. We refer to \cite{ahmadi_huellermeier_aaai18} for a detailed description of this method, which is called ``\textbf{a}nalogy-\textbf{b}ased \textbf{le}arning to rank'' (able2rank) by the authors. As an aside, note that an analogy-based approach as outlined above appears to be specifically suitable for \emph{transfer learning}. This is mainly because the relation $\mathcal{R}$ is evaluated separately for ``source objects'' $a$ and $b$ on the one side and ``target objects'' $c$ and $d$ on the other side, but never between sources and targets. In principle, one could even think of using different specifications of $\mathcal{R}$ for the source and the target.
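In code, the proportion (\ref{eq:v_a}) and its arithmetic-mean aggregation read as follows (a minimal sketch, our addition):

\begin{verbatim}
# Sketch: analogical proportion v(a,b,c,d) on [0,1] attributes and
# its arithmetic-mean aggregation over feature vectors.
import numpy as np

def v_scalar(a, b, c, d):
    if np.sign(a - b) != np.sign(c - d):
        return 0.0
    return 1.0 - abs((a - b) - (c - d))

def v_vector(a, b, c, d):
    return np.mean([v_scalar(*t) for t in zip(a, b, c, d)])

# Example: 0.1 relates to 0.3 as 0.6 relates to 0.8 (degree 1.0).
print(v_scalar(0.1, 0.3, 0.6, 0.8))
\end{verbatim}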
\section{Analogy and Kernels} The core idea of our proposal is based on the observation that an analogical proportion, by definition, defines a kind of \emph{similarity} between the relation of pairs of objects: According to (\ref{eq:ap}), the analogical proportion $a:b::c:d$ holds if $\mathcal{R}(a,b)$ is similar to $\mathcal{R}(c,d)$. The notion of similarity plays an important role in machine learning in general, and in kernel-based machine learning in particular. In fact, kernel functions can typically be interpreted in terms of similarity. Thus, a kernel-based approach might be a natural way to incorporate analogical reasoning in machine learning. More specifically, to establish a connection between kernel-based machine learning and analogical reasoning, we make use of generalized (fuzzy) equivalence relations as a bridging concept. Fuzzy equivalences are weakened forms of standard equivalence relations, and hence capture the notion of similarity. More specifically, a fuzzy equivalence relation $E$ on a set $\mathcal{X}$ is a fuzzy subset of $\mathcal{X} \times \mathcal{X}$, that is, a function $E:\, \mathcal{X}^2 \longrightarrow [0,1]$, which is reflexive, symmetric, and $\top$-transitive: \begin{itemize} \item $E(x,x) = 1$ for all $x \in \mathcal{X}$, \item $E(x,y)=E(y,x)$ for all $x, y \in \mathcal{X}$, \item $\top(E(x,y), E(y,z)) \leq E(x,z)$ for all $x, y, z \in \mathcal{X}$, \end{itemize} where $\top$ is a triangular norm (t-norm), that is, a generalized logical conjunction. In our case, the relation $E$ in (\ref{eq:ap}) will play the role of a fuzzy equivalence. The detour via fuzzy equivalences is motivated by a result of Moser \cite{Moser2006}, who proved that certain types of fuzzy equivalence relations satisfy the properties of a kernel function. Before elaborating on this idea in more detail, we briefly recall some basic concepts of kernel-based machine learning as needed for this paper. For a thorough discussion of kernel methods, see for instance \cite{Scholkopf2001,ShaweTaylor2004}. \subsection{Kernels} Let $\mathcal{X}$ be a nonempty set. A function $k: \mathcal{X} \times \mathcal{X} \longrightarrow \mathbb{R}$ is a {\em positive semi-definite kernel} on $\mathcal{X}$ iff it is symmetric, i.e., $k(x,y) = k(y,x)$ for all $x,y \in \mathcal{X}$, and positive semi-definite, i.e., \[ \sum_{i=1}^n \sum_{j=1}^n c_i c_j k(x_i,x_j) \geq 0 \] for arbitrary $n$, arbitrary instances $x_1, \ldots, x_n \in \mathcal{X}$ and arbitrary $c_1, \ldots, c_n \in \mathbb{R}$. Given a kernel $k$ on $\mathcal{X}$, an important theorem by Mercer \cite{Mercer} implies the existence of a (Hilbert) space $\mathcal{H}$ and a map $\phi:\, \mathcal{X} \longrightarrow \mathcal{H}$, such that $$ k(x,y) = \langle \phi(x) , \phi(y) \rangle $$ for all $x,y \in \mathcal{X}$. Thus, computing the kernel $k(x,y)$ in the original space $\mathcal{X}$ is equivalent to mapping $x$ and $y$ to $\mathcal{H}$ first, using the \emph{linearization} or \emph{feature map} $\phi$, and combining them in terms of the inner product in that space afterward. This connection between a nonlinear combination of instances in the original space $\mathcal{X}$ and a linear combination in the induced feature space $\mathcal{H}$ provides the basis for the so-called ``kernel trick'', which offers a systematic way to design nonlinear extensions of methods for learning linear models. The kernel trick has been applied to various methods and has given rise to many state-of-the-art machine learning algorithms, including support vector machines, kernel principal component analysis, and kernel Fisher discriminant analysis, amongst others \cite{Scholkopf_nips,Scholkopf98}. \subsection{Analogical Proportions as Kernels} Our focus in this paper is the analogical proportion (\ref{eq:v_a}), which is a map $v:\, [0,1]^4 \longrightarrow [0,1]$. In this case, the relation $\mathcal{R}$ is the simple arithmetic difference $\mathcal{R}(a,b)=a-b$, and the similarity relation $E$ is defined as $E(u, v) = 1- |u-v|$ if both $u,v$ have the same sign and $E(u, v) = 0$ otherwise. As an aside, we note that, strictly speaking, $E$ thus defined is not a fuzzy equivalence relation. This is due to the thresholding in the case where $\text{sign} (a-b) \neq \text{sign} (c-d)$.
Without this thresholding, $E$ would be a $\top_{\!\!\L}$-equivalence, where $\top_{\!\!\L}$ is the {\L}ukasiewicz t-norm $(\alpha,\beta) \mapsto \max(\alpha+\beta-1,0)$. For modeling analogy, however, setting $E$ to 0 in the case where $b$ deviates positively from $a$ while $d$ deviates negatively from $c$ (or vice versa) appears reasonable. We reinterpret $v$ as defined above as a kernel function $k:\, [0,1]^2 \times [0,1]^2 \longrightarrow [0,1]$ on $\mathcal{X} = [0,1]^2$, i.e., a kernel on pairs of pairs of objects, which essentially means equating $k$ with $E$: \begin{equation}\label{eq:akernel} k(a,b,c,d) = 1- |(a-b) - (c-d)| \end{equation} if $\text{sign} (a-b) = \text{sign} (c-d)$ and 0 otherwise. In what follows, we show that the ``analogy kernel'' (\ref{eq:akernel}) does indeed define a proper kernel function. The first property to be fulfilled, namely symmetry, is obvious. Thus, it remains to show that $k$ is also positive semi-definite, which is done in the theorem below. As a preparation, we first recall the following lemma, which is proved by Moser \cite{Moser2006} as part of his Theorem 11. \begin{lemma} \label{lma1} Let $\mu_1, \ldots , \mu_n \in [0,1]$, $n \in \mathbb{N}$, and the matrix $M$ be defined by \[ M^{(n)}_{i,j} = 1- | \mu_i - \mu_j | \, . \] Then $M$ has a non-negative determinant. \end{lemma} \begin{theorem} The function $k:\, [-1,1]^2 \longrightarrow [0,1]$ defined as \begin{equation*} \label{eq:kernel} k(u,v) = \begin{cases} 1 - |u-v|, & \text{if } \text{sign}(u)=\text{sign}(v), \\ 0, & \text{otherwise,} \end{cases} \end{equation*} is a valid kernel. \end{theorem} \begin{proof} It is easy to see that $k$ is symmetric. Thus, it remains to show that it is positive semi-definite. To this end, it suffices to show that the determinants of all principal minors of every kernel matrix produced by $k$ are non-negative. Thus, consider $\alpha_1, \ldots , \alpha_n \in [-1,1]$, $n \in \mathbb{N}$, and the matrix $K$ defined as \begin{equation} K^{(n)}_{i,j} = \begin{cases} 1 - |\alpha_i-\alpha_j|, & \text{if } \text{sign}(\alpha_i)=\text{sign}(\alpha_j), \\ 0, & \text{otherwise.} \end{cases} \end{equation} We need to show that \[ \det \bigg ( K^{(m)}_{i,j} \bigg ) \ge 0 \] for all $1 \le m \le n$. Thanks to the invariance of determinants under simultaneous row and column permutations, we can assume (without loss of generality) that the values $\alpha_i$ are sorted in non-increasing order, i.e., $\alpha_1 \geq \alpha_2 \geq \cdots \geq \alpha_n$; in particular, note that the positive $\alpha_i$ will then precede all the negative ones. Thus, the matrix $K$ takes the form of a diagonal block matrix \[ K = \begin{pmatrix} A & 0 \\ 0 & B \\ \end{pmatrix} \, , \] in which the submatrix $A$ contains the values of $K$ for which $\alpha_i, \alpha_j \in [0,1]$, and $B$ contains the values of $K$ for which $\alpha_i, \alpha_j$ are negative. According to Lemma~\ref{lma1}, $\det(A) \ge 0$. Moreover, since $1-|u-v| = 1-|(-u)-(-v)|$ for $u,v \in [0,1]$, the same lemma can also be applied to the submatrix $B$, hence $\det(B) \ge 0$. Finally, we can exploit that \[ \det(K) = \det(A) \det(B) \, . \] Since both matrices $A$ and $B$ have non-negative determinants, it follows that $\det(K) \ge 0$, which completes the proof. \end{proof}
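The theorem can also be checked numerically: for randomly drawn $\alpha_i \in [-1,1]$, the eigenvalues of the resulting kernel matrix are non-negative up to machine precision. A minimal sketch (our illustration, not part of the proof):

\begin{verbatim}
# Sketch: numerical sanity check that the analogy kernel matrix is
# positive semi-definite for random inputs.
import numpy as np

def k(u, v):
    return 1 - abs(u - v) if np.sign(u) == np.sign(v) else 0.0

rng = np.random.default_rng(0)
alpha = rng.uniform(-1, 1, size=20)
K = np.array([[k(u, v) for v in alpha] for u in alpha])
assert np.linalg.eigvalsh(K).min() > -1e-10   # PSD within tolerance
\end{verbatim}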
This allows us to extend the analogy kernel from individual variables to feature vectors using the arithmetic mean as an aggregation function: \begin{equation} \label{eq:analogykernel} k_A( \boldsymbol{a}, \boldsymbol{b} , \boldsymbol{c}, \boldsymbol{d}) = \frac{1}{d} \sum_{i=1}^d k(a_i , b_i, c_i, d_i) \, . \end{equation} Furthermore, to allow for incorporating a certain degree of non-linearity, we make use of a homogeneous polynomial kernel of degree 2, \begin{equation} \label{eq:polykernel} k_A'( \boldsymbol{a}, \boldsymbol{b} , \boldsymbol{c}, \boldsymbol{d} ) = \big( k_A( \boldsymbol{a}, \boldsymbol{b} , \boldsymbol{c}, \boldsymbol{d} ) \big)^2 \, , \end{equation} which is again a valid kernel. \section{Analogy-Kernel-Based Object Ranking} Recall that, in the setting of learning to rank, we assume to be given a set of training data of the form $$ \mathcal{D} = \big\{ (Q_1, \pi_1) , \ldots , (Q_M, \pi_M) \big\} \ , $$ where each $\pi_\ell$ defines a ranking of the set of objects $Q_\ell$. If $\boldsymbol{z}_i , \boldsymbol{z}_j \in Q_\ell$ and $\pi_\ell(i) < \pi_\ell(j)$, then $\boldsymbol{z}_i \succ \boldsymbol{z}_j$ has been observed as a preference. Our approach to object ranking based on the analogy kernel, AnKer-rank, comprises two main steps: \begin{itemize} \item First, for each pair of objects $\boldsymbol{x}_i , \boldsymbol{x}_j \in Q$, a degree of preference $p_{i,j} \in [0,1]$ is derived from $\mathcal{D}$. If these degrees are normalized such that $p_{i,j} + p_{j,i} = 1$, they define a reciprocal preference relation \begin{equation}\label{eq:prefrel} P= \Big( p_{i,j} \Big)_{1 \leq i \ne j \leq n} \, . \end{equation} \item Second, the preference relation $P$ is turned into a ranking $\pi$ using a suitable ranking procedure. \end{itemize} Both steps will be explained in more detail further below. \subsection{Prediction of Pairwise Preferences} The first step of our proposed approach, prediction of pairwise preferences, is based on a reduction to binary classification. To this end, training data $\mathcal{D}_{bin}$ is constructed as follows. Consider any preference $\boldsymbol{x}_i \succ \boldsymbol{x}_j$ that can be extracted from the original training data $\mathcal{D}$, i.e., from any of the rankings $\pi_m$, $m \in [M]$. Then $\boldsymbol{z}_{i,j}=(\boldsymbol{x}_i, \boldsymbol{x}_j)$ is a positive example for the binary problem (with label $y_{i,j}=+1$), and $\boldsymbol{z}_{j,i}=(\boldsymbol{x}_j, \boldsymbol{x}_i)$ is a negative example (with label $y_{j,i}=-1$). Since these examples essentially carry the same information, we only add one of them to $\mathcal{D}_{bin}$. To keep a balance between positive and negative examples, the choice is simply made by flipping a fair coin. Note that, for any pair of instances $(\boldsymbol{a}, \boldsymbol{b})$ and $(\boldsymbol{c}, \boldsymbol{d})$ in $\mathcal{D}_{bin}$, the analogy kernel (\ref{eq:analogykernel}) is well-defined, i.e., $k_A(\boldsymbol{a}, \boldsymbol{b}, \boldsymbol{c}, \boldsymbol{d})$ can be computed. Therefore, a binary predictor $h_{bin}$ can be trained on $\mathcal{D}_{bin}$ using any kernel-based classification method. We assume $h_{bin}$ to produce predictions in the unit interval $[0,1]$, which can be achieved, for example, by means of support vector machines with a suitable post-processing such as Platt-scaling \cite{Platt99probabilisticoutputs}. Now, consider any pair of objects $\boldsymbol{x}_i, \boldsymbol{x}_j$ from a new query $Q=\{ \boldsymbol{x}_1, \ldots , \boldsymbol{x}_n \}$.
Again, the analogy kernel can be applied to this pair and any example from $\mathcal{D}_{bin}$, so that a (binary) prediction for the preference between $\boldsymbol{x}_i$ and $\boldsymbol{x}_j$ can be derived from $h_{bin}$. More specifically, querying this model with $\boldsymbol{z}_{i,j}=(\boldsymbol{x}_i, \boldsymbol{x}_j)$ yields a degree of support $q_{i,j}=h_{bin}(\boldsymbol{z}_{i,j})$ in favor of $\boldsymbol{x}_i \succ \boldsymbol{x}_j$, while querying it with $\boldsymbol{z}_{j,i}=(\boldsymbol{x}_j, \boldsymbol{x}_i)$ yields a degree of support $q_{j,i}=h_{bin}(\boldsymbol{z}_{j,i})$ in favor of $\boldsymbol{x}_j \succ \boldsymbol{x}_i$. As already said, we assume both degrees to be normalized in the range $[0,1]$ and define $p_{i,j} = (1+q_{i,j}-q_{j,i})/2$ as an estimate for the probability of the preference $\boldsymbol{x}_i \succ \boldsymbol{x}_j$. This estimate constitutes one of the entries in the preference relation (\ref{eq:prefrel}). \subsection{Rank Aggregation} To turn pairwise preferences into a total order, we make use of a rank aggregation method. More specifically, we apply the Bradley-Terry-Luce (BTL) model, which is well-known in the literature on discrete choice \cite{brad_tr52}. It starts from the parametric model \begin{equation}\label{eq:pmp} \mathbf{P}(\boldsymbol{x}_i \succ \boldsymbol{x}_j) = \frac{\theta_i}{\theta_i + \theta_j} \, , \end{equation} where $\theta_i, \theta_j \in \mathbb{R}_+$ are parameters representing the (latent) utility $U(\boldsymbol{x}_i)$ and $U(\boldsymbol{x}_j)$ of $\boldsymbol{x}_i$ and $\boldsymbol{x}_j$, respectively. Thus, according to the BTL model, the probability of observing a preference in favor of a choice alternative $\boldsymbol{x}_i$, when compared to any other alternative, is proportional to $\theta_i$. Given the preference relation (\ref{eq:prefrel}), i.e., the entries $p_{i,j}$ informing about the class probability of $\boldsymbol{x}_i \succ \boldsymbol{x}_j$, the parameter $\theta = (\theta_1, \ldots , \theta_n)$ can be estimated by likelihood maximization: $$ \hat{\theta} \in \arg \max_{\theta \in \mathbb{R}_+^{n} } \prod_{1 \leq i \neq j \leq n} \left( \dfrac{\theta_{i}}{\theta_{i} + \theta_{j}} \right)^{p_{i,j}} $$ Finally, the predicted ranking $\pi$ is obtained by sorting the items $\boldsymbol{x}_i$ in descending order of their estimated (latent) utilities $\hat{\theta}_i$. We note that many other rank aggregation techniques have been proposed in the literature and could in principle be used as well; see e.g.\ \cite{pmlr-v70-fahandar17a}. However, since BTL seems to perform very well, we did not consider any other method. \section{Experiments} To study the practical performance of our proposed method, we conducted experiments on several real-world data sets, using able2rank, ERR, Ranking SVM (with linear kernel) and RankNet (cf.\ Section \ref{baselines}) as baselines to compare with.
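Before turning to the data, we give a minimal sketch of the analogy kernel at the core of AnKer-rank. The following Python/NumPy snippet is our own illustration (function names and the random test are ours, not part of any released implementation): it implements the scalar kernel (\ref{eq:akernel}) and its mean aggregation (\ref{eq:analogykernel}), and empirically checks positive semi-definiteness on a random sample, in line with the theorem above.
\begin{verbatim}
import numpy as np

def k_scalar(a, b, c, d):
    """Analogy kernel on single features, eq. (akernel):
    1 - |(a-b) - (c-d)| if a-b and c-d have the same sign, else 0."""
    u, v = a - b, c - d
    if np.sign(u) != np.sign(v):
        return 0.0
    return 1.0 - abs(u - v)

def k_A(a, b, c, d):
    """Mean aggregation over feature vectors, eq. (analogykernel)."""
    return np.mean([k_scalar(ai, bi, ci, di)
                    for ai, bi, ci, di in zip(a, b, c, d)])

# Empirical PSD check: eigenvalues of a random Gram matrix are >= 0
# (up to floating-point error), as guaranteed by the theorem above.
rng = np.random.default_rng(0)
pairs = rng.random((30, 2, 4))          # 30 random pairs (a, b) in [0,1]^4
G = np.array([[k_A(p[0], p[1], q[0], q[1]) for q in pairs] for p in pairs])
print(np.linalg.eigvalsh(G).min() >= -1e-10)   # expected output: True
\end{verbatim}
The quadratic variant (\ref{eq:polykernel}) is obtained by simply squaring the value returned by \texttt{k\_A}.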
\begin{table*} \caption{Properties of data sets.} \label{tab:ds} \footnotesize \centering \begin{tabular}{ |c||lccccc|c| } \hline data set & domain & \# instances & \# features & numeric & binary & ordinal & name \\ \hline \multirow{4}{*}{Decathlon} & Year 2005 & 100 & 10 & x& -- & -- & D1\\ & Year 2006 & 100 & 10 & x & -- & -- & D2 \\ & Olympic 2016 & 24 & 10 & x & -- & -- & D3 \\ & U-20 World 2016 & 22 & 10 & x & -- & -- & D4 \\ \hline \multirow{3}{*}{Bundesliga} & Season 15/16 & 18 & 13 & x& -- & -- & B1\\ & Season 16/17 & 18 & 13 & x & -- & -- & B2 \\ & Mid-Season 16/17 & 18 & 7 & x & -- & -- & B3 \\ \hline \multirow{2}{*}{Footballers} & Year 2016 (Streaker) & 50 & 40 & x & x & x & F1 \\ & Year 2017 (Streaker)& 50 & 40 & x & x & x & F2 \\ \hline \multirow{3}{*}{FIFA WC} & WC 2014 Brazil & 32 & 7 & x& -- & -- & G1\\ & WC 2018 Russia & 32 & 7 & x & -- & -- & G2 \\ & U-17 WC 2017 India & 22 & 7 & x & -- & -- & G3 \\ \hline \multirow{2}{*}{Hotels} & D{\"u}sseldorf & 110 & 28 & x& x & x & H1\\ & Frankfurt & 149 & 28 & x & x & x & H2 \\ \hline \multirow{2}{*}{Uni. Rankings} & Year 2015 & 100 & 9 & x& -- & -- & U1\\ & Year 2014 & 100 & 9 & x & -- & -- & U2 \\ \hline \multirow{2}{*}{Volleyball WL} & Group 3 & 12 & 15 & x& -- & -- & V1\\ & Group 1 & 12 & 15 & x & -- & -- & V2 \\ \hline \end{tabular} \end{table*} \subsection{Data} We used the same data sets as \cite{ahmadi_huellermeier_aaai18}, which are collected from various domains (e.g., sports, education, tourism) and comprise different types of feature (e.g., numeric, binary, ordinal). Table (\ref{tab:ds}) provides a summary of the characteristics of the data sets. For a detailed description of the data, we refer the reader to the source paper. In addition, we include the ranking of the teams that participated in the men's FIFA world cup 2014 and 2018 (32 instances) as well as under-17 in the year 2017 (22 instances) with respect to ``goals statistics''. This data\footnote{Extracted from FIFA official website: \url{www.fifa.com}} comprises 7 numeric features such as MatchesPlayed, GoalsFor, GoalsScored, etc. 
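For concreteness, the BTL aggregation step of AnKer-rank described in the previous section can be sketched as follows. The snippet below is a minimal Python/NumPy illustration (function name, initialization, and convergence settings are our own choices), using the classical minorization-maximization update for the Bradley-Terry model.
\begin{verbatim}
import numpy as np

def btl_mm(P, n_iter=200, tol=1e-8):
    """Estimate BTL utilities from a reciprocal preference matrix P,
    with P[i, j] an estimate of P(x_i > x_j) and P[i, j] + P[j, i] = 1,
    via the classical minorization-maximization (MM) update."""
    n = P.shape[0]
    theta = np.ones(n)
    off = ~np.eye(n, dtype=bool)                      # exclude i == j
    for _ in range(n_iter):
        wins = np.where(off, P, 0.0).sum(axis=1)      # sum_j p_ij
        denom = np.where(off, 1.0 / np.add.outer(theta, theta),
                         0.0).sum(axis=1)             # sum_j 1/(th_i+th_j)
        new = wins / denom
        new /= new.sum()                              # fix the scale
        if np.max(np.abs(new - theta)) < tol:
            return new
        theta = new
    return theta

# Toy usage: a noisy preference matrix over 4 items
P = np.array([[0.0, 0.9, 0.8, 0.7],
              [0.1, 0.0, 0.6, 0.8],
              [0.2, 0.4, 0.0, 0.6],
              [0.3, 0.2, 0.4, 0.0]])
theta = btl_mm(P)
print(np.argsort(-theta))   # predicted ranking, best item first
\end{verbatim}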
\begin{table*} \centering \caption{Results in terms of loss $d_{RL}$ (averaged over 20 runs) on the test data.} \label{tab:results} \hspace*{-2em} \begin{tabular}{c|c|cccc} \hline $D_{train}$ $\rightarrow$ $D_{test}$ & AnKer-rank & able2rank & ERR & Ranking SVM & RankNet \\ \hline D1 $\rightarrow$ D2 & $0.188 \pm 0.049 (5)$ & $0.055 \pm 0.000 (4)$ & $0.053 (3)$ & $0.014 (1)$ & $0.029 \pm 0.005 (2)$ \\ D1 $\rightarrow$ D3 & $0.183 \pm 0.043 (5)$ & $0.072 \pm 0.000 (4)$ & $0.054 (3)$ & $0.040 (2)$ & $0.024 \pm 0.007 (1)$ \\ D1 $\rightarrow$ D4 & $0.187 \pm 0.047 (5)$ & $0.119 \pm 0.002 (4)$ & $0.117 (3)$ & $0.095 (1)$ & $0.102 \pm 0.009 (2)$ \\ D2 $\rightarrow$ D1 & $0.195 \pm 0.034 (5)$ & $0.090 \pm 0.000 (4)$ & $0.056 (3)$ & $0.015 (1)$ & $0.041 \pm 0.005 (2)$ \\ D2 $\rightarrow$ D3 & $0.102 \pm 0.028 (5)$ & $0.082 \pm 0.002 (4)$ & $0.025 (1)$ & $0.032 (2)$ & $0.032 \pm 0.011 (2)$ \\ D2 $\rightarrow$ D4 & $0.218 \pm 0.040 (5)$ & $0.143 \pm 0.000 (4)$ & $0.126 (3)$ & $0.104 (1)$ & $0.105 \pm 0.004 (2)$ \\ D3 $\rightarrow$ D1 & $0.133 \pm 0.007 (2)$ & $0.150 \pm 0.000 (4)$ & $0.145 (3)$ & $0.096 (1)$ & $0.226 \pm 0.058 (5)$ \\ D3 $\rightarrow$ D2 & $0.107 \pm 0.007 (3)$ & $0.106 \pm 0.000 (2)$ & $0.109 (4)$ & $0.082 (1)$ & $0.184 \pm 0.023 (5)$ \\ D3 $\rightarrow$ D4 & $0.134 \pm 0.008 (2)$ & $0.144 \pm 0.003 (4)$ & $0.143 (3)$ & $0.126 (1)$ & $0.206 \pm 0.037 (5)$ \\ D4 $\rightarrow$ D1 & $0.108 \pm 0.008 (1)$ & $0.156 \pm 0.000 (4)$ & $0.132 (3)$ & $0.119 (2)$ & $0.177 \pm 0.047 (5)$ \\ D4 $\rightarrow$ D2 & $0.115 \pm 0.008 (2)$ & $0.144 \pm 0.000 (5)$ & $0.105 (1)$ & $0.118 (3)$ & $0.128 \pm 0.014 (4)$ \\ D4 $\rightarrow$ D3 & $0.101 \pm 0.014 (3)$ & $0.099 \pm 0.002 (1)$ & $0.127 (5)$ & $0.101 (3)$ & $0.099 \pm 0.037 (1)$ \\ \hline average ranks & 3.58 & 3.67 & 2.92 & 1.58 & 3.00 \\ \hline B1 $\rightarrow$ B2 & $0.018 \pm 0.005 (1)$ & $0.031 \pm 0.006 (2)$ & $0.065 (4)$ & $0.052 (3)$ & $0.104 \pm 0.033 (5)$ \\ B1 $\rightarrow$ B3 & $0.011 \pm 0.003 (1)$ & $0.013 \pm 0.000 (2)$ & $0.026 (4)$ & $0.020 (3)$ & $0.056 \pm 0.027 (5)$ \\ B2 $\rightarrow$ B1 & $0.001 \pm 0.002 (1)$ & $0.013 \pm 0.005 (2)$ & $0.118 (5)$ & $0.045 (3)$ & $0.096 \pm 0.022 (4)$ \\ B2 $\rightarrow$ B3 & $0.000 \pm 0.000 (1)$ & $0.013 \pm 0.000 (2)$ & $0.033 (4)$ & $0.032 (3)$ & $0.043 \pm 0.019 (5)$ \\ B3 $\rightarrow$ B1 & $0.000 \pm 0.000 (1)$ & $0.000 \pm 0.000 (1)$ & $0.007 (3)$ & $0.007 (3)$ & $0.053 \pm 0.024 (5)$ \\ B3 $\rightarrow$ B2 & $0.000 \pm 0.001 (1)$ & $0.010 \pm 0.003 (3)$ & $0.007 (2)$ & $0.092 (4)$ & $0.092 \pm 0.024 (4)$ \\ \hline average ranks & 1.00 & 2.00 & 3.67 & 3.17 & 4.67 \\ \hline F1 $\rightarrow$ F2 & $0.183 \pm 0.027 (4)$ & $0.139 \pm 0.001 (1)$ & $0.314 (5)$ & $0.166 (2)$ & $0.173 \pm 0.006 (3)$ \\ F2 $\rightarrow$ F1 & $0.155 \pm 0.003 (2)$ & $0.152 \pm 0.001 (1)$ & $0.293 (5)$ & $0.183 (4)$ & $0.163 \pm 0.009 (3)$ \\ \hline average ranks & 3.00 & 1.00 & 5.00 & 3.00 & 3.00 \\ \hline G1 $\rightarrow$ G2 & $0.040 \pm 0.006 (1)$ & $0.061 \pm 0.003 (3)$ & $0.102 (5)$ & $0.053 (2)$ & $0.085 \pm 0.009 (4)$ \\ G1 $\rightarrow$ G3 & $0.001 \pm 0.003 (1)$ & $0.012 \pm 0.002 (3)$ & $0.056 (5)$ & $0.004 (2)$ & $0.044 \pm 0.010 (4)$ \\ G2 $\rightarrow$ G1 & $0.030 \pm 0.001 (2)$ & $0.026 \pm 0.002 (1)$ & $0.100 (5)$ & $0.037 (3)$ & $0.045 \pm 0.008 (4)$ \\ G2 $\rightarrow$ G3 & $0.022 \pm 0.001 (1)$ & $0.025 \pm 0.002 (3)$ & $0.065 (5)$ & $0.022 (1)$ & $0.047 \pm 0.014 (4)$ \\ G3 $\rightarrow$ G1 & $0.034 \pm 0.008 (3)$ & $0.042 \pm 0.005 (4)$ & $0.029 (2)$ & $0.023 (1)$ &
$0.118 \pm 0.012 (5)$ \\ G3 $\rightarrow$ G2 & $0.098 \pm 0.019 (3)$ & $0.106 \pm 0.004 (4)$ & $0.088 (2)$ & $0.052 (1)$ & $0.168 \pm 0.021 (5)$ \\ \hline average ranks & 1.83 & 3.00 & 4.00 & 1.67 & 4.33 \\ \hline H1 $\rightarrow$ H2 & $0.065 \pm 0.001 (2)$ & $0.061 \pm 0.000 (1)$ & $0.100 (5)$ & $0.076 (3)$ & $0.083 \pm 0.016 (4)$ \\ \hline average ranks & 2.00 & 1.00 & 5.00 & 3.00 & 4.00 \\ \hline U1 $\rightarrow$ U2 & $0.173 \pm 0.018 (3)$ & $0.093 \pm 0.000 (1)$ & $0.245 (4)$ & $0.246 (5)$ & $0.114 \pm 0.012 (2)$ \\ U2 $\rightarrow$ U1 & $0.232 \pm 0.005 (5)$ & $0.078 \pm 0.000 (1)$ & $0.218 (3)$ & $0.230 (4)$ & $0.107 \pm 0.010 (2)$ \\ \hline average ranks & 4.00 & 1.00 & 3.50 & 4.50 & 2.00 \\ \hline V1 $\rightarrow$ V2 & $0.030 \pm 0.000 (2)$ & $0.030 \pm 0.000 (2)$ & $0.091 (4)$ & $0.002 (1)$ & $0.120 \pm 0.046 (5)$ \\ V2 $\rightarrow$ V1 & $0.015 \pm 0.000 (1)$ & $0.038 \pm 0.008 (3)$ & $0.773 (5)$ & $0.015 (1)$ & $0.070 \pm 0.032 (4)$ \\ \hline average ranks & 1.50 & 2.50 & 4.50 & 1.00 & 4.50 \\ \hline \end{tabular} \end{table*} \subsection{Experimental Setup} For the analogy-based methods, an important pre-processing step is the normalization of the attributes in the feature representation $\boldsymbol{x}=(x_1, \ldots , x_d)$, because these attributes are assumed to take values in $[0,1]$. To this end, we simply apply a linear rescaling \[ x_k' \leftarrow \dfrac{x_k -\min_k}{ \max_k - \min_k } \, , \] where $\min_k$ and $\max_k$ denote, respectively, the smallest and largest value of the $k$th feature in the data. This transformation is applied to the training data as well as the test data when a new query $Q$ is received. Since the data from a new query are normally scarce, it might be better to take the minimum and maximum over the entire data, training and test. Yet, this strategy is not recommendable in case the test data has a different distribution. As already said, analogical inference is especially interesting for transfer learning (and indeed, in our experiments, training and test data are sometimes from different subdomains). Therefore, we first conduct a Kolmogorov-Smirnov test \cite{kstest} to check whether the two parts of the data are drawn from the same distribution. In case the null hypothesis is rejected (at a significance level of $\alpha = 0.05$), normalization is conducted on the test data alone. Otherwise, the training data is additionally taken into account. We also apply a standard normalization for the other baseline methods (ERR, Ranking SVM and RankNet), transforming each real-valued feature by standardization: \[ x \leftarrow \dfrac{x-\mu}{\sigma} \, , \] where $\mu$ and $\sigma$ denote the empirical mean and standard deviation, respectively. As for the analogy-based methods, a hypothesis test is conducted to decide whether the test data should be normalized separately or together with the training data. We tuned the cost parameter $C$ of the SVM algorithms in an (internal) 2-fold cross-validation (repeated 3 times) on the training data. The search for $C$ is guided by an algorithm\footnote{Publicly available as an R package: \url{http://cran.r-project.org/web/packages/svmpath}} proposed by \cite{Hastie2004}, which computes the entire regularization path for the two-class SVM classifier (i.e., all possible values of $C$ for which the solution changes), at a cost that is only a small ($\sim$3-fold) multiple of the cost of fitting a single model.
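The normalization rule described above can be sketched as follows. The snippet is a minimal Python illustration using \texttt{scipy.stats.ks\_2samp} (the function name and the handling of constant features are our own choices):
\begin{verbatim}
import numpy as np
from scipy.stats import ks_2samp

def normalize_min_max(X_train, X_test, alpha=0.05):
    """Feature-wise [0,1] rescaling of the test data: if a two-sample
    KS test rejects equality of the train and test distributions of a
    feature, the test data is normalized on its own; otherwise train
    and test are pooled to determine min and max."""
    X_test = X_test.astype(float).copy()
    for k in range(X_test.shape[1]):
        _, pval = ks_2samp(X_train[:, k], X_test[:, k])
        ref = (X_test[:, k] if pval < alpha
               else np.r_[X_train[:, k], X_test[:, k]])
        lo, hi = ref.min(), ref.max()
        # constant features are mapped to 0.5 (our own convention)
        X_test[:, k] = (X_test[:, k] - lo) / (hi - lo) if hi > lo else 0.5
    return X_test
\end{verbatim}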
The following RankNet parameters are adjusted using grid-search and internal cross-validation: the number of units in the hidden layer ($32,64,128,256$), the batch size ($8, 16, 32$), and the optimizer learning rate ($0.001,0.01,0.1$). Since the data sets are relatively small, the network was restricted to a single hidden layer. \subsection{Results} In our experiments, predictions were produced for a given data set $D_{test}$, using another data set $D_{train}$ from the same domain as training data; an experiment of this kind is denoted by $D_{train} \rightarrow D_{test}$, and all possible combinations within each domain are considered. The average ranking loss, together with the standard deviation over the 20 repetitions of each experiment, is summarized in Table (\ref{tab:results}), where the numbers in parentheses indicate the rank of the achieved score in the respective problem. Moreover, the table shows average ranks per problem domain. As can be seen, the relative performance of the methods depends on the domain. In any case, our proposed approach is quite competitive in terms of predictive accuracy, and essentially on a par with able2rank and Ranking SVM, whereas ERR and RankNet show worse performance. \section{Conclusion and Future Work} This paper elaborates on the connection between kernel-based machine learning and analogical reasoning in the context of preference learning. Building on the observation that analogical proportions define a kind of similarity between the relations of pairs of objects, and that kernel functions can be interpreted in terms of similarity, we utilize generalized (fuzzy) equivalence relations as a bridging concept to show that a particular type of analogical proportion defines a valid kernel function. We introduce the analogy kernel and advocate a concrete kernel-based approach for the problem of object ranking. First experimental results on real-world data from various domains are quite promising and suggest that our approach is competitive with state-of-the-art methods for object ranking. By making analogical inference amenable to kernel methods, our paper opens a broad spectrum of directions for future work. In particular, we plan to study kernel properties of other analogical proportions proposed in the literature (e.g., geometric proportions). Besides, various extensions in the direction of kernel-based methods are conceivable and highly interesting from the point of view of analogical reasoning. This includes the use of kernel-based methods other than SVM, techniques such as multiple kernel learning, etc. Last but not least, other types of applications, whether in preference learning or beyond, are also of interest.
\section{Introduction} \label{sec:intro} Rank data arise naturally in many fields, such as web searching \citep{renda2003web}, design of recommendation systems \citep{baltrunas2010group} and genomics \citep{BADER20111099}. Many probabilistic models have been proposed for analyzing this type of data, among which the Thurstone model \citep{Thurstone1927}, the Mallows model \citep{mallows1957non} and the Plackett-Luce model \citep{luce1959,Plackett1975} are the most well-known representatives. The Thurstone model assumes that each entity possesses a hidden score and all the scores come from a joint probability distribution. The Mallows model is a location model defined on the permutation space of ordered entities, in which the probability mass of a permuted order is an exponential function of its distance from the true order. The Plackett-Luce model assumes that the preference of entity $E_i$ is associated with a weight $w_i$, and describes a recursive procedure for generating a random ranking list: entities are picked one by one with probability proportional to their weights in a sequential fashion without replacement, and ranked based on their order of being selected. Rank aggregation aims to derive a ``better'' aggregated ranking list $\hat\tau$ from multiple ranking lists $\tau_1, \tau_2,\cdots, \tau_m$. It is a classic problem and has been studied in a variety of contexts for decades. Early applications of rank aggregation can be traced back to 18th-century France, where the idea of rank aggregation was proposed to solve the problem of political elections \citep{de1781memoire}. In the past 30 years, efficient rank aggregation algorithms have played important roles in many fields, such as web searching \citep{renda2003web}, information retrieval \citep{fagin2003efficient}, design of recommendation systems \citep{baltrunas2010group}, social choice studies \citep{porello2012ontology,soufiani2014statistical}, genomics \citep{BADER20111099} and bioinformatics \citep{2010Integration,chen2016drhp}. Some popular approaches for rank aggregation are based on certain summary statistics. These methods simply calculate a summary statistic, such as the mean, median or geometric mean, for each entity $E_i$ based on its rankings across different ranking lists, and obtain the aggregated ranking list based on these summary statistics. Optimization-based methods obtain the aggregated ranking by minimizing a user-defined objective function, i.e., let $\hat{\tau} = \arg\min\limits_{\tau} \dfrac{1}{m} \sum\limits_{i=1}^m d\left(\tau, \tau_i\right)$, where the distance measure $d(\cdot,\cdot)$ could be either \textit{Spearman's footrule distance} \citep{diaconis1977spearman} or the \textit{Kendall tau distance} \citep{diaconis1988group}. More detailed studies on these optimization-based methods can be found in \citet{young1978consistent,young1988condorcet,dwork2001rank}. In the early 2000s, a novel class of Markov chain-based methods was proposed \citep{dwork2001rank,2010Integration,Lin2010Space,Deconde2011Combining}, which first use the observed ranking lists to construct a probabilistic transition matrix among the entities and then use the magnitudes of the entities' equilibrium probabilities of the resulting Markov chain to rank them. The boosting-based method \textit{RankBoost} \citep{freund2003efficient} employs a \textit{feedback function} $\Phi(i,j)$ to construct the final ranking, where $\Phi(i,j)>0$ (or $\leq 0$) indicates that entity $E_i$ is (or is not) preferred to entity $E_j$.
Some statistical methods utilize the aforementioned probabilistic models (such as the Thurstone model) and derive the maximum likelihood estimate (MLE) of the final ranking. More recently, researchers have begun to pay attention to rank aggregation methods for pairwise comparison data \citep{rajkumar2014statistical,chen2015spectral,fanjianqing2017Spectral}. We note that all aforementioned methods assume that the rankers of interest are equally reliable. In practice, however, it is very common that some rankers are more reliable than the others, whereas some are nearly non-informative and may be regarded as ``spam rankers''. Such differences in rankers' qualities, if ignored in analysis, may significantly corrupt the rank aggregation and lead to seriously misleading results. To the best of our knowledge, the earliest effort to address this critical issue can be traced to \citet{aslam2001models}, which derived an aggregated ranking list by calculating a weighted summation of the observed ranking lists, known as the \textit{Borda Fuse}. \citet{2010Integration} extended the objective function of \citet{dwork2001rank} to a weighted version. Independently, \citet{liu2007supervised} proposed a supervised rank aggregation approach that determines the weights of the rankers by training with some external data. Although assigning weights to rankers is an intuitive and simple way to handle quality differences, how to scientifically determine these weights is a critical and unsolved problem in the aforementioned works. Recently, \citet{deng2014bayesian} proposed BARD, a Bayesian approach to deal with quality differences among independent rankers without the need for external information. BARD introduces a partition model, which assumes that all involved entities can be partitioned into two groups: the relevant ones and the background ones. A rationale of the approach is that, in many applications, distinguishing relevant entities from background ones takes priority over the construction of a final ranking of all entities. Under this setting, BARD decomposes the information in a ranking list into three components: (i) the relative ranking of all background entities, which is assumed to be uniform; (ii) the relative ranking of each relevant entity among all background ones, which takes the form of a truncated power-law; and (iii) the relative ranking of all relevant entities, which is again uniform. The parameter of the truncated power-law distribution, which is ranker-specific, naturally serves as a quality measure for each ranker, as a ranker of higher quality corresponds to a more concentrated truncated power-law distribution. \citet{fan2019} proposed a stage-wise data generation process based on an extended Mallows model (EMM) introduced by \citet{Fligner1986Distance}. EMM assumes that each entity comes from a two-component mixture model involving a uniform distribution to model non-informative entities, a modified Mallows model for informative entities, and a ranker-specific proportion parameter. \citet{li2020bayesian} followed the Thurstone model framework to deal with available covariates for the entities as well as different qualities of the rankers. In their model, each entity is associated with a Gaussian-distributed latent score and a ranking list is determined by the ranking of these scores. The quality of each ranker is determined by the standard deviation parameter in the Gaussian model so that a larger standard deviation indicates a poorer-quality ranker.
Although these recent papers have proposed different ways for learning the quality variation among rankers, they all suffer from some limitations. The BARD method \citep{deng2014bayesian} simplifies the problem by assuming that all relevant entities are exchangeable. In many applications, however, the observed ranking lists often carry strong ordering information for relevant entities, and simply labeling these entities as ``relevant'' without considering their relative rankings tends to lose too much information and oversimplify the problem. \citet{fan2019} do not explicitly measure quality differences with their extended Mallows model. Although they mentioned that some of their model parameters can indicate the rankers' qualities, it is not clear how to properly combine multiple indicators to produce an easily interpretable quality measurement. The learning framework of \citet{li2020bayesian} based on Gaussian latent variables appears to be more suitable for incorporating covariates than for handling heterogeneous rankers. In this paper, we propose a \emph{partition-Mallows model} (PAMA), which combines the partition modeling framework of \citet{deng2014bayesian} with the Mallows model, to accommodate the detailed ordering information among the relevant entities. The new framework can not only quantify the quality difference of rankers and distinguish relevant entities from background entities like BARD, but also provide an explicit ranking estimate among the relevant entities in rank aggregation. In contrast to the strategy of imposing the Mallows model on the full ranking lists, which tends to be sensitive to noise in low-ranked entities, the combination of the partition and Mallows models allows us to focus on highly ranked entities, which typically contain high-quality signals in data, and is thus more robust. Both simulation studies and real data applications show that the proposed approach is superior to existing methods, e.g., BARD and EMM, for a large class of rank aggregation problems. The rest of this paper is organized as follows. A brief review of BARD and the Mallows model is presented in Section \ref{sec:overview} as preliminaries. The proposed PAMA model is described in Section \ref{sec:model} with some key theoretical properties established. Statistical inference of the PAMA model, including the Bayesian inference and the pursuit of the MLE, is detailed in Section \ref{sec:statinfer}. The performance of PAMA is evaluated and compared to existing methods via simulations in Section \ref{sec:simulation}. Two real data applications are shown in Section \ref{sec:realdata} to demonstrate the strength of the PAMA model in practice. Finally, we conclude the article with a short discussion in Section \ref{sec:discussion}. \section{Notations and Preliminaries} \label{sec:overview} Let $U = \left\{E_1, E_2, \cdots, E_n \right\}$ be the set of entities to be ranked. We use ``$E_i \preceq E_j$'' to represent that entity $E_i$ is preferred to entity $E_j$ in a ranking list $\tau$, and denote the position of entity $E_i$ in $\tau$ by $\tau(i)$. Note that more preferred entities always have lower rankings. Our research interest is to aggregate $m$ observed ranking lists, $\tau_1,\ldots, \tau_m$, presumably constructed by $m$ rankers independently, into one consensus ranking list which is supposed to be ``better'' than each individual one.
\subsection{BARD and Its Partition Model} The partition model in BARD \citep{deng2014bayesian} assumes that $U$ can be partitioned into two non-overlapping subsets: $U=U_R\cup U_B$, with $U_R$ representing the set of relevant entities and $U_B$ for the background ones. Let $I=\{I_i\}_{i=1}^n$ be the vector of group indicators, where $I_i=\mathbb{I}(E_i \in U_R)$ and $\mathbb{I}(\cdot)$ is the indicator function. This formulation makes sense in many applications where people are only concerned about a fixed number of top-ranked entities. Under this formulation, the information in a ranking list $\tau_k$ can be equivalently represented by a triplet $(\tau_k^0, \tau_k^{1\mid 0}, \tau_k^1)$, where $\tau_k^0$ denotes relative rankings of all background entities, $\tau_k^{1\mid 0}$ denotes relative rankings of relevant entities among the background entities and $\tau_k^1$ denotes relative rankings of all relevant entities. \citet{deng2014bayesian} suggested a three-component model for $\tau_k$ by taking advantage of its equivalent decomposition: \begin{eqnarray}\label{Eq:RankDecomposition} P(\tau_k \mid I)=P(\tau_k^0,\tau_k^{1\mid 0},\tau_k^1 \mid I)=P(\tau_k^0 \mid I)\times P(\tau_k^{1\mid 0} \mid I)\times P(\tau_k^1 \mid \tau_k^{1\mid 0},I), \end{eqnarray} where both $P(\tau_k^0 \mid I)$ (relative ranking of the background entities) and $P(\tau_k^1 \mid \tau_k^{1\mid 0},I) $ (relative ranking of the relevant entities conditional on their set of positions relative to background entities) are uniform, and the relative ranking of a relevant entity $E_i$ among background ones follows a power-law distribution with parameter $\gamma_k>0$, i.e., $$P(\tau_k^{1 \mid 0}(i)=t\mid I ) = q(t\mid \gamma_k, n_0) \propto t^{-\gamma_k}\cdot\mathbb{I}(1\leq t\leq n_0+1),$$ leading to the following explicit forms for the three terms in equation~(\ref{Eq:RankDecomposition}): \begin{eqnarray} \label{eqn:bardtau0} P(\tau_k^0 \mid I)&=&\frac{1}{n_0!},\\ \label{eqn:bardgamma} P(\tau_k^{1\mid 0} \mid I)&=&\prod_{i \in U_R} q(\tau_k^{1 \mid 0}(i)\mid \gamma_k, n_0)=\frac{1}{(B_{\tau_k,I})^{\gamma_k}\times(C_{\gamma_k,n_1})^{n_1}},\\ \label{eqn:bardtau1} P(\tau_k^1 \mid \tau_k^{1\mid 0},I)&=&\frac{1}{A_{\tau_k,I}}\times\mathbb{I}\big(\tau_k^1 \in \mathcal{A}_{U_R}(\tau_k^{1\mid0})\big), \end{eqnarray} where $n_1=\sum_{i=1}^n I_i$ and $n_0=n-n_1$ are the counts of relevant and background entities respectively, $B_{\tau_k,I}=\prod_{i \in U_R} \tau_k^{1\mid 0}(i)$, $C_{\gamma_k,n_1}=\sum_{t=1}^{n_0+1} t^{-\gamma_k}$ is the normalizing constant of the power-law distribution, $\mathcal{A}_{U_R}(\tau_k^{1\mid0})$ is the set of $\tau_k^1$'s that are compatible with $\tau_k^{1\mid 0}$, and $A_{\tau_k,I}=\#\{\mathcal{A}_{U_R}(\tau_k^{1\mid0})\}=\prod_{t=1}^{n_0+1}(n_{\tau_k,t}^{1\mid 0}!)$ with $n_{\tau_k,t}^{1\mid 0}= \sum_{i \in U_R} \mathbb{I}(\tau_k^{1 \mid 0}(i) = t)$. Intuitively, this model assumes that each ranker first randomly places all background entities to generate $\tau_k^0$, then ``inserts'' each relevant entity independently into the list of background entities according to a truncated power-law distribution to generate $\tau_k^{1\mid 0}$, and finally draws $\tau_k^1$ uniformly from $\mathcal{A}_{U_R}(\tau_k^{1\mid0})$. In other words, $\tau_k^0$ serves as a baseline for modeling $\tau_k^{1\mid0}$ and $\tau_k^{1}$. It is easy to see from the model that a more reliable ranker should possess a larger $\gamma_k$.
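For concreteness, the three-stage generative process just described can be sketched as follows. The snippet is a minimal Python/NumPy illustration of our own (function name and the convention that rank 1 is the most preferred position are ours):
\begin{verbatim}
import numpy as np

def sample_bard_list(I, gamma, rng):
    """Draw one ranking list from BARD's generative process:
    background entities are ordered uniformly at random, and each
    relevant entity is inserted at relative position t among the
    background entities with probability proportional to t^(-gamma),
    t = 1, ..., n0 + 1; ties within a slot are broken uniformly."""
    idx = np.arange(len(I))
    relevant, background = idx[I == 1], idx[I == 0]
    n0 = len(background)
    rng.shuffle(background)                     # tau^0: uniform
    t_support = np.arange(1, n0 + 2)
    p = t_support ** (-float(gamma))
    p /= p.sum()                                # truncated power law
    slots = {t: [] for t in t_support}
    for i in relevant:                          # tau^{1|0}: power law
        slots[rng.choice(t_support, p=p)].append(i)
    order = []
    for t in t_support:
        rng.shuffle(slots[t])                   # tau^1: uniform in-slot
        order.extend(slots[t])
        if t <= n0:
            order.append(background[t - 1])
    ranks = np.empty(len(I), dtype=int)
    ranks[np.array(order)] = np.arange(1, len(I) + 1)
    return ranks                                # ranks[i] = position of E_i

rng = np.random.default_rng(1)
I = np.array([1, 1, 1, 0, 0, 0, 0, 0, 0, 0])    # 3 relevant, 7 background
print(sample_bard_list(I, gamma=2.0, rng=rng))
\end{verbatim}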
With the assumption of independent rankers, we have the full-data likelihood: \begin{eqnarray} P(\tau_1,\cdots,\tau_m \mid I,\boldsymbol{\gamma})&=&\prod_{k=1}^mP(\tau_k\mid I,\gamma_k)\nonumber\\ &=&[(n_0)!]^{-m}\times\prod_{k=1}^m\frac{\mathbb{I}\big(\tau_k^1 \in \mathcal{A}_{U_R}(\tau_k^{1\mid0})\big)}{A_{\tau_k,I}\times(B_{\tau_k,I})^{\gamma_k}\times\big(C_{\gamma_k,n_1}\big)^{n_1}}, \end{eqnarray} where $\boldsymbol{\gamma}=(\gamma_1,\cdots,\gamma_m)$. A detailed Bayesian inference procedure for $(I,\boldsymbol{\gamma})$ via Markov chain Monte Carlo can be found in \citet{deng2014bayesian}. \subsection{The Mallows Model} \label{sec:mallows} \cite{mallows1957non} proposed the following probability model for a ranking list $\tau$ of $n$ entities: \begin{equation}\label{Eq:MallowsModel} \pi(\tau \mid \tau_0, \phi) = \dfrac{1}{Z_n(\phi)}\cdot\exp\{-\phi\cdot d(\tau,\tau_0)\}, \end{equation} where $\tau_0$ denotes the true ranking list, $\phi>0$ characterizes the reliability of $\tau$, $d(\cdot,\cdot)$ is a distance metric between two ranking lists, and \begin{equation}\label{eq:Mallow-norm} Z_n(\phi)=\sum_{\tau'}\exp\{-\phi\cdot d(\tau', \tau_0)\}=\frac{\prod_{t=2}^n(1-e^{-t\phi})}{(1-e^{-\phi})^{n-1}} \end{equation} is the normalizing constant, whose analytic form was derived in \citet{diaconis1988group}. Clearly, a larger $\phi$ means that $\tau$ is more stable and concentrates in a tighter neighborhood of $\tau_0$. A common choice of $d(\cdot, \cdot)$ is the Kendall tau distance. The Mallows model under the Kendall tau distance can also be equivalently described by an alternative multistage model, which selects and positions entities one by one in a sequential fashion, where $\phi$ serves as a common parameter that governs the probabilistic behavior of each entity in the stochastic process \citep{mallows1957non}. Later on, \citet{Fligner1986Distance} extended the Mallows model by allowing $\phi$ to vary at different stages, i.e., introducing a position-specific parameter $\phi_i$ for each position $i$, which leads to a very flexible, in many cases too flexible, framework to model rank data. To stabilize the generalized Mallows model by \citet{Fligner1986Distance}, \citet{fan2019} proposed to put a structural constraint on the $\phi_i$'s of the form $\phi_i=\phi\cdot(1-\alpha^i)$ with $0<\phi <1$ and $0\leq \alpha \leq 1$. As a probabilistic model for rank data, the Mallows model enjoys great interpretability, model compactness, and efficiency in inference and computation. For a comprehensive review of the Mallows model and its extensions, see \citet{Irurozki2014PerMallows} and \citet{fan2019}. \section{The Partition-Mallows Model} \label{sec:model} The partition model employed by BARD \citep{deng2014bayesian} tends to oversimplify the problem for scenarios where we care about the detailed rankings of relevant entities. To enhance the partition model of BARD so that it can capture this detailed ordering information, we describe a new partition-Mallows model in this section.
\subsection{The Reverse Partition Model}\label{subsec:RevPar} To combine the partition model with the Mallows model, a naive strategy is to simply replace the uniform model for the relevant entities, i.e., $P(\tau_k^1 \mid \tau_k^{1|0},I)$ in (\ref{Eq:RankDecomposition}), by the Mallows model, which leads to the updated Equation \eqref{eqn:bardtau1} below: $$P(\tau_k^1 \mid \tau_k^{1\mid 0},I) = \frac{\pi(\tau_k^1)} {Z_{\tau_k,I}} \times \mathbb{I} \big(\tau_k^1 \in \mathcal{A}_{U_R}(\tau_k^{1\mid0}) \big),$$ where $\pi(\tau_k^1)$ is the Mallows density of $\tau_k^1$ and $Z_{\tau_k,I}=\sum_{\tau\in \mathcal{A}_{U_R}(\tau_k^{1\mid0})} \pi(\tau)$ is the normalizing constant of the Mallows model with a constraint due to the compatibility of $\tau_k^1$ with respect to $\mathcal{A}_{U_R}(\tau_k^{1\mid0})$. Apparently, the calculation of $Z_{\tau_k,I}$ involves a summation over the whole space $\mathcal{A}_{U_R}(\tau_k^{1\mid0})$, whose size $A_{\tau_k,I}=\#\{\mathcal{A}_{U_R}(\tau_k^{1\mid0})\}=\prod_{t=1}^{n_0+1}(n_{\tau_k,t}^{1\mid 0}!)$ is prohibitively large in most practical cases, rendering such a naive combination of the Mallows model and the partition model impractical. To avoid the challenging computation caused by the constraints due to $\mathcal{A}_{U_R}(\tau_k^{1\mid0})$, we rewrite the partition model by switching the roles of $\tau_k^0$ and $\tau_k^1$ in the model: instead of decomposing $\tau_k$ as $(\tau_k^0,\tau_k^{1\mid 0},\tau_k^1)$ conditioning on the group indicators $I$, we decompose $\tau_k$ into an alternative triplet $(\tau_k^1,\tau_k^{0\mid 1},\tau_k^0)$, where $\tau_k^{0\mid 1}$ denotes the {\it relative reverse rankings} of background entities among the relevant ones. Formally, we note that $\tau_k^{0\mid 1}(i) \triangleq n_1+2-\tau_{k|\{i\} \cup U_R}(i)$ for any $i\in U_B$, where $\tau_{k|\{i\} \cup U_R}(i)$ denotes the relative ranking of a background entity among the relevant ones. In this {\it reverse partition model}, we first order the relevant entities according to a certain distribution and then use them as a reference system to ``insert'' the background entities. Figure~\ref{Tab:decomposition2} illustrates the equivalence between $\tau_k$ and its two alternative presentations, $(\tau_k^0,\tau_k^{1\mid 0},\tau_k^1)$ and $(\tau_k^1,\tau_k^{0\mid 1},\tau_k^0)$. Given the group indicator vector $I$, the reverse partition model based on $(\tau_k^1,\tau_k^{0\mid 1},\tau_k^0)$ gives rise to the following distributional form for $\tau_k$: \begin{eqnarray}\label{Eq:RankDecompositionInverse} P(\tau_k \mid I)=P(\tau_k^1,\tau_k^{0\mid 1},\tau_k^0 \mid I)=P(\tau_k^1 \mid I)\times P(\tau_k^{0\mid 1} \mid I)\times P(\tau_k^0 \mid \tau_k^{0\mid 1},I), \end{eqnarray} which is analogous to (\ref{Eq:RankDecomposition}) for the original partition model in BARD. Compared to (\ref{Eq:RankDecomposition}), however, the new form (\ref{Eq:RankDecompositionInverse}) enables us to specify an unconstrained marginal distribution for $\tau_k^1$. Moreover, due to the symmetry between $\tau_k^{1\mid 0}$ and $\tau_k^{0\mid 1}$, it is highly likely that the power-law distribution, which was shown in \cite{deng2014bayesian} to approximate the distribution of $\tau_k^{1\mid 0}(i)$ well for each $E_i\in U_R$, can also model $\tau_k^{0\mid 1}(i)$ for each $E_i\in U_B$ reasonably well. Detailed numerical validations are shown in the Supplementary Material.
If we assume that all relevant entities are exchangeable, all background entities are exchangeable, and the relative reverse ranking of a background entity among the relevant entities follows a power-law distribution, we have \begin{eqnarray} \label{eqn:InvBARD_tau1} P(\tau_k^1 \mid I)&=&\frac{1}{n_1!},\\ \label{eqn:InvBARD_tau01} P(\tau_k^{0\mid 1} \mid I,\gamma_k)&=&\prod_{i \in U_B} P(\tau_k^{0 \mid 1}(i)\mid I,\gamma_k)=\frac{1}{(B^*_{\tau_k,I})^{\gamma_k}\times (C^*_{\gamma_k,n_1})^{n_0}},\\ \label{eqn:InvBARD_tau0} P(\tau_k^0 \mid \tau_k^{0\mid 1},I)&=&\frac{1}{A^*_{\tau_k,I}}\times\mathbb{I}\big(\tau_k^0 \in \mathcal{A}_{U_B}(\tau_k^{0\mid1})\big), \end{eqnarray} where $n_1$ and $n_0$ are numbers of relevant and background entities, respectively, $B^*_{\tau_k,I}=\prod_{i \in U_B} \tau_k^{0 \mid 1}(i)$ is the unnormalized part of the power-law, $C^*_{\gamma_k,n_1}=\sum_{t=1}^{n_1+1} t^{-\gamma_k}$ is the normalizing constant, $\mathcal{A}_{U_B}(\tau_k^{0\mid1})$ is the set of all $\tau_k^0$ that are compatible with a given $\tau_k^{0\mid 1}$, and $A^*_{\tau_k,I}=\#\{\mathcal{A}_{U_B}(\tau_k^{0\mid1})\}=\prod_{t=1}^{n_1+1}(n_{\tau_k,t}^{0\mid 1}!)$ with $n_{\tau_k,t}^{0\mid 1}= \sum_{i \in U_B} \mathbb{I}(\tau_k^{0\mid 1}(i) = t)$. Apparently, the likelihood of this reverse-partition model shares the same structure as that of the original partition model in BARD, and thus can be inferred in a similar way. \subsection{The Partition-Mallows Model} \label{subsec:BMM} The reverse partition model introduced in Section~\ref{subsec:RevPar} allows us to freely model $\tau_k^1$ beyond a uniform distribution, which is infeasible for the original partition model in BARD. Here we employ the Mallows model for $\tau_k^1$ due to its interpretability, compactness and computability. To achieve this, we replace the group indicator vector $I$ in the partition model by a more general indicator vector $\mathcal{I}=\{\mathcal{I}_i\}_{i=1}^n$, which takes value in $\Omega_\mathcal{I}$, the space of all permutations of $\{1,\cdots,n_1,\underbrace{0,\ldots,0}_{n_0}\}$, with $\mathcal{I}_i=0$ if $E_i\in U_B$, and $\mathcal{I}_i=k>0$ if $E_i\in U_R$ and is ranked at position $k$ among all relevant entities in $U_R$. Figure~\ref{Tab:decomposition2} provides an illustrative example of assigning an enhanced indicator vector $\mathcal{I}$ to a universe of 10 entities with $n_1=5$. Based on the status of $\mathcal{I}$, we can define subvectors $\mathcal{I}^+$ and $\mathcal{I}^0$, where $\mathcal{I}^+$ stands for the subvector of $\mathcal{I}$ containing all positive elements in $\mathcal{I}$, and $\mathcal{I}^0$ for the remaining zero elements in $\mathcal{I}$. Figure~\ref{Tab:decomposition2} demonstrates the constructions of $\mathcal{I}$, $\mathcal{I}^+$ and $\mathcal{I}^0$, and the equivalence between $\tau_k$, $(\tau_k^0,\tau_k^{1\mid 0},\tau_k^1)$, and $(\tau_k^1,\tau_k^{0\mid 1},\tau_k^0)$ given $\mathcal{I}$. Note that, unlike in the partition model of BARD, where the number of relevant entities $n_1$ is allowed to vary around its expected value, the number of relevant entities in the new model is assumed to be fixed and known for conceptual and computational convenience. In other words, we have $|U_R|=n_1$ in the new setting.
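For concreteness, the decomposition of a ranking list $\tau_k$ into the triplet $(\tau_k^1,\tau_k^{0\mid 1},\tau_k^0)$ given $\mathcal{I}$ can be computed as sketched below. The snippet is our own illustration (the indexing conventions are ours) and reproduces the example of Figure~\ref{Tab:decomposition2} below.
\begin{verbatim}
import numpy as np

def decompose(tau, I):
    """Decompose a ranking list tau into (tau^1, tau^{0|1}, tau^0)
    given the enhanced indicator vector I, where I[i] > 0 encodes the
    rank of a relevant entity and I[i] = 0 a background entity;
    tau[i] is the position of entity E_i (1 = most preferred)."""
    tau, I = np.asarray(tau), np.asarray(I)
    R, B = np.where(I > 0)[0], np.where(I == 0)[0]
    n1 = len(R)
    tau1 = np.argsort(np.argsort(tau[R])) + 1      # ranks within U_R
    tau0 = np.argsort(np.argsort(tau[B])) + 1      # ranks within U_B
    # relative reverse rank of each background entity among U_R:
    tau01 = np.array([n1 + 2 - (1 + np.sum(tau[R] < tau[b])) for b in B])
    return tau1, tau01, tau0

# The 10-entity example of the figure below:
tau = [2, 6, 4, 1, 7, 5, 3, 8, 9, 10]
I   = [1, 2, 3, 4, 5, 0, 0, 0, 0, 0]
print(decompose(tau, I))
# expected: tau^1 = [2 4 3 1 5], tau^{0|1} = [3 4 1 1 1],
#           tau^0 = [2 1 3 4 5]
\end{verbatim}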
\begin{figure}[h] \centering \begin{tabular}{ p{0.5cm}<{\centering}|p{0.5cm}<{\centering}|p{0.5cm}<{\centering}p{0.5cm}<{\centering}p{0.5cm}<{\centering}|p{0.5cm}<{\centering}p{0.5cm}<{\centering}p{0.5cm}<{\centering} p{0.5cm}<{\centering}p{0.5cm}<{\centering}|p{0.5cm}<{\centering}|p{0.5cm}<{\centering} p{0.5cm}<{\centering}p{0.5cm}<{\centering}|p{0.5cm}<{\centering}|p{0.5cm}<{\centering}} \cline{1-3} \cline{5-6} \cline{8-8}\cline{10-12} \cline{14-16} $\mathcal{I}^+$ &$\mathcal{I}^0$ &$I$& &$\mathcal{I}$& $U$& &$\tau_k$ & &$\tau_k^1$&$\tau_k^{0\mid 1}$&$\tau_k^0$ & & $\tau_k^0$&$\tau_k^{1\mid 0}$&$\tau_k^1$ \\ \cline{1-3} \cline{5-6} \cline{8-8}\cline{10-12} \cline{14-16} 1 & - & 1 & & 1& $E_1$ & & 2 & & 2& - & - &&-&1&2\\ 2 & - & 1 & & 2& $E_2$ & & 6 & & 4& - & - &&-&3&4\\ 3 & - & 1 & & 3& $E_3$ & & 4 & & 3& - & -&&-&2&3\\ 4 & - & 1 & & 4& $E_4$ & & 1 & & 1& - & -&&-&1&1\\ 5 & - & 1 &$\Longleftarrow$& 5& $E_5$ & & 7 & $\Longleftrightarrow$& 5&-& -&$\Longleftrightarrow$&-&3&5\\ - & 0 & 0 & & 0& $E_6$ & & 5 & & -& 3 & 2&&2&-&-\\ - & 0 & 0 & & 0& $E_7$ & & 3 & & -& 4 & 1&&1&-&-\\ - & 0 & 0 & & 0& $E_8$ & & 8 & & -& 1 & 3&&3&-&-\\ - & 0 & 0 & & 0& $E_9$ & & 9 & & -& 1 & 4&&4&-&-\\ - & 0 & 0 & & 0& $E_{10}$ & & 10& & -& 1 & 5&&5&-&-\\ \cline{1-3} \cline{5-6} \cline{8-8}\cline{10-12} \cline{14-16} \end{tabular} \caption{An illustrative example of the construction of $\mathcal{I}^{+}$, $\mathcal{I}^0$ and $I$ based on the enhanced indicator vector $\mathcal{I}$ with $n_1=5$ for a universe of 10 entities, and the decomposition of a ranking list $\tau_k$ into the triplets $(\tau_k^1,\tau_k^{0\mid 1},\tau_k^0)$ and $(\tau_k^0,\tau_k^{1\mid 0},\tau_k^1)$, respectively, given $\mathcal{I}$. } \label{Tab:decomposition2} \end{figure} In analogy to Equations (\ref{Eq:RankDecomposition}) and (\ref{Eq:RankDecompositionInverse}), we have the following decomposition of $\tau_k$ given the enhanced indicator vector $\mathcal{I}$: \begin{eqnarray}\label{Eq:RankDecomposition_BARDM} P(\tau_k \mid \mathcal{I})=P(\tau_k^1,\tau_k^{0\mid 1},\tau_k^0 \mid \mathcal{I})=P(\tau_k^1 \mid\mathcal{I})\times P(\tau_k^{0\mid 1} \mid\mathcal{I})\times P(\tau_k^0 \mid \tau_k^{0\mid 1},\mathcal{I}). \end{eqnarray} Assume that $\tau_k^1\mid\mathcal{I}$ follows the Mallows model (with parameter $\phi_k$) centered at $\mathcal{I}^+$: \begin{eqnarray}\label{Eq:tau1_BARDM} P(\tau_k^1 \mid \mathcal{I}, \phi_k)=P(\tau_k^1 \mid\mathcal{I}^+,\phi_k)=\frac{\exp\{-\phi_k\cdot d_{\tau}(\tau_k^1,\mathcal{I}^+)\}} {Z_{n_1}(\phi_k)}, \end{eqnarray} where $d_{\tau}(\cdot,\cdot)$ denotes the Kendall tau distance and $Z_{n_1}(\phi_k)$ is defined as in \eqref{eq:Mallow-norm}. Clearly, a larger $\phi_k$ indicates that ranker $\tau_k$ is of a higher quality, as the distribution is more concentrated at the ``true ranking'' defined by $\mathcal{I}^+$. Since the relative rankings of background entities are of no interest to us, we still assume that they are randomly ranked.
Together with the power-law assumption for $\tau_k^{0\mid1}(i)$, we have \begin{eqnarray} \label{Eq:tau01_BARDM} P(\tau_k^{0\mid 1} \mid\mathcal{I})&=&P(\tau_k^{0\mid 1} \mid I,\gamma_k) =\frac{1}{(B^*_{\tau_k,I})^{\gamma_k}\times (C^*_{\gamma_k,n_1})^{n-n_1}},\\ \label{Eq:tau0_BARDM} P(\tau_k^0 \mid \tau_k^{0\mid 1},\mathcal{I})&=&P(\tau_k^0 \mid \tau_k^{0\mid 1},I)=\frac{1}{A^*_{\tau_k,I}}\times\mathbb{I}\big(\tau_k^0 \in \mathcal{A}_{U_B}(\tau_k^{0\mid1})\big), \end{eqnarray} where the notations $A^*_{\tau_k,I}$, $B^*_{\tau_k,I}$ and $C^*_{\gamma_k,n_1}$ are the same as in the reverse-partition model. We call the resulting model the \emph{Partition-Mallows model}, abbreviated as PAMA. Different from the partition and reverse partition models, which quantify the quality of ranker $\tau_k$ with only one parameter $\gamma_k$ in the power-law distribution, the PAMA model contains two quality parameters $\phi_k$ and $\gamma_k$, with the former indicating the ranker's ability of ranking relevant entities and the latter reflecting the ranker's ability in differentiating relevant entities from background ones. Intuitively, $\phi_k$ and $\gamma_k$ reflect the quality of ranker $\tau_k$ in two different aspects. However, considering that a good ranker is typically strong in both dimensions, it looks quite natural to further simplify the model by assuming \begin{equation}\label{Eq:phik2phi} \phi_k=\phi\cdot\gamma_k, \end{equation} with $\phi>0$ being a common factor for all rankers. This assumption, while reducing the number of free parameters by almost half, captures the natural positive correlation between $\phi_k$ and $\gamma_k$ and serves as a first-order (i.e., linear) approximation to the functional relationship between $\phi_k$ and $\gamma_k$. A wide range of numerical studies based on simulated data suggest that the linear approximation shown in \eqref{Eq:phik2phi} works reasonably well for many typical scenarios for rank aggregation. In contrast, the more flexible model with both $\phi_k$ and $\gamma_k$ as free parameters (which is referred to as PAMA$^*$) suffers from unstable performance from time to time. Detailed evidence to support assumption \eqref{Eq:phik2phi} can be found in the Supplementary Material. Plugging \eqref{Eq:phik2phi} into \eqref{Eq:tau1_BARDM}, we have a simplified model for $\tau_k^1$ given $\mathcal{I}$ as follows: \begin{eqnarray}\label{Eq:tau1_BARDM_final} P(\tau_k^1 \mid\mathcal{I}, \phi,\gamma_k)=P(\tau_k^1 \mid\mathcal{I}^+,\phi,\gamma_k)=\frac{\exp\{-\phi\cdot\gamma_k\cdot d_{\tau}(\tau_k^1,\mathcal{I}^+)\}}{Z_{n_1}(\phi\cdot\gamma_k)}.
\end{eqnarray} Combining \eqref{Eq:tau01_BARDM}, \eqref{Eq:tau0_BARDM} and \eqref{Eq:tau1_BARDM_final}, we get the full likelihood of $\tau_k$: \begin{eqnarray}\label{Eq:tau_BARDM_final} P(\tau_k \mid\mathcal{I}, \phi,\gamma_k)&=&P(\tau_k^1\mid\mathcal{I},\phi,\gamma_k)\times P(\tau_k^{0|1}\mid\mathcal{I},\gamma_k)\times P(\tau_k^{0}\mid \tau_k^{0|1},\mathcal{I})\nonumber\\ &=&\frac{\mathbb{I}\big(\tau_k^0 \in \mathcal{A}_{U_B}(\tau_k^{0\mid1})\big)}{A^*_{\tau_k,I}\times(B^*_{\tau_k,I})^{\gamma_k}\times(C^*_{\gamma_k,n_1})^{n-n_1}\times (D^*_{\tau_k,\mathcal{I}})^{\phi\cdot\gamma_k}\times E^*_{\phi,\gamma_k}}, \end{eqnarray} where $D^*_{\tau_k,\mathcal{I}}=\exp\{d_{\tau}(\tau_k^1,\mathcal{I}^+)\}$, $E^*_{\phi,\gamma_k}=Z_{n_1}(\phi\cdot\gamma_k)=\frac{\prod_{t=2}^{n_1}(1-e^{-t\phi\gamma_k})}{(1-e^{-\phi\gamma_k})^{n_1-1}}$, and $A^*_{\tau_k,I}$, $B^*_{\tau_k,I}$ and $C^*_{\gamma_k,n_1}$ keep the same meaning as in the reverse partition model. Finally, for the set of observed ranking lists $\boldsymbol{\tau}=(\tau_1,\cdots,\tau_m)$ from $m$ independent rankers, we have the joint likelihood: \begin{eqnarray} \label{eqn:like} P(\boldsymbol{\tau} \mid\mathcal{I},\phi,\boldsymbol{\gamma})&=&\prod_{k=1}^m P(\tau_k\mid\mathcal{I},\phi,\gamma_k). \end{eqnarray} \subsection{Model Identifiability and Estimation Consistency}\label{sec:consistency} Let $\Omega_n$ be the space of all permutations of $\{1,\cdots,n\}$ in which $\tau_k$ takes value, and let ${\boldsymbol{\theta}}=(\mathcal{I},\phi,\boldsymbol{\gamma})$ be the vector of model parameters. The PAMA model in \eqref{eqn:like}, i.e., $P(\boldsymbol{\tau} \mid {\boldsymbol{\theta}})$, defines a family of probability distributions on $\Omega_{n}^m$ indexed by parameter ${\boldsymbol{\theta}}$ taking values in space $\boldsymbol{\Theta}=\Omega_{\mathcal{I}} \times \Omega_{\phi} \times \Omega_{\boldsymbol{\gamma}} $, where $\Omega_{\mathcal{I}}$ is the space of all permutations of $\{1,\cdots,n_1,{\bf 0}_{n_0}\}$, $\Omega_{\phi}=(0,+\infty)$ and $\Omega_{\boldsymbol{\gamma}}=[0,+\infty)^m$. We show here that the PAMA model defined in \eqref{eqn:like} is identifiable and the model parameters can be estimated consistently under mild conditions. \begin{Thm} \label{thm:identi} The PAMA model is identifiable, i.e., \begin{equation}\label{eq:IdentifiablityCondition} \forall\ {\boldsymbol{\theta}}_1,{\boldsymbol{\theta}}_2\in\boldsymbol{\Theta},\ \mbox{if}\ P(\boldsymbol{\tau}\mid{\boldsymbol{\theta}}_1)= P(\boldsymbol{\tau}\mid{\boldsymbol{\theta}}_2)\ \mbox{for}\ \forall\ \boldsymbol{\tau}\in\Omega_n^m,\ \mbox{then}\ {\boldsymbol{\theta}}_1 = {\boldsymbol{\theta}}_2. \end{equation} \end{Thm} \begin{proof} See Supplementary Material. \end{proof} To show that parameters in the PAMA model can be estimated consistently, we will first construct a consistent estimator for the indicator vector $\mathcal{I}$ as $m\rightarrow \infty$ but with the number of ranked entities $n$ fixed, and show later that $\phi$ can also be consistently estimated once $\mathcal{I}$ is given. To this end, we define $\bar\tau(i)=m^{-1}\sum_{k=1}^{m}\tau_k(i)$ to be the average rank of entity $E_i$ across all $m$ rankers, and assume that the ranker-specific quality parameters $\gamma_1,\cdots,\gamma_m$ are i.i.d. samples from a non-atomic probability measure $F(\gamma)$ defined on $[0,\infty)$ with a finite first moment (referred to as condition $\boldsymbol{C}_\gamma$ hereinafter).
Then, by the strong law of large numbers we have \begin{equation}\label{eq:MeanRank} \bar\tau(i)=\frac{1}{m}\sum_{k=1}^{m}\tau_k(i)\rightarrow \mathbb{E}\big[\tau(i)\big] \ a.s.\ \ \mbox{with} \ m\rightarrow\ \infty, \end{equation} since $\{\tau_k(i)\}_{k=1}^m$ are i.i.d. random variables with expectation $$\mathbb{E}\big[\tau(i)\big]=\mathbb{E}\Big[\mathbb{E}\big[\tau(i)\mid\gamma\big]\Big]=\int\mathbb{E}\big[\tau(i)\mid\gamma\big]dF(\gamma),$$ where $\mathbb{E}\big[\tau(i)\mid\gamma\big]$ is the conditional mean of $\tau(i)$ given the model parameters $(\mathcal{I},\phi,\gamma)$, i.e., $$\mathbb{E}\big[\tau(i)\mid\gamma\big]=\sum_{t=1}^n t\cdot P\big(\tau(i)=t\mid\mathcal{I},\phi,\gamma\big).$$ Clearly, $\mathbb{E}\big[\tau(i)\big]$ is a function of $\phi$ given $\mathcal{I}$ and $F(\gamma)$. We define $e_i(\phi)\triangleq \mathbb{E}\big[\tau(i)\big]$ to emphasize $\mathbb{E}\big[\tau(i)\big]$'s nature as a continuous function of $\phi$. Without loss of generality, we suppose that $U_R=\{1,\cdots,n_1\}$ and $U_B=\{n_1+1,\cdots,n\}$, i.e., $\mathcal{I}=(1,\cdots,n_1,0,\cdots,0)$, hereinafter. Then, the partition structure and the Mallows model embedded in the PAMA model lead to the following facts: \begin{equation}\label{eq:MeanRankRelation} e_1(\phi) < \cdots < e_{n_1}(\phi)\ \mbox{and}\ e_{n_1+1}(\phi)=\cdots=e_{n}(\phi)=e_0,\ \forall\ \phi\in\Omega_\phi. \end{equation} Note that $e_i(\phi)$ degenerates to a constant with respect to $\phi$ (i.e., $e_0$) for all $i>n_1$ because parameter $\phi$ influences only the relative rankings of relevant entities in the Mallows model. The value of $e_0$ is completely determined by $F(\gamma)$. For the BARD model, it is easy to see that $e_1=\cdots=e_{n_1} \leq e_{n_1+1}=\cdots=e_n.$ \begin{figure}[h] \centering \includegraphics[width=0.9\textwidth]{expectations.pdf} \caption{Average ranks of all the entities with fixed $\mathcal{I} = (1,\cdots,n_1,0,\cdots,0)$, $n=30$, $n_1=15$, $m=100000$ and $F(\gamma)= U(0,2)$. Figures (a), (b) and (c) are the corresponding results for $\phi =0, 0.2\ \mbox{and } 0.4$ respectively.} \label{fig:AverageRankOfEntitiesInPAMA} \end{figure} Figure \ref{fig:AverageRankOfEntitiesInPAMA} shows some empirical estimates of the $e_i(\phi)$'s based on $m=100,000$ independent samples drawn from PAMA models with $n=30$, $n_1=15$, and $F(\gamma)= U(0,2)$, but three different $\phi$ values: (a) $\phi=0$, which corresponds to the BARD model; (b) $\phi=0.2$; and (c) $\phi=0.5$. A surprising observation is that, in case (c), some relevant entities may have a larger $e_i(\phi)$ (worse ranking) than the average rank of background entities. Lemma \ref{lem:AverageMean} guarantees that for almost all $\phi\in\Omega_\phi$, $e_0$ is different from $e_i(\phi)$ for $i=1,\cdots,n_1$. The proof of Lemma \ref{lem:AverageMean} can be found in the Supplementary Material. \begin{lemma}\label{lem:AverageMean} For the PAMA model with condition $\boldsymbol{C}_\gamma$, $\exists\ \tilde\Omega_\phi\subset\Omega_\phi$, s.t. $(\Omega_\phi-\tilde\Omega_\phi)$ contains only finitely many elements and \begin{equation}\label{eq:e0neqei} e_i(\phi)\neq e_0\ \mbox{for}\ i=1,\cdots,n_1,\ \forall\ \phi\in\tilde\Omega_\phi.
\end{equation} \end{lemma} The facts demonstrated in \eqref{eq:MeanRankRelation} and \eqref{eq:e0neqei} suggest the following three-step strategy to estimate $\mathcal{I}$: (a) find the subset $S_0$ of $n_0=(n-n_1)$ entities from $U$ so that the within-subset variation of the $\bar\tau(i)$'s is the smallest, i.e., \begin{equation} S_0=\argmin_{S\subset U, \ |S|=n_0} \sum_{i\in S} \big(\bar\tau(i)-\bar{\tau}_S\big)^2 , \ \ \mbox{with} \ \bar{\tau}_S=n_0^{-1} \sum_{i\in S} \bar\tau(i), \end{equation} and let $\tilde{U}_B=S_0$ be an estimate of $U_B$; (b) rank the entities in $U\setminus S_0$ by $\bar\tau(i)$ increasingly and use the obtained ranking $\tilde\mathcal{I}^+$ as an estimate of $\mathcal{I}^+$; (c) combine the above two steps to obtain the estimate $\tilde\mathcal{I}$ of $\mathcal{I}$. This can be achieved by defining $\tilde U_R=U \setminus \tilde U_B$ and $\tilde\mathcal{I}^+=rank(\{\bar{\tau}(i) :i\in\tilde U_R\}),$ and obtaining $\tilde\mathcal{I}=(\tilde\mathcal{I}_1,\cdots,\tilde\mathcal{I}_n)$, with $\tilde\mathcal{I}_i=\tilde\mathcal{I}^+_i\cdot\mathbb{I}(i\in\tilde U_R).$ Note that $\tilde U_B$ is based on the mean ranks, $\{\bar\tau(i)\}_{i\in U}$, and thus is clearly a moment estimator. Although this three-step estimation strategy is neither statistically efficient nor computationally feasible (step (a) is NP-hard), it nevertheless serves as a prototype for developing the consistency theory. Theorem \ref{thm:consisI} guarantees that $\tilde\mathcal{I}$ is a consistent estimator of $\mathcal{I}$ under mild conditions. \begin{Thm} \label{thm:consisI} For the PAMA model with condition $\boldsymbol{C}_\gamma$, for almost all $\phi\in\Omega_\phi$, the moment estimator $\tilde\mathcal{I}$ converges to $\mathcal{I}$ with probability 1 as $m$ goes to infinity. \end{Thm} \begin{proof} Combining fact \eqref{eq:e0neqei} in Lemma \ref{lem:AverageMean} with fact \eqref{eq:MeanRank}, we have for $\forall\ \phi\in\tilde\Omega_\phi$ that $$e_1(\phi)<\cdots<e_{n_1}(\phi)\ \mbox{and}\ e_i(\phi)\neq e_0\ \mbox{for}\ i=1,\cdots,n_1.$$ Moreover, fact \eqref{eq:MeanRank} tells us that for any $\epsilon,\delta>0$, there exists $M>0$ such that for all $m>M$, $$P\big(|\bar\tau(i)-e_i(\phi)|<\delta\big)\geq 1-\epsilon,\ i=1,\cdots,n,$$ from which it is straightforward to see the conclusion of the theorem. \end{proof} Theorem \ref{thm:consisI} tells us that estimating $\mathcal{I}$ is straightforward if the number of independent rankers $m$ goes to infinity: a simple moment method ignoring the quality difference of rankers can provide us with a consistent estimate of $\mathcal{I}$. In a practical problem where only a finite number of rankers are involved, however, more efficient statistical inference of the PAMA model based on Bayesian or frequentist principles becomes more attractive, as effectively utilizing the quality information of different rankers is critical. With $n_0$ and $n_1$ fixed, parameter $\gamma_k$, which governs the power-law distribution for the rank list $\tau_k$, cannot be estimated consistently. Thus, its distribution $F(\gamma)$ cannot be determined nonparametrically even when the number of rank lists $m$ goes to infinity.
We impose a parametric form $F_\psi(\gamma)$ with $\psi$ as the hyper-parameter and refer to the resulting hierarchically structured model as PAMA-H, which has the following marginal likelihood of $(\phi,\psi)$ given $\mathcal{I}$: $$L(\phi,\psi\mid\mathcal{I})=\int P(\boldsymbol{\tau}\mid\mathcal{I},\phi,\boldsymbol{\gamma})dF_\psi(\boldsymbol{\gamma}) =\prod_{k=1}^m\int P(\tau_k\mid\mathcal{I},\phi,\gamma_k)dF_\psi(\gamma_k)=\prod_{k=1}^mL_k(\phi,\psi\mid\mathcal{I}).$$ We show in Theorem \ref{thm:consisPhiPsi} that the MLE based on the above marginal likelihood is consistent. \begin{Thm}\label{thm:consisPhiPsi} Under the PAMA-H model, assume that $(\phi,\psi)$ belongs to the parameter space $\Omega_\phi\times\Omega_\psi$, and the true parameter $(\phi_0,\psi_0)$ is an interior point of $\Omega_\phi\times\Omega_\psi$. Let $(\hat{\phi}_\mathcal{I},\hat\psi_\mathcal{I})$ be the maximizer of $L(\phi,\psi\mid\mathcal{I})$. If $F_\psi(\gamma)$ has a density function $f_\psi(\gamma)$ that is differentiable and concave with respect to $\psi$, then $\lim_{m\rightarrow\infty}(\hat{\phi}_\mathcal{I},\hat\psi_\mathcal{I})=(\phi_0,\psi_0)$ almost surely. \end{Thm} \begin{proof} See the Supplementary Material. \end{proof} \section{Inference with the Partition-Mallows Model} \label{sec:statinfer} \subsection{Maximum Likelihood Estimation} \label{subsec:MLE} Under the PAMA model, the MLE of ${\boldsymbol{\theta}}= (\mathcal{I}, \phi,\boldsymbol{\gamma})$ is $\hat{{\boldsymbol{\theta}}}= \arg\max_{{\boldsymbol{\theta}}} l({\boldsymbol{\theta}})$, where \begin{equation} \label{eqn:loglike} l({\boldsymbol{\theta}}) = \log P(\tau_1,\tau_2,\cdots,\tau_m\mid{\boldsymbol{\theta}}) \end{equation} is the logarithm of the likelihood function (\ref{eqn:like}). Here, we adopt the \textit{Gauss-Seidel} iterative method in \cite{YANG2018281}, also known as \textit{backfitting} or \textit{cyclic coordinate ascent}, to implement the optimization. Starting from an initial point ${\boldsymbol{\theta}}^{(0)}$, the Gauss-Seidel method iteratively updates one coordinate of ${\boldsymbol{\theta}}$ at each step with the other coordinates held fixed at their current values. A Newton-like method is adopted to update $\phi$ and the $\gamma_k$'s. Since $\mathcal{I}$ is a discrete vector, we search for favorable values of $\mathcal{I}$ by swapping two neighboring entities and checking whether the conditional objective $g(\mathcal{I}\mid\boldsymbol{\gamma}^{(s+1)}, \phi^{(s+1)})$ increases. More details of the algorithm are provided in the Supplementary Material. With the MLE $\hat{\boldsymbol{\theta}}=(\hat\mathcal{I},\hat\phi,\hat\boldsymbol{\gamma})$, we define $U_R(\hat\mathcal{I})=\{i\in U:\ \hat\mathcal{I}_i>0\}$ and $U_B(\hat\mathcal{I})=\{i\in U:\ \hat\mathcal{I}_i=0\}$, and generate the final aggregated ranking list $\hat\tau$ based on the rules below: (a) set the top-$n_1$ list of $\hat\tau$ as $\hat\tau_{n_1}=sort(i\in U_R(\hat\mathcal{I})\ by\ \hat\mathcal{I}_i\uparrow)$; (b) let all entities in $U_B(\hat\mathcal{I})$ tie for the positions behind. Hereinafter, we refer to this MLE-based rank aggregation procedure under the PAMA model as PAMA$_F$. For the PAMA-H model, a similar procedure can be applied to find the MLE of ${\boldsymbol{\theta}}=(\mathcal{I},\phi,\psi)$, with $\boldsymbol{\gamma}=(\gamma_1,\cdots,\gamma_m)$ treated as missing data.
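The skeleton below sketches this coordinate-ascent loop in Python. The callable \texttt{loglik(I, phi, gamma)} is a placeholder for the log-likelihood \eqref{eqn:loglike}, and for brevity the Newton-like updates are replaced by a bounded scalar optimizer; only the structure of the sweep, including the neighboring-entity swaps for $\mathcal{I}$, is meant to be faithful.
\begin{verbatim}
import numpy as np
from scipy.optimize import minimize_scalar

def gauss_seidel_mle(loglik, I0, phi0, gamma0, b=10.0, n_sweeps=50):
    """Cyclic coordinate ascent for theta = (I, phi, gamma); I0 is an
    integer numpy array (rank for relevant entities, 0 for background)."""
    I, phi, gamma = I0.copy(), phi0, np.asarray(gamma0, dtype=float).copy()
    n1 = int(I.max())
    cur = [loglik(I, phi, gamma)]          # current objective value

    def try_swap(i, j):
        # Swap entities i and j in I; keep the swap iff loglik improves.
        I[i], I[j] = I[j], I[i]
        cand = loglik(I, phi, gamma)
        if cand > cur[0]:
            cur[0] = cand
            return True
        I[i], I[j] = I[j], I[i]
        return False

    for _ in range(n_sweeps):
        # Update phi with the other coordinates held fixed.
        phi = minimize_scalar(lambda x: -loglik(I, x, gamma),
                              bounds=(0.0, b), method="bounded").x
        # Update each gamma_k in turn.
        for k in range(len(gamma)):
            def neg(x, k=k):
                g = gamma.copy(); g[k] = x
                return -loglik(I, phi, g)
            gamma[k] = minimize_scalar(neg, bounds=(0.0, b),
                                       method="bounded").x
        # Update I: swap neighboring relevant entities, and the entity
        # ranked n1 against background entities.
        cur[0] = loglik(I, phi, gamma)
        entity_at = {r: i for i, r in enumerate(I) if r > 0}
        for r in range(1, n1):
            if try_swap(entity_at[r], entity_at[r + 1]):
                entity_at = {r2: i for i, r2 in enumerate(I) if r2 > 0}
        for j in np.where(I == 0)[0]:
            if try_swap(entity_at[n1], j):
                entity_at = {r2: i for i, r2 in enumerate(I) if r2 > 0}
    return I, phi, gamma
\end{verbatim}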
With the MLE $\hat{\boldsymbol{\theta}}=(\hat\mathcal{I},\hat\phi,\hat\psi)$, we can generate the final aggregated ranking list $\hat\tau$ based on $\hat\mathcal{I}$ in the same way as in PAMA, and evaluate the quality of ranker $\tau_k$ via the mean or mode of the conditional distribution below: $$f(\gamma_k\mid\tau_k;\hat\mathcal{I},\hat\phi,\hat\psi)\propto f(\gamma_k\mid\hat\psi)\cdot P(\tau_k\mid\hat\mathcal{I},\hat\phi,\gamma_k).$$ In this paper, we refer to the above MLE-based rank aggregation procedure under the PAMA-H model as PAMA$_{HF}$. The procedure is detailed in the Supplementary Material. \subsection{Bayesian Inference} \label{subsec:BC} Since the three model parameters $\mathcal{I}$, $\phi$ and $\boldsymbol{\gamma}$ encode ``orthogonal'' information of the PAMA model, it is natural to expect that $\mathcal{I}$, $\phi$ and $\boldsymbol{\gamma}$ are mutually independent {\it a priori}. We thus specify their joint prior distribution as $$\pi(\mathcal{I},\phi,\boldsymbol{\gamma})=\pi(\mathcal{I})\cdot\pi(\phi)\cdot\prod_{k=1}^m\pi(\gamma_k).$$ Without much loss, we may restrict the ranges of $\phi$ and the $\gamma_k$'s to a closed interval $[0,b]$ with a large enough $b$. In contrast, $\mathcal{I}$ is discrete and takes values in the space $\Omega_\mathcal{I}$ of all permutations of $\{1,\ldots,n_1, \underbrace{0,\ldots,0}_{n_0}\}$. It is convenient to specify $\pi(\mathcal{I})$, $\pi(\phi)$ and $\pi(\gamma_k)$ as uniform, i.e., $$\pi(\mathcal{I})\sim U(\Omega_\mathcal{I}),\ \pi(\phi)\sim U[0,b],\ \pi(\gamma_k)\sim U[0,b].$$ Based on our experience in a wide range of simulation studies and real-data applications, we find that setting $b=10$ works reasonably well. In Section~\ref{sec:consistency} we also considered letting $\pi(\gamma_k)$ take a parametric form, an option we revisit below. The posterior distribution can be expressed as \begin{eqnarray} &&f(\mathcal{I},\phi,\boldsymbol{\gamma}|\tau_1,\tau_2,\cdots,\tau_m)\nonumber\\ &\propto& \pi(\mathcal{I},\phi,\boldsymbol{\gamma})\cdot P(\tau_1,\tau_2,\cdots,\tau_m|\mathcal{I},\phi,\boldsymbol{\gamma})\nonumber\\ &=&\mathbb{I}\big(\phi\in[0,10]\big)\times\prod_{k=1}^m \Big\{ \frac{\mathbb{I}\big(\tau_k^0 \in \mathcal{A}_{U_R}(\tau_k^{0\mid1})\big)\times\mathbb{I}\big(\gamma_k\in[0,10]\big)}{A^*_{\tau_k,\mathcal{I}}\times(B^*_{\tau_k,\mathcal{I}})^{\gamma_k}\times(C^*_{\gamma_k,n_1})^{n-n_1}\times (D^*_{\tau_k,\mathcal{I}})^{\phi\cdot\gamma_k}\times E^*_{\phi,\gamma_k}} \Big\},\label{eqn:posterior} \end{eqnarray} with the following conditional distributions: \begin{eqnarray} \label{fc:I} f(\mathcal{I}\mid\phi,\boldsymbol{\gamma}) &\propto& \prod_{k=1}^m\frac{\mathbb{I}\big(\tau_k^0 \in \mathcal{A}_{U_R}(\tau_k^{0\mid1})\big)}{A^*_{\tau_k,\mathcal{I}}\times(B^*_{\tau_k,\mathcal{I}})^{\gamma_k}\times(D^*_{\tau_k,\mathcal{I}})^{\phi\cdot\gamma_k}},\\ \label{fc:phi} f(\phi\mid\mathcal{I},\boldsymbol{\gamma}) &\propto& {\mathbb I}\big(\phi \in [0,10]\big)\times \prod_{k=1}^m \frac{1}{(D^*_{\tau_k,\mathcal{I}})^{\phi\cdot\gamma_k}\times E^*_{\phi,\gamma_k}},\\ \label{fc:gamma} f(\gamma_k\mid\mathcal{I},\phi,\boldsymbol{\gamma}_{[-k]})&\propto&\frac{\mathbb{I}\big(\gamma_k \in [0,10]\big)}{(B^*_{\tau_k,\mathcal{I}})^{\gamma_k}\times(C^*_{\gamma_k,n_1})^{n-n_1}\times (D^*_{\tau_k,\mathcal{I}})^{\phi\cdot\gamma_k}\times E^*_{\phi,\gamma_k}}, \end{eqnarray} based on which posterior samples of $(\mathcal{I},\phi,\boldsymbol{\gamma})$ can be obtained by Gibbs sampling, where $\boldsymbol{\gamma}_{[-k]}=(\gamma_1,\cdots,\gamma_{k-1},\gamma_{k+1},\cdots,\gamma_m)$.
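Schematically, a single sweep of this sampler can be organized as in the following Python sketch; the \texttt{log\_cond\_*} callables are placeholders for the unnormalized conditionals \eqref{fc:I}--\eqref{fc:gamma}, and the random-walk proposals anticipate the Metropolis--Hastings treatment described next.
\begin{verbatim}
import numpy as np

def gibbs_sweep(I, phi, gamma, log_cond_phi, log_cond_gamma,
                propose_I, log_cond_I, rng,
                sigma_phi=0.1, sigma_gamma=0.1, b=10.0):
    """One Metropolis-within-Gibbs sweep over (I, phi, gamma)."""
    # phi-update: Gaussian random-walk proposal, kept inside [0, b].
    prop = rng.normal(phi, sigma_phi)
    if 0.0 <= prop <= b:
        if np.log(rng.uniform()) < (log_cond_phi(prop, I, gamma)
                                    - log_cond_phi(phi, I, gamma)):
            phi = prop
    # gamma_k-updates, one ranker at a time.
    for k in range(len(gamma)):
        prop = rng.normal(gamma[k], sigma_gamma)
        if 0.0 <= prop <= b:
            g_new = gamma.copy(); g_new[k] = prop
            if np.log(rng.uniform()) < (log_cond_gamma(g_new, k, I, phi)
                                        - log_cond_gamma(gamma, k, I, phi)):
                gamma = g_new
    # I-update: propose a swap of two adjacent entities (see text).
    I_new = propose_I(I, rng)
    if np.log(rng.uniform()) < (log_cond_I(I_new, phi, gamma)
                                - log_cond_I(I, phi, gamma)):
        I = I_new
    return I, phi, gamma
\end{verbatim}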
Considering that the conditional distributions in (\ref{fc:I})-(\ref{fc:gamma}) are nonstandard, we adopt the Metropolis-Hastings algorithm \citep{Hastings1970} to enable the conditional sampling. To be specific, we choose the proposal distributions for $\phi$ and $\gamma_k$ as \begin{eqnarray*} q(\phi\mid\phi^{(t)};\mathcal{I},\boldsymbol{\gamma})&\sim&\mathcal{N}(\phi^{(t)},\sigma_{\phi}^2)\\ q(\gamma_k\mid\gamma_k^{(t)};\mathcal{I},\phi,\boldsymbol{\gamma}_{[-k]})&\sim& \mathcal{N}(\gamma_k^{(t)},\sigma_{\gamma_k}^2), \end{eqnarray*} where $\sigma_{\phi}^2$ and $\sigma_{\gamma_k}^2$ can be tuned to optimize the mixing rate of the sampler. Since $\mathcal{I}$ is a discrete vector, we propose new values of $\mathcal{I}$ by swapping two randomly selected adjacent entities. Note that the entity whose ranking is $n_1$ could be swapped with any background entity. Due to the homogeneity of background entities, there is no need to swap two background entities. Therefore, the number of potential proposals in each step is $\mathcal{O}(n n_1)$. More details about MCMC sampling techniques can be found in \cite{liu2008monte}. Suppose that $M$ posterior samples $\{(\mathcal{I}^{(t)},\phi^{(t)},\boldsymbol{\gamma}^{(t)})\}_{t=1}^M$ are obtained. We calculate the posterior means of the different parameters as below, where $I^{(t)}_i=\mathbb{I}(\mathcal{I}^{(t)}_i>0)$: \begin{eqnarray*} \bar{\mathcal{I}}_i&=&\frac{1}{M} \sum_{t=1}^M\Big[\mathcal{I}_i^{(t)}\cdot I^{(t)}_i+\frac{n_1+1+n}{2}\cdot(1-I^{(t)}_i)\Big],\ i=1,\cdots,n,\\ \bar{\phi} &=& \frac{1}{M} \sum_{t=1}^M \phi^{(t)},\\ \bar{\gamma}_k &=& \frac{1}{M} \sum_{t=1}^M \gamma_k^{(t)}, k=1,\cdots,m. \end{eqnarray*} We quantify the quality of ranker $\tau_k$ with $\bar\gamma_k$, and generate the final aggregated ranking list $\hat\tau$ based on the $\bar{\mathcal{I}}_i$'s as follows: $$\hat\tau=sort(i\in U\ by\ \bar\mathcal{I}_i \uparrow).$$ Hereinafter, we refer to this MCMC-based Bayesian rank aggregation procedure under the Partition-Mallows model as PAMA$_B$. The Bayesian inference procedure PAMA$_{HB}$ for the PAMA-H model differs from PAMA$_B$ only by replacing the prior distribution $\prod_{k=1}^m \pi(\gamma_k)$, which is uniform in $[0,b]^m$, with a hierarchically structured prior $\pi(\psi) \prod_{k=1}^m f_\psi (\gamma_k)$. The conditional distributions needed for Gibbs sampling are almost the same as \eqref{fc:I}-\eqref{fc:gamma}, except for an additional one, \begin{eqnarray} \label{PAMA-HB:psi} f(\psi\mid\mathcal{I},\phi,\boldsymbol{\gamma})&\propto&\pi(\psi)\cdot\prod_{k=1}^mf_\psi(\gamma_k). \end{eqnarray} We may specify $f_\psi(\gamma)$ to be an exponential distribution and let $\pi(\psi)$ be a proper conjugate prior to make \eqref{PAMA-HB:psi} easy to sample from. More details for PAMA$_{HB}$ with $f_\psi(\gamma)$ specified as an exponential distribution are provided in the Supplementary Material. Our simulation studies suggest that the practical performances of PAMA$_B$ and PAMA$_{HB}$ are very similar when $n_0$ and $n_1$ are reasonably large (see the Supplementary Material for details). In contrast, as we will show in Section~\ref{sec:simulation}, the MLE-based estimates (e.g., PAMA$_F$) typically produce less accurate results, with a shorter computational time, compared to PAMA$_B$. \subsection{Extension to Partial Ranking Lists} The proposed Partition-Mallows model can be extended to more general scenarios where partial ranking lists, instead of full ranking lists, are involved in the aggregation.
Given the entity set $U$ and a ranking list $\tau_S$ of entities in $S\subseteq U$, we say $\tau_S$ is a \emph{full ranking list} if $S=U$, and a \emph{partial ranking list} if $S\subset U$. Suppose $\tau_S$ is a partial ranking list and $\tau_U$ is a full ranking list of $U$. If the projection of $\tau_U$ on $S$ equals $\tau_S$, we say $\tau_U$ is compatible with $\tau_S$, denoted by $\tau_U\sim\tau_S$. Let $\mathcal{A}(\tau_S)=\{\tau_U:\tau_U\sim\tau_S\}$ be the set of all full lists that are compatible with $\tau_S$. Suppose a partial list $\tau_k$ is involved in the rank aggregation problem. The probability of $\tau_k$ can be evaluated by: \begin{eqnarray} \label{eqn:pllike} P(\tau_k\mid\mathcal{I},\phi,\gamma_k)=\sum_{\tau_k^*\sim\tau_k} P(\tau_k^*\mid\mathcal{I},\phi,\gamma_k), \end{eqnarray} where $P(\tau_k^*\mid\mathcal{I},\phi,\gamma_k)$ is the probability of a compatible full list under the PAMA model. Clearly, the probability in (\ref{eqn:pllike}) does not have a closed-form representation due to the complicated constraints between $\tau_k$ and $\tau_k^*$, and it is very challenging to do statistical inference directly based on this quantity. Fortunately, as rank aggregation with partial lists can be treated as a missing data problem, we can resolve the problem via standard methods for missing data inference. The Bayesian inference can be accomplished by the classic data augmentation strategy~\citep{tanner1987} in a similar way as described in \cite{deng2014bayesian}, which iterates between imputing the missing data conditional on the observed data given the current parameter values, and updating parameter values by sampling from the posterior distribution based on the imputed full data. To be specific, we iteratively draw from the following two conditional distributions: \begin{eqnarray*} P(\tau_1^\ast,\cdots,\tau_m^\ast\mid\tau_1,\cdots,\tau_m;\mathcal{I},\phi,\boldsymbol{\gamma})=\prod_{k=1}^m P(\tau_k^\ast\mid\tau_k;\mathcal{I},\phi,\gamma_k), \\ f(\mathcal{I},\phi,\boldsymbol{\gamma}\mid\tau_{1}^\ast,\cdots,\tau_m^\ast)\propto \pi(\mathcal{I})\times\pi(\boldsymbol{\gamma})\times\pi(\phi)\times\prod_{k=1}^m P(\tau_k^\ast\mid\mathcal{I},\gamma_{k},\phi). \end{eqnarray*} To find the MLE of ${\boldsymbol{\theta}}$ for this more challenging scenario, we can use the Monte Carlo EM algorithm (MCEM, \cite{tanner1990}). Let $\tau_k^{(1)},\cdots, \tau_k^{(M)}$ be $M$ independent samples drawn from the distribution $P(\tau_k^\ast\mid\tau_k,\mathcal{I},\phi,\gamma_k)$. The E-step involves the calculation of the $Q$-function below: \begin{eqnarray*} Q(\mathcal{I},\boldsymbol{\gamma},\phi \mid \mathcal{I}^{(s)},\boldsymbol{\gamma}^{(s)},\phi^{(s)}) &=& E \left\{\sum_{k=1}^m \log P(\tau_k^{*} \mid \mathcal{I},\boldsymbol{\gamma},\phi) \mid \tau_k, \mathcal{I}^{(s)},\gamma_k^{(s)},\phi^{(s)} \right\}\nonumber\\ &\approx& \dfrac{1}{M}\sum_{k=1}^m \sum_{t=1}^M \log P(\tau_k^{(t)}\mid \mathcal{I},\gamma_k,\phi). \end{eqnarray*} In the M-step, we use the \emph{Gauss-Seidel} method to maximize the above $Q$-function in a similar way as detailed in the Supplementary Material.
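The following Python sketch assembles these two conditional draws into the data-augmentation loop. The inner kernel \texttt{metropolis\_complete}, which refreshes a compatible full list by pairwise swaps, implements the key step detailed in the next paragraph; \texttt{sample\_params} stands for one (or several) posterior updates of $(\mathcal{I},\phi,\boldsymbol{\gamma})$ given the imputed full lists, and the list-of-entities encoding of rankings is our own convention.
\begin{verbatim}
import numpy as np

def compatible(tau_full, tau_partial):
    """True iff projecting the full list on the partial list's entities
    reproduces the partial list's order (both are entity sequences)."""
    s = set(tau_partial)
    return [e for e in tau_full if e in s] == list(tau_partial)

def metropolis_complete(tau_full, tau_partial, log_p_full, params, rng,
                        n_steps=10):
    """Swap-based Metropolis kernel targeting P(tau* | tau; I, phi, gamma);
    log_p_full(tau, params) evaluates log P(tau* | I, gamma, phi)."""
    tau = list(tau_full)
    for _ in range(n_steps):
        i, j = rng.choice(len(tau), size=2, replace=False)
        prop = list(tau)
        prop[i], prop[j] = prop[j], prop[i]
        if not compatible(prop, tau_partial):
            continue                      # automatic rejection
        if np.log(rng.uniform()) < (log_p_full(prop, params)
                                    - log_p_full(tau, params)):
            tau = prop
    return tau

def data_augmentation(partial_lists, init_full, init_params,
                      sample_params, log_p_full, n_iter, rng):
    """Skeleton of the data-augmentation sampler for partial lists."""
    full_lists = [list(f) for f in init_full]
    params = init_params                  # current (I, phi, gamma)
    for _ in range(n_iter):
        # Imputation: refresh each compatible full list tau_k*.
        for k, tau_k in enumerate(partial_lists):
            full_lists[k] = metropolis_complete(full_lists[k], tau_k,
                                                log_p_full, params, rng)
        # Posterior: update parameters given the imputed full data.
        params = sample_params(full_lists, rng)
    return params, full_lists
\end{verbatim}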
No matter which method is used, a key step is to draw samples from \[P(\tau_k^\ast\mid\tau_k;\mathcal{I},\phi,\gamma_k)\propto P(\tau_k^\ast\mid\mathcal{I},\gamma_k,\phi)\cdot \mathbb{I}\big(\tau_k^\ast\in\mathcal{A}(\tau_k)\big).\] To achieve this goal, we start with the $\tau_k^*$ obtained from the previous step of the data augmentation or MCEM algorithms, and conduct several iterations of the following Metropolis step with $P(\tau_k^\ast\mid\tau_k;\mathcal{I},\phi,\gamma_k)$ as its target distribution: (a) construct the proposal $\tau_k'$ by randomly selecting two elements in the current full list $\tau_k^*$ and swapping them; (b) accept or reject the proposal according to the Metropolis rule, that is, accept $\tau_k'$ with probability $\min(1,\frac{P(\tau_k'\mid\mathcal{I},\gamma_k,\phi)}{P(\tau_k^{*}\mid\mathcal{I},\gamma_k,\phi)})$. Note that the proposed list $\tau_k'$ is automatically rejected if it is incompatible with the observed partial list $\tau_k$. \subsection{Incorporating Covariates in the Analysis} In some applications, covariate information for each ranked entity is available to assist rank aggregation. One of the earliest attempts for incorporating such information in analysing rank data is perhaps the \emph{hidden score model} due to \cite{Thurstone1927}, which has become a standard approach and has many extensions. Briefly, these models assume that there is an unobserved score for each entity that is related to the entity-specific covariates $X_i=(X_{i1},\cdots,X_{ip})^T$ under a regression framework, and that the observed rankings are determined by these scores plus noise, i.e., $$\tau_k=sort(S_{ik}\downarrow,\ E_i\in U),\ \mbox{where}\ S_{ik}=X_i^T\boldsymbol{\beta}+\varepsilon_{ik}.$$ Here, $\boldsymbol{\beta}$ is the common regression coefficient and $\varepsilon_{ik}\sim N(0,\sigma^2_k)$ is the noise term. Recent progress along this line is reviewed in \cite{Yu2000Bayesian,Bhowmik2017,li2020bayesian}. Here, we propose to incorporate covariates into the analysis in a different way. Assuming that the covariate $X_i$ provides information on the group assignment instead of the detailed ranking of entity $E_i$, we connect $X_i$ and $I_i$, the group indicator of entity $E_i$, by a logistic regression model: \begin{equation}\label{eq:BARDM_Logistics} P(\mathcal{I}_i\mid X_i)=P(I_i\mid X_i,\boldsymbol{\psi})=\dfrac{\exp\{X_i^T\boldsymbol{\psi}\cdot I_i\}}{1+\exp\{X_i^T\boldsymbol{\psi}\}},~~i=1,\cdots,n, \end{equation} where $\boldsymbol{\psi}=(\psi_1,\ldots,\psi_p)^T$ denotes the regression parameters. Letting $\boldsymbol{X}=(X_1,\cdots,X_n)$ be the covariate matrix, we can extend the Partition-Mallows model as \begin{equation}\label{eq:BARDM_Covariate} P(\tau_1,\cdots,\tau_m, \mathcal{I} \mid \boldsymbol{X})=P(\mathcal{I} \mid \boldsymbol{X},\boldsymbol{\psi})\times P(\tau_1,\cdots,\tau_m\mid\mathcal{I},\phi,\boldsymbol{\gamma}), \end{equation} where the first term $$P(\mathcal{I} \mid \boldsymbol{X},\boldsymbol{\psi})=\prod_{i=1}^n P(I_i \mid X_i,\boldsymbol{\psi})$$ comes from the logistic regression model (\ref{eq:BARDM_Logistics}), and the second term comes from the original Partition-Mallows model. In the extended model, our goal is to infer $(\mathcal{I},\phi,\boldsymbol{\gamma},\boldsymbol{\psi})$ based on $(\tau_1,\cdots,\tau_m;\boldsymbol{X})$. We can achieve both Bayesian inference and MLE for the extended model in a similar way as described for the Partition-Mallows model. More details are provided in the Supplementary Material.
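For reference, the log of the prior term $P(\mathcal{I}\mid\boldsymbol{X},\boldsymbol{\psi})$ induced by the logistic model \eqref{eq:BARDM_Logistics} takes only a few lines of code (a sketch; here $I_i=\mathbb{I}(\mathcal{I}_i>0)$ is the binary group indicator):
\begin{verbatim}
import numpy as np

def log_prior_I(I, X, psi):
    """log P(I | X, psi) under the logistic model: for each entity,
    log P(I_i | X_i) = I_i * X_i'psi - log(1 + exp(X_i'psi))."""
    eta = X @ psi                        # linear predictors, one per entity
    ind = (np.asarray(I) > 0).astype(float)
    return float(np.sum(ind * eta - np.log1p(np.exp(eta))))
\end{verbatim}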
An alternative way to incorporate covariates is to replace the logistic regression model by a naive Bayes model, which models the conditional distribution of $\boldsymbol{X}\mid\mathcal{I}$ instead of $\mathcal{I}\mid\boldsymbol{X}$, as follows: \begin{equation} f(\tau_1,\cdots,\tau_m,\boldsymbol{X}\mid\mathcal{I})=P(\tau_1,\cdots,\tau_m\mid\mathcal{I},\phi,\boldsymbol{\gamma})\times f(\boldsymbol{X}\mid\mathcal{I}), \end{equation} where \begin{eqnarray*} f(\boldsymbol{X}\mid\mathcal{I})&=&\prod_{i=1}^nf(X_i\mid \mathcal{I}_i)=\prod_{i=1}^nf(X_i\mid I_i)=\prod_{i=1}^n\prod_{j=1}^pf(X_{ij}\mid I_i)\\ &=&\prod_{i=1}^n\prod_{j=1}^p\Big\{\big[f_{j}(X_{ij}\mid\psi_{j0})\big]^{1-I_i}\cdot\big[f_{j}(X_{ij}\mid\psi_{j1})\big]^{I_i}\Big\}, \end{eqnarray*} and $f_j$ is a pre-specified parametric distribution for the $j$-th covariate, with parameter $\psi_{j0}$ for entities in the background group and $\psi_{j1}$ for entities in the relevant group. Since the performances of the two approaches are very similar, in the rest of this paper we use the logistic regression strategy to handle covariates due to its convenient form. \section{Simulation Study} \label{sec:simulation} \subsection{Simulation Settings}\label{sec:SimuSetting} We simulated data from two models: (a) the proposed Partition-Mallows model, referred to as $\mathcal{S}_{PM}$, and (b) the Thurstone hidden score model, referred to as $\mathcal{S}_{HS}$. In the $\mathcal{S}_{PM}$ scenario, we specified the true indicator vector as $\mathcal{I}=(1,\cdots,n_1,0,\cdots,0)$, indicating that the first $n_1$ entities $E_1,\cdots, E_{n_1}$ belong to $U_R$ and the rest belong to the background group $U_B$, and set $$\gamma_k=\left\{ \begin{array}{ll} 0.1, & \mbox{if } k\leq\frac{m}{2}; \\ a+(k-\frac{m}{2})\times\delta_R, & \mbox{if } k>\frac{m}{2}. \\ \end{array} \right. $$ Clearly, $a>0$ and $\delta_R>0$ control the quality difference and signal strength of the $m$ base rankers in the $\mathcal{S}_{PM}$ scenario. We set $\phi=0.6$ (defined in \eqref{Eq:phik2phi}), $\delta_R=\frac{2}{m}$, and considered two options for $a$: 2.5 and 1.5. For easy reference, we denote the strong-signal case ($a=2.5$) by $\mathcal{S}_{PM_1}$ and the weak-signal case ($a=1.5$) by $\mathcal{S}_{PM_2}$. In the $\mathcal{S}_{HS}$ scenario, we used the Thurstone model to generate the rank lists as $\tau_k = sort(i\in U\ by\ S_{ik} \downarrow),\ \mbox{where}\ S_{ik}\sim N(\mu_{ik},1)$ and $$\mu_{ik}=\left\{ \begin{array}{ll} 0, & \mbox{if } k\leq\frac{m}{2}\ \mbox{or}\ i>n_1; \\ a^*+\frac{b^*-a^*}{m}\times k + (n_1-i)\times\delta_E^*, & \mbox{otherwise}. \\ \end{array} \right. $$ In this model, $a^*, b^*$ and $\delta_E^*$ (all positive numbers) control the quality difference and signal strength of the $m$ base rankers. We also specified two sub-cases: $\mathcal{S}_{HS_1}$, the stronger-signal case with $(a^*,b^*,\delta_E^*)=(0.5,2.5,0.2)$; and $\mathcal{S}_{HS_2}$, the weaker-signal case with $(a^*,b^*,\delta_E^*)=(-0.5,1.5,0.2)$. Table \ref{Tab:HS1mu} shows the configuration matrix of the $\mu_{ik}$'s under $\mathcal{S}_{HS_1}$ when $m=10$, $n=100$ and $n_1=10$. In both scenarios, the first half of the rankers are completely non-informative, while the other half provide increasingly strong signals.
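Drawing data from the $\mathcal{S}_{HS}$ scenarios is straightforward, as the Python sketch below shows; it follows the displayed formula for $\mu_{ik}$ literally (1-based indices mapped to 0-based arrays) and returns \texttt{tau[k, i]}, the rank of entity $i$ in list $k$.
\begin{verbatim}
import numpy as np

def simulate_HS(n=100, n1=10, m=10, a_star=0.5, b_star=2.5,
                delta_star=0.2, rng=None):
    """Rank lists under S_HS: S_ik ~ N(mu_ik, 1), and tau_k sorts the
    entities by decreasing score; indices are 0-based in the arrays."""
    rng = rng or np.random.default_rng()
    mu = np.zeros((n, m))
    for k in range(m):
        if (k + 1) > m / 2:                       # informative rankers
            for i in range(n1):                   # relevant entities
                mu[i, k] = (a_star + (b_star - a_star) * (k + 1) / m
                            + (n1 - (i + 1)) * delta_star)
    scores = rng.normal(mu, 1.0)                  # S_ik ~ N(mu_ik, 1)
    order = np.argsort(-scores, axis=0)           # entities by score, desc.
    tau = np.empty((m, n), dtype=int)
    for k in range(m):
        tau[k, order[:, k]] = np.arange(1, n + 1)
    return tau
\end{verbatim}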
\begin{table}[h] \small \centering \begin{tabular}{c|c|c|c|c|c|c|c|c|c|c} \hline & $\mu_1$&$\mu_2$&$\mu_3$&$\mu_4$&$\mu_5$&$\mu_6$&$\mu_7$&$\mu_8$&$\mu_9$&$\mu_{10}$\\ \hline $E_1$&0 &0&0&0&0& 3.7&3.9&4.1&4.3&4.5 \\ $E_2$&0 &0&0&0&0 &3.5 &3.7&3.9&4.1&4.3 \\ $E_3$&0 &0&0&0&0 & 3.3&3.5&3.7&3.9&4.1 \\ $E_4$&0 &0&0&0&0 &3.1 &3.3&3.5&3.7&3.9 \\ $E_5$&0 &0&0&0&0 & 2.9&3.1&3.3&3.5&3.7 \\ $E_6$&0 &0&0&0&0 &2.7 &2.9&3.1&3.3&3.5 \\ $E_7$&0 &0&0&0&0 &2.5 &2.7&2.9&3.1&3.3 \\ $E_8$&0 &0&0&0&0 &2.3 &2.5&2.7&2.9&3.1 \\ $E_9$&0 &0&0&0&0 & 2.1&2.3&2.5&2.7&2.9 \\ $E_{10}$&0 &0&0&0&0 &1.9 &2.1&2.3&2.5&2.7 \\ $E_{11}$&0 &0&0&0&0 &0 &0&0&0&0 \\ $\vdots$&$\vdots$ &$\vdots$&$\vdots$&$\vdots$&$\vdots$ &$\vdots$ &$\vdots$&$\vdots$&$\vdots$&$\vdots$ \\ $E_{100}$&0 &0&0&0&0 &0 &0&0&0&0 \\ \hline \end{tabular} \caption{The configuration matrix of the $\mu_{ik}$'s under $\mathcal{S}_{HS_1}$ with $m$=10, $n$=100 and $n_1$=10.} \label{Tab:HS1mu} \end{table} For each of the four simulation scenarios (i.e., $\mathcal{S}_{PM_1}$, $\mathcal{S}_{PM_2}$, $\mathcal{S}_{HS_1}$ and $\mathcal{S}_{HS_2}$), we fixed the true number of relevant entities $n_1=10$, but allowed the number of rankers $m$ and the total number of entities $n$ to vary, resulting in a total of 16 simulation settings ($\{scenarios: \mathcal{S}_{PM_1},\mathcal{S}_{PM_2},\mathcal{S}_{HS_1}, \mathcal{S}_{HS_2}\}\times\{m: 10, 20\}\times\{n: 100, 300\}\times\{n_1: 10\}$). Under each setting, we simulated 500 independent data sets to evaluate and compare the performances of different rank aggregation methods. \subsection{Methods in Comparison and Performance Measures} In addition to the proposed PAMA$_B$ and PAMA$_F$, we considered state-of-the-art methods in several classes, including the Markov chain-based methods MC$_1$, MC$_2$, MC$_3$ in \cite{Lin2010Space} and CEMC in \cite{2010Integration}, the partition-based method BARD in \cite{deng2014bayesian}, and the Mallows model-based methods MM and EMM in \cite{fan2019}. Classic naive methods based on summary statistics were excluded because they have been shown in previous studies to perform suboptimally, especially in cases where the base rankers are heterogeneous in quality. The Markov-chain-based methods, MM, and EMM were implemented via the \textit{TopKLists}, \textit{PerMallows} and \textit{ExtMallows} packages in R (https://www.r-project.org/), respectively. The code of BARD was provided by its authors. Let $\tau$ be the underlying true ranking list of all entities, $\tau_R=\{\tau(i):\ E_i\in U_R\}$ be the true relative ranking of relevant entities, $\hat\tau$ be the aggregated ranking obtained from a rank aggregation approach, $\hat\tau_R=\{\hat\tau(i):\ E_i\in U_R\}$ be the relative ranking of relevant entities after aggregation, and $\hat\tau_{n_1}$ be the top-$n_1$ list of $\hat\tau$. After obtaining the aggregated ranking $\hat\tau$ from a rank aggregation approach, we evaluated its performance by two measures, namely the \emph{recovery distance} $\kappa_{R}$ and the \textit{coverage} $\rho_R$, defined below: \begin{eqnarray*} \kappa_{R}&\triangleq& d_{\tau}(\hat{\tau}_R,\tau_R) + n_{\hat{\tau}} \times \frac{n+n_1+1}{2},\\ \rho_R&\triangleq&\frac{n_1 -n_{\hat{\tau}} }{n_1}, \end{eqnarray*} where $d_{\tau}(\hat{\tau}_R,\tau_R)$ denotes the Kendall tau distance between $\hat\tau_R$ and $\tau_R$, and $n_{\hat{\tau}}$ denotes the number of relevant entities that are classified as background entities in $\hat{\tau}$.
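In code, the two measures can be computed as in the sketch below (Python). We assume, as an illustrative convention, that the aggregated list is a dict \texttt{tau\_hat} mapping each entity to its rank, with 0 for entities placed in the background group; the Kendall tau distance is computed over the relevant entities that are retained, with each mis-classified relevant entity contributing the $\frac{n+n_1+1}{2}$ penalty.
\begin{verbatim}
from itertools import combinations

def kendall_tau(r1, r2):
    """Kendall tau distance between two rankings given as dicts
    entity -> rank over the same set of entities."""
    return sum(1 for a, b in combinations(list(r1), 2)
               if (r1[a] - r1[b]) * (r2[a] - r2[b]) < 0)

def recovery_and_coverage(tau_hat, true_rank, relevant, n, n1):
    """Recovery distance kappa_R and coverage rho_R; `relevant` is the
    set U_R and `true_rank` holds the true ranks of its entities."""
    kept = [e for e in relevant if tau_hat.get(e, 0) > 0]
    n_miss = n1 - len(kept)              # relevant entities lost to U_B
    kappa = (kendall_tau({e: tau_hat[e] for e in kept},
                         {e: true_rank[e] for e in kept})
             + n_miss * (n + n1 + 1) / 2.0)
    rho = (n1 - n_miss) / n1
    return kappa, rho
\end{verbatim}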
The recovery distance $\kappa_R$ considers the detailed rankings of all relevant entities plus mis-classification distances, while the coverage $\rho_R$ cares only about the identification of relevant entities without considering the detailed rankings. In the setting of PAMA, $\frac{n+n_1+1}{2}$ is the average rank of a background entity. The recovery distance increases if some relevant entities are mis-classified as background entities. Clearly, we expect a smaller $\kappa_R$ and a larger $\rho_R$ for a stronger aggregation approach. \subsection{Simulation Results} Table~\ref{Tab:recovery} summarizes the performances of the nine competing methods in the 16 different simulation settings, demonstrating that the proposed PAMA$_B$ and PAMA$_F$ outperform all the other methods by a significant margin in most settings and that PAMA$_B$ uniformly dominates PAMA$_F$. Figure~\ref{Fig:gamma} shows the quality parameter $\boldsymbol{\gamma}$ learned from the Partition-Mallows model in various simulation scenarios with $m=10$ and $n=100$, confirming that the proposed methods can effectively capture the quality difference among the rankers. The results of $\boldsymbol{\gamma}$ for other combinations of $(m,n)$ can be found in the Supplementary Material and are consistent with Figure~\ref{Fig:gamma}. \begin{table}[htp] \scriptsize \centering \begin{tabular}{c|cc|ccc|cc|cccc} \hline \multicolumn{3}{c|}{Configuration}&\multicolumn{3}{c|}{Partition-type Models}&\multicolumn{2}{c|}{{Mallows Models}} &\multicolumn{4}{c}{MC-based Models} \\ \cline{1-3} \cline{4-6} \cline{7-8} \cline{9-12} $\mathcal{S}$& $n$ &$m$ &PAMA$_F$&PAMA$_B$ &BARD & EMM & MM& MC$_1$ & MC$_2$ & MC$_3$ & CEMC\\ \cline{1-3} \cline{4-6} \cline{7-8} \cline{9-12} \multirow{8}{*}{$\mathcal{S}_{PM_1}$}&\multirow{2}{*}{100}&\multirow{2}{*}{10}& 24.5 & {\bf 15.2} & 57.1 & 51.7 & 103.2 & 338.4 & 163.1 & 198.6 & 197.8 \\ &&&[0.95] & {\bf[0.97]} & [0.91] & [0.89] & [0.81] & [0.36] & [0.69] & [0.63] & [0.62] \\ \cline{4-12} &\multirow{2}{*}{100}&\multirow{2}{*}{20}&2.6 & {\bf 0.3} & 42.1 & 22.8 & 44.2 & 466.6 & 88.9 & 121.2 & 114.7 \\ &&&[0.99] & {\bf[1.00]} & [0.95] & [0.95] & [0.93] & [0.11] & [0.82] & [0.78] & [0.77] \\\cline{4-12} &\multirow{2}{*}{300}&\multirow{2}{*}{10}& 17.4 & {\bf 4.0} & 180.0 & 683.3 & 519.2 & 1268.3 & 997.7 & 1075.8 & 1085.7 \\ &&&[0.99] & {\bf[1.00]} & [0.89] & [0.66] & [0.55] & [0.17] & [0.34] & [0.29] &[0.28] \\\cline{4-12} &\multirow{2}{*}{300}&\multirow{2}{*}{20} & 7.1& {\bf 3.2} & 122.3 & 124.4 & 157.1 & 1445.9 & 613.5 & 723.0 & 727.2\\ &&&{\bf [1.00]} & {\bf[1.00]} & [0.93]& [0.92] & [0.90] & [0.05] & [0.60] & [0.53] & [0.52] \\ \cline{1-12} \multirow{8}{*}{$\mathcal{S}_{PM_2}$}&\multirow{2}{*}{100}&\multirow{2}{*}{10}&90.0 & {\bf 66.6} & 115.2 & 108.3 & 152.9 & 404.3 & 285.5 & 307.2 & 313.8\\ &&&[0.82] & {\bf [0.86]} & [0.77] & [0.77]& [0.70] & [0.24] & [0.47] & [0.43] & [0.41]\\\cline{4-12} &\multirow{2}{*}{100}&\multirow{2}{*}{20}& 26.9 & {\bf 2.4} & 81.5 & 59.8 & 91.5 & 468.1 & 217.3 & 245.2 & 249.5\\ &&& [0.94] & {\bf[1.00]} & [0.85] & [0.87] &[0.82] & [0.11] & [0.60] & [0.55] &[0.53] \\\cline{4-12} & \multirow{2}{*}{300}&\multirow{2}{*}{10}& 81.1 & {\bf 26.8} & 468.4 & 609.8 & 472.1 & 1388.4 & 1294.7 & 1321.5 & 1328.4\\ &&&[0.95] & {\bf[0.98]} & [0.69] & [0.68] & [0.60] & [0.09] & [0.15] & [0.13] & [0.13] \\\cline{4-12} &\multirow{2}{*}{300}&\multirow{2}{*}{20}&77.2 &{\bf 3.4} & 313.6 & 267.5 & 337.0 & 1469.0 & 1205.9 & 1251.8 & 1258.9\\ &&&[0.95] & {\bf [1.00]} & [0.79] & [0.82] & [0.78] & [0.04] &
[0.21] & [0.18] & [0.18]\\\hline \multirow{8}{*}{$\mathcal{S}_{HS_1}$}&\multirow{2}{*}{100}&\multirow{2}{*}{10}&24.9 & {\bf 20.6} & 22.9 & 54.9 & 115.9 & 334.7 & 150.9 & 180.3 & 186.0 \\ &&&[0.97] & [0.98] & {\bf[0.99]} & [0.91] & [0.80] & [0.37] & [0.71] & [0.66] & [0.64] \\ \cline{4-12} &\multirow{2}{*}{100}&\multirow{2}{*}{20}& 18.7 & 15.6 & 22.8 & {\bf 8.7} & 33.4 & 498.8 & 46.7 & 64.1 & 60.8 \\ &&&[0.98] & [0.98] & {\bf[1.00]} & {\bf[1.00]} &[0.97] & [0.05] & [0.92] & [0.89] & [0.89] \\\cline{4-12} &\multirow{2}{*}{300}&\multirow{2}{*}{10}&172.0 & 159.8 & {\bf 37.9} & 205.5 & 490.6 & 1098.6 & 627.0 & 752.9 & 769.4 \\ &&&[0.89] & [0.90] & {\bf[0.99]} & [0.87] & [0.68] & [0.28] & [0.59] & [0.50] & [0.49] \\\cline{4-12} &\multirow{2}{*}{300}&\multirow{2}{*}{20}&7.4 & {\bf 7.0} & 22.7 & 11.4 & 114.1 & 1402.6 & 237.8 & 319.7 & 322.3\\ &&&{\bf [1.00]} & {\bf[1.00]} & {\bf[1.00]} & {\bf[1.00]} & [0.94] &[0.08] & [0.84] & [0.79] & [0.79] \\ \hline \multirow{8}{*}{$\mathcal{S}_{HS_2}$}&\multirow{2}{*}{100}&\multirow{2}{*}{10}&92.6 & 74.0 & {\bf 68.7} & 123.7 & 162.3 & 382.4 & 228.2 & 250.2 & 256.6\\ &&&[0.83] & [0.86] & {\bf[0.88]} & [0.77] & [0.70] & [0.27] & [0.56] & [0.52] & [0.50] \\ \cline{4-12} &\multirow{2}{*}{100}&\multirow{2}{*}{20}&24.4 & { 20.0} & 22.2 & {\bf 12.4} & 38.3 & 500.3 & 87.5 & 103.5 & 102.9 \\ &&&[0.96] & [0.97] & {\bf[1.00]} & [0.99] &[0.95] & [0.04] & [0.83] & [0.80] & [0.80] \\\cline{4-12} &\multirow{2}{*}{300}&\multirow{2}{*}{10}& 319.1 & 463.8 & {\bf 245.6} & 516.9 & 683.5 & 1267.9 & 998.0 & 1076.0 & 1085.5 \\ &&&[0.79] & [0.69] & {\bf [0.84]} & [0.66] & [0.55] & [0.17] & [0.34] & [0.29] & [0.28] \\\cline{4-12} &\multirow{2}{*}{300}&\multirow{2}{*}{20}& 8.7 & {\bf 8.0} & 23.2 & 30.3& 155.5 & 1430.7 & 437.6 & 516.2 & 523.3\\ &&&{\bf[1.00]} & {\bf[1.00]} & {\bf[1.00]} & [0.99] & [0.91] & [0.06] & [0.71] & [0.66]& [0.65] \\ \hline \end{tabular} \caption{Average recovery distances [coverages] of different methods based on 500 independent replicates under different simulation scenarios.} \label{Tab:recovery} \end{table} \begin{figure}[htp] \centering \includegraphics[width=0.98\linewidth]{gamman100m10.pdf} \caption{(a) The boxplots of $\{\bar\gamma_k\}$ estimated by PAMA$_B$ with $m=10$ and $n=100$. (b) The boxplots of $\{\hat\gamma_k\}$ estimated by PAMA$_F$ with $m=10$ and $n=100$. Each column denotes a scenario setting. The results were obtained from 500 independent replicates.} \label{Fig:gamma} \end{figure} Figure~\ref{Fig:n100m10} (a) shows the boxplots of the recovery distances and the coverages of the nine competing methods in the four simulation scenarios with $m=10$, $n=100$, and $n_1=10$. The five leftmost methods outperform the other four by a significant gap, and the PAMA-based methods generally perform the best. Figure~\ref{Fig:n100m10} (b) confirms that the methods based on the Partition-Mallows model enjoy the same capability as BARD in detecting quality differences between informative and non-informative rankers. However, while both BARD and PAMA can further discern quality differences among the informative rankers, EMM fails at this more subtle task. Similar figures for other combinations of $(m,n)$ are provided in the Supplementary Material, highlighting results consistent with Figure~\ref{Fig:n100m10}. \begin{figure}[htp] \centering \includegraphics[width=\linewidth]{n100m10EMM.pdf} \caption{Boxplots of the rank aggregation results of 500 replications obtained from different methods under various scenarios with $m$=10, $n$=100, and $n_1$=10.
(a) Recovery distances in log scale and coverage obtained from nine algorithms. (b) Quality parameters obtained by Partition-type models and EMM.} \label{Fig:n100m10} \end{figure} \subsection{Robustness to the Specification of $n_1$} We need to specify $n_1$, the number of relevant entities, when applying PAMA$_B$ or PAMA$_F$. In many practical problems, however, there may not be strong prior information on $n_1$, and there may not even be a clear distinction between relevant and background entities. To examine the robustness of the algorithm with respect to the specification of $n_1$, we designed a simulation setting $\mathcal{S}_{HS_3}$ to mimic this no-clear-cut scenario and investigated how the performance of PAMA is affected by the choice of $n_1$. Formally, $\mathcal{S}_{HS_3}$ assumes that $\tau_k=sort(i\in U\ by\ S_{ik}\downarrow)$, where $S_{ik}\sim N(\mu_{ik},1)$, following the same data generating framework as $\mathcal{S}_{HS}$ defined in Section \ref{sec:SimuSetting}, with $\mu_{ik}$ being replaced by $$\mu_{ik}=\left\{ \begin{array}{ll} 0, & \mbox{if } k\leq\frac{m}{2}, \\ \frac{2\times a^* \times k/m}{1+e^{-b^*\times (70-i)}}, & \mbox{otherwise}, \\ \end{array} \right. $$ where $a^*=50$ and $b^*=0.1$. Different from $\mathcal{S}_{HS_1}$ and $\mathcal{S}_{HS_2}$, where $\mu_{ik}$ jumps from 0 to a positive number as $i$ ranges from background to relevant entities, in the $\mathcal{S}_{HS_3}$ scenario $\mu_{ik}$ decreases smoothly as a function of $i$ for each informative ranker $k$. In such cases, the concept of ``relevant'' entities is not well-defined. We simulate 500 independent data sets from $\mathcal{S}_{HS_3}$ with $n=100$ and $m=10$. For each data set, we try different specifications of $n_1$ ranging from 10 to 50 and compare PAMA to several competing methods based on their performance on recovering the top-$n_1$ list $[E_1\preceq E_2\preceq\cdots \preceq E_{n_1}]$, which is still well-defined based on the simulation design. The results summarized in Table~\ref{Tab:misspecifiedd} show that no matter which $n_1$ is specified, the partition-type models consistently outperform all the competing methods in terms of a lower recovery distance from the true top-$n_1$ list of items, i.e., $[E_1\preceq E_2\preceq\cdots \preceq E_{n_1}]$. Figure \ref{fig:consistentd} illustrates in detail the average aggregated rankings of the top-10 entities by PAMA as $n_1$ increases, suggesting that PAMA is able to figure out the correct rankings of the top entities effectively. These results give us confidence that PAMA is robust to misspecification of $n_1$.
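For completeness, the smooth mean-score function defining $\mathcal{S}_{HS_3}$ reads as follows in code (1-based $i$ and $k$, as in the formula above):
\begin{verbatim}
import numpy as np

def mu_HS3(i, k, m, a_star=50.0, b_star=0.1):
    """mu_ik for scenario S_HS3: zero for the non-informative first half
    of rankers, and a smooth sigmoid decreasing in i otherwise."""
    if k <= m / 2:
        return 0.0
    return 2.0 * a_star * (k / m) / (1.0 + np.exp(-b_star * (70.0 - i)))
\end{verbatim}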
\begin{table} \scriptsize \centering \begin{tabular}{ccc|ccc|cc|cccc} \hline \multicolumn{3}{c|}{Configuration}&\multicolumn{3}{c|}{Partition-type Models}&\multicolumn{2}{c|}{{Mallows Models}} &\multicolumn{4}{c}{MC-based Models} \\ \cline{1-3} \cline{4-6} \cline{7-8} \cline{9-12} $n$ &$m$ &$n_1$& PAMA$_F$&PAMA$_B$ &BARD & EMM & MM& MC$_1$ & MC$_2$ & MC$_3$ & CEMC\\ \cline{1-3} \cline{4-6} \cline{7-8} \cline{9-12} \multirow{2}{*}{100}&\multirow{2}{*}{10} &\multirow{2}{*}{10}&44.8 & {\bf 34.6} & 42.6 & 61.5 & 227.7 & 423.8 & 45.6 & 199.1 & 241.3 \\ &&&[0.90] & [0.93] & {\bf [0.96]} & [0.88] & [0.58] & [0.20] & [0.92] & [0.63] & [0.54] \\ \cline{4-12} \multirow{2}{*}{100}&\multirow{2}{*}{10}&\multirow{2}{*}{20}&39.2 & {\bf 33.9} & 94.2& 107.0 & 308.6 & 764.6 & 52.3 & 268.9 & 372.9 \\ &&& [0.95] & [0.96] & {\bf [0.99]} & [0.90] & [0.75] & [0.33] & [0.96] & [0.78] & [0.67] \\\cline{4-12} \multirow{2}{*}{100}&\multirow{2}{*}{10}&\multirow{2}{*}{30}& {\bf 27.5} & 29.2 & 207.4 & 126.2 & 360.6 & 1040.2 & 67.8 & 325.4 & 445.0 \\ &&&{\bf [0.98]} & {\bf [0.98]} & {\bf [0.98]} & [0.93] & [0.83] & [0.44] & [0.96] & [0.84] & [0.77] \\\cline{4-12} \multirow{2}{*}{100}&\multirow{2}{*}{10}& \multirow{2}{*}{40}& {\bf 16.0} & 17.4 & 363.9 & 131.6 & 408.1 & 1274.1 & 83.1 & 382.5 & 486.9 \\ &&&{\bf [0.99]} & {\bf [0.99]} & [0.98] & [0.95] & [0.87] & [0.54] & [0.97] & [0.87] & [0.83] \\ \cline{4-12} \multirow{2}{*}{100}&\multirow{2}{*}{10}& \multirow{2}{*}{50}& {\bf 8.8} & 9.3 & 565.3 & 134.6 & 452.2 & 1484.2 & 109.2 & 446.4 & 524.9 \\ &&&{\bf [1.00]} & {\bf [1.00]} & [0.99] & [0.97] & [0.90] & [0.62] & [0.96] & [0.89] & [0.88] \\ \cline{1-12} \end{tabular} \caption{Average recovery distances [coverages] of different methods based on 500 independent replicates under scenario $\mathcal{S}_{HS_{3}}$.} \label{Tab:misspecifiedd} \end{table} \begin{figure}[ht] \centering \includegraphics[width=0.6\textwidth, height=0.4\textheight]{consistentd1.pdf} \caption{Average aggregated rankings of the top-10 entities by PAMA as $n_1$ increases from 10 to 50 for simulated data sets generated from $\mathcal{S}_{HS_3}$.} \label{fig:consistentd} \end{figure} Notably, although PAMA and BARD achieve comparable coverage as shown in Table \ref{Tab:misspecifiedd}, PAMA dominates BARD uniformly in terms of a much smaller recovery distance in all cases, suggesting that PAMA is capable of figuring out the detailed ranking of relevant entities that is missed by BARD. In fact, since BARD relies only on $\rho_i \triangleq P(I_i = 1 \mid \tau_1, \cdots , \tau_m)$ to rank entities, in cases where the signal to distinguish the relevant and background entities is strong, many $\rho_i$'s are very close to 1, resulting in a nearly ``random'' ranking among the top relevant entities. Theoretically, if all relevant entities are recognized correctly but ranked randomly, the corresponding recovery distance would increase with $n_1$ on the order of $\mathcal{O}(n_1^2)$, which matches well with the increasing trend of the recovery distance of BARD shown in Table \ref{Tab:misspecifiedd}. We also tested the model's performance when there is a true $n_1$ but it is mis-specified in our algorithm. We set $n_1$ to 8, 10 and 18, respectively, for setting $\mathcal{S}_{HS_1}$ with $n=100$ and $m=10$, where the true $n_1$ is 10 (the first ten entities). Figure \ref{fig:misspecifiedd} shows boxplots of the estimated $\mathcal{I}$ for each mis-specified case. For visualization purposes, we only show the boxplots for $E_1$ to $E_{20}$.
The other entities show patterns similar to that of $E_{20}$. The figure shows a robust behavior of PAMA$_B$ as we vary the specification of $n_1$. It also shows that the results are slightly better if we specify an $n_1$ that is moderately larger than its true value. Results for other mis-specified cases, such as $n_1 = 5, 12$ and $15$, are consistent and can be found in the Supplementary Material. \begin{figure} \centering \includegraphics[width=0.7 \linewidth, height=0.6 \linewidth]{misspecd.pdf} \caption{Boxplots of the estimated $\mathcal{I}$ from 500 replications under the setting of $\mathcal{S}_{HS_1}$ with $n_1$ being set as 8, 10 and $18$, respectively. The true $n_1$ is $10$. The vertical lines separate relevant entities (left) from background ones. The Y-axis shows the logarithm of the entities' ranks. The rank of a background entity is replaced by the average background rank $\frac{100+10+1}{2}$. The triangle denotes the mean rank of each entity.} \label{fig:misspecifiedd} \end{figure} \section{Real Data Applications} \label{sec:realdata} \subsection{Aggregating Rankings of NBA Teams} We applied PAMA$_B$ to the NBA-team data analyzed in \cite{deng2014bayesian}, and compared it to competing methods in the literature. The NBA-team data set contains 34 ``predictive'' power rankings of the 30 NBA teams in the 2011-2012 season. The 34 ``predictive'' rankings were obtained from 6 professional rankers (sports websites) and 28 amateur rankers (college students), and the data quality varies significantly across rankers. More details of the dataset can be found in Table 1 of \cite{deng2014bayesian}. \begin{figure}[htp] \centering \includegraphics[width=\linewidth]{NBAsymmetric.pdf} \caption{Results from PAMA$_B$ for the NBA-team dataset. (a) boxplots of posterior samples of $\gamma_k$. (b) barplots of $\bar{\mathcal{I}}_i$, where the vertical line divides the NBA teams into the Western and Eastern Conferences. (c) the trace plot of the log-likelihood function.} \label{Fig:NBA} \end{figure} Figure \ref{Fig:NBA} displays the results obtained by PAMA$_B$ (the partial-list version with $n_1$ specified as 16). Figure \ref{Fig:NBA} (a) shows the posterior distributions, as boxplots, of the quality parameter of each involved ranker. Figure \ref{Fig:NBA} (b) shows the posterior distribution of the aggregated power ranking of each NBA team. A posterior sample reporting ``0'' as the rank of an entity means that the entity is assigned to the background group; for visualization purposes we replace ``0'' by the average background rank, $\frac{n+n_1+1}{2}=\frac{30+16+1}{2}=23.5$. The final set of 16 playoff teams is shown in blue, while the champion of that season (i.e., the Heat) is shown in red. Figure \ref{Fig:NBA} (c) shows the trace plot of the log-likelihood of the PAMA model along the MCMC iterations. Comparing the results to Figure 8 of \cite{deng2014bayesian}, we observe the following facts: (1) PAMA$_B$ can successfully discover the quality difference of rankers, as BARD does; (2) PAMA$_B$ can not only pick up the relevant entities effectively like BARD, but also rank the discovered relevant entities reasonably well, which cannot be achieved by BARD; (3) PAMA$_B$ converges quickly in this application. We also applied other methods, including MM, EMM and the Markov-chain-based methods, to this data set. We found that none of these methods could discern the quality difference of rankers as successfully as PAMA and BARD.
Moreover, using the team ranking at the end of the regular season as the surrogate true power ranking of these NBA teams in the season, we found that PAMA also outperformed BARD and EMM by reporting an aggregated ranking list that is the closest to the truth. Table \ref{Tab:NBArankresults} provides the detailed aggregated ranking lists inferred by BARD, EMM and PAMA, respectively, as well as their coverage of and Kendall $\tau$ distance from the surrogate truth. Note that the Kendall $\tau$ distance is calculated for the Eastern teams and Western teams separately, because the NBA Playoffs proceed in the Eastern Conference and the Western Conference in parallel until the NBA Finals, in which the two conference champions compete for the NBA champion title, making it difficult to validate the rankings between Eastern and Western teams. \begin{table}[ht] \footnotesize \centering \begin{tabular}{|c|c:c|c:c|c:c|c:c|} \hline Ranking& \multicolumn{2}{c|}{Surrogate truth}&\multicolumn{2}{c|}{BARD}&\multicolumn{2}{c|}{EMM}&\multicolumn{2}{c|}{PAMA}\\ \hline & Eastern & Western & Eastern & Western & Eastern & Western & Eastern & Western \\ \hline 1&Bulls& Spurs&\emph{Heat}&\emph{Thunder}&Heat&Thunder&Heat&Thunder\\ 2&Heat&Thunder&\emph{Bulls}&\emph{Mavericks}&Bulls&Mavericks&Bulls&Mavericks\\ 3&Pacers&Lakers&\emph{Celtics}&\emph{Clippers}&Knicks&Clippers&Celtics&Lakers\\ 4&Celtics&Grizzlies&\emph{Knicks}&\emph{Lakers}&Celtics&Lakers&Knicks&Clippers\\ 5&Hawks&Clippers&\emph{Magic}&\emph{Spurs}&Magic&Spurs&Magic&Spurs\\ 6&Magic&Nuggets&\emph{Pacers}&\emph{Grizzlies}&Pacers&Grizzlies&Hawks&Nuggets\\ 7&Knicks&Mavericks&76ers&\emph{Nuggets}&76ers&Nuggets&Pacers&Grizzlies\\ 8&76ers&Jazz&Hawks&Jazz$^*$&Hawks$^*$&Jazz$^*$&76ers&Jazz$^*$\\ \hline Kendall $\tau$ &-&-&14.5&10.5&9&10&8&10\\ \hline Coverage & \multicolumn{2}{c|}{-}& \multicolumn{2}{c|}{$\frac{15}{16}$}&\multicolumn{2}{c|}{$\frac{14}{16}$}& \multicolumn{2}{c|}{$\frac{15}{16}$} \\ \hline \end{tabular} \caption{Aggregated power ranking of the NBA teams inferred by BARD, EMM, and PAMA, respectively, and the corresponding coverage of and the Kendall $\tau$ distance from the surrogate true rank based on the performances of these teams in the regular season. The teams in italics are those with equal posterior probabilities of being in the relevant group, and the teams with an asterisk are those that were misclassified into the background group.} \label{Tab:NBArankresults} \end{table} \subsection{Aggregating Rankings of NFL Quarterback Players with the Presence of Covariates} Our next application targets the NFL-player data reported by \cite{li2020bayesian}. The NFL-player data contains 13 predictive power rankings of 24 NFL quarterback players. The rankings were produced by 13 experts based on the performance of the 24 NFL players during the first 12 weeks of the 2014 season. The dataset also contains covariates for each player summarizing the performances of these players during the period, including the \emph{number of games played} (G), \emph{pass completion percentage} (Pct), the \emph{number of passing attempts per game} (Att), \emph{average yards per carry} (Avg), \emph{total receiving yards} (Yds), \emph{average passing yards per attempt} (RAvg), the \emph{touchdown percentage} (TD), the \emph{intercept percentage} (Int), \emph{running attempts per game} (RAtt), \emph{running yards per attempt} (RYds) and the \emph{running first down percentage} (R1st). Details of the dataset can be found in Table 2 of \cite{li2020bayesian}.
\begin{figure}[htp] \centering \includegraphics[width=\linewidth]{NFLcov.pdf} \caption{Key results from PAMA$_B$ for the NFL-player dataset. (a) Boxplots of posterior samples of $\gamma_k$. (b) Barplots of $\bar{\mathcal{I}}_i$. (c) Trace plot of the log-likelihood. (d) Barplots of posterior probabilities for each coefficient to be positive.} \label{Fig:NFLcov} \end{figure} Here, we set $n_1=12$ in order to find which players are above average. We analyzed the NFL-player data with PAMA$_B$ (the covariate-assisted version) and the results are summarized in Figure \ref{Fig:NFLcov}: (a) the posterior boxplots of the quality parameter for all the rankers; (b) the barplot of $\bar{\mathcal{I}}_i$ for all the NFL players in descending order; (c) the trace plot of the log-likelihood of the model; and (d) the barplot of the probabilities $P({\psi}_j >0)$, with the covariates arranged from left to right in decreasing order of this probability. From Figure \ref{Fig:NFLcov} (a), we observe that rankers 1, 3, 4 and 5 are generally less reliable than the other rankers. In the study of the same dataset in \cite{li2020bayesian}, the authors assumed that the 13 rankers fall into three quality levels, and reported that seven rankers (i.e., 2, 6, 7, 8, 9, 10 and 13) are of a higher quality than the other six (see Figure 7 of \cite{li2020bayesian}). Interestingly, according to Figure \ref{Fig:NFLcov} (a), the PAMA algorithm suggested exactly the same set of high-quality rankers. Meanwhile, ranker 2 has the lowest quality among the seven high-quality rankers in both studies. From Figure \ref{Fig:NFLcov} (b), a consensus ranking list can be obtained. Our result is consistent with that of Figure 6 in \cite{li2020bayesian}. Figure \ref{Fig:NFLcov} (d) shows that six covariates are likely to have positive effects. \begin{table}[htb] \def0.8{0.8} \centering \begin{tabular}{|c|c|c|c|c|} \hline Ranking&Gold standard & BARD & EMM& PAMA\\ \hline 1 & Aaron R. & \emph{Andrew L.} & Andrew L.& Andrew L.\\ 2 & Andrew L. & \emph{Aaron R.}& Aaron R.& Aaron R.\\ 3 & Ben R. & \emph{Tom B.} & Tom B.& Tom B.\\ 4 &Drew B. & \emph{Drew B.} &Ben R. & Drew B.\\ 5 &Russell W. & \emph{Ben R.} & Drew B.& Ben R.\\ 6 &Matt R. & \emph{Ryan T.} & Ryan T. & Ryan T.\\ 7 &Ryan T. &Russell W. & Russell W. &Russell W.\\ 8 &Tom B.& Philip R.* & Philip R.*& Philip R.\\ 9 &Eli M. &Eli M.* & Eli M.* &Eli M.*\\ 10 &Philip R. &Matt R.* &Matt R.* &Matt R.*\\ \hline R-distance& - &35.5&32&25\\ \hline Coverage& - &0.7 &0.7& 0.8\\ \hline \end{tabular} \caption{Top players in the aggregated rankings inferred by BARD, EMM and PAMA. The entities in italics are those with equal posterior probabilities of being in the relevant group, and the players with an asterisk are those that were mis-classified into the background group.} \label{tab:rankingsofNFL} \end{table} Using the Fantasy points of the players (\url{https://fantasy.nfl.com/research/players}) derived at the end of the 2014 NFL season as the surrogate truth, the recovery distance and coverage of the aggregated rankings by different approaches can be calculated to evaluate the performances of the different approaches. Since the Fantasy points of two top NFL players, Peyton Manning and Tony Romo, are missing for unknown reasons, we excluded them from the analysis and only report results for the top 10 positions instead of the top 12. Table~\ref{tab:rankingsofNFL} summarizes the results, demonstrating that PAMA outperformed the other two methods.
\section{Conclusion and Discussion} \label{sec:discussion} The proposed Partition-Mallows model embeds the classic Mallows model into a partition modeling framework developed earlier by \cite{deng2014bayesian}, which is analogous to the well-known ``spike-and-slab'' mixture distribution often employed in Bayesian variable selection. Such a nontrivial ``mixture'' combines the strengths of both the Mallows model and BARD's partition framework, leading to a stronger rank aggregation method that can not only learn the quality variation of rankers and distinguish relevant entities from background ones effectively, but also provide an accurate ranking estimate of the discovered relevant entities. Compared to other frameworks in the literature for rank aggregation with heterogeneous rankers, the Partition-Mallows model enjoys more accurate results with better interpretability at the price of a moderate increase in computational burden. We also show that the Partition-Mallows framework can easily handle partial lists and incorporate covariates in the analysis. Throughout this work, we assume that the number of relevant entities $n_1$ is known. This is reasonable in many practical problems where a specific $n_1$ can be readily determined according to research demands. Empirically, we found that the ranking results are insensitive to the choice of $n_1$ over a wide range of values. If needed, we may also determine $n_1$ according to a model selection criterion, such as AIC or BIC. In the PAMA model, $\pi(\tau_k^0 \mid \tau_k^{0\mid 1})$ is assumed to be a uniform distribution. If the detailed ranking of the background entities is of interest, we can modify the conditional distribution $\pi(\tau_k^0 \mid \tau_k^{0\mid 1})$ to be the Mallows model or other models. A quality parameter can still be incorporated to control the interaction between relevant entities and background entities. The resulting likelihood function becomes more complicated, but is still tractable. In practice, the assumption of independent rankers may be violated. In the literature, a few approaches have been proposed to detect and handle dependent rankers. For example, \cite{deng2014bayesian} proposed a hypothesis-testing-based framework to detect pairs of over-correlated rankers and a hierarchical model to accommodate clusters of dependent rankers; \cite{JohnsonS2019} adopted an extended Dirichlet process and a similar hierarchical model to achieve simultaneous ranker clustering and rank aggregation inference. Similar ideas can be incorporated in the PAMA model as well to deal with non-independent rankers. \section*{Acknowledgement} We thank Miss Yuchen Wu for helpful discussions at the early stage of this work and the two reviewers for their insightful comments and suggestions that helped us improve the paper greatly. This research is supported in part by the National Natural Science Foundation of China (Grants 11771242 \& 11931001), the Beijing Academy of Artificial Intelligence (Grant BAAI2019ZD0103), and the National Science Foundation of USA (Grants DMS-1903139 and DMS-1712714). The author Wanchuang Zhu is partially supported by the Australian Research Council (Data Analytics for Resources and Environments, Grant IC190100031). \section*{Supplementary Materials}
\section{Introduction} The overall agreement between the Standard Model (SM) and data in the quark sector is particularly impressive: Flavour-Changing Neutral Currents (FCNC) are as small as predicted in the SM, and the Kobayashi-Maskawa (KM) mechanism has been proven to describe the observed CP-violating phenomena in flavour physics with a good accuracy. One example of this convergence is provided by the global fit of the CKM matrix elements~\cite{SM_CKM} illustrated in Figure~\ref{ckm09}. All constraints (either loop and tree observables or CP-violating and CP-conserving quantities) point towards a unique solution, which proves the CKM mechanism to be at work in flavour transitions (within the present accuracy) and establishes the KM mechanism as a dominant source of CP violation in $K$- and $B$-meson systems. The consistency between these predictions is deeply related to the fact that a single Higgs doublet provides a mass to all fermions through Yukawa couplings in the SM and that all but one of the CP-violating phases can be rotated away by a redefinition of the fields. \begin{figure}[h] \begin{center} \epsfig{file=plots/rhoeta_large_global_Vub.eps,height=120mm,width=120mm} \caption{\it \small Superimposed individual constraints at 95\% CL for the SM global fit. The yellow bean is the solution driven by the combination of all individual constraints at 95\% CL. Regions outside the red circle are excluded at 95\% CL.\label{ckm09}} \end{center} \end{figure} Extensions of the SM are often based on the introduction of additional fields interacting with the quarks (gauge bosons of new interactions, supersymmetric particles, technifermions~\cite{revNP}\ldots). These new fields bring along new arbitrary parameters, often inducing dangerous FCNC processes, as well as new CP-violating phases that cannot generally be rotated away. Therefore, flavour physics and CP-violation data provide stringent constraints on the parameters of these extensions, which generally yield fine-tuning problems. From this point of view, a rather minimal extension of the SM, with a limited number of new parameters to fix, consists of the two-Higgs-doublet models (2HDM). Indeed, in the SM, the same doublet is used to couple the left-handed fermion doublets with right-handed up- and down-type quarks at the same time (exploiting the fact that the representation of the Higgs doublet under $SU(2)$ is pseudoreal, and that the hypercharges of the scalar doublets coupled to right-handed up- and down-type quarks are opposite). This choice is imposed by no argument other than economy, and one can introduce two complex doublets $\phi_1$ and $\phi_2$ of opposite hypercharge, rather than a single doublet $\phi$~\cite{2hdmref1,2hdmref2,2hdmref3,2hdmref4,2hdmref5,2hdmref6}. One assumes that electroweak symmetry breaking occurs because the neutral components of the two doublets acquire two {\it a priori} different vacuum expectation values: \begin{equation} \langle 0|\phi_1|0\rangle = \left(\begin{array}{c} 0\\ v_1/\sqrt{2}\end{array}\right), \qquad \langle 0|\phi_2|0\rangle = \left(\begin{array}{c} v_2/\sqrt{2}\\ 0\end{array}\right), \end{equation} denoted respectively $v_1$ and $v_2$ (with the constraint $v_1^2+v_2^2=v^2$, where $v$ is the SM Higgs vacuum expectation value). 2HDMs contain eight degrees of freedom in the Higgs sector, three of which are used to provide a longitudinal polarisation to the weak gauge bosons $W$ and $Z$.
Five real fields remain: two charged Higgs fields $H^\pm$, a neutral pseudoscalar Higgs $A$ and two neutral scalar Higgs bosons ($h^0$ and $H^0$). The additional parameters needed to describe this SM extension are the masses of $H^\pm$, $H^0$ and $A$, the ratio of vacuum expectation values $\tan\beta=v_2/v_1$ and an angle describing the mixing between $h^0$ and $H^0$. Different versions of the 2HDM are labeled according to the couplings of the Higgs doublets to the quarks~\cite{2hdmref4}: type I corresponds to $\phi_1$ coupling to both up- and down-type quarks whereas $\phi_2$ does not couple to any quark, type II corresponds to $\phi_1$ coupling to down-type quarks whereas $\phi_2$ couples to up-type quarks (and to leptons), and type III to $\phi_1$ and $\phi_2$ both coupling to both types of quarks. Among these various possibilities, the 2HDM Type II is particularly alluring because of its resemblance to the SM in the quark sector. One has two Yukawa matrices $y^{d,u}$ describing the couplings among quarks (and one for the lepton sector, $y^e$): \begin{equation} \mathcal{L}_{II,Y}=-\bar{Q}_L \phi_1 y^d D_R - \bar{Q}_L \phi_2 y^u U_R - \bar{L}_L \phi_2 y^e E_R +h.c., \end{equation} where $Q_L$ and $L_L$ denote left-handed fermion doublets, and $D_R$, $U_R$, $E_R$ down-type, up-type and charged-lepton right-handed singlets, defined in a similar way as in the SM (actually, the SM is recovered through the identification $\phi_2=i\sigma_2\phi_1^*$, with $\sigma_2$ the second Pauli matrix). One has to re-express these couplings in terms of mass eigenstates. The structure of the Yukawa terms yields a SM-like structure for the quark sector: there is a CKM matrix which is the only source of flavour-changing interactions and there are no flavour-changing neutral currents at tree level. But there are new flavour-changing charged interactions, corresponding to the exchange of a charged Higgs rather than a $W$ (obviously, there are also interactions of quarks with neutral Higgs fields, as well as couplings of the Higgs fields to the leptons). Once quark and Higgs fields are expressed in terms of mass eigenstates, one obtains the following charged-Higgs interactions for quarks and leptons: \begin{equation}\label{HiggsLag} \mathcal{L}_{II,H^+}= -\frac{g}{\sqrt{2}}\left[\sum_{ij}\left(\tan\beta\frac{m_{d_j}}{M_W}\bar{u}_{Li}V_{ij}d_{Rj} +\cot\beta\frac{m_{u_i}}{M_W}\bar{u}_{Ri}V_{ij}d_{Lj}\right) +\sum_{j}\tan\beta\frac{m_{\ell_j}}{M_W}\bar{\nu}_{Lj}\ell_{Rj} \right]H^++h.c. \end{equation} Therefore, the 2HDM Type II provides a very interesting extension of the SM: it exhibits the same CKM structure and has a natural mechanism suppressing the FCNCs that plague many other models, but it exhibits a different structure for charged currents through the addition of new (scalar and pseudoscalar) interactions. Furthermore, it relies on a limited number of additional parameters, i.e., the mass $m_{H^+}$ of the charged Higgs and the ratio $\tan\beta=v_2/v_1$ of the vacuum expectation values (if we restrict our study to charged currents). In many situations, the change induced by the 2HDM Type II amounts to a redefinition of some of the parameters already occurring in the SM expressions, with a new dependence on $m_{H^+}$ and $\tan\beta$.
Finally, in addition to this predictive virtue, the 2HDM Type II is embedded in the simplest supersymmetric extensions of the SM (MSSM), at least at tree level~\cite{2hdmref2} (for large $\tan\beta$, loop effects might lead supersymmetric theories to coincide with a 2HDM Type III rather than the 2HDM Type II). Searches for such a charged Higgs are obviously among the prospects of the LHC experiments ATLAS and CMS~\cite{2hdmsearch}. Since decays mediated by a weak charged current are an excellent laboratory to search for a charged Higgs boson contributing in addition to the $W^{\pm}$ boson, we have collected the measured decays potentially sensitive to charged-Higgs contributions, for which a good control of the theoretical hadronic uncertainties can be achieved. A combined analysis of their branching ratios in the light of the 2HDM Type II is then performed within the frequentist statistical scheme developed by the CKMfitter group~\cite{ThePapII}. It is convenient to categorize these observables as follows:
\begin{enumerate}
\item The leptonic decays of mesons mediated by quark annihilation at tree level: $\Gamma[K\rightarrow\mu\nu]/\Gamma[\pi\rightarrow\mu\nu]$, ${\cal B}[D\rightarrow\mu\nu]$, ${\cal B}[D_s\rightarrow\mu\nu]$, ${\cal B}[D_s\rightarrow\tau\nu]$ and ${\cal B}[B\rightarrow\tau\nu]$, where ${\cal B}$ stands for branching ratio and $\Gamma$ for the decay width. In addition we also consider the strange hadronic decay of the $\tau$ lepton, $\tau \rightarrow K \nu$, through the ratio $\Gamma[\tau\rightarrow K\nu]/\Gamma[\tau\rightarrow\pi\nu]$, which can be seen as a reversed leptonic decay.
\item The semileptonic decays $B \rightarrow D \tau \nu$, through the ratio ${\cal B}[B\rightarrow D\tau\nu]/{\cal B}[B\rightarrow D \ell \nu]$, and $K \rightarrow \pi \ell \nu$, through the ratio ${\cal B}[ K \rightarrow \pi \mu \nu]/{\cal B}[ K \rightarrow \pi e \nu]$.
\item The $B^0_d$ and $B^0_s$ oscillation frequencies, $\Delta m_{d}$ and $\Delta m_{s}$.
\item The $Z$ partial width into $b$ quarks, $R_b =\Gamma[Z\rightarrow b\bar{b}]/\Gamma[Z\rightarrow {\rm hadrons}]$, which is sensitive to charged currents through radiative corrections to the $Z b \bar b$ vertex.
\item The measurement of the FCNC radiative decay $b \to s \gamma$ through the ratio ${\cal B}[\bar{B}\rightarrow X_s\gamma]/{\cal B}[\bar{B}\rightarrow X_ce\bar{\nu}]$.
\end{enumerate}
Most of these observables, either from tree (first two categories) or loop (last three) contributions, are established individual benchmarks to constrain or measure the parameter space of the 2HDM Type II or cognate supersymmetric models~\cite{2hdmexp,Gfitter}. The measurements of these observables and their experimental uncertainties are displayed in Figure~\ref{fig-smdev} together with their SM predictions at $95\%$~CL. Figure~\ref{fig-smdev} shows an overall fair agreement between the various observables and their SM predictions, with the notable exception of the tauonic $B^+$ decay $B\rightarrow\tau\nu$ (the deviation nevertheless remains below $3\,\sigma$). Let us notice that the recent CLEO measurements~\cite{cleo_update} of $D_s\rightarrow\mu\nu$ and $D_s\rightarrow\tau\nu$ decays are now in agreement with their SM predictions, in contrast to the situation reported at the time of the 2008 summer conferences~\cite{cleo_summer}.
One of the main objectives of the analysis reported in this article is to determine whether a 2HDM Type II can accommodate the discrepancy coming from the large value of $B\rightarrow\tau\nu$ as measured by the $B$ factories. A second aim is to determine the allowed region of the parameter space ($m_{H^+}$, $\tan \beta$) as constrained by the above set of low-energy observables, corresponding to $\Delta F=1$ tree processes or loop-induced processes featuring a single charged Higgs exchange. This region of parameter space can be compared to the limits set by LEP from the (absence of) direct production of charged Higgs bosons. \begin{figure} \begin{center} \epsfig{file=plots/NoObs-deviation.eps,height=80mm,width=80mm} \caption{\it \small Comparison of the measurements relevant to constrain the 2HDM Type II and their predicted values in the SM. The black dots indicate the deviation of the experimental result from its prediction, assuming Gaussian distributed errors for both theory and experiment. The deviation is expressed as a signed significance, with positive values indicating a measurement higher than its prediction. Note that this figure is for illustration purposes only, since all the errors are treated as Gaussian here. In the rest of the paper, we use the \rfit\ prescription to deal separately with statistical and systematic errors.\label{fig-smdev}} \end{center} \end{figure} \section{Observables and theoretical context} This section details the theoretical predictions for our set of observables in the SM and how they are modified in the context of charged Higgs contributions. A summary of the relevant measurements and parameters together with their corresponding uncertainties is given in Tables~\ref{tab-inputs-obs} and \ref{tab-inputs-par}.
\begin{table} \begin{center} \begin{tabular}{c|cccc}
\hline \hline Input & Value & Unit & Accuracy & Reference \\ \hline \hline
\multicolumn{5}{c}{\bf Branching Ratios } \\ \hline \hline
$\Gamma[K\rightarrow \mu \nu]/\Gamma[\pi\rightarrow \mu \nu]$ & $1.336 \pm 0.003$ & & $(0.2\%)$ & \cite{FlaviaNet} \\
$\Gamma[K^0 \rightarrow \pi \mu \nu]/\Gamma[K^0 \rightarrow \pi e \nu]$ & $0.6640 \pm 0.0026$ & & $(0.4\%)$ & \cite{FlaviaNet} \\
$\Gamma[\tau \rightarrow K \nu]/\Gamma[\tau \rightarrow \pi \nu]$ & $6.370 \pm 0.215$ & $10^{-2}$ & $(3.4\%)$ & \cite{PDG} \\
${\cal B}[D\rightarrow \mu \nu]$ & $3.82 \pm 0.32 \pm 0.09$ & $10^{-4}$ & $(8.4\%, 2.4\%)$ & \cite{CLEO} \\
${\cal B}[D_s\rightarrow \mu \nu]$ & $5.93 \pm 0.40$ & $10^{-3}$ & $(6.7\%)$ & \cite{Stone,cleo_update} \\
${\cal B}[D_s\rightarrow \tau \nu]$ & $5.62 \pm 0.44$ & $10^{-2}$ & $(7.8\%)$ & \cite{Stone,cleo_update} \\
${\cal B}[B\rightarrow \tau \nu]$ & $1.73 \pm 0.35$ & $10^{-4}$ & $(20\%)$ & \cite{BaBar,Belle} \\
${\cal B}[B\rightarrow D \tau \nu]/{\cal B}[B\rightarrow D \ell \nu]$ & $0.416 \pm 0.128$ & & $(31\%)$ & \cite{bdtaunubelle,bdtaunubabar} \\
$\Delta m_d$ & $0.507 \pm 0.005$ & ${\rm ps}^{-1}$ & $(1.0\%)$ & \cite{PDG} \\
$\Delta m_s$ & $17.77 \pm 0.12$ & ${\rm ps}^{-1}$ & $(0.7\%)$ & \cite{CDF-Dms} \\
$\Gamma[Z\rightarrow b \bar{b}]/\Gamma[Z\rightarrow {\rm hadrons}]$ & $0.21629 \pm 0.00066$ & & $(0.3\%)$ & \cite{EWFit} \\
${\cal B}[\bar{B}\rightarrow X_s\gamma]/{\cal B}[\bar{B}\rightarrow X_ce\bar{\nu}]$ & $3.346 \pm 0.251$ & $10^{-3}$ & $(7.5\%)$ & \cite{HFAGbsg} \\
\hline \hline \end{tabular} \caption{\small \it Branching ratios used as inputs for the global 2HDM Type II analysis.
They are listed and their values are given with their absolute uncertainty, their relative accuracy and the reference from which the value was taken. When two uncertainties are given, the first one is statistical and the second is systematic (often of theoretical origin). \label{tab-inputs-obs}} \end{center} \end{table}
\begin{table} \begin{center} \begin{tabular}{c|cccc}
\hline \hline Input & Value & Unit & Accuracy & Reference \\ \hline \hline
\multicolumn{5}{c}{\bf Decay Constants} \\ \hline \hline
$f_K/f_{\pi}$ & $1.205 \pm 0.0012 \pm 0.0095$ & & $(0.1\%,0.8\%)$ & \cite{lqcd_ckmfitter} \\
$f_{D_s}/f_{D_d}$ & $1.186 \pm 0.0048 \pm 0.0010$ & & $(0.4\%,0.1\%)$ & \cite{lqcd_ckmfitter} \\
$f_{D_s}$ & $246.3 \pm 1.2 \pm 5.3$ & MeV & $(0.5\%,2.2\%)$ & \cite{lqcd_ckmfitter} \\
$f_{B_s}/f_{B_d}$ & $1.199 \pm 0.008 \pm 0.023$ & & $(0.7\%,1.9\%)$ & \cite{lqcd_ckmfitter} \\
$f_{B_s}$ & $228 \pm 3 \pm 17$ & MeV & $(1.3\%,7.5\%)$ & \cite{lqcd_ckmfitter} \\
\hline \hline \multicolumn{5}{c}{\bf Semileptonic Form Factors} \\ \hline \hline
$\rho^2$ & $1.19 \pm 0.04 \pm 0.04$ & & $(3.3\%,3.3\%)$ & \cite{HFAGbsg} \\
$\Delta$ & $0.46 \pm 0 \pm 0.01$ & & $(0, 2.2\%)$ & \cite{MesciaKamenik} \\
$M_V$ & $878 \pm 6$ & MeV & $(0.6\%)$ & \cite{PDG} \\
$f_+(0)$ & $0.9653 \pm 0.0028 \pm 0.0048$ & & $(0.3\%, 0.5\%)$ & \cite{lqcd_ckmfitter} \\
\hline \hline \multicolumn{5}{c}{\bf $B\bar{B}$ mixing} \\ \hline \hline
$\hat{B}_{B_s}$ & $ 1.28 \pm 0.02 \pm 0.03$ & & $(1.6\%, 2.3\%)$ & \cite{lqcd_ckmfitter} \\
$\hat{B}_{B_s} / \hat{B}_{B_d}$ & $ 1.05 \pm 0.01 \pm 0.03$ & & $(1.0\%, 2.9\%)$ & \cite{lqcd_ckmfitter} \\
$\eta_B$ & $ 0.5510 \pm 0 \pm 0.0022$ & & $(0, 0.4\%)$ & \cite{Buchalla,Lenz_etaB} \\
\hline \hline \multicolumn{5}{c}{\bf $Z \rightarrow b\bar{b}$} \\ \hline \hline
$\Delta\alpha_{had}^{(5)}[m_Z]$ & $ 0.02758 \pm 0.00035$ & & $(1.3\%)$ & \cite{EWFit} \\
\hline \hline \multicolumn{5}{c}{\bf $b\rightarrow s \gamma$ parameterization } \\ \hline \hline
$C$ & $0.546 \pm 0 \pm 0.033$ & & $(0, 6.0\%)$ & \cite{bsgammaNewC} \\
$m_t^{pole}$ & $172.4 \pm 1.2$ & GeV & $(1.2\%)$ & \cite{TevatronEWG} \\
$\alpha_s(m_Z)$ & $0.1176 \pm 0.0020$ & & $(1.7\%)$ & \cite{PDG} \\
\hline \hline \multicolumn{5}{c}{\bf Running Quark Masses } \\ \hline \hline
$\overline{m}_u(2 \; {\rm GeV})$ & $2.40 \pm 0 \pm 0.90$ & MeV & $(0, 38\%)$ & \cite{PDG} \\
$\overline{m}_d(2 \; {\rm GeV})$ & $4.75 \pm 0 \pm 1.25$ & MeV & $(0, 26\%)$ & \cite{PDG} \\
$\overline{m}_s(2 \; {\rm GeV})$ & $96 \pm 0 \pm 30$ & MeV & $(0, 31\%)$ & \cite{PDG} \\
$\overline{m}_c(m_c)$ & $1.286 \pm 0.013 \pm 0.040$ & GeV & $(1.0\%, 3.1\%)$ & \cite{lqcd_ckmfitter} \\
$\overline{m}_b(m_b)$ & $4.243 \pm 0 \pm 0.043$ & GeV & $(0, 1.0\%)$ & \cite{HFAGbsg} \\
\hline \hline \end{tabular} \caption{\small \it Parameters used as inputs for the global 2HDM Type II analysis. The parameters entering the calculations are listed and their values are given with their absolute uncertainty, their relative accuracy and the reference from which the value was taken. When two uncertainties are given, the first one is statistical and the second is systematic, often of theoretical origin, hence treated in the \rfit\ scheme. For the latter systematic uncertainties, whenever individual contributions are listed in the quoted reference, we combine them linearly instead of quadratically, to stay consistent with the \rfit\ scheme. Therefore, our systematic errors can be larger than those given in the corresponding references.
Further note that the scale-invariant top quark mass $\overline{m}_t(m_t)$ is computed from $m_t^{pole}$, following eq.~(33) of \cite{RunningQuarkMass} with $n_f = 5$ active flavours. \label{tab-inputs-par}} \end{center} \end{table} \subsection{Leptonic decays} The decay of a charged meson $M$ into a leptonic pair $\ell \nu_\ell$ is mediated in the SM by a charged weak boson, with the branching ratio:
\begin{align} \label{eq-Mlnu} {\cal B}[M\rightarrow \ell \nu_\ell]_{\rm SM} = \frac{G_F^2 m_M m_{\ell}^2 }{ 8 \pi } \left( 1 - \frac{m_{\ell}^2}{m_M^2} \right)^2 |V_{q_uq_d}|^2 f_M^2 \tau_M ( 1 + \delta_{EM}^{M \ell 2} ), \end{align}
where $q_u$ ($q_d$) stands for the up-type (down-type) valence quark of the meson, $V_{q_uq_d}$ is the relevant CKM matrix element, $f_M$ is the decay constant of the meson $M$ (describing the strength of the meson coupling to the axial current) and $\tau_M$ its lifetime. The corrective factor $\delta_{EM}^{M \ell 2}$ stands for channel-dependent electromagnetic radiative corrections. In this work, they are taken into account in the case of the lighter mesons ($\pi$ and $K$), where their impact is estimated to be at the level of $2-3\%$~\cite{Kl2pil2em,FlaviaNet}, and for the $D$ meson, where the effect is~$1\%$~\cite{Dl2em,cleo_update}. As far as $B$-related observables are concerned, the experiments take into account soft-photon corrections derived from their Monte Carlo simulated data, and we will assume that no further correction is required for these branching ratios (setting $\delta_{EM}^{B \ell 2}=0$). \par The experimental accuracies of the branching ratios are given in Table~\ref{tab-inputs-obs}, lying within $\simeq 0.2-31\%$ depending on the leptonic decay of interest. The main theoretical uncertainty arises from the decay constant, which is a non-perturbative quantity to be estimated by theoretical methods, such as quark models, sum rules, or lattice QCD (LQCD) simulations. We opt for the latter, since they provide well-established methods to compute these observables with a good accuracy and a satisfactory theoretical control. Over the last few years, many new estimates of the decay constants have been issued by different lattice collaborations, with different ways to address the errors. A part of the uncertainties has a clear statistical interpretation: lattice simulations evaluate correlators in a Euclidean metric, expressed as path integrals computed using Monte Carlo methods, whose accuracy depends crucially on the size of the sample of gauge configurations used for the computation. But systematic uncertainties are also present and depend on the strategies chosen by competing lattice collaborations: the discretisation methods used to describe gauge fields and fermions on a lattice, the parameters of the simulations, such as the size of the (finite) volumes and the lattice spacings, the masses of the quarks that can be simulated, and the number of dynamical flavours (2 and 2+1 being the most frequent). In relation to these choices, the extrapolation of the results to physical parameters can be subject to different theoretical treatments (chiral perturbation theory, heavy-quark expansion\ldots), going beyond a naive linear extrapolation.
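To fix ideas on the normalisation of eq.~(\ref{eq-Mlnu}), here is a minimal numerical sketch for $B\rightarrow\tau\nu$ (Python; the inputs are indicative central values chosen for illustration, not the \rfit\ inputs of Tables~\ref{tab-inputs-obs} and~\ref{tab-inputs-par}):
\begin{verbatim}
import math

# Sketch of eq. (eq-Mlnu) for B -> tau nu; indicative inputs, not fit values.
G_F   = 1.16637e-5             # Fermi constant [GeV^-2]
m_B   = 5.279                  # B+ mass [GeV]
m_tau = 1.777                  # tau mass [GeV]
f_B   = 0.190                  # B decay constant [GeV] (assumed)
V_ub  = 3.9e-3                 # |V_ub| (assumed)
tau_B = 1.638e-12 / 6.582e-25  # B+ lifetime [s] converted to GeV^-1

BR = (G_F**2 * m_B * m_tau**2 / (8.0 * math.pi)
      * (1.0 - m_tau**2 / m_B**2)**2
      * V_ub**2 * f_B**2 * tau_B)        # delta_EM set to 0 for B mesons
print(f"SM B(B -> tau nu) ~ {BR:.1e}")   # ~1e-4
\end{verbatim}
With these inputs one finds ${\cal B}\simeq 1\times 10^{-4}$, of the same order as the measured value quoted in Table~\ref{tab-inputs-obs}.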
The combination of lattice values obtained with different approaches to the error budget is a critical point of most global analyses of flavour physics data, even though the concept of theoretical uncertainty for such quantities is ill-defined (and hence so is their combination). The CKMfitter group has collected the relevant LQCD calculations of the decay constants $f_{B_d}$, $f_{B_s}$, $f_{D_s}$, $f_{D}$, $f_{K}$, $f_{\pi}$ (as well as bag factors ${\cal B}_{B_d}$, ${\cal B}_{B_s}$ and ${\cal B}_{K}$) and designed an averaging method aiming at providing a systematic, reproducible and to some extent conservative scheme~\cite{lqcd_ckmfitter}. These lattice averages are the input parameters used in the fits presented in this paper. \par In the specific case of light mesons (kaons and pions), the ratio of the decay constants $f_K/f_{\pi}$ is significantly better determined than the individual decay constants. It is hence worth considering the ratio $K_{\ell 2}/\pi_{\ell 2}$ of the kaon and pion leptonic partial widths instead of the individual branching ratios. In the SM it reads explicitly~\cite{FlaviaNet}:
\begin{align} \label{eq-Kl2Opil2} \frac{\Gamma[K\rightarrow\mu\nu]_{\rm SM}}{\Gamma[\pi\rightarrow\mu\nu]_{\rm SM}} = \frac{m_K}{m_\pi} \left( \frac{1 - m_\mu^2/m_K^2}{1 - m_\mu^2/m_\pi^2} \right)^2 \left|\frac{V_{us}}{V_{ud}}\right|^2 \left(\frac{f_K}{f_\pi}\right)^2 ( 1 + \delta_{EM}^{K\ell2/\pi \ell 2} ). \end{align}
Let us notice that short-distance radiative corrections cancel out in the ratio. The long-distance corrections are accounted for by the parameter $\delta_{EM}^{K \ell 2/\pi \ell 2} = -0.0070 \pm 0.0035$, which can be computed using chiral perturbation theory~\cite{Kl2pil2em}. Similarly, we also consider the ratio of the $\tau\rightarrow K\nu$ to $\tau\rightarrow\pi\nu$ decay widths, which reads:
\begin{align} \label{eq-tauK2Otaupi2} \frac{\Gamma[\tau\rightarrow K\nu]_{\rm SM}}{\Gamma[\tau\rightarrow\pi\nu]_{\rm SM}} = \left( \frac{1 - m_K^2/m_\tau^2}{1 - m_\pi^2/m_\tau^2} \right)^2 \left|\frac{V_{us}}{V_{ud}}\right|^2 \left(\frac{f_K}{f_\pi}\right)^2 ( 1 + \delta_{EM}^{\tau K 2/\tau \pi 2} ). \end{align}
The radiative correction is taken from~\cite{taupiK} and reads $\delta_{EM}^{\tau K 2/\tau \pi 2} = 0.0003$. Its uncertainty comes from $\delta_{EM}^{K \ell 2/\pi\ell 2}$ and $\delta_{EM}^{(\tau K 2/K \ell 2)/(\tau \pi 2/\pi\ell 2)}=0.0073 \pm 0.0027$, both treated as \rfit\ uncertainties. \par On the experimental side, the latest measurements of the $B\rightarrow \tau \nu$ branching ratio from the $B$ factories \cite{BaBar,Belle} have been taken into account. The two experiments BaBar and Belle find results in fairly good agreement, and the weighted average of ${\cal B}[B \rightarrow \tau \nu]$ exhibits a departure (by more than $2\,\sigma$) from the SM prediction obtained in the global CKM fit, due to a tension between $\sin(2 \beta)$ and ${\cal B}[B \rightarrow \tau \nu]$, as pointed out in ref.~\cite{cleo_summer}.
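As a numerical cross-check of eq.~(\ref{eq-Kl2Opil2}), the following sketch (with indicative inputs, again chosen for illustration only) reproduces the order of magnitude of the measured ratio:
\begin{verbatim}
# Sketch of eq. (eq-Kl2Opil2); indicative inputs, not the fit values.
m_K, m_pi, m_mu = 0.49368, 0.13957, 0.10566   # masses [GeV]
Vus_over_Vud    = 0.2312                      # assumed
fK_over_fpi     = 1.193                       # assumed
delta_EM        = -0.0070                     # long-distance correction

phase_space = ((1 - m_mu**2 / m_K**2) / (1 - m_mu**2 / m_pi**2))**2
ratio = ((m_K / m_pi) * phase_space
         * Vus_over_Vud**2 * fK_over_fpi**2 * (1 + delta_EM))
print(f"Gamma(K->mu nu)/Gamma(pi->mu nu) ~ {ratio:.3f}")   # ~1.33
\end{verbatim}
This is to be compared with the measured value $1.336\pm0.003$ of Table~\ref{tab-inputs-obs}; the residual difference is absorbed in the fit by the values of $|V_{us}/V_{ud}|$ and $f_K/f_\pi$.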
\par \begin{figure} \begin{center} \begin{picture}(40,30)(0,0) \SetScale{1} \Oval(10,50)(30,10)(0) \ArrowLine(10,20)(50,50) \ArrowLine(50,50)(10,80) \Photon(50,50)(80,50){4}{4} \ArrowLine(100,30)(80,50) \ArrowLine(80,50)(100,70) \Text(33,28)[c]{$\ell^-$} \Text(33,7)[c]{$\bar\nu_\ell$} \Text(10,28)[c]{$\bar{q}_d$} \Text(10,7)[c]{$q_u$} \Text(25,22)[c]{$W^+$} \Text(3,18)[c]{$M$} \end{picture} \qquad \begin{picture}(40,30)(0,0) \SetScale{1} \Oval(10,50)(30,10)(0) \ArrowLine(10,20)(50,50) \ArrowLine(50,50)(10,80) \DashLine(50,50)(80,50){4} \ArrowLine(100,30)(80,50) \ArrowLine(80,50)(100,70) \Text(33,28)[c]{$\ell^-$} \Text(33,7)[c]{$\bar\nu_\ell$} \Text(10,28)[c]{$\bar{q}_d$} \Text(10,7)[c]{$q_u$} \Text(24,22)[c]{$H^+$} \Text(3,18)[c]{$M$} \end{picture} \end{center} \vspace{-0.7cm} \caption{\it \small Leptonic decay of a meson through the exchange of a $W$ boson (left) and a charged Higgs (right).\label{fig:leptonic}} \end{figure} In two-Higgs doublet models, purely leptonic decays receive an additional contribution from the charged Higgs, as illustrated in Figure~\ref{fig:leptonic}. It turns out that this correction factorizes with respect to the SM prediction~\cite{2hdmref4,2hdmexp,2hdm2ds}:
\begin{align} \label{eq-MlnuHchIntro} {\cal B}[M\rightarrow \ell \nu] = {\cal B}[M\rightarrow \ell \nu]_{\rm SM}( 1 + r_H )^2, \end{align}
where the corrective factor $r_H$ reads in the 2HDM Type II:
\begin{align} \label{eq-MlnuHch} r_H = \left(\frac{m_{q_u}-m_{q_d} \tan^2 \beta}{m_{q_u}+m_{q_d}}\right)\left(\frac{m_M}{m_{H^+}}\right)^2. \end{align}
A comment is in order concerning the structure of eq.~(\ref{eq-MlnuHch}). Let us suppose that we have a perfect agreement between the measurement of a purely leptonic decay and its SM expression. In this case there are actually two distinct solutions in the 2HDM Type II framework:
\begin{itemize}
\item $r_H=0$, which can be obtained easily by sending $m_{H^+}$ to infinity. This \emph{decoupling solution} corresponds to a general way of recovering SM predictions by assuming that all additional particles are very massive.
\item $r_H=-2$, which corresponds to a linear correlation between $m_{H^+}$ and $\tan\beta$. This \emph{fine-tuned solution} depends on the mass of the meson and those of its valence quarks, and it is thus different from one meson to another.
\end{itemize}
More generally, if we have a good agreement between the SM prediction and the measurement (as is the case for most leptonic decays), the 2HDM Type II fit will favour two regions: one region of high Higgs mass (related to the decoupling solution) and one diagonal band (related to the fine-tuned solution) in the parameter space ($m_{H^+}$, $\tan\beta$). \subsection{Semileptonic $B$ decay: $B\rightarrow D \tau \nu$} Purely leptonic decays of mesons intertwine electroweak and strong interactions. However, the role of the strong interaction boils down to the presence of a decay constant, to be assessed through theoretical methods. Semileptonic decays are more complicated to describe, since they involve form factors with a non-trivial dependence on the momentum transfer. If the form factors are known with a sufficient accuracy, semileptonic branching ratios become valuable constraints on New Physics models -- for instance, the comparison between leptonic and semileptonic decays provides a good test of the $V-A$ structure of weak interactions. The BaBar and Belle experiments recently published the first measurements of ${\cal B}(B\rightarrow D\tau\nu)$ \cite{bdtaunubelle,bdtaunubabar}.
An interesting observable is the normalized branching ratio ${\cal R}_{B\rightarrow D\tau\nu}={\cal B}[B\rightarrow D\tau\nu]/{\cal B}[B\rightarrow D e \nu]$, which corresponds to a $b \to c$ transition, with a CKM factor much larger than that of the purely leptonic $B$ decay (and thus easier to study experimentally). In principle, the relevant form factors can be studied using LQCD simulations, as long as one is interested in a limited region of space-like momentum transfer, where both incoming and outgoing states consist of a single meson and no final-state interaction occurs. In order to extend the range of determination of the form factors, fits combining lattice information and $B\rightarrow D \ell \nu$ data can be performed~\cite{MesciaKamenik}, constraining the shape of the relevant vector and scalar form factors over the whole kinematic regime. In the case of the 2HDM Type II, the scalar form factor is a key ingredient, since it encodes the impact of the charged Higgs exchange in the semileptonic decay. Unfortunately, scalar form factors are notoriously difficult to handle on the lattice, and require dedicated methods and significant computing power to be estimated~\cite{latticebtod}. Due to helicity suppression, this scalar contribution arises with a factor $m_\ell^2/m_B^2$ in the amplitude, which means that only $B\to D\tau\nu_\tau$ is sensitive to this contribution~\footnote{In principle, a similar analysis could be applied to $B\to D^*\tau\nu$. However it involves four form factors which are poorly known and out of which only one would be sensitive to charged Higgs exchange~\cite{BtoDstar}.}. Following ref.~\cite{MesciaKamenik}, we write the ratio ${\cal R}_{B\rightarrow D\tau\nu}$ as a second-order polynomial in the charged Higgs coupling:
\begin{align} \label{eq-BDtaunu} {\cal R}_{B\rightarrow D\tau\nu}= a_0 + a_1 {\rm Re}[ s_{H} ] + a_2 |s_{H}|^2. \end{align}
The factor $s_{H}$ accounting for the charged Higgs coupling is taken as:
\begin{align} \label{eq-NHch} s_{H}=-\frac{\tan^2\beta}{1-m_c/m_b} \cdot \frac{m_B^2-m_D^2}{m_{H^+}^2}. \end{align}
The polynomial coefficients $a_i$ in eq.~(\ref{eq-BDtaunu}) depend on the slope parameter $\rho^2$ of the $B \rightarrow D$ vector form factor and on the parameter $\Delta$ (taken as a constant) describing the scalar form factor. A detailed statistical treatment of the theoretical uncertainties {\it \`a la} \rfit\ requires an explicit knowledge of these dependencies. Therefore we do not use the final result of ref.~\cite{MesciaKamenik}, but rather integrate their expression for the $B\rightarrow D \tau \nu$ differential branching ratio for various values of the parameters governing the form factors. We checked that the variations of the branching ratio with $\rho^2$ and $\Delta$ are smooth over the range of uncertainty of these parameters. Consequently, the coefficients $a_i$ are well parameterized as:
\begin{align} \label{eq-BDParam} a_0 = 0.2970 + 0.1286 \cdot d\rho^2 + 0.7379 \cdot d\Delta, \\ a_1 = 0.1065 + 0.0546 \cdot d\rho^2 + 0.4631 \cdot d\Delta, \\ a_2 = 0.0178 + 0.0010 \cdot d\rho^2 + 0.0077 \cdot d\Delta, \end{align}
where we have averaged over $B_{d,u} \rightarrow D_{d,u}$ modes and where $d\rho^2 = \rho^2-1.19$ and $d\Delta = \Delta-0.46$ are the variations of the $\rho^2$ and $\Delta$ parameters.
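The parametrization above is straightforward to evaluate; the following minimal sketch (Python; the meson masses and the ratio $m_c/m_b\simeq 0.30$ are indicative assumptions) illustrates the decoupling limit and the effect of a light charged Higgs:
\begin{verbatim}
# Sketch of eqs. (eq-BDtaunu)-(eq-BDParam); indicative inputs.
def R_BDtaunu(m_Hp, tan_beta, rho2=1.19, Delta=0.46,
              m_B=5.279, m_D=1.869, mc_over_mb=0.30):
    drho2, dDelta = rho2 - 1.19, Delta - 0.46
    a0 = 0.2970 + 0.1286 * drho2 + 0.7379 * dDelta
    a1 = 0.1065 + 0.0546 * drho2 + 0.4631 * dDelta
    a2 = 0.0178 + 0.0010 * drho2 + 0.0077 * dDelta
    # In the 2HDM Type II the coupling s_H of eq. (eq-NHch) is real:
    s_H = -tan_beta**2 / (1.0 - mc_over_mb) * (m_B**2 - m_D**2) / m_Hp**2
    return a0 + a1 * s_H + a2 * s_H**2

print(R_BDtaunu(m_Hp=1e4, tan_beta=1.0))     # decoupling limit: ~a0 = 0.297
print(R_BDtaunu(m_Hp=500.0, tan_beta=50.0))  # charged-Higgs suppression
\end{verbatim}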
Let us mention that we have also investigated the constraints derived in ref.~\cite{NiersteBDtaunu} from the same observables, and we checked that we obtained very comparable results, even though the theoretical expressions are quite different. \subsection{Semileptonic kaon decay: $K\rightarrow \pi \ell \nu$} The process $K\rightarrow \pi \ell \nu$ is known to very good experimental and theoretical accuracies (better than $1\%$) and hence is included in this analysis, even though it involves lighter fermions. The ratio of branching ratios ${\cal B}[K^0\rightarrow \pi \mu \nu]/{\cal B}[K^0\rightarrow \pi e \nu]$ reads:
\begin{align} \label{eq-KpiGlobal} \frac{{\cal B}[K^0\rightarrow\pi\mu\nu]}{{\cal B}[K^0\rightarrow\pi e\nu]}= \frac{\hat{I}_K^\mu}{\hat{I}_K^e}( 1 + 2\delta_{EM}^{K,\mu} - 2\delta_{EM}^{K,e} ), \end{align}
where $\delta_{EM}^{K,e} = (5.7 \pm 1.5)\cdot 10^{-3}$ and $\delta_{EM}^{K,\mu} = (8.0 \pm 1.5)\cdot 10^{-3}$ are electromagnetic radiative corrections estimated in Chiral Perturbation Theory in ref.~\cite{Kl3em} and recalled in ref.~\cite{FlaviaNet}. The phase space integrals $\hat{I}_K^\ell$ depend on both the scalar and vector form factors describing the $K\to\pi$ transition. In the low-energy region of interest here, the vector form factor can be described accurately through resonance saturation (involving the $K^*$ pole). The scalar form factor is more delicate to describe, but it can be expressed through a dispersion relation involving data on $\pi K$ scattering. Exploiting these form factors, a parametrization of the ratio of the phase space integrals is derived in Appendix A:
\begin{align} \label{eq-KpiParam} \frac{\hat{I}_K^\mu}{\hat{I}_K^e} & = p_I[z]( 1+ p_S[z]\epsilon_G + p_V[z]\Delta M_V ), \\ p_I[z] & = 7.96407 \cdot 10^{-3} z^3 + 2.59205 \cdot 10^{-2} z^2 + 7.82087 \cdot 10^{-2} z + 0.647932, \\ p_S[z] & = 6.04408 \cdot 10^{-5} z^2 + 2.30011 \cdot 10^{-4} z + 4.67096 \cdot 10^{-4}, \\ p_V[z] & = 4.24716 \cdot 10^{-5} z - 2.69983 \cdot 10^{-5}, \end{align}
where $\Delta M_V=(M_V-892)$ is a parameter related to the mass of the $K^*$ resonance expressed in MeV. The factor $\epsilon_G$ is a free parameter in $[-1;1]$ reflecting theoretical uncertainties on the scalar form factor. Charged Higgs contributions are accounted for through the parameter $z$ as~\cite{FlaviaNet}:
\begin{align} \label{eq-KpiZFactor} z & = \frac{f_K}{f_\pi}\frac{1}{f_+(0)}+\Delta_{CT} - \frac{m_s \tan^2\beta-m_u}{m_s-m_u}\left(\frac{m_{K^0}^2-m_{\pi^+}^2}{m_{H^+}^2}\right), \end{align}
where $f_+(0)$ denotes the normalization of the vector form factor at $q^2=0 \; {\rm GeV}^2$ and $\Delta_{CT} \in [-0.5;0] \cdot 10^{-3}$, estimated in Chiral Perturbation Theory, describes the deviation of the scalar form factor from $f_K/f_\pi$ at the Callan-Treiman point. \subsection{Radiative decays: $b\rightarrow s \gamma$} The FCNC decay $b\rightarrow s \gamma$, proceeding through penguin diagrams, is a powerful benchmark to constrain the charged-Higgs sector of New Physics models. The calculation of the $\bar{B}\rightarrow X_s\gamma$ branching ratio has been completed up to Next-to-Next-to-Leading Order (NNLO)~\cite{bsgammaNNLO} (see refs.~\cite{bsgammaEarly}-\cite{bsgammaHchlimit} for details). \par Its starting point is the effective Hamiltonian obtained by integrating out the degrees of freedom heavier than the $b$ quark, and expressed as a sum of products of effective operators (describing long-distance effects) and Wilson coefficients (encoding short-distance effects).
The inclusive decay rate is expressed as the imaginary part of the relevant correlator of two $b\to s\gamma$ currents, which can be expanded in powers of $1/m_b$. Most of the effort in the field has been devoted to the computation of the perturbative part of the leading contribution to this expansion. \begin{figure} \begin{center} \begin{picture}(40,25)(0,-10) \SetScale{1} \ArrowLine(0,0)(30,0) \ArrowLine(30,0)(60,0) \ArrowLine(60,0)(90,0) \PhotonArc(45,0)(20,0,180){4}{5} \Photon(45,0)(60,-15){3}{3} \Text(3,3)[c]{$b$} \Text(27,3)[c]{$s$} \Text(16,11)[c]{$W^+$} \end{picture} \qquad \begin{picture}(40,25)(0,-10) \SetScale{1} \ArrowLine(0,0)(30,0) \ArrowLine(30,0)(60,0) \ArrowLine(60,0)(90,0) \DashArrowArc(45,0)(20,0,180){4} \Photon(45,0)(60,-15){3}{3} \Text(3,3)[c]{$b$} \Text(27,3)[c]{$s$} \Text(16,11)[c]{$H^+$} \end{picture} \end{center} \vspace{-0.7cm} \caption{\it \small An example of $b\to s\gamma$ transition through $W$ exchange (left) and charged-Higgs exchange (right). A second diagram can be drawn with the photon emitted from the charged boson. \label{Fig:BtoSgamma}} \end{figure} The NNLO expression for the branching ratio is complicated; for practical purposes, we have chosen to parametrize the results given by the public package SusyBSG~\cite{SusyBSG} (other programs do exist, see for instance~\cite{bsgammanazila}), using Leading-Logarithm (LL) expressions as a functional form. The SusyBSG code includes NLO perturbative corrections for different theoretical frameworks: SM, 2HDM and MSSM. In the 2HDM Type II, the exchange of charged Higgs bosons in addition to charged weak gauge bosons, shown in Figure~\ref{Fig:BtoSgamma}, provides a further contribution to the relevant Wilson coefficients of the effective Hamiltonian. The parameters of the SM prediction, limited to NLO, have been tuned in order to recover the most accurate NNLO result given in ref.~\cite{bsgammaNNLO}. Following the notation of ref.~\cite{bsgammaQuark}, the normalized branching ratio for $\bar{B}\rightarrow X_s\gamma$ reads:
\begin{align} \label{eq-bsgamma} {\cal R}_{b\rightarrow s \gamma} = \frac{{\cal B}[\bar{B}\rightarrow X_s\gamma]}{{\cal B}[\bar{B}\rightarrow X_c\ell \bar{\nu}]} = \left|\frac{V_{ts}^{*}V_{tb}}{V_{cb}}\right|^2\frac{6\alpha_{\rm EM}}{\pi C}(P+N), \end{align}
where $P$ denotes the leading contribution in the $1/m_b$ expansion, computed in perturbative QCD, and $N$ the non-perturbative contributions, corresponding to higher orders in the $1/m_b$ expansion (starting at $1/m_b^2$). While $P$ can be systematically improved by computing higher and higher orders in perturbation theory, it is hard to provide more than an order-of-magnitude estimate for $N$~\cite{bsgammaEarly}. The normalization factor $C$ accounts for the phase space difference between the charmed semileptonic transition and the $\bar{B}\rightarrow X_s\gamma$ decay. For our analysis of the 2HDM Type II, $P$ and $N$ are parametrized using two functions, $A$ and $B$, depending on a reduced set of the relevant input parameters ($\alpha_s(m_Z)$, $m_t^{\rm pole}$ and $\overline{m}_c(m_c)$). Making use of the perturbative expression of $P$ at the leading-logarithm level, the functions $A$ and $B$ are defined as:
\begin{align} \label{eq-bsgammaPara} P+N = ( C_{7,SM}^{{\rm eff},(0)} + B \Delta C_{7,H^+}^{{\rm eff},(0)} )^2 + A, \end{align}
and fitted to reproduce the results from the SusyBSG package. In eq.~(\ref{eq-bsgammaPara}), the factor $\Delta C_{7,H^+}^{{\rm eff},(0)}$ models the charged-Higgs contributions.
$A$ and $B$, which are independent of the 2HDM Type II parameters $m_{H^+}$ and $ \tan \beta$, exhibit smooth linear variations with the input parameters. Further details on the parametrization used in this analysis, as well as the formulae for the $A$ and $B$ functions, are given in Appendix B. \subsection{Neutral $B$-meson mixing} In the SM, neutral-meson mixing occurs through box diagrams with two $W$ exchanges. In the case of $B_d$ and $B_s$ mesons, the hierarchical structure of the CKM matrix and the large mass of the top quark mean that the mixing is dominated by short-distance physics coming from diagrams where the internal fermion lines are top quarks. In two-Higgs doublet models, the observables related to neutral-meson mixing receive charged Higgs contributions~\cite{2hdmref1,2hdmref2,2hdmref3,elkaffas}. Indeed, one gets further diagrams obtained by replacing one or two $W$ lines by a charged Higgs, yielding~\cite{BBbarMixing}:
\begin{alignat}{2} \label{eq-mixing} \Delta m_q & = \frac{G_F^2}{24\pi^2} |V_{tq}V_{tb}^*|^2 \eta_B m_{B_q} m_t^2 f^2_{B_q} \hat{B}_{B_q} ( S_{WW} + S_{WH} + S_{HH} ), \\ S_{WW} & = \left( 1 + \frac{9}{1 - x_{tW}} - \frac{6}{(1-x_{tW})^2} - \frac{6 x_{tW}^2 \ln(x_{tW})}{(1 - x_{tW})^3} \right), \\ S_{WH} & = \frac{x_{tH}}{\tan^2 \beta} \left( \frac{ ( 2 x_{HW} - 8 )\ln(x_{tH})}{(1 - x_{HW}) (1 - x_{tH})^2} + \frac{6 x_{HW} \ln(x_{tW})}{(1 - x_{HW})(1 - x_{tW})^2} - \frac{ 8 - 2 x_{tW}}{(1 - x_{tW})(1 - x_{tH})} \right), \\ S_{HH} & = \frac{x_{tH}}{\tan^4 \beta} \left( \frac{1 + x_{tH}}{(1 - x_{tH})^2} + \frac{ 2 x_{tH} \ln(x_{tH})}{(1 - x_{tH})^3} \right), \end{alignat}
with $x_{ij} = m_i^2/m_j^2$. The functions $S_{WW}$, $S_{WH}$ and $S_{HH}$ correspond respectively to the box diagrams with two $W$ lines, one $W$ and one charged-Higgs line, and two charged-Higgs lines, with an external light quark $q=d,s$. Analyses including radiative corrections are available~\cite{BBbarMixingNLO}, but the leading-order expressions above are sufficient for the accuracy required for our present purposes. \subsection{Constraint from electroweak precision data: the $Z\rightarrow b\bar{b}$ vertex} Let us now make an excursion to the border of flavour physics. The $Z\rightarrow b\bar{b}$ vertex has provided opportunities to search for New Physics contributions, due to the heavy masses involved. In particular, the radiative corrections to the vertex might involve charged Higgs exchanges in addition to the standard $W t b$ couplings. The partial width $\Gamma[Z\rightarrow b\bar{b}]$ is subject to sizeable QCD corrections. It is hence relevant to define the ratio of $Z$ partial widths $R_b = \Gamma[Z\rightarrow b\bar{b}]/\Gamma[Z\rightarrow {\rm hadrons}]$, for which most QCD corrections are suppressed. $R_b$ has been measured by the LEP experiments with a remarkable accuracy~\cite{EWFit}. We will not consider neutral Higgs corrections here. Indeed, it has been shown~\cite{HaberLogan} that they contribute significantly only for large values of $\tan \beta$, where $R_b$ is not a competitive observable with respect to the other observables considered here. On the other hand, the $R_b$ measurement can yield valuable constraints to exclude regions of the 2HDM Type II parameter space at low $\tan\beta$.
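Before turning to the $R_b$ parametrization, let us note that the loop functions of eq.~(\ref{eq-mixing}) are simple enough to be transcribed directly. The following sketch (Python; the $\overline{\rm MS}$ top-mass value is an indicative assumption) illustrates how quickly the charged-Higgs contributions $S_{WH}$ and $S_{HH}$ decouple with increasing $\tan\beta$:
\begin{verbatim}
import math

# Sketch of the loop functions in eq. (eq-mixing); x_ij = m_i^2/m_j^2.
def S_functions(m_t, m_W, m_Hp, tan_beta):
    xtW, xtH, xHW = (m_t/m_W)**2, (m_t/m_Hp)**2, (m_Hp/m_W)**2
    S_WW = (1 + 9/(1 - xtW) - 6/(1 - xtW)**2
            - 6 * xtW**2 * math.log(xtW) / (1 - xtW)**3)
    S_WH = (xtH / tan_beta**2) * (
        (2*xHW - 8) * math.log(xtH) / ((1 - xHW) * (1 - xtH)**2)
        + 6 * xHW * math.log(xtW) / ((1 - xHW) * (1 - xtW)**2)
        - (8 - 2*xtW) / ((1 - xtW) * (1 - xtH)))
    S_HH = (xtH / tan_beta**4) * ((1 + xtH) / (1 - xtH)**2
            + 2 * xtH * math.log(xtH) / (1 - xtH)**3)
    return S_WW, S_WH, S_HH

# Indicative inputs: m_t(MSbar) ~ 165 GeV, m_W = 80.4 GeV.
print(S_functions(165.0, 80.4, m_Hp=500.0, tan_beta=1.0))
print(S_functions(165.0, 80.4, m_Hp=500.0, tan_beta=30.0))
\end{verbatim}
At $\tan\beta=30$ the $W$--$W$ box completely dominates, while at $\tan\beta\sim 1$ the charged-Higgs boxes remain sizeable even for a heavy $H^+$, in line with the low-$\tan\beta$ exclusions discussed below.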
\par Following the work described in refs.~\cite{HaberLogan, Field, Slavich}, we parameterize $R_b$ as:
\begin{align} \label{eq-Rb} \frac{1}{R_b} = 1 + \frac{K_b}{(\bar{g}_b^L-\bar{g}_b^R)^2+(\bar{g}_b^L+\bar{g}_b^R)^2}, \end{align}
where $\bar{g}_b^{L,R}$ are the couplings of the left- and right-chirality $b$ quark to the $Z$ boson; $K_b$ is the sum of the squared vector and axial-vector couplings of the quark flavours lighter than the $b$ quark, embedding the QCD and QED radiative corrections remaining in $R_b$ as well as effects from the $b$-quark mass. These three SM quantities have been parameterized to reproduce the predictions of the ZFITTER package~\cite{ZFITTER}, which depend primarily on the top quark mass, $m_t$. However, one has to take care of the correlations between the top quark mass and the rest of the electroweak parameters. In particular, the neutral Higgs mass, $m_{H^0}$, is constrained by a dedicated electroweak fit~\cite{EWFit}, following the high-$Q^2$ data fit but excluding the direct measurements of $R_b$, $A_{FB}^{0,b}$ and $A_b$ from the fit inputs. The dependence of the electroweak $\chi^2$ on $m_t$ is modelled by a correlation factor between $m_t$ and $m_{H^0}$. The full details of the parametrization are provided in Appendix C. With the top quark mass quoted in Table~\ref{tab-inputs-par}, it yields the SM prediction $R_b = 0.21580(4)$, excluding the direct measurement of $R_b$ as well as $A_{FB}^{0,b}$ and $A_b$ from the fit~\footnote{Let us note incidentally that this prediction is in excellent agreement with the ZPOLE fit results of~\cite{EWFit}, which include the measurements of $R_b$, $A_{FB}^{0,b}$ and $A_b$. This is due to the fact that the direct measurement of $m_t$ coincides with its preferred value from ZPOLE-only inputs, which significantly depends on $R_b$.}. The charged Higgs contribution induces a redefinition of the coupling constants (ref.~\cite{HaberLogan}, corrected in~\cite{Slavich}) according to:
\begin{alignat}{2} \label{eq-gbLR} \bar{g}_b^L & = \bar{g}_{b,{\rm SM}}^L + \frac{G_F m_W^2 }{ 8 \sqrt{2} \pi^2}\left(\frac{m_t}{m_W}\frac{1}{\tan \beta}\right)^2 F_z\left[\frac{m^2_t}{m^2_{H^+}}\right],\\ \bar{g}_b^R & = \bar{g}_{b,{\rm SM}}^R - \frac{G_F m_W^2 }{ 8 \sqrt{2} \pi^2}\left(\frac{m_b}{m_W}\tan\beta\right)^2 F_z\left[\frac{m^2_t}{m^2_{H^+}}\right]. \end{alignat}
The function $F_z$ takes into account two-loop corrections following the work developed in~\cite{Slavich}; a parameterization is provided in Appendix C. Following~\cite{Field}, we assume that the oblique corrections due to the second Higgs doublet are negligible, so that the modifications from the 2HDM mainly affect the vertex corrections, and thus modify $b$-quark observables (and hence $\Gamma(Z\to b\bar{b})$) before any other observable. We are then still allowed to determine the SM prediction for $R_b$ from the electroweak fit described above (which does not include any $b$-related observables), out of which we can deduce the value of $R_b$ in the 2HDM using eqs.~(\ref{eq-gbLR}). \section{Individual constraints in the ($m_{H^{\pm}},\tan \beta$) parameter space} All the fits reported in this paper are performed within the frequentist statistical framework advocated in ref.~\cite{ThePapII}. In all analyses, the CKM matrix parameters are determined simultaneously with the additional 2HDM parameters. Hence, we first recall how the CKM parameters are measured in the framework of the SM and how their determination is modified once the 2HDM hypothesis is tested.
\subsection{Standard Model inputs and parameters} There are four free parameters of interest describing the CKM matrix ($\lambda$, $A$, $\bar \rho$ and $\bar \eta$) in the extended Wolfenstein parametrization~\cite{ThePapII}:
\begin{align} \label{CKMparam} \lambda=\frac{|V_{us}|}{\sqrt{{|V_{ud}|}^2+{|V_{us}|}^2}}, \qquad A \lambda^2=\frac{|V_{cb}|}{\sqrt{{|V_{ud}|}^2+{|V_{us}|}^2}}, \qquad {\bar \rho}+i {\bar \eta}= -\frac{V_{ud}V_{ub}^{*}}{V_{cd}V_{cb}^{*}}. \end{align}
The dependence of the other CKM matrix elements on these parameters follows from the unitarity of the CKM matrix. This definition ensures unitarity at all orders of the expansion in powers of $\lambda$ and guarantees that the $\bar \rho$ and $\bar \eta$ parameters do not depend on phase conventions. \par In the SM, $\lambda$ and $A$ are accurately determined: $\lambda$ is measured from super-allowed nuclear transitions and semileptonic kaon decays, and $A$ comes from the inclusive and exclusive semileptonic $b$-hadron decays with charm. On the other hand, the parameters $\bar \rho$ and $\bar \eta$, being respectively the real and imaginary coordinates of the unitarity triangle (UT) apex, are less constrained. The fit of the CKM matrix in the SM hypothesis and the metrology of its four parameters make use of several observables:
\begin{itemize}
\item[-] $|V_{ud}|$, $|V_{us}|$ and $|V_{cb}|$ determine the $\lambda$ and $A$ parameters and fix accordingly the length scale of the UT.
\item[-] $|V_{ub}|$ (including ${\cal B}(B \to \tau \nu)$), $\Delta m_d$ and $\Delta m_s$ are CP-conserving observables, sensitive to the sides of the UT.
\item[-] $\alpha$, $\gamma$, $\sin 2 \beta$, $\cos 2 \beta$ are CP-violating observables measuring the UT angles from $B$-meson decays, whereas $|\epsilon_K|$ assesses CP violation in kaon mixing.
\end{itemize}
\subsection{2HDM inputs and parameters}\label{sec:inputs} Let us move on to testing the 2HDM Type II hypothesis. It requires adding the two 2HDM Type II parameters ($m_{H^+}$, $\tan \beta$), but also modifying the set of constraints used in the global SM fit to fix the CKM matrix. We therefore have to split our observables into those used to fix the CKM matrix, and those needed to constrain the additional parameters of the 2HDM Type II. First of all, the observables dealing with neutral-meson mixing, proceeding through $\Delta F=2$ transitions, receive charged Higgs contributions~\cite{2hdmref1,2hdmref2,2hdmref3,elkaffas}, and we can use neither the oscillation frequencies of $B_d$ and $B_s$ mesons ($\Delta m_d$ and $\Delta m_s$, respectively) nor the CP-violating parameter $\epsilon_K$ as inputs for the CKM matrix. Moreover, the UT angles $\alpha$ and $\beta$ cannot be used independently, since their determination relies on an interference between decay and mixing. However, it turns out that the combination of the $\alpha$ and $\beta$ inputs in the fit constitutes a determination of the angle $\gamma$ in which the $\Delta F=2$ contributions cancel~\cite{nirgammaT}. Moreover, the $\Delta F=1$ processes proceed through a $W$ exchange in the SM, and they are thus affected directly by the presence of a charged Higgs boson. However, we know that these contributions are proportional to the masses of the quarks and charged leptons involved.
At low energies, the exchange of a charged Higgs boson yields four-quark operators with weights $m_1 \cdot m_2/M_H^2$, where $m_1$ and $m_2$ are the masses of two fermions (quarks or charged leptons) involved in the four-quark operator but not coupled together with a Higgs, as can be seen from eq.~(\ref{HiggsLag}). We expect therefore that only processes involving massive quarks and leptons will be very sensitive to 2HDM Type II contributions, which naturally selects some of the processes considered above for the determination of ($m_{H^+}$, $\tan \beta$): $\mu$ or $\tau$ leptonic decays of $B$, $D_s$ and $D$, $B\to D$ $\mu$ or $\tau$ semileptonic decays (tree processes), $Z\to b\bar{b}$, $b\to s\gamma$ and neutral $B$ meson mixing (processes with a top-quark loop). On the other hand, the CKM matrix, and thus the apex of the unitarity triangle, can be determined by taking $\Delta F=1$ processes where at most one heavy mass is present. This selects:
\begin{itemize}
\item the determination of $\gamma$ from $\alpha+\beta$ ($m_b\cdot m_u/m_H^2$, $m_b\cdot m_d/m_H^2$),
\item the determination of $|V_{cb}|$ from semileptonic $b\to c$ decays ($m_b\cdot m_e/m_H^2$, $m_c\cdot m_e/m_H^2$)~\footnote{We should rigorously have taken solely the electronic semileptonic $b \to c$ (and similarly $b \to u$) decays. Yet, the electron and muon averages we are producing are by far dominated by theoretical uncertainties. On top of that, the electronic and muonic extractions are in very good agreement, and the split averages are not easily available.},
\item the determination of $|V_{ub}|$ from semileptonic $b\to u$ decays ($m_b \cdot m_e/m_H^2$),
\item the determination of $|V_{ud}|$ from super-allowed $\beta$ decays of nuclei (no heavy mass involved).
\end{itemize}
The determination of $\gamma$ from $B\to DK$ does not enter this list, as it scales like $m_b \cdot m_s/M_H^2$. Figure~\ref{UUT} shows the combined constraint in the ($\bar \rho$, $\bar \eta$) parameter space. Though less constraining than the SM global fit, the constraints chosen here yield two well-defined symmetrical solutions for the apex of the unitarity triangle. The achieved accuracy is mostly due to the world-average $\alpha$ determination, driven by the latest $B \rightarrow \rho \rho$ measurements (see ref.~\cite{lqcd_ckmfitter} and references therein). \begin{figure} \begin{center} \epsfig{file=plots/RhoEta_2HDM.eps,height=80mm,width=80mm} \caption{\it \small Superimposed individual constraints at 95\% CL for the fit comprising the observables which involve light fermions (and excluding $\Delta F =2$ observables). The yellow area is the solution driven by the combination of the individual constraints at 95\% CL. The unitarity triangle drawn here is obtained from the global fit displayed in Figure~\ref{ckm09}. \label{UUT}} \end{center} \end{figure} Before discussing the combined analysis of all the above constraints, we would first like to focus on the most stringent individual constraints in the parameter space ($m_{H^+}, \tan \beta$) among the different classes of observables potentially sensitive to charged-Higgs exchanges. If we require the Higgs sector to remain in a perturbative regime, an upper bound on the value of $\tan\beta$ around 200 can be obtained~\cite{2hdmref3}, and our plots will correspond to this region (with a logarithmic scale). Two classes of observables turn out to constrain the 2HDM Type II very efficiently: the leptonic decays and the $b\to s\gamma$ branching ratio.
\subsection{Leptonic and semileptonic decays}
\begin{figure}[htbp] \begin{center} \mbox{\epsfig{file=plots/kaons_2HDM.eps,height=60mm,width=60mm}}\mbox{\epsfig{file=plots/Dmunu_2HDM.eps,height=60mm,width=60mm}}\\ \mbox{\epsfig{file=plots/Dslnu_2HDM.eps,height=60mm,width=60mm}}\mbox{\epsfig{file=plots/Btaunu_2HDM.eps,height=60mm,width=60mm}} \caption{\it \small Constraints on the 2HDM parameter space ($m_{H^+}$, $\tan \beta$) from purely leptonic decays. The upper left plot stands for constraints from $K\to\mu\nu$ and $\tau\to K\nu$ decays, the upper right for $D\to\mu\nu$ decays, the lower left for $D_s$ decays and the lower right for $B_d\to\tau\nu$. The superimposed green line delimits the $1\sigma$ confidence area. The white regions of the plots are excluded at more than 95\%~CL.\label{fig-leptosplit}} \end{center} \end{figure}
\begin{figure}[htbp] \begin{center} \mbox{\epsfig{file=plots/leptonic_all2s_2HDM.eps,height=80mm,width=80mm}}\mbox{\epsfig{file=plots/leptonic_12s_2HDM.eps,height=80mm,width=80mm}}\\ \caption{\it \small Combined constraints on the 2HDM Type II parameter space $(m_{H^+}, \tan \beta)$ from purely leptonic decays of $K$, $D$ and $B$ mesons. The different sets of observables are defined in the left figure. Darker to lighter shades of colours correspond to the ($K_{\ell 2}/\pi_{\ell 2}+\tau\to K\nu$), ($D\to\mu\nu$), ($B\to\tau\nu$) and ($D_s\to\ell\nu$) individual constraints. The complementary area of the colored region is excluded at $95\%$~CL. For the sake of clarity, the figure on the right displays the combined constraint alone at $95\%$~CL. The dark green line delimits the $1\sigma$ confidence area. \label{fig-lepto}} \end{center} \end{figure}
As mentioned in the introduction, the only observable which departs significantly from the SM prediction is ${\cal B}(B^+ \to \tau^+ \nu)$. According to eq.~(\ref{eq-MlnuHch}), the 2HDM Type II can accommodate such a sizeably larger value with respect to the SM in only two situations:
\begin{itemize}
\item at low enough $\tan \beta$ with respect to $\sqrt{m_u/m_b} \simeq 0.02$, to get a positive correction $r_H$ to the SM value,
\item in a fine-tuned scenario (defined in section 2.1) with $r_H < -2$, so that the 2HDM Type II factor $(1+r_H)^2$ enhances significantly the SM prediction.
\end{itemize}
The $95\%$~CL constraints derived in the plane ($m_{H^+}$, $\tan \beta$) from the various leptonic decays are shown in Figure~\ref{fig-leptosplit}, in a log-log scale. Let us recall that the charged Higgs contributions are identical for $K_{\mu 2}/\pi_{\mu 2}$ and $\tau \rightarrow K \nu / \tau \rightarrow \pi \nu$. However, since the experimental value of $K_{\mu 2}/\pi_{\mu 2}$ is currently much better known than that of $\tau \rightarrow K \nu / \tau \rightarrow \pi \nu$, the latter only makes a marginal contribution to the combination. \par Combining all constraints from leptonic decays, the minimum $\chi^2$ value of $\chi^2_{min}=14.8$ ($p$-value~$=85.5 \pm 0.3 \%$) is found at small $m_{H^+}$, where the $B \to \tau \nu$ and $D \to \ell \nu$ fine-tuned regions overlap. This very small charged Higgs mass, excluded by direct searches at LEP, reflects the fact that the 2HDM Type II can hardly accommodate the large measured value of the $B \to \tau \nu$ branching ratio at low charged-Higgs masses except in a fine-tuned scenario, as can be seen in Figure~\ref{fig-lepto}.
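A minimal numerical sketch of eq.~(\ref{eq-MlnuHch}) for $B\rightarrow\tau\nu$ (Python; the quark masses are indicative values, not the fit inputs) makes the two regimes explicit:
\begin{verbatim}
# Sketch of the correction factor r_H of eq. (eq-MlnuHch) for B -> tau nu.
# Indicative masses in GeV; illustrative assumptions, not fit inputs.
def r_H(m_qu, m_qd, m_M, m_Hp, tan_beta):
    return ((m_qu - m_qd * tan_beta**2) / (m_qu + m_qd)) * (m_M / m_Hp)**2

m_u, m_b, m_B = 0.0024, 4.2, 5.279
for m_Hp in (2000.0, 100.0):
    r = r_H(m_u, m_b, m_B, m_Hp, tan_beta=30.0)
    print(m_Hp, r, (1 + r)**2)   # (1 + r_H)^2 rescales the SM prediction
\end{verbatim}
For $m_{H^+}=2$~TeV the factor $(1+r_H)^2$ stays close to 1 (decoupling), while for $m_{H^+}=100$~GeV one gets $r_H<-2$ and an enhanced branching ratio, i.e., the fine-tuned band visible in Figure~\ref{fig-lepto}.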
At values of $\tan \beta$ sufficiently large compared to $\sqrt{m_{q_u}/m_{q_d}}$, the $H^{\pm}$ contribution to the branching ratio behaves as $r_H \simeq -(\tan^2 \beta) m_{\rm eff}^2/m_{H^+}^2$ with $m_{\rm eff} = m_M \sqrt{m_{q_d}/(m_{q_u}+m_{q_d})}$. Since the corrections to the SM branching ratio are at least quadratic in the effective mass $m_{\rm eff}$, the $B\rightarrow \tau \nu$ branching ratio sets a constraint on the high-$m_{H^+}$ region. Let us mention that the ratio $K_{\mu 2}/\pi_{\mu 2}$, accurately measured and predicted, plays a significant role in the combined leptonic limits. As shown in Figure~\ref{fig-lepto}, this additional observable disfavours the $B\rightarrow \tau \nu$ fine-tuned band in the region $\tan \beta \,\hbox{\lower0.6ex\hbox{$\sim$}\llap{\raise0.6ex\hbox{$>$}}}\, 10$.
\begin{figure}[htbp] \begin{center} \epsfig{file=plots/XLeptonic-1D_2HDM.eps,height=80mm} \caption{\it \small Combined constraints on the ratio $m_{H^+} / \tan \beta$ derived from leptonic and semileptonic decays of $K$, $D$ and $B$ mesons, in the large $\tan \beta$ limit. \label{fig-lepto-1D}} \end{center} \end{figure}
The limits derived from the semileptonic decays $B\rightarrow D \tau \nu$ and $K\rightarrow \pi \ell \nu$ are similar in shape to those derived from the purely leptonic decay $B\rightarrow \tau \nu$, although less constraining than the latter. Hence they are not displayed separately. Note that, although ${\cal B}(B\rightarrow D \tau \nu)$ suffers from larger theoretical and experimental uncertainties, it is about twice as constraining on the 2HDM Type II parameters as $K\rightarrow \pi \ell \nu$, thanks to the heavier fermions involved in the decay. \par It is very illustrative to compare all constraints derived from leptonic and semileptonic decays in the large $\tan \beta$ limit ($\tan \beta \,\hbox{\lower0.6ex\hbox{$\sim$}\llap{\raise0.6ex\hbox{$>$}}}\, 30$). In this limit, the charged Higgs contributions depend on a single coupling parameter, namely $\tan \beta / m_{H^+}$. Figure~\ref{fig-lepto-1D} shows the limits on $m_{H^+} / \tan \beta$ derived from the most constraining observables, i.e., $B\rightarrow \tau \nu$, $K_{\mu 2}/\pi_{\mu 2}$ and $B\rightarrow D \tau \nu$. We also show the combined limit from all leptonic and semileptonic decay observables considered in this study. Both $K_{\mu 2}/\pi_{\mu 2}$ and $B\rightarrow D \tau \nu$ turn out to exclude the fine-tuned solution allowed by $B\rightarrow \tau \nu$, at more than $95 \%$~CL. This results in the combined limit $m_{H^+} / \tan \beta \geq 13.1$~GeV at $95 \%$~CL, from all semileptonic and leptonic decays in the large $\tan\beta$ limit.
\subsection{Loop processes}
\begin{figure}[htbp] \begin{center} \epsfig{file=plots/Rb-Mixing_2HDM.eps,height=80mm,width=80mm} \epsfig{file=plots/Rbsg_2HDM.eps,height=80mm,width=80mm} \caption{\it \small Superimposed constraints on the 2HDM parameter space ($m_{H^+}$, $\tan \beta$) from $B \bar{B}$ mixing and $Z \to b \bar{b}$ (left) and from the $b \to s \gamma$ branching ratio (right). \label{bsg} The colored areas are confidence regions at $95\%$~CL.\label{fig-ZMix}} \end{center} \end{figure}
The constraints derived from $B\bar{B}$ mixing and $Z \to b \bar{b}$ are similar in shape, as can be seen in Figure~\ref{fig-ZMix}. The constraints derived from $\Delta m_d$ and $\Delta m_s$ are twice as stringent as those arising from $Z \to b \bar{b}$.
They all exhibit a divergence in $1 /( m_{H^+} \tan\beta )$, resulting in small values of $\tan \beta$ being disfavoured except at very large values of $m_{H^+}$. The $b \to X_s \gamma$ branching ratio, where $X_s$ denotes any charmless final state with strangeness, is measured using either semi-inclusive or inclusive methods. The Heavy Flavour Averaging Group (HFAG) proposes an average~\cite{HFAGbsg} of the measurements performed by the CLEO~\cite{cleobsg}, Belle~\cite{bellebsg} and BaBar~\cite{babarbsg} experiments (only the reference of the latest measurement is given for each collaboration), which is used in the present work. The key point of the branching ratio determination is the correction implied by the experimental cut on the photon energy threshold, derived from a theoretical model of the photon spectrum shape. In that respect, HFAG advocates the use of the extrapolation factors determined in~\cite{buchmullerbsg}. The average is:
\begin{equation} {\cal B}(b\rightarrow s \gamma) =(3.52 \pm 0.23 \; ({\rm stat}) \pm 0.09 \; ({\rm syst})) \times 10^{-4}. \end{equation}
The $b \to s \gamma$ branching ratio is currently the constraint which dominates the global 2HDM Type II fit. Figure~\ref{bsg} shows the corresponding $95\%$~CL constraint in the parameter space ($m_{H^+}, \tan \beta$). This result is in fair agreement with other determinations~\cite{bsgammaHchlimit}. Let us mention that two critical parameters in the determination of this limit are the charm quark mass $\overline{m}_c(m_c)$ and the semileptonic phase space factor $C$. As an illustration, if the central value and uncertainties of $m_c$ are varied from those quoted in Table~\ref{tab-inputs-par} to those used in ref.~\cite{bsgammaNNLO} ($\overline{m}_c(m_c) =(1.224 \pm 0.017 \pm 0.054) \; {\rm GeV}$), the limit on $m_{H^+}$ is increased by 10\%. Similarly for the parameter $C$: using the central value and errors from ref.~\cite{bsgammaNNLO} ($C = 0.58 \pm 0 \pm 0.016$) decreases the limit on $m_{H^+}$ by 9\%, even though this error on $C$ is two times smaller than the one used in our analysis. \section{Combined Analysis} \subsection{Goodness of fit} In order to test the relevance of the 2HDM Type II, all observables are compared to their theoretical predictions within a combined global $\chi^2$ test. The $\chi^2$ is minimized over all theory parameters, and theoretical uncertainties are treated in the \rfit\ scheme, as for the global CKM fit~\cite{ThePapII}. There are $76$ observables (the determination of the $\alpha$ angle alone corresponds to 41 observables) for $56$ parameters, yielding $\chi^2_{min,Obs}({\rm 2HDM}) = 22.54$. From a Monte-Carlo toy-experiment study, the corresponding $p$-value is found to be~$p({\rm 2HDM})=(64.8 \pm 0.8) \%$, assuming for the true parameter values those found during the initial $\chi^2$ minimisation (plug-in $p$-value scheme~\cite{ThePapII}). It is worth comparing the observed $\chi^2_{min}$ value in the 2HDM scheme to the one obtained for the same observables but with their SM predictions. For the latter, one finds $\chi^2_{min, Obs}({\rm SM}) = 22.56$ and a corresponding $p$-value $p({\rm SM})=(69.1 \pm 0.8) \%$. Let us stress that the $\chi^2_{min}$ value can only decrease when we move from the SM to the 2HDM predictions: the observables are the same, but the 2HDM Type II has additional parameters which can reproduce the SM predictions in the particular limit $m_{H^+} \rightarrow \infty$.
Therefore, by comparing the observations to the predictions alone, one cannot reject the 2HDM Type II while keeping the SM. Nevertheless, the almost equal values of $\chi^2_{min}$ for both models lead us to the qualitative conclusion that the 2HDM Type II does not perform significantly better than the SM. \par We can measure more quantitatively how the agreement with the data improves once one moves from the SM to the 2HDM Type II. For this purpose, we introduce a new test statistic, $\Delta\chi^2_{min}$, defined as:
\begin{align} \label{eq-DeltaChi2} \Delta\chi^2_{min} = \chi^2_{min}( {\rm SM} ) - \chi^2_{min}( {\rm 2HDM} ). \end{align}
High values of $\Delta\chi^2_{min}$ would indicate a deviation of the observations from the SM predictions that could be accommodated by the 2HDM Type II. The cumulative distribution of this test statistic is derived from a Monte-Carlo toy-experiment study where the SM parameter values have been fixed to the values found in the global minimisation (this is the null hypothesis to be tested). The result of this toy analysis is shown in Figure \ref{fig-DeltaChi2}. A departure of $\Delta\chi^2_{min}$ at three standard deviations (a $p$-value of $0.27\%$) would correspond to $\Delta\chi^2_{min} \geq 9.2$. The observed value $\Delta\chi^2_{min} = 0.02$ amounts to a $p$-value of $\simeq 100 \%$. Therefore, the toy analysis does not give any reason to reject the SM in favour of the 2HDM Type II. Conversely, the 2HDM Type II hypothesis is not invalidated, and the next section is dedicated to the derivation of the exclusion limits on the model parameters ($m_{H^+},\tan \beta$). Finally, we point out that the test distribution was found to be well approximated by a $\chi^2$ distribution with one degree of freedom, while one would naively have expected the two degrees of freedom brought by the two additional free parameters $m_{H^+}$ and $\tan \beta$. This can be understood from the fact that the fit is dominated by the $b \rightarrow s \gamma$ constraint, which is almost one-dimensional, depending only on $m_{H^+}$. \begin{figure} \begin{center} \epsfig{file=plots/DeltaChi2.eps,height=80mm,width=80mm} \caption{\it \small $\Delta\chi^2_{min}$ cumulative distribution within the SM. The black squares stand for the distribution obtained from Monte-Carlo simulations assuming the SM as the truth. For comparison, the red dots show the distribution obtained for a one-degree-of-freedom $\chi^2$ distribution with identical binning. \label{fig-DeltaChi2}} \end{center} \end{figure} \subsection{Combined limits on 2HDM Type II parameters} Figure~\ref{fig-combined} shows the combined $95\%$~CL confidence area in the plane ($m_{H^+}$, $\tan \beta$). The minimum $\chi^2$ of $\chi^2_{min} \simeq 22.5$ is obtained for $m_{H^+} \simeq 4$~TeV. At high $H^+$ masses and irrespective of the value of $\tan \beta$ (decoupling limit), the charged Higgs contribution becomes negligible for all the processes considered in this analysis, so that the SM predictions are recovered. \begin{figure} \begin{center} \epsfig{file=plots/combined_2HDM.eps,height=160mm,width=160mm} \caption{\it \small Global constraints on the 2HDM parameters $m_{H^+}$ and $\tan \beta$ from all analysed observables. Each color corresponds to a different set of observables, as quoted in the figure. The area outside the colored regions is excluded at $95\%$~CL. The horizontal black line indicates the $95\%$~CL limit from direct searches at LEP~\cite{LEPChargedHiggs}.
The dotted line within the orange combined area delimits the corresponding $1 \sigma$ confidence area. For the combination of leptonic and semileptonic constraints (light green area) we assumed that the $p$-value is best approximated by a 1 d.o.f. $\chi^2$ distribution, since the 2HDM contributions essentially depend on the ratio $m_{H^+}/\tan(\beta)$. \label{fig-combined}} \end{center} \end{figure} At large $\tan \beta$ ($\,\hbox{\lower0.6ex\hbox{$\sim$}\llap{\raise0.6ex\hbox{$>$}}}\, 30$), leptonic (mainly $B \rightarrow \tau \nu$) and semileptonic constraints compete with $b\rightarrow s \gamma$ and sharpen its exclusion limit. At small $\tan \beta$ ($\simlt 1$), the most stringent constraint arises from $B\bar{B}$ mixing and, to a lesser extent, from $Z \rightarrow b \bar{b}$. These results can be compared with those obtained by the Gfitter group~\cite{Gfitter}, which performed a global fit to electroweak precision data both in the Standard Model and in the 2HDM Type II. In the latter case, the observables involved were $B\to \ell \nu$, $B\to D\tau\nu$, a ratio involving kaon decays (namely $K_{\mu 3}$, $K_{\mu 2}$ and $\pi_{\mu 2}$), $b\to s\gamma$, and $Z\to b\bar{b}$ (with a more dedicated treatment of the latter observable than in our work). The values of the CKM matrix elements involved in flavour observables were taken as external inputs, whereas we determine them from the fit described in sec.~\ref{sec:inputs}. We notice rather similar exclusion areas for the various individual constraints (e.g., the existence of fine-tuned solutions for leptonic decays), apart from a slightly different shape in the case of $Z\to b\bar{b}$. As in our case, $b\to s\gamma$ favours high values of the charged Higgs mass, irrespective of the value of $\tan\beta$. Figure~\ref{1Dlnm} shows the one-dimensional constraint found in the global analysis. A lower limit on the charged Higgs mass can be inferred: $$m_{H^+} > 316 \; {\rm GeV} \; {\rm at} \; 95\% \;{\rm CL} \qquad [{\rm this\ work}]$$ while no significant constraint is obtained for $\tan\beta$. It is interesting to compare these results with the bound obtained from direct searches at LEP for any value of $\tan\beta$~\cite{LEPChargedHiggs} (specific studies are also reported in refs.~\cite{lepcdf}): $$m_{H^+} > 78.6 \; {\rm GeV} \; {\rm at} \; 95\% \;{\rm CL} \qquad [{\rm direct}]$$ \begin{figure} \begin{center} \epsfig{file=plots/2HDM-1D-lnm.eps,height=100mm,width=160mm} \caption{\it \small One-dimensional constraint on $m_{H^+}$.\label{1Dlnm}} \end{center} \end{figure} \section{Conclusion} In the last decade, the $B$ factories have performed a set of remarkable measurements in the quark sector which yielded an impressive overall agreement between the SM and the data. Tight constraints on theories beyond the SM can be inferred: flavour-changing neutral currents are small, as predicted in the SM, and the KM mechanism has proven to describe the observed CP-violating phenomena in flavour physics with good accuracy. Among the theoretical extensions of the SM, the addition of a second scalar doublet (2HDM Type II) is particularly appealing, since the flavour structure of the SM is preserved. \par In this article, we have discussed tests of the 2HDM Type II in the light of recent flavour physics data. In particular, the measurements of purely leptonic decays of $B$ and $D$ mesons (which departed more or less significantly from their SM predictions) have been combined to obtain a comprehensive combined constraint.
They have been analyzed together with complementary constraints such as semileptonic decays, $b \to s \gamma$ or $R_b$ measurements. \par The outcome of this combined analysis is that the 2HDM Type II is not favoured by low-energy data, due to the interplay between the ${\cal B}[\bar B \rightarrow \tau^+ \nu]$ and $b\rightarrow s \gamma$ measurements, or at least, that it does not perform better than the SM. If we assume that the 2HDM Type II is realized in Nature, constraints on its parameters can be derived from this global analysis. In that respect, a limit on the charged Higgs mass $m_{H^+} > 316 \; {\rm GeV}$ at $95\%$~CL is obtained irrespective of $\tan \beta$. \par This analysis considered tree and loop-induced $\Delta F = 1$ processes, as well as $\Delta F = 2$ mixing processes for neutral $B$ mesons. Loop-induced rare decays such as $B_s \to \mu^+ \mu^-$ might receive contributions from an extended Higgs sector, and would provide further constraints on this model. A natural extension of this work would include such observables in order to perform a more comprehensive test of the Two Higgs Doublet Model of Type II. \section*{Acknowledgments} We thank our colleagues of the CKMfitter group for many stimulating discussions and for a critical reading of the manuscript. We also thank J. Orloff and N. Mahmoudi for very useful comments on two Higgs doublet models. We further thank H. Fl\" acher and A. H\" ocker (from the Gfitter group) for valuable discussions on the constraints, and P. Slavich and G. Degrassi for their useful comments on the $R_b$ observable description. This work was supported in part by the ANR contract ANR-06-JCJC-0056 and by EU Contract No. MRTN-CT-2006-035482, \lq\lq FLAVIAnet''.
\section{Introduction} Cosmological perturbations \cite{Kodama:1985bj} play a fundamental role in modeling the Universe, since they allow us to study the origin of large-scale structure, the anisotropies of the cosmic microwave background (CMB) radiation, and the propagation of gravitational waves over cosmological distances. We derive a set of universal, model independent effective equations and Lagrangians which can describe the evolution of scalar and tensor perturbations of any system whose field equations can be written in an Einstein-like form. This includes for example multi-field systems \cite{Romano:2020oov}, or modified gravity theories such as Horndeski's theory \cite{Horndeski:1974wa}, once they have been transformed to the Einstein frame. Given its generality, this effective description is particularly suitable for model independent phenomenological analyses of observational data. The approach naturally predicts that the speed of gravitational waves $c_{T,A}$ can depend on frequency and polarization, due to the interactions of the graviton with itself or other fields. This prediction allows us to use gravitational wave observations to investigate the elusive nature of dark matter and dark energy. The equations and Lagrangians for scalar and tensor perturbations have the same universal structure, and the effects of the interaction can be modeled at any order in perturbations by a single effective quantity, playing the role of an effective propagation speed. This is particularly useful since it allows different models to be compared in terms of the two effective quantities $c_s$ and $c_T$. Combining different sets of observational data, such as the cosmic microwave background radiation and gravitational waves, it will be possible to constrain $c_s$ and $c_{T,A}$, to determine possible deviations from general relativity and vanilla inflation. The effective equations and Lagrangians are derived separately for scalar and tensor perturbations, both in physical and momentum space, and their consistency with previous results derived in the literature is checked in different cases. Other predictions are then considered, such as the frequency dependency of $c_s$ and $c_T$ in vanilla inflation due to the effects of higher order interaction terms. \section{Scalar perturbations} \subsection{Space dependent effective sound speed} In this section we will show that it is possible to define a space dependent effective sound speed (SESS) in terms of which a model independent equation for comoving curvature perturbations $\zeta$ can be derived. The SESS encodes the effects of the interaction of $\zeta$ with itself and other fields, at any order in perturbations. \subsubsection{Effective Lagrangian approach} In the action approach, the effective stress-energy tensor (EST) on the r.h.s.\ of the Einstein equations originates from the interaction of scalar perturbations with themselves or other fields. Based on this, we can obtain an effective Lagrangian corresponding to the perturbed field equations by introducing higher order interaction terms $\mathcal{L}_{int}(\zeta,\phi^i)$, where $\phi^i$ denotes abstractly all the other fields $\zeta$ is coupled to.
We will use conformal time, and adopt the following notation for the Lagrangian density $L$ \begin{equation} S=\int d\eta dx \mathcal{L}=\int d\eta dx z^2 L \quad , \quad \mathcal{L}=\ z^2 {L} \quad , \quad z^2=\epsilon a^2 \,, \end{equation} where we are assuming a Friedmann (FRW) background with scale factor $a(\eta)$, and $\epsilon$ denotes the first order slow-roll parameter, defined in terms of cosmic time $dt=a \, d\eta$. In general relativity the quadratic Lagrangian of the comoving curvature perturbations of a minimally coupled scalar field is \begin{eqnarray} \mathcal{L}^{(2)}_{\zeta}=z^2\Big[\zeta'^2-(\nabla \zeta)^2\Big] &,& z^2=\epsilon a^2\label{vani}. \end{eqnarray} We will call the above model vanilla inflation. Including higher order interaction and self-interaction terms we can write a general model independent Lagrangian \begin{eqnarray} \mathcal{L}_{\zeta}=\mathcal{L}^{(2)}_{\zeta}+\mathcal{L}^{int}_{\zeta}= z^2\Big[\zeta'^2- (\nabla \zeta)^2+L^{int}_\zeta(\zeta,\phi^i)\Big]=z^2\Big[\zeta'^2\Big(1+\frac{L^{int}_\zeta}{\zeta'^2}\Big)- (\nabla \zeta)^2\Big] \label{LEint} \,, \end{eqnarray} where $\phi^i$ denotes collectively any other field. From the above equation we can obtain the effective Lagrangian \begin{eqnarray} \mathcal{L}^{eff}_{\zeta}=\frac{z^2}{c_s^2}\Big[\zeta'^2- c^2_s(\nabla \zeta)^2\Big]=\alpha^2\Big[\zeta'^2- c^2_s(\nabla \zeta)^2\Big] \label{Leffzeta} \quad &,& \quad \alpha^2=\frac{z^2}{c^2_s}=\frac{\epsilon a^2}{c^2_s} \,, \end{eqnarray} where we have defined the space effective sound speed (SESS) according to \begin{eqnarray} c_s^2(\eta,x^i)=\Big(1+\frac{L_{\zeta}^{int}}{\zeta'^2}\Big)^{-1} \label{ctS} \,. \end{eqnarray} The variation of $\mathcal{L}^{eff}_{\zeta}$ gives the model independent equation \begin{eqnarray} \zeta''+2 \frac{\alpha'}{\alpha} \zeta'-c_s^2 \nabla^2 \zeta&=&0 \,,\label{zetaalphaA} \end{eqnarray} which can also be written as \begin{eqnarray} \zeta''+2 \Big(\frac{z'}{z}-\frac{c'_s}{ c_s}\Big) \zeta'-c^2_s \nabla^2 \zeta&=&0 \,\label{zetacs}\,. \end{eqnarray} Note that in deriving the equations of motion for $\zeta$ the SESS has been treated as a function independent of $\zeta$, since the SESS is an effective quantity determined by substituting the solutions of the full theory, including the effects of interaction, into the interaction Lagrangian $L^{int}$. For any system with a well defined Lagrangian it should always be possible to solve the equations of motion, and then use the solutions to define the SESS. Eq.(\ref{zetaalphaA}) and eq.(\ref{Leffzeta}) show that $\alpha(\eta,x^i)$ can be interpreted as an effective space dependent scale factor, while eq.(\ref{zetacs}) shows explicitly the modification of the friction term induced by the SESS. Since these equations are model independent, we can immediately conclude that the friction term cannot be modified if $c_s'=0$. The effective Lagrangian can be obtained from the vanilla case in eq.(\ref{vani}), \begin{equation} \mathcal{L}^{(2)}_{\zeta}=z^2\Big[\zeta'^2-c^2(\nabla \zeta)^2\Big] \,, \end{equation} by the transformation \begin{equation} z^2 \rightarrow \alpha^2=\frac{z^2}{c^2_s} \quad,\quad c\rightarrow c_s \,, \end{equation} where we denote with $c$ the unit sound speed, to avoid ambiguity. This is in agreement with eq.(\ref{zetaalphaA}), which shows that $\alpha$ can be regarded as an effective scale factor.
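The equivalence of eq.(\ref{zetaalphaA}) and eq.(\ref{zetacs}) amounts to the identity $\alpha'/\alpha = z'/z - c_s'/c_s$ for $\alpha = z/c_s$, which can be verified symbolically; a minimal sympy sketch (all symbol names are ours): \begin{verbatim}
# Check that 2 alpha'/alpha = 2 (z'/z - c_s'/c_s) for alpha = z/c_s,
# i.e. that the two forms of the friction term coincide.
import sympy as sp

eta = sp.Symbol('eta')
z = sp.Function('z', positive=True)(eta)
cs = sp.Function('c_s', positive=True)(eta)

alpha = z / cs  # alpha^2 = z^2 / c_s^2
friction_alpha = 2 * sp.diff(alpha, eta) / alpha
friction_zcs = 2 * (sp.diff(z, eta) / z - sp.diff(cs, eta) / cs)

assert sp.simplify(friction_alpha - friction_zcs) == 0
print("friction terms agree")
\end{verbatim}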
The main advantage of using eq.(\ref{zetaalphaA}) is that it is completely model independent, allowing one to study in a systematic way deviations from general relativity, or the effects of the interaction with different fields, using a single function. Note that since $\mathcal{L}_{int}$ can be space dependent, $c_s(\eta,x^i)$ also depends on space, with the exception of $\mathcal{L}_{int}\propto f(\eta)\zeta'^2$, in which case it is only time dependent, which corresponds to K-inflation \cite{Garriga:1999vw}. In the presence of multiple scalar fields the space dependence is manifested already in the quadratic Lagrangian \cite{Vallejo-Pena:2019hgv}, while the effects of anisotropic perturbations require at least a cubic Lagrangian, because scalar-field anisotropies appear at second order in the EST. Note that even in vanilla inflation the sound speed can be space dependent when including the cubic and higher order Lagrangians \cite{Maldacena:2002vr}, as we will show explicitly later. This is consistent with the EST approach, since a scalar field EST is anisotropic at second order, leading to source terms in the perturbed field equations. \subsubsection{Interpretation of the space dependency of the SESS} The space dependency of the SESS is a natural manifestation of the interaction and self-interaction of scalar perturbations, and of the coupling of tensor and scalar perturbations. In multiple scalar field systems these effects are already present in the quadratic Lagrangian, and are associated with entropy perturbations, while for a single scalar field they manifest only starting from the cubic Lagrangian. \subsubsection{Effective stress-energy tensor approach} Scalar perturbations of the metric and of the effective stress-energy tensor (EST) can be written as \begin{eqnarray} ds^2 = -(1+2A)dt^2+2a\partial_iB dx^idt + a^2\left\{\delta_{ij}(1+2C)+2\partial_i\partial_jE\right\}dx^idx^j \, , \label{pmetric} \\ T^0{}_0 = - (\rho+\delta\rho) \quad \,, \quad T^0{}_i = (\rho+P) \partial_i(v+B) \,, \nonumber \\ T^i{}_j = (P+\delta P)\delta^i{}_j + \delta^{ik} \partial_{k}\partial_{j}\Pi -\frac{1}{3} \delta^{i}{}_{j} \SPD \Pi \, , \label{psem} \end{eqnarray} where $v$ is the velocity potential and $\SPD \equiv \delta^{kl}\partial_k\partial_l$. Note that any metric and EST can always be written in the above form, making all the results obtained from it completely model independent. The comoving slices gauge is defined by the condition $(T^{0}{}_{i})_c=0$, and we denote with a subscript $c$ quantities evaluated on comoving slices. In multiple field systems the EST is the sum of the stress-energy tensors of each field, which in the comoving gauge is associated with entropy perturbations \cite{Romano:2018frb}. In the comoving gauge entropy perturbations $\Gamma$ are introduced \cite{Kodama:1985bj} by \begin{align} \delta P_c(\eta,x^i) &= c_a(\eta)^2 \delta\rho_c(\eta,x^i) + \Gamma(\eta,x^i) \, , \label{entropy} \end{align} where $c_a$ is interpreted as the adiabatic sound speed, and is by definition a function of time only. Note that this definition can be ambiguous \cite{Romano:2018frb}.
Manipulating the perturbed Einstein equations in the comoving gauge gives \begin{eqnarray} \R''+\frac{\partial_\eta z_a^2}{z_a^2}\R'-c_a^2\SPD\R = a^2\mathcal{S} \,, \label{zetaS} \\ \mathcal{S} =-\frac{c_a^2}{\epsilon}\SPD \Pi- \frac{1}{ 2 a^2 z_a^2 } \left[ \frac{a^3}{c_a^2 H} \left(\Gamma + \frac{2}{3}\SPD \Pi \right) \right]' \label{RS} &,& z_a^2= \frac{\epsilon a^2}{c^2_a} \,, \end{eqnarray} where we denote derivatives with respect to conformal time with a prime. The source term in the above equation can be absorbed into the definition of the space effective sound speed (SESS), following an approach similar to the one adopted for gravitational waves \cite{Romano:2022jeh}. We can first re-write eq.(\ref{zetaS}) as \begin{equation} \frac{(\R' z_a^2)'}{z_a^2}-\frac{(g z_a^2)'}{z_a^2}- c_a^2\nabla^2 \R=\frac{\Big[\R' z_a^2(1-g/\R')\Big]'}{z_a^2}- c_a^2\nabla^2 \R=0 \label{gh} \,, \end{equation} where we have defined \begin{equation} g=\frac{1}{z_a^2}\int z_a^2 a^2 \mathcal{S}\,d\eta\,. \end{equation} After introducing the quantities \begin{eqnarray} 1+\delta(\eta,x^i)=\Big(1-\frac{g}{\R'}\Big)^{-1/2} \quad &,& \quad \alpha^2=\frac{z_a^2}{(1+\delta)^2}=\frac{\epsilon a^2}{c^2_a(1+\delta)^2} \,, \label{vs} \end{eqnarray} we can rewrite eq.(\ref{gh}) as \begin{equation} \frac{1}{z_a^2}(\alpha^2 \R')'-c_a^2\nabla^2 \R=0\,. \label{zh} \end{equation} Defining the space effective sound speed (SESS) as \begin{equation} c_s(\eta,x^i)=c_a(\eta)\Big[1+\delta(\eta,x^i)\Big] \,, \label{sess} \end{equation} and re-writing $\alpha$ in terms of $c_s$ as \begin{eqnarray} \alpha^2=\frac{\epsilon a^2}{c^2_s} \,, \end{eqnarray} we finally obtain the model independent effective equation \begin{eqnarray} \zeta''+2 \frac{\alpha'}{\alpha} \zeta'-c_s^2 \nabla^2 \zeta&=&0 \label{zetaalpha} \,, \end{eqnarray} which shows that $c_s$ is the correct definition of the effective sound speed. Eq.(\ref{zetaalpha}) is in agreement with eq.(\ref{zetaalphaA}), obtained using the effective Lagrangian approach; it is completely general, and can be applied to study multi-field models, modified gravity, dark energy or dark matter. Note that the SESS definition given in eq.(\ref{sess}) is more general than the one given in \cite{Romano:2018frb}, which includes the effects of entropy but not of anisotropy. In the effective Lagrangian approach there is no distinction between entropy and anisotropy, since they are both associated with interaction terms, and the effective description is more transparent than the EST approach, but both methods lead to the same conclusions, since the source terms in the field equations are obtained from the variation of the interaction Lagrangian. The SVT decomposition of the EST is valid at any order in perturbations, so eq.(\ref{zetaalpha}) includes the effects of interaction at any order in perturbations, including self-interaction, and for this reason is in agreement with eq.(\ref{zetaalphaA}), obtained by encoding in $c_s$ the effects of all higher order interaction terms. \subsection{Momentum effective sound speed} In this section we will show that it is possible to define a momentum dependent effective sound speed (MESS) in terms of which a model independent equation for comoving curvature perturbations $\zeta$ can be derived. The MESS encodes the effects of the interaction of $\zeta$ with itself and other fields, at any order in perturbations.
The MESS is not the Fourier transform of the SESS, but it is mathematically convenient, since it allows one to obtain a model independent equation involving minimal changes with respect to the vanilla case. \subsubsection{Effective Lagrangian approach} A model independent effective equation and Lagrangian can also be derived in momentum space, using the two methods adopted previously, the field equations approach and the action approach. The Lagrangian in momentum space can be written as \begin{eqnarray} \mathcal{L}_{\zeta_k}=\mathcal{L}^{(2)}_{\zeta_k}+\mathcal{L}^{int}_{\zeta_k}= z^2\Big[\zeta'^2_k-k^2 \zeta_k^2+L^{int}_{\zeta_k}(\zeta_k,\phi_k^i)\Big] \label{Lintk} \,. \end{eqnarray} The effective Lagrangian is \begin{eqnarray} \mathcal{L}^{eff}_{\zeta_k}=\tilde{\alpha}^2\Big[\zeta_k'^2-\tilde{c}^2_s k^2 \zeta^2_k\Big] &,& \tilde{\alpha}^2(\eta,k)=\frac{z^2}{\tilde{c}_s^2}=\frac{\epsilon a^2}{\tilde{c}_s^2}\,, \label{Leffzetak} \end{eqnarray} where we have defined the momentum effective sound speed (MESS) $\tilde{c}_s$ and the effective scale factor as \begin{eqnarray} \tilde{c}_s^2(\eta,k)=\Big(1+\frac{L^{int}_k}{\zeta'^2_k}\Big)^{-1} \label{csk} &,& \tilde{\alpha}^2(\eta,k)=\frac{\epsilon a^2}{\tilde{c}_s^2}=\frac{z^2}{\tilde{c}_s^2} \,, \end{eqnarray} which gives the equation \begin{eqnarray} \zeta_k''+2 \frac{\tilde{\alpha}'}{\tilde{\alpha}} \zeta_k'+\tilde{c}_s^2 k^2 \zeta_k&=&0 \,, \label{zetaalphak} \end{eqnarray} which can also be written as \begin{eqnarray} \zeta_k''+2 \Big(\frac{z'}{z}-\frac{\tilde{c_s}'}{\tilde{c}_s}\Big) \zeta_k'+\tilde{c}_s^2 k^2 \zeta_k&=&0 \,. \label{zetacsk} \end{eqnarray} Eq.(\ref{zetaalphak}) and eq.(\ref{Leffzetak}) show that $\tilde{\alpha}(\eta,k)$ can be interpreted as an effective momentum dependent scale factor, while eq.(\ref{zetacsk}) shows explicitly the modification of the friction term induced by the MESS. Since these equations are model independent, we can immediately conclude that the friction term cannot be modified if $\tilde{c}_s'=0$. The effective Lagrangian can be obtained from the vanilla inflation Lagrangian \begin{equation} \mathcal{L}^{(2)}_{\zeta_k}=z^2\Big[\zeta'^2_k-c^2 k^2\zeta^2_k\Big] \,, \end{equation} by the transformation \begin{equation} z^2 \rightarrow \tilde{\alpha}^2=\frac{z^2}{\tilde{c}^2_s} \quad,\quad c\rightarrow \tilde{c}_s \,,\label{transk} \end{equation} where we denote with $c$ the unit sound speed, to avoid ambiguity. This is in agreement with eq.(\ref{zetaalphak}), which shows that $\tilde{\alpha}$ can be regarded as a momentum dependent effective scale factor. Note that the quantities $\tilde{c_s}$ and $\tilde{\alpha}$ are not the Fourier transforms of $c_s$ and $\alpha$. \subsubsection{Field equations approach} Taking the Fourier transform of eq.(\ref{zetaS}) we get \begin{eqnarray} \R_k''+\frac{\partial_\eta z_a^2}{z_a^2}\R_k'+c_a^2 k^2 \R_k = a^2\mathcal{S}_k \,,\label{zetaSk} \\ \mathcal{S}_k =\frac{c_a^2}{\epsilon}k^2 \Pi_k- \frac{1}{2 a^2 z_a^2} \left[ \frac{a^3}{c_a^2 H} \left(\Gamma_k - \frac{2}{3}k^2 \Pi_k \right) \right]' \label{RSk} &,& z_a^2= \epsilon a^2/c^2_a \,. \end{eqnarray} After introducing the quantities \begin{eqnarray} g_k=\frac{1}{z_a^2}\int z_a^2 a^2\mathcal{S}_k\,d\eta \quad , \quad 1+\delta_k(\eta)=\Big(1-\frac{g_k}{\R_k'}\Big)^{-1/2} \quad , \quad \tilde{\alpha}^2=\frac{z_a^2}{(1+\delta_k)^2}=\frac{\epsilon a^2}{c^2_a(1+\delta_k)^2}\,, \label{vsk} \end{eqnarray} we can rewrite eq.(\ref{zetaSk}) as \begin{equation} \frac{1}{z_a^2}(\tilde{\alpha}^2 \R_k')'+c_a^2 k^2 \R_k=0\,.
\label{zhk} \end{equation} Defining the momentum effective sound speed (MESS) as \begin{equation} \tilde{c}_s(\eta,k)=c_a(\eta)\Big[1+\delta_k(\eta)\Big] \,, \label{messa} \end{equation} and re-writing $\tilde{\alpha}$ in terms of $\tilde{c}_s$ as \begin{eqnarray} \tilde{\alpha}^2=\frac{\epsilon a^2}{\tilde{c}^2_s} \,, \end{eqnarray} we finally obtain the model independent effective equation \begin{eqnarray} \zeta_k''+2 \frac{\tilde{\alpha}'}{\tilde{\alpha}} \zeta_k'+\tilde{c}_s^2 k^2 \zeta_k&=&0 \label{zetaalphak2} \,, \end{eqnarray} which shows that $\tilde{c}_s$ is the correct definition of the momentum effective sound speed, in agreement with eq.(\ref{zetaalphak}). \subsection{Effective metric description} In terms of the effective metric \begin{equation} ds_{eff}^2=\epsilon a^2\Big[c_s d\eta^2-\frac{\delta_{ij}}{c_s}dx^idx^j\Big] \label{geff} \,, \end{equation} the effective Lagrangian can be written as \begin{equation} \mathcal{L}^{eff}_{\zeta}=\sqrt{-g^{\zeta}}\,(\partial_{\mu}\zeta \partial^{\mu}\zeta) \,, \end{equation} where $g^{\zeta}_{\mu\nu}$ is the metric defined by eq.(\ref{geff}), for which the equation of motion is simply given by the covariant d'Alembert operator \begin{equation} \square \zeta=\frac{1}{\sqrt{-g^{\zeta}}}\partial_{\mu}(\sqrt{-g^{\zeta}}\partial^{\mu}\zeta)=0 \,. \end{equation} In this geometrical description the perturbations propagate in an empty curved space, whose geometry is determined by the interaction of the perturbations. This is conceptually analogous to the general relativistic geometrical interpretation of the effects of gravity in terms of geodesics in a curved space, whose geometry is determined by the EST. More about this geometrical interpretation will be discussed in a future work. \subsection{Consistency with previous calculations} \subsubsection{Minimally coupled scalar field in general relativity} The vanilla scenario corresponds to $\mathcal{L}^{int}=0$, leading to $\delta=0$ and $c_s=c_a=1$. The quantity $c_w=P'/\rho'$ does not give the correct definition of the sound speed, since it does not coincide \cite{Romano:2015vxz} with the SESS: $c_s=1\neq c_w$. \subsubsection{K-inflation} When the interaction Lagrangian is of the form $\mathcal{L}^{int} \propto f(\eta)\zeta'^2$, the SESS is just a function of time, $c_s(\eta)=c_a(\eta)$, and $\delta=0$. In this case the effective action in eq.(\ref{Leffzeta}) and the effective eq.(\ref{zetaalphaA}) are in agreement with \cite{Garriga:1999vw} \begin{equation} \mathcal{L}^{eff}_{\zeta}=\frac{z^2}{c_s^2(\eta)}\Big[\zeta'^2- c^2_s(\eta)(\nabla \zeta)^2\Big]\,. \label{LeffzetaK} \end{equation} \subsubsection{Ultra-slow roll inflation and its generalizations} Ultra-slow roll inflation (USR) is a particular case of a globally adiabatic system, in general characterized by the vanishing of $\delta P_{nad}=\delta P_{ud}=\delta P-c_w\delta\rho$ on any scale \cite{Romano:2015vxz}, where $c_w=P'/\rho'$, and the subscript "ud" stands for the uniform density gauge, defined by the condition $\delta\rho=0$. In USR models the quantity $c_w=P'/\rho'$ coincides \cite{Romano:2015vxz} with the SESS: $c_s=c_w=1$. In other globally adiabatic models, such as generalized USR and Lambert inflation \cite{Romano:2016gop}, $c_s=c_w\neq1$. \subsubsection{MESS with multiple scalar fields} The momentum dependency of the sound speed has been found in some specific multi-field systems \cite{Achucarro:2012sm} where entropy modes can be integrated out analytically.
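As an aside, the geometrical claim made above -- that eq.(\ref{zetaalphaA}) is the d'Alembertian of the effective metric eq.(\ref{geff}) -- can be checked symbolically for a time-dependent $c_s$. A minimal sympy sketch (symbol names and normalizations are ours): \begin{verbatim}
# Symbolic check: the covariant d'Alembertian of the effective metric
# ds^2 = z^2 [ c_s d eta^2 - (dx^2 + dy^2 + dw^2)/c_s ]   (z^2 = eps a^2)
# reproduces  zeta'' + 2 (alpha'/alpha) zeta' - c_s^2 Lap(zeta) = 0
# up to an overall factor 1/(c_s z^2), for c_s = c_s(eta).
import sympy as sp

eta, x, y, w = sp.symbols('eta x y w', real=True)  # w = third space coord
coords = (eta, x, y, w)
z = sp.Function('z', positive=True)(eta)
cs = sp.Function('c_s', positive=True)(eta)
zeta = sp.Function('zeta')(*coords)

gdiag = [z**2 * cs, -z**2 / cs, -z**2 / cs, -z**2 / cs]  # diagonal metric
detg = gdiag[0] * gdiag[1] * gdiag[2] * gdiag[3]
sqrtg = sp.sqrt(-detg)                                   # = z^4 / c_s

# Box zeta = (1/sqrt(-g)) d_mu ( sqrt(-g) g^{mu mu} d_mu zeta ), g diagonal
box = sum(sp.diff(sqrtg / gdiag[m] * sp.diff(zeta, coords[m]), coords[m])
          for m in range(4)) / sqrtg

alpha = z / cs
lap = sum(sp.diff(zeta, s, 2) for s in (x, y, w))
target = (sp.diff(zeta, eta, 2)
          + 2 * sp.diff(alpha, eta) / alpha * sp.diff(zeta, eta)
          - cs**2 * lap)

assert sp.simplify(box - target / (cs * z**2)) == 0
print("d'Alembertian of the effective metric reproduces eq.(zetaalphaA)")
\end{verbatim}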
The momentum dependency found in these specific systems was generalized in a model independent framework in \cite{Romano:2018frb,Romano:2020oov}, defining the MESS for an arbitrary multi-field system, including those in which entropy modes cannot be easily integrated out analytically, and for an arbitrary field space metric. \subsubsection{MESS in modified gravity} In modified gravity theories an effective entropy and anisotropy can arise in the comoving gauge perturbed field equations, leading to a MESS which depends on the specific gravity theory \cite{Vallejo-Pena:2019hgv}. An equivalent definition can be obtained from the Lagrangian describing the perturbations, using the effective action approach outlined in the previous section. \subsubsection{Effective sound speed and entropy perturbations} The SESS was introduced for the first time in a model independent way in \cite{Romano:2018frb} as $c_s=\delta P_c/\delta \rho_c$, but that definition is only valid for systems with entropy perturbations and no anisotropy; the correct generalization including anisotropy was given in this paper in eq.(\ref{sess}). It is easy to check that in the absence of anisotropy eq.(\ref{sess}) is in agreement with eq.(29) in \cite{Romano:2018frb}. From the perturbation equations we can in fact obtain $\mathcal{S}$, and the corresponding interaction Lagrangian $L_{int}$ \begin{eqnarray} \mathcal{S}&=&-\frac{1}{ 2 a^2 z_a^2 } \Big(\frac{a^3}{c_a^2 H} \Gamma\Big)'\,,\\ \L_{int}&=&z_a^2 L_{int}=z_a^2 \frac{a\Gamma}{2\epsilon H}\R'=\frac{a^3\Gamma}{2c^2_a H}\R' \,, \\ g&=&\frac{1}{z_a^2}\int z_a^2 a^2 \mathcal{S}\,d\eta\,=-\frac{a\Gamma}{2\epsilon H} \,,\\ c_s^2&=&c_a^2 \Big(1-\frac{g}{\R'}\Big)^{-1}=c_a^2 \Big(1+\frac{L_{int}}{\R'^2}\Big)^{-1}=c_a^2\Big(1+\frac{a\Gamma}{2\epsilon H \R'}\Big)^{-1} \,, \end{eqnarray} which shows explicitly that entropy and curvature perturbations are coupled already at second order through the term $L_{int} \propto \Gamma\R'$, explaining why a momentum dependent $c_s$ arises already from the quadratic action, while the calculation of the momentum dependency due to the anisotropy requires the cubic action. \subsection{New predictions and applications} \subsubsection{MESS in vanilla inflation due to self interaction} Even in the vanilla scenario higher order interaction terms are expected to induce a momentum dependency of the effective sound speed, associated with cubic and higher order terms. These effects are ignored in leading order calculations, but arise naturally at higher order. For example, for the scalar perturbations we have the interaction Lagrangian \cite{Maldacena:2002vr} \begin{align} \mathcal{L}^{(3)}_{int} = ~a^4 \left[ \frac{\epsilon^2}{a^2} \zeta'^2 \zeta + \frac{1}{a^2} \epsilon^2 (\partial_i \zeta)^2 \zeta \right. \left. - 2 \frac{\epsilon}{a} \zeta' \partial_i \zeta \partial_i \chi -\frac{1}{2} \frac{\epsilon^3}{a^2} \zeta'^2 \zeta + \frac{1}{2} \epsilon \zeta (\partial_i \partial_j \chi)^2 + \frac{1}{2} \frac{\epsilon}{a^2} \eta' \zeta' \zeta^2 \right], \label{S3} \end{align} where $\partial^2\chi=\zeta' \epsilon/a $. In momentum space we can compute the MESS \begin{equation} \tilde{c}_s^2(\eta,k)=\Big(1+\frac{L^{(3)}_{k,int}}{\zeta'^2_k}\Big)^{-1} \,, \end{equation} where $\mathcal{L}^{(3)}_{k,int}=z^2 L^{(3)}_{k,int}$ is the Fourier transform of $\mathcal{L}^{(3)}_{int}$. The MESS encodes the effects of self-interaction on $\zeta$, which are associated with loop corrections of the power spectrum \cite{Kristiano:2022maq}, which can become large when slow-roll is violated \cite{Romano:2016gop}.
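To illustrate how a given MESS feeds into the mode evolution, the following sketch integrates eq.(\ref{zetaalphak}) numerically for a toy, $k$-dependent $\tilde{c}_s(\eta,k)$ on a de Sitter-like background; the functional form of $\tilde{c}_s$ and all parameter values are invented for illustration, not derived from any model: \begin{verbatim}
# Hedged numerical sketch: integrate the MESS mode equation
#   zeta_k'' + 2 (alpha'/alpha) zeta_k' + c_s(eta,k)^2 k^2 zeta_k = 0
# with alpha = sqrt(eps) a / c_s and a(eta) = -1/(H eta), eta < 0.
# For constant eps, alpha'/alpha = a'/a - c_s'/c_s.
import numpy as np
from scipy.integrate import solve_ivp

H, k = 1.0, 10.0

def cs(eta):
    # toy MESS: small k-dependent correction (pure assumption)
    return 1.0 + 0.05 * np.tanh(k * eta)

def rhs(eta, y, d=1e-4):
    zeta, dzeta = y
    dlog_a = -1.0 / eta                      # a'/a for a = -1/(H eta)
    dlog_cs = (np.log(cs(eta + d)) - np.log(cs(eta - d))) / (2 * d)
    friction = 2.0 * (dlog_a - dlog_cs)
    return [dzeta, -friction * dzeta - (cs(eta) * k) ** 2 * zeta]

# oscillatory initial conditions deep inside the horizon (|k eta| >> 1)
eta0, eta1 = -50.0, -1e-2
y0 = [np.cos(k * eta0), -k * np.sin(k * eta0)]
sol = solve_ivp(rhs, (eta0, eta1), y0, rtol=1e-8, atol=1e-10)
print("zeta_k at eta = %.2g : %.5f" % (eta1, sol.y[0, -1]))
\end{verbatim} The mode freezes on super-horizon scales as expected, and the small $\tilde{c}_s$ correction shifts the freeze-out value relative to the vanilla $\tilde{c}_s=1$ case.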
\subsubsection{MESS in modified gravity} For a specific case of Horndeski's theory \cite{Vallejo-Pena:2019hgv} the MESS was computed in the comoving gauge, showing explicitly that it is momentum dependent, as expected. The same result can be extended to other modified gravity theories once the cubic and higher order actions have been computed. For example, for Horndeski's theory the cubic action was computed in \cite{Gao:2012ib}, including the coupling of tensor and scalar perturbations. The model independent approach we have derived does not require any definition of entropy perturbations, and it includes the effects of anisotropy, since they are both related to interaction terms. \subsubsection{f(R) theories} In the Einstein frame $f(R)$ theories are mathematically equivalent to general relativity with a minimally coupled scalar field. The anisotropic part of the EST of the scalar field arises only at second order in scalar perturbations, is proportional to the space derivatives $\delta\phi_{,i}\,\delta\phi_{,j}$ \cite{Antusch:2016con}, and is associated with cubic terms in the action \cite{Gao:2012ib}, coupling also tensor and scalar perturbations, which are not included in the quadratic actions. Cubic order calculations are expected to show the momentum dependency predicted by the MESS approach, which can be interpreted as the effect of the anisotropy of the EST, corresponding to cubic self-interaction terms in the Lagrangian, and of the coupling of scalar and tensor perturbations. \subsubsection{MESS in axion inflation} The coupling of scalar perturbations with a gauge field induces a momentum dependency of the MESS, which should arise already in the quadratic Lagrangian. For example the quadratic interaction Lagrangian can contain terms of the form \begin{equation} L^{(2)int}_\zeta \propto \delta A_{\mu}\partial^{\mu}\zeta \, , \, \delta A_{\mu}\partial^{\mu}h \end{equation} where $\delta A_{\mu}$ denotes perturbations of the gauge field $A_{\mu}$, while at higher order other terms can appear, such as \begin{equation} L^{(3)int}_\zeta \supset \,\partial_{\nu}(\delta A_{\mu})\partial^{\mu}\zeta\partial^{\nu}\zeta \quad , \quad \delta A_{\mu}\delta A^{\mu}\zeta \quad , \quad \delta F_{\mu\nu}\partial^{\mu}\zeta\partial^{\nu}\zeta \,, \end{equation} where $\delta F_{\mu\nu}$ denotes the perturbation of the Faraday tensor $F_{\mu\nu}=\partial_{\mu}A_{\nu}-\partial_{\nu}A_{\mu}$. The effects of these interaction terms are often ignored in the literature, but a priori there is no general argument to justify that they can always be neglected. These interaction terms give rise to effects similar to those associated with entropy in multi-field scalar systems, which can be dominant \cite{Romano:2020oov}, and are worth studying systematically. \section{Gravitational waves} \subsection{Space effective gravitational wave speed} Adopting an approach similar to the one used for scalar perturbations, we derive an effective action and propagation equation for gravitational waves. \subsubsection{Effective Lagrangian approach} The effective Lagrangian for gravitational waves can be obtained with a method analogous to the one used for scalar perturbations. In this section we will use a slightly different notation for the Lagrangian density \begin{equation} S=\int d\eta dx \mathcal{L}=\int d\eta dx \sqrt{-g} L=\int d\eta dx \,a^2{L} \quad , \quad \mathcal{L}=\ a^2 {L} \,.
\end{equation} The Lagrangian for the polarization mode $h_A$ in general relativity is \begin{equation} \mathcal{L}^{GR}_h=a^2\Big[h'^2_A-(\nabla h_A)^2\Big] \,, \end{equation} and adding interaction terms we have \begin{equation} \mathcal{L}_h=\mathcal{L}^{GR}_h+\mathcal{L}^{int}_h= a^2\Big[h'^2_A-(\nabla h_A)^2+L_h^{int}(h_A,\phi^i)\Big]=a^2\Big[h'^2_A\Big(1+\frac{L_h^{int}}{h'^2_A}\Big)-(\nabla h_A)^2\Big] \label{LhInt} \,, \end{equation} where $\phi^i$ denotes abstractly all the other fields the graviton is coupled to, including itself, or another polarization. We can then obtain the effective Lagrangian \cite{Romano:2022jeh} \begin{equation} \mathcal{L}^{eff}_h=\frac{a^2}{c_{T,A}^2}\Big[ h'^2_A-c_{T,A}^2 (\nabla h_A)^2\Big]=\alpha_A^2\Big[ h'^2_A-c_{T,A}^2 (\nabla h_A)^2\Big],\label{Lheff} \end{equation} by defining the space effective GW speed (SEGS) as \begin{equation} c^2_{T,A}(\eta,x^i)=\Big(1+\frac{L_h^{int}}{h'^2_A}\Big)^{-1} \label{cteff}\,. \end{equation} The effective Lagrangian for gravitational waves has the same structure as that for comoving curvature perturbations in eq.(\ref{Leffzeta}), and one can in fact be obtained from the other by the transformation \begin{equation} z \longleftrightarrow a \quad,\quad c_s \longleftrightarrow c_{T,A} \quad,\quad \zeta \longleftrightarrow h_A \,. \end{equation} The effective description in terms of the SESS and SEGS is the same, i.e. the effective action and equation are universal, and can be used for both scalar and tensor perturbations. It is convenient to organize the equations for scalar and tensor perturbations in a table, to show the universality of the effective approach. \begin{tabular}{ | m{1.5cm} | m{7cm}| m{8cm} | } \hline & Gravitational waves & Curvature perturbations \\ \hline Speed & $c^2_{T,A}(\eta,x^i)=\Big(1+\frac{L_h^{int}}{h'^2_A}\Big)^{-1} =\Big(1-\frac{g_A}{h'_A}\Big)^{-1}$ & $c_s^2(\eta,x^i)=\Big(1+\frac{L_{\zeta}^{int}}{\zeta'^2}\Big)^{-1}=c_a^2(\eta)\Big(1-\frac{g}{\R'}\Big)^{-1}$ \\ \hline $\mathcal{L}^{eff}$ , $\alpha$ & $\frac{a^2}{c_{T,A}^2}\Big[ h'^2_A-c_{T,A}^2 (\nabla h_A)^2\Big]$ \,,\, $\alpha_A^2=\frac{a^2}{c_{T,A}^2}$& $ \frac{z^2}{c_s^2}\Big[\zeta'^2- c^2_s(\nabla \zeta)^2\Big]$ \,,\, $ \alpha^2=\frac{\epsilon a^2}{c_s^2}=\frac{z^2}{c_s^2}$ \\ \hline Eq. & $h_A''+2 \frac{\alpha_A'}{\alpha_A} h'_A-c_{T,A}^2 \nabla^2 h_A=0$ & $\zeta''+2 \frac{\alpha'}{\alpha} \zeta'-c_s^2 \nabla^2 \zeta=0$ \\ \hline Eq. & $h_A''+2 \Big(\frac{a'}{a}-\frac{c'_{T,A}}{c_{T,A}}\Big) h'_A-c_{T,A}^2 \nabla^2 h_A=0$& $\zeta''+2 \Big(\frac{z'}{z}-\frac{c'_s}{ c_s}\Big) \zeta'-c^2_s \nabla^2 \zeta=0$ \\ \hline \end{tabular} \subsubsection{Effective stress-energy tensor approach} The perturbed field equations \cite{Kodama:1985bj} \begin{equation} h_A''+2 \mathcal{H} h_A'- \nabla^2 h_A= a^2 \Pi^{eff}_A\label{hEI} \,, \end{equation} can be manipulated to get the same model independent equation for gravitational waves that was obtained using the effective Lagrangian approach, but with the SEGS defined in terms of the EST as \cite{Romano:2022jeh} \begin{equation} c_{T,A}^2(\eta,x^i)=\Big(1-\frac{g_A}{h'_A}\Big)^{-1} \quad,\quad g_A=\frac{1}{a^2}\int a^4 \Pi^{eff}_A\,d\eta\,. \end{equation} \subsection{Momentum effective gravitational wave speed} Using a method similar to the one used in physical space, it is possible to derive a model independent effective action and equation in momentum space. The results are summarized in the table below, where we denote with $\tilde{h}$ the Fourier transform of $h$.
\vspace{1cm} \begin{tabular}{ | m{1.5cm} | m{7cm}| m{8cm} | } \hline & Gravitational waves & Curvature perturbations \\ \hline Speed & $\tilde{c}^2_{T,A}(\eta,k)=\Big(1+\frac{L_{\tilde{h}}^{int}}{\tilde{h}'^2_A}\Big)^{-1} =\Big(1-\frac{\tilde{g}_A}{\tilde{h}'_A}\Big)^{-1}$ & $\tilde{c}_s^2(\eta,k)=\Big(1+\frac{L_{\zeta_k}^{int}}{\zeta_k'^2}\Big)^{-1}=c_a(\eta)^2\Big(1-\frac{\tilde{g}}{\R_k'}\Big)^{-1}$ \\ \hline $\mathcal{L}^{eff}$ \,, $\alpha$ & $\frac{a^2}{\tilde{c}_{T,A}^2}\Big[ \tilde{h}'^2_A-k^2\tilde{c}_{T,A}^2 \tilde{h}_A^2\Big]$ \,,\, $\tilde{\alpha}_A^2=\frac{a^2}{\tilde{c}_{T,A}^2}$& $ \frac{z^2}{\tilde{c}_s^2}\Big[\zeta_k'^2-k^2 \tilde{c}_s^2 \zeta_k^2\Big]$ \,,\, $ \tilde{\alpha}^2=\frac{\epsilon a^2}{\tilde{c}_s^2}=\frac{z^2}{\tilde{c}_s^2}$ \\ \hline Eq. & $\tilde{h}_A''+2 \frac{\tilde{\alpha}_A'}{\tilde{\alpha}_A} \tilde{h}'_A+k^2 \tilde{c}_{T,A}^2 \tilde{h}_A=0$ & $\zeta_k''+2 \frac{\tilde{\alpha}'}{\tilde{\alpha}} \zeta_k'+k^2 \tilde{c}_s^2 \zeta_k=0$ \\ \hline Eq. & $\tilde{h}_A''+2 \Big(\frac{a'}{a}-\frac{\tilde{c}'_{T,A}}{\tilde{c}_{T,A}}\Big) \tilde{h}'_A+k^2 \tilde{c}_{T,A}^2 \tilde{h}_A=0$ & $\zeta_k''+2 \Big(\frac{z'}{z}-\frac{\tilde{c}'_s}{ \tilde{c}_s}\Big) \zeta_k'+k^2 \tilde{c}_s^2 \zeta_k=0$ \\ \hline \end{tabular} \subsection{Effective metric description} Similarly to curvature perturbations, the effective Lagrangian for gravitational waves can be written as \begin{equation} \mathcal{L}^{eff}_h=\sqrt{-g_A}(\partial_{\mu}h_A \partial^{\mu}h_A) \,, \end{equation} in terms of the effective metric \begin{equation} ds^2_A=a^2\Big[c_{T,A}d\eta^2-\frac{\delta_{ij}}{{c_{T,A}}}dx^idx^j\Big] \label{geffA} \,, \end{equation} for which the GW propagation equation can be written in terms of the covariant d'Alembert operator \begin{equation} \square h_A=\frac{1}{\sqrt{-g_A}}\partial_{\mu}(\sqrt{-g_A}\partial^{\mu}h_A)=0 \,. \end{equation} As for scalar perturbations, the effects of the interaction of the graviton can be described as propagation in a curved space whose metric depends on the SEGS. A similar result can be derived in momentum space. \subsection{Consistency with previous calculations} The model independent action and Lagrangian derived in the previous sections are consistent with, and extend, previous results based on quadratic action calculations. \subsubsection{Effective field theory of inflation} The quadratic order action for tensor modes obtained using the effective field theory of inflation \cite{Creminelli:2014wna} is in agreement with eq.(\ref{Lheff}), but the latter also includes higher order interaction terms neglected in the quadratic action, which induce the polarization and frequency dependency of the speed. \subsubsection{Horndeski's theory} The quadratic action for Horndeski's theory has been computed in \cite{Kobayashi:2011nu,DeFelice:2011bh}. These calculations are in the Jordan frame, while in the previous section we have used the Einstein frame. After performing the appropriate disformal transformation \cite{Romano:2022jeh} it can be shown that the tensor mode actions are in agreement at second order, while eq.(\ref{Lheff}) also includes the effects of higher order interaction terms \cite{Gao:2012ib}, associated with self-interaction and tensor-scalar coupling. \subsection{New predictions and applications} In this section we consider some examples of application of the effective approach derived previously.
\subsubsection{Scalar tensor feedback and effective speeds} As an example, let us consider the interaction term \begin{equation} b \,\mathcal{L}^{(3)}_{\R\R h}=b \, a^2 h_{ij}\partial^i\R \partial^j\R= b \, a^2 L^{(3)}_{\R\R h}=z^2 \frac{b}{\epsilon}\, \Big[h_+(\partial^{x}\R \partial^{x}\R - \partial^{y}\R \partial^{y}\R)+ 2 h_{\times}(\partial^{x}\R \partial^{y}\R)\Big]=z^2 \frac{b}{\epsilon} \Big[h_+ \pi_+ + h_{\times} \pi_{\times}\Big] \, \end{equation} which arises at cubic order in general relativity \cite{Maldacena:2002vr} and in modified gravity theories \cite{Gao:2012ib}. The Lagrange equations give \begin{eqnarray} h_A''+2 \mathcal{H} h_A'- \nabla^2 h_A&=& \, a^2 \,b \, \pi_A = a^2 \Pi_A \,, \\ \R''+2 \frac{z'}{z} \R'-c^2_s \nabla^2 \R&=& \,a^2 \, b \, h^{ij}\partial_i\partial_j \R=a^2 \mathcal{S}\,. \end{eqnarray} Contrary to the linear regime, tensor and scalar modes are coupled, and it is necessary to solve a system of coupled differential equations to compute the effects of the interactions. Using the field equations approach the effective sound speed for scalar and tensor modes could be computed, or we could also use the effective Lagrangian approach. Using the notation introduced in the previous sections we have \begin{eqnarray} \L_{\zeta}&=&\L^{(2)}_{\zeta}+\L^{int}_{\zeta}=z^2\Big[\zeta'^2- (\nabla \zeta)^2+\frac{b}{\epsilon}L^{(3)}_{\R\R h} \Big]\,,\\ \L_{h}&=&\L^{(2)}_{h}+\L^{int}_{h}=a^2\Big[h'^2- (\nabla h)^2+b \,L^{(3)}_{\R\R h} \Big] \,, \end{eqnarray} from which we obtain the SEGS and SESS \begin{eqnarray} c^2_{T,A}(\eta,x^i)&=&\Big(1+\frac{L_h^{int}}{h'^2_A}\Big)^{-1}=\Big(1+\frac{b \,L^{(3)}_{\R\R h}}{ \, h'^2_A}\Big)^{-1} \,,\\ c_s^2(\eta,x^i)&=&\Big(1+\frac{L_{\zeta}^{int}}{\zeta'^2}\Big)^{-1}=\Big(1+\frac{b \,L^{(3)}_{\R\R h}}{ \epsilon\zeta'^2}\Big)^{-1} \,. \end{eqnarray} In terms of the SESS and SEGS, the system of coupled differential equations with source terms reduces to three independent equations, without sources \begin{eqnarray} h_A''+2 \frac{\alpha_A'}{\alpha_A} h'_A-c_{T,A}^2 \nabla^2 h_A&=&0 \quad,\quad \alpha_A^2=\frac{a^2}{c^2_{T,A}} \,,\\ \zeta''+2 \frac{\alpha'}{\alpha} \zeta'-c_s^2 \nabla^2 \zeta&=&0 \quad,\quad \alpha^2=\frac{\epsilon a^2}{c_s^2}=\frac{z^2}{c_s^2} \,. \end{eqnarray} Note that \begin{equation} \L_{\zeta}+\L_h+\L^{int}\neq \L^{eff}_{\zeta}+\L^{eff}_h \, \end{equation} because the interaction Lagrangian $\L^{int}$ enters in the definition of both effective speeds, analogously to the fact that it produces different source terms $\mathcal{S}_{\R}$ and $\Pi_A$ in the equations of motion. The interaction induces a modification of both speeds, since it produces a source term in both equations, while in the literature often only the effects on gravitational waves are considered, ignoring those on scalar perturbations, and their possible back-reaction on tensor modes. \section{Change of frame and physical relevance of the effective Planck mass} All the results derived in the previous sections were in the Einstein frame, both using the effective Lagrangian and the EST approach. It can be convenient to find the conformal transformation mapping to the Jordan frame, in which many other calculations have been performed, especially in the context of modified gravity \cite{Kobayashi:2011nu,DeFelice:2011bh}.
\subsection{Curvature and tensor perturbations} Let us start by writing the scalar and tensor quadratic Lagrangians in the Jordan frame \cite{Tsujikawa:2014uza} \begin{eqnarray} \L^J&=&a^2 q_s\Big[\zeta'^2- c_s^2(\nabla \zeta)^2\Big]+ a^2 q_t\Big[h'^2- c^2_T(\nabla h)^2 \Big] \,, \label{eqJJ} \end{eqnarray} where we denote with $J$ and $E$ quantities in the Jordan and Einstein frames respectively. The Lagrangian of perturbations in a new frame obtained via a disformal transformation such that $\tilde{a}=\Omega a$ is of the form \cite{Tsujikawa:2014uza} \begin{equation} \tilde{\L}^J=\tilde{a}^2 \tilde{q}_s\Big[\tilde{\zeta}'^2- \tilde{c}_s^2(\nabla \tilde{\zeta})^2\Big]+ \tilde{a}^2 \tilde{q}_t\Big[\tilde{h}'^2- \tilde{c}^2_T(\nabla \tilde{h})^2 \Big] \,, \end{equation} where we denote with a tilde quantities in the new frame, and we specify only the conformal part of the disformal transformation, because this is the part relevant for the coefficients of the Lagrangians of curvature and tensor modes. Since scalar and tensor perturbations are invariant under disformal transformations \cite{Tsujikawa:2014uza}, i.e. $\tilde{\R}=\R,\tilde{h}=h$, and $\L=\tilde{\L}$ by definition, by comparing the coefficients \cite{Tsujikawa:2014uza} in $\L$ and $\tilde{\L}$ we obtain the following transformations \begin{equation} \tilde{a}=\Omega a\quad \rightarrow \quad \tilde{q_s}=\Omega^{-2} q_s \,,\,\tilde{q_t}=\Omega^{-2} q_t \,,\, c_s=\tilde{c}_s \,,\, c_T=\tilde{c}_T \,. \label{LJOmega} \end{equation} Note that these relations are consistent with, but different from, those derived in \cite{Tsujikawa:2014uza}, due to the use of conformal time. The corresponding effective quadratic Lagrangian in the Einstein frame is \begin{eqnarray} \L^{eff}_E&=&\tilde{\epsilon} \tilde{a}^2 \frac{1}{c^2_s}\Big[\zeta'^2- c_s^2(\nabla \zeta)^2\Big]+ \tilde{a}^2 \frac{1}{c^2_T}\Big[h'^2- c^2_T (\nabla h)^2 \Big]=\L^{eff}_{\zeta}+\L^{eff}_h \,,\label{LEhzeta} \end{eqnarray} where we use the conformal invariance of $c_s,c_T,\R,h$, and we denote with $\tilde{\epsilon}$ the slow-roll parameter in the Einstein frame, to distinguish it from that in the Jordan frame. We use this notation only in this section, for consistency with the notation for conformal transformations starting from the Jordan frame, while in other sections $\epsilon$ denotes the slow-roll parameter in the Einstein frame. Note that the Lagrangian above is consistent with the scalar perturbation action for $K$-inflation in the Einstein frame \cite{Garriga:1999vw}, and with the tensor action in the effective field theory of inflation \cite{Creminelli:2014wna}. By comparing eq.(\ref{LEhzeta}) and eq.(\ref{LJOmega}) we find that the Einstein and Jordan frames are related by \begin{eqnarray} \tilde{q}_t&=&\frac{1}{c^2_T} \,, \\ \tilde{q}_s&=&\frac{\tilde{\epsilon}}{c^2_s} \,, \end{eqnarray} which implies \begin{eqnarray} \Omega&=&c_T\sqrt{q_t} \,,\\ \tilde{\epsilon}&=&\frac{q_s c^2_s}{q_t c^2_T} \,, \label{JET} \end{eqnarray} in agreement with \cite{Romano:2022jeh}. The parameter $q_t$ is sometimes denoted as $M^2_*$, and interpreted as an effective Planck mass \cite{Lagos:2019kds}. The above equations show that in the Einstein frame the effective Planck mass is not independent of $c_T$, since $\sqrt{\tilde{q}_t}=\tilde{M}_*=M_* \Omega^{-1}=1/c_T$, and its effects on scalar perturbations are encoded in the Einstein frame slow-roll parameter $\tilde{\epsilon}$.
This is not surprising, since in the Einstein frame the only physically relevant quantities are $c_s,c_T,\tilde{\epsilon}$, i.e. the so called Jordan frame effective Planck mass $M_*$ emerges only in the definition of the conformal transformation mapping to the Jordan frame, and as such does not really play any physical role, due to the invariance of scalar and tensor perturbations under disformal transformations \cite{Tsujikawa:2014uza}. As explained earlier, the time dependency of $c_s(\eta)$ and $c_T(\eta)$ is related to self-interaction terms of the form $(1/c^2_s(\eta)-1)\R'^2$ and \cite{Creminelli:2014wna} $h'^2(1/c^2_T(\eta)-1)$, but in this case, since $\R$ and $h$ are not coupled to each other, but only self-coupled, we have \begin{eqnarray} \L^E=\L^E_{\zeta}+\L^E_h+\L^E_{int}=\tilde{\epsilon} \tilde{a}^2 \Big[\zeta'^2- (\nabla \zeta)^2\Big]&+& \tilde{a}^2 \Big[h'^2- (\nabla h)^2 \Big]+\L_h^{int}+\L_{\R}^{int}=\L^{eff}_{\zeta}+\L^{eff}_h \nonumber\,,\\ \L_h^{int}=\tilde{a}^2 h'^2\Big[\frac{1}{c^2_T(\eta)}-1\Big] &,& \L_{\R}^{int}=\tilde{\epsilon}\tilde{a}^2 \Big[\frac{1}{c^2_s(\eta)}-1\Big]\R'^2 \end{eqnarray} which was used to find the relation between the quantities in the different frames \begin{eqnarray} \L^J=\L^E&=&\L^{eff}_{\zeta}+\L^{eff}_h\,. \end{eqnarray} The first equality in the above equation just comes from the fact that Lagrangian densities in different frames are equal, because they are obtained by simply re-writing the same object in different ways. The conformal transformation defined in eq.(\ref{JET}) can also be applied to the Lagrangians including higher order interaction terms \cite{Romano:2022jeh}, and is consistent with the invariance of the coefficients of the perturbation equations expected from the invariance of $\R,h$, i.e. the invariance of the solutions implies the invariance of the equations and of the Lagrangians. In other words the equations and Lagrangians in the Jordan frame look different because of the conformal transformation, but the coefficients of the equations and of the Lagrangians as functions of space and time are the same, just written in a different way in terms of $\Omega$. In the Einstein frame the real number of independent degrees of freedom is more transparent. Note that we are assuming that the coupling to gravity is properly transformed from one frame to the other. This implies that theories in which matter fields are minimally coupled to gravity via the Einstein frame metric or via the Jordan frame metric are different, i.e. they have different Lagrangians, which cannot be simply related by a conformal transformation. \subsection{Effective gravitational coupling} The perturbed Einstein equations give the useful relation \begin{eqnarray} \frac{k^2}{\tilde{a}^2} \tilde{\psi}_B=\frac{1}{2} \delta \tilde{\rho}_c \label{PoissonE} \end{eqnarray} where $\tilde{\psi}_B$ is one of the Bardeen potentials, $\delta \tilde{\rho}_c$ is the comoving energy density perturbation \cite{Kodama:1985bj}, and the tilde denotes quantities in the Einstein frame.
The above equation is also valid for a modified gravity theory, once it has been transformed to the Einstein frame, in which the full Lagrangian takes the Hilbert form \begin{eqnarray} \tilde{\L}=\sqrt{-\tilde{g}}\mathcal{\tilde{R}}+\tilde{\L}_{tot}&=&\sqrt{-\tilde{g}}\mathcal{\tilde{R}}+\tilde{\L}_m+\tilde{\L}_{\phi}+\tilde{\L}_{m \phi} \,, \\ \frac{k^2}{\tilde{a}^2} \tilde{\psi}_B=\frac{1}{2} \delta \tilde{\rho}^{tot}_c &,& \delta \tilde{\rho}^{tot}_c= \delta \tilde{\rho}^{m}_c+\delta \tilde{\rho}^{\phi}_c+\delta \tilde{\rho}^{m \phi}_c \,, \label{PoissonTot} \end{eqnarray} where $\tilde{\L}_{tot}$ contains the terms associated with matter and with the modification of gravity in a single object, and $\tilde{\L}_{\phi},\tilde{\L}_m,\tilde{\L}_{m \phi}$ are respectively the Einstein frame Lagrangians related to the gravity modification, to matter, and to the non-minimal coupling of gravity to matter; a similar notation is adopted for the comoving energy density. In the Jordan frame the Poisson equation takes the form \cite{DeFelice:2011hq} \begin{eqnarray} \frac{k^2}{a^2} \psi_B=\frac{1}{2} G_J \delta \rho_c \,. \label{PoissonJ} \end{eqnarray} The difference between the gravitational couplings in the two frames is expected, since in our units in the Einstein frame we have by definition $8 \pi G^{tot}_E=1$, and the effects of the modification of gravity are encoded in the total effective energy density $\delta \tilde{\rho}^{tot}_c$, which includes contributions from $\tilde{\L}_{\phi},\tilde{\L}_m,\tilde{\L}_{m \phi}$, not just from $\tilde{\L}_m$. The relation to the gravitational coupling $G^{m}_E$ for the matter comoving energy density $\delta \tilde{\rho}_c^{m}$ will be investigated in more detail in a future work, but we can anticipate that by appropriately manipulating eq.(\ref{PoissonTot}) we can write an equation of the form \begin{eqnarray} \frac{k^2}{\tilde{a}^2} \tilde{\psi}_B=\frac{1}{2} G^{m}_E \delta \tilde{\rho}^{m}_c \label{PoissonEm} \,. \end{eqnarray} Note that, contrary to curvature and tensor perturbations, the Bardeen potential and the comoving density perturbations are not invariant under disformal transformations \cite{Ghosh:2022ppn}, and for this reason we use a tilde to distinguish between them. \section{Relation to other effective approaches} The effective approach formulated in this paper is completely general, and as such includes all the effects of interaction at any order, in a single effective quantity, and for an arbitrary number of fields. We can compare it with previous results to see how it includes and extends them. \subsection{Effective field theory of inflation} The effective approach we have derived can describe the evolution of curvature perturbations not only for single field models, but also for multi-field models \cite{Romano:2020oov}, while the EFT of inflation \cite{Cheung:2007st} assumes only one scalar degree of freedom, i.e. entropy perturbations are ignored, and no general effective action for curvature perturbations is derived for multi-field systems \cite{Senatore:2010wk}. The MESS approach allows one to compute the effects of entropy and anisotropy on curvature perturbations for a generic system, including any number of fields, and naturally predicts the momentum dependence of the effective sound speed of curvature perturbations. \subsection{Effective field theory of dark energy} The effective field theory of dark energy \cite{Gubitosi:2012hu} applies the same symmetry breaking idea of the EFT of inflation to dark energy, but in the Jordan frame.
The action is expanded to quadratic order, and for this reason misses the frequency and polarization dependence of the effective speed, which arises naturally in the MESS and SEGS approach due to the higher order interaction terms. The general relation between the Jordan and Einstein frames was derived in the previous section. \section{Quantum field theory implications} The effective Lagrangians we have derived are based on classical calculations, but they can be related \cite{Bonifacio:2022vwa} to the wavefunction of the scalar and tensor fields by the path integral \begin{equation} \Psi[\varphi] \ = \hspace{-0.4cm} \int\limits_{\substack{\phi(t) \,=\,\varphi\\ \hspace{-0.45cm}\phi(-\infty)\,=\,0}} \hspace{-0.5cm} \raisebox{-.05cm}{ ${\cal D} \phi\, e^{iS[\phi]}\,,$ } \end{equation} where $\phi$ is a generic field, which in our case could be $\R$ or $h$, and $S$ denotes the action. At tree level the path integral can be approximated by the action evaluated on the classical solution, which is the way in which we define the effective Lagrangian. From the wavefunction we can compute the equal-time correlators as \begin{equation} \langle\phi_1 \cdots \phi_N\rangle = \int{\cal D}\phi\, \phi_1\cdots\phi_N\left\lvert\Psi[\phi]\right\rvert^2\,. \end{equation} This method should give the same results obtained by using canonical quantization in the in-in formalism \cite{Maldacena:2002vr}. Following this method we can for example compute corrections to the spectrum, arising from higher order interaction terms in the Lagrangian, in terms of the effective speed we have defined previously. More details about this approach will be given in a separate work. \section{Conclusions} We have derived a set of universal, model independent effective equations and Lagrangians which can describe the evolution of scalar and tensor perturbations of any system whose field equations can be written in an Einstein-like form. This includes for example multi-field systems, or modified gravity, once they have been transformed to the Einstein frame. Given its generality, this effective description is particularly suitable for model independent phenomenological analyses of observational data. This approach naturally predicts that the speed of gravitational waves can depend on frequency and polarization, due to the interactions of the graviton with itself or other fields. This prediction allows us to use gravitational wave observations to investigate the elusive nature of dark matter and dark energy. The equations and Lagrangians for scalar and tensor perturbations have the same universal structure, and the effects of the interaction can be modeled at any order in perturbations by a single effective quantity, playing the role of an effective propagation speed. This is particularly useful since it allows different models to be compared in terms of the two quantities $c_s$ and $c_T$. Combining different sets of observational data, such as the cosmic microwave background radiation and gravitational waves, it will be possible to constrain $c_s$ and $c_{T,A}$, to determine possible deviations from general relativity and vanilla inflation. If a deviation is found, theoretical research can be focused on those models able to predict the $c_s$ and $c_T$ supported by observations.
To test specific models it will be important to perform higher order perturbation calculations for different models, in order to compute the effects which are not included in the quadratic action, and which in the EST approach are treated as effective model independent phenomenological quantities. \begin{acknowledgments} I thank Misao Sasaki, Tessa Baker, Sergio Vallejo, Riccardo Sturani, Rogerio Rosenfeld, Nicola Tamanini and Suvodip Mukherjee for interesting discussions. I thank the ICTP-SAIFR for the kind hospitality during the preparation of this paper. \end{acknowledgments}
\section{Introduction} Representing the meanings of words using low-dimensional vector embeddings has become a standard technique in NLP. \emph{Static word embeddings}~\cite{mikolov2013distributed,pennington2014glove} represent words at the \emph{form} level by assigning a single vector for all occurrences of a word irrespective of its \emph{senses}. However, representing ambiguous words such as \emph{bass}, which could mean either a \emph{musical instrument} or a type of \emph{fish}, using a single embedding is problematic. To address this problem, \emph{sense-specific static word embedding} methods~\cite{reisinger2010multi,neelakantan2014efficient,huang2012improving} assign multiple embeddings to a single polysemous word corresponding to its multiple senses. However, these embeddings are context-insensitive and we must resort to different heuristics, such as selecting the sense embedding of the ambiguous word that is most similar to the context, to determine which embedding should be selected to represent the word. \begin{figure}[t] \centering \includegraphics[width=0.5\textwidth]{outline.png} \caption{Outline of CDES. Given a sense-tagged sentence $t$, we compute a sense embedding for the ambiguous word \emph{bank} by multiplying its static word embedding, $\vec{g}(\textit{bank})$, by a sense-specific projection matrix, $\mat{A}_{\textit{bank\%00}}$, corresponding to the correct sense of the word. Projection matrices are learnt by minimising the squared $\ell_2$ loss between the linearly transformed (via a matrix $\mat{W}$) contextualised embedding, $\vec{c}(t, \textit{bank})$, and of the (nonlinearly transformed via function $f$) sense embedding of \emph{bank}.} \label{fig:outline} \end{figure} On the other hand, \emph{contextualised word embeddings} generated from NLMs~\cite{Elmo,devlin2019bert,liu2019roberta} represent a word in a given context by an embedding that considers both the meaning of the word itself as well as its context. Different types of information such as word sense, dependency, and numeracy have been shown to be encoded in contextualised word embeddings, providing rich, context-sensitive input representations for numerous downstream NLP applications. More recently, \newcite{loureiro2019language} and \newcite{scarlini2020sensembert} showed that contextualised embeddings such as BERT~\cite{devlin2019bert} and ELMo~\cite{Elmo} can be used to create sense embeddings by means of external semantic networks, such as WordNet~\cite{miller1998wordnet} and BabelNet~\cite{navigli2010babelnet}. Moreover, \newcite{levine-etal-2020-sensebert} showed that BERT can be fine-tuned using WordNet's supersenses to learn contextualised sense embeddings. Inspired by these prior successes, we ask and affirmatively answer the question -- \emph{can we extract sense-related information from contextualised word embeddings to create sense-specific versions of (pretrained) sense-agnostic static embeddings?} To this end, we propose Context Derived Embeddings of Senses (\textbf{CDES}), a method to extract sense-related information encoded in contextualised word embeddings and inject it into pretrained sense-agnostic static word embeddings to create sense-specific static embeddings. Given a contextualised embedding, a static word embedding and a sense-annotated corpus, CDES learns sense-specific projection matrices that can be used to predict the sense embeddings of words from their word embeddings.
Following the distributional hypothesis~\cite{DS}, we require that the predicted sense embedding of a word align (possibly nonlinearly) with the meaning of the context of the word, represented using a contextualised embedding, as outlined in Figure~\ref{fig:outline}. At a more conceptual level, CDES can be seen as using contextualised language models as a proxy for extracting information relevant to a particular task, without learning it directly from text corpora. In particular, prior work probing language models has shown that rich information about languages is compactly and accurately encoded within contextualised representations produced by NLMs~\cite{klafka-ettinger-2020-spying}. Moreover, CDES can also be seen as an instance of \emph{model distillation}~\cite{Furlanello:2018aa}, where a complex teacher model (i.e. a contextualised word embedding) is used to train a simpler student model (i.e. a sense-sensitive static embedding). CDES offers several advantages for learning sense-specific static embeddings. CDES is computationally relatively lightweight because it uses \emph{pretrained} static embeddings as well as contextualised embeddings from a \emph{pretrained} NLM, and does not require training these resources from scratch. CDES static sense embeddings can be precomputed because they do not depend on the context. Therefore, CDES embeddings are attractive for NLP applications that must run on limited hardware resources. Moreover, because contextualised models rely on subtokenisation methods, such as Byte Pair Encoding (BPE), to limit their vocabulary sizes, one must post-process subtoken embeddings (e.g. by mean pooling) to create word embeddings from contextualised embeddings, whereas static embeddings represent words directly. To increase the coverage of sense embeddings, in addition to the sense-related information extracted from contextualised embeddings, CDES incorporates contextual information from external corpora and knowledge bases. We evaluate CDES on Word Sense Disambiguation~\cite[WSD]{navigli2009word} (Section~\ref{sec:WSD}) and Words in Context~\cite[WiC]{Pilehvar:2019} (Section~\ref{sec:wic}) tasks. In both tasks, CDES learns accurate sense embeddings and outperforms many existing static sense embeddings. In particular, on the WSD framework of \newcite{raganato2017word}, CDES reports the best performance in 4 out of 6 benchmarks, and on WiC it reports results competitive with the current state-of-the-art without any fine-tuning on task data. \section{Context-Derived Embedding of Senses} \label{sec:method} Given (a) pretrained static word embeddings, (b) contextualised word embeddings from a pretrained NLM, and (c) a sense-annotated corpus, CDES learns a sense-specific version of (a), representing each sense of a word by a different vector. To describe CDES in detail, let us denote the sense-agnostic static embedding of a word $u \in \cV$ in a vocabulary $\cV$ by $\vec{g}(u) \in \R^{p}$. Moreover, let us denote by $c$ the contextualised embedding model, from which we can obtain a context-sensitive representation $\vec{c}(u, t) \in \R^{q}$ corresponding to $u$ in some context $t \in \cC(u)$. Here, $\cC(u)$ is the set of contexts in which $u$ occurs. An ambiguous word $u$ is likely to take different senses in different contexts $t$, and our goal is to learn a sense-specific embedding of $u$ that captures the different senses of $u$. Let us denote by $\cS$ the set of word senses taken by all words in $\cV$. An ambiguous word $u$ will belong to a subset $\cS(u)$ of senses in $\cS$.
Let us denote the sense-specific embedding of $u$ corresponding to the $i$-th sense $s_{i} \in \cS(u)$ by $\vec{s}_{i}(u) \in \R^{p}$. We model the process of creating sense-specific embeddings from static embeddings as a projection learning task, where we multiply the static embedding, $\vec{g}(u)$, by a sense-specific projection matrix, $\mat{A}_{i}$, to produce $\vec{s}_{i}(u)$ as in \eqref{eq:proj}. \begin{align} \label{eq:proj} \vec{s}_{i}(u) = \mat{A}_{i} \vec{g}(u) \end{align} Here, \eqref{eq:proj} decouples a sense embedding into a sense-agnostic static lexical semantic component given by $\vec{g}(u)$ and a word-independent sense-specific component $\mat{A}_{i}$, enabling efficient sense-specific embedding learning using pretrained embeddings. The projection matrices can be seen as linear operators that produce different sense-specific embeddings from the same static word (lemma) embedding, corresponding to the different senses of the lemma. On the other hand, $\vec{c}(u,t)$ encodes both sense-related information for $u$ as well as information not related to $u$, such as the grammatical gender or number in the context $t$. Therefore, we apply a linear filter, parameterised by a matrix $\mat{W} \in \R^{p \times q}$, to extract the sense-related information from $\vec{c}(u,t)$. Given a sense-tagged corpus, we jointly learn $\mat{W}$ and the $\mat{A}_{i}$s by minimising the objective given by \eqref{eq:loss}. \par\nobreak {\small \begin{align} \label{eq:loss} L(\mat{W}, \{\mat{A}_i\}_{i=1}^{|\cS|}) = \sum_{\substack{u \in \cV \\ t \in \cC(u) \\ s_i \in \cS(u)}} \norm{\mat{W} \vec{c}(u,t) - f(\mat{A}_i\vec{g}(u))}_2^2 \end{align} }% Here, $f$ is an elementwise nonlinear function that enables us to consider nonlinear associations between contextualised and static word embeddings. In our experiments, we consider linear, ReLU and GELU activations as $f$. After training, we can compute the sense embeddings $\vec{s}_{i}(u)$ using \eqref{eq:proj} with the pretrained static word embeddings $\vec{g}(u)$. Eq.~\eqref{eq:loss} can be seen as aligning the contextualised and static word embeddings under a nonlinear transformation. The only learnable parameters in our proposed method are $\mat{W}$ and the sense-specific projections $\mat{A}_{1}, \ldots, \mat{A}_{|\cS|}$. In particular, we \emph{do not} require re-training or fine-tuning of the static or contextualised embeddings, and CDES can be seen as a post-processing method applied to pretrained embeddings, similar to retrofitting~\cite{shi-etal-2019-retrofitting}. We limit the sense-specific projection matrices to diagonal matrices in our experiments because, in our preliminary investigations, we did not find any significant advantage in using full matrices that would justify the extra storage. Moreover, a diagonal matrix can be compactly represented by storing only its diagonal elements as a vector, which reduces the number of parameters to learn (and thus the risk of overfitting) and speeds up matrix-vector multiplications. \subsection{Context Aggregation} \label{sec:context} An important limitation of the above-mentioned setting is that it requires sense-annotated corpora. Manually annotating word senses in large text corpora is expensive and time consuming. Moreover, such resources might not be available for low-resource languages. Even if sense-annotated corpora are available for a particular language, they might not cover all the different senses of all the words in that language, resulting in inadequate sense coverage.
For example, SemCor~\cite{SemCor}, one of the largest manually-annotated corpora for English word senses, containing more than 220K words tagged with 25K distinct WordNet meanings, covers only 15\% of all synsets in WordNet. To address this sense-coverage problem, we follow prior proposals~\cite{scarlini2020more} and extract additional contexts for a word from (a) \textbf{the dictionary definitions of synsets}, and (b) \textbf{an external corpus}. \paragraph{Gloss-based Sense Embeddings:} To create sense embeddings from dictionary definitions, we use the glosses of synsets in the WordNet. Given a word $u$, we create a gloss-based sense embedding, $\vec{\psi}_i(u) \in \R^q$, represented by the sentence embedding, $\vec{c}(t_i)$, computed from the gloss $t_i$ corresponding to the synset $s_i$ of $u$. Here, $\vec{c}(t_i)$ is computed by averaging the contextualised embeddings of the tokens in the gloss $t_i$, as given in \eqref{eq:context}. \begin{align} \label{eq:context} \vec{c}(t_i) = \avg_{w \in t_i}\vec{c}(w,t_i) \end{align} Here, $\mathrm{avg}$ denotes mean pooling over the tokens $w$ in $t_i$. Following \newcite{loureiro2019language} and \newcite{scarlini2020more}, in our experiments, we use BERT as the contextualised embedding model and use the sum of the final four layers as token embeddings. \paragraph{Corpus-based Sense Embeddings:} To extract contexts from an external corpus for a given word $u$, we retrieve all sentences $t \in \cC(u)$ from the corpus in which $u$ occurs. We then cluster the extracted sentences (represented by the sentence embeddings computed using \eqref{eq:context}) using the $k$-means algorithm. We assume that each cluster contains similar sentences and that $u$ is used in the same sense in all sentences in a cluster. We use UKB\footnote{\url{http://ixa2.si.ehu.eus/ukb/}}~\cite{agirre-etal-2014-random}, a knowledge-based approach to WSD that uses the Personalised PageRank algorithm~\cite{Haveliwala_2002}, to disambiguate the clusters. To increase the coverage of senses represented by the clusters, we consider collocations of $u$ available in SyntagNet~\cite{maru-etal-2019-syntagnet}\footnote{\url{http://syntagnet.org/}}, following \newcite{scarlini2020more}. Specifically, for each word $u$, we find words $v$ that form a collocation with $u$ in SyntagNet, and extract sentences $t$ that contain both $u$ and $v$ within a co-occurrence window. The synset id $s_i$ assigned to the $(u,v)$ pair in SyntagNet is used as the sense id for all sentences extracted for $u$. Finally, we compute a corpus-based sense embedding $\vec{\phi}_i(u) \in \R^q$ as the cluster centroid, where sentence embeddings are computed using \eqref{eq:context}. \subsection{Sense Embedding and Disambiguation} \label{sec:sense} The final CDES static sense embedding, $\mathbf{cdes}_i(u) \in \R^{p+2q}$, of the $i$-th sense of $u$ is computed as the concatenation of $\vec{s}_i(u)$ (given by \eqref{eq:proj}), the gloss-based sense embedding $\vec{\psi}_i(u)$ and the corpus-based sense embedding $\vec{\phi}_i(u)$, as given by \eqref{eq:cdes}, where $\oplus$ denotes vector concatenation.
\begin{align} \label{eq:cdes} \mathbf{cdes}_i(u) = \vec{s}_i(u) \oplus \vec{\psi}_i(u) \oplus \vec{\phi}_i(u) \end{align} In order to disambiguate a word $u$ in a given context $t'$, we first compute a contextualised embedding $\vec{\zeta}(u,t') \in \R^{p+2q}$ by concatenating three vectors as given by \eqref{eq:wsd}, where the contextualised embedding $\vec{c}(u,t')$ is repeated so that it is compared against both the gloss-based and the corpus-based components of \eqref{eq:cdes}. \begin{align} \label{eq:wsd} \vec{\zeta}(u,t') = \vec{g}(u) \oplus \vec{c}(u,t') \oplus \vec{c}(u,t') \end{align} We then compute the cosine similarity between $\vec{\zeta}(u,t')$ and $\mathbf{cdes}_i(u)$ for each sense $s_i$ of $u$. We limit the candidate senses based on the lemma and part-of-speech of $u$ in $t'$, and select the most similar (1-NN) sense of $u$ as its disambiguated sense in context $t'$. \section{Experiments} \subsection{Experimental Setup} In our experiments, we use the pretrained GloVe\footnote{\url{nlp.stanford.edu/projects/glove/}} embeddings (Common Crawl with 840B tokens and a 2.2M vocabulary) as the static word embeddings $\vec{g}(u)$ with $p = 300$. We use pretrained BERT (\texttt{bert-large-cased}\footnote{\url{https://bit.ly/33Nsmou}}) as the contextualised embedding model, $\vec{c}(u,t)$, with $q = 1024$. Following prior work~\cite{luo2018leveraging,luo2018incorporating,loureiro2019language,scarlini2020more}, we use sense annotations from SemCor $3.0$~\cite{SemCor} as the sense-tagged corpus, which is the largest corpus annotated with WordNet sense ids. As the external corpus for extracting contexts, as described in Section~\ref{sec:context}, we use the English Wikipedia. The number of clusters in $k$-means is set to the number of distinct senses of the lexeme according to the WordNet. The number of words given to UKB is set to 5 and the number of sentences extracted from Wikipedia per lemma is set to 150, following \newcite{scarlini2020more}. The co-occurrence window size for collocations extracted from SyntagNet is set to 3, following \newcite{maru-etal-2019-syntagnet}. We evaluate the learnt sense embeddings in two downstream tasks: WSD (Section~\ref{sec:WSD}) and WiC (Section~\ref{sec:wic}). The statistics of SemCor and the all-words English WSD and WiC datasets are shown in Table~\ref{tbl:statistics}. \begin{table}[t] \centering \resizebox{0.48\textwidth}{!}{ \begin{tabular}{lccccc} \toprule Dataset &Total &Nouns &Verbs &Adj &Adv \\ \midrule SemCor &226,036 &87,002 &88,334 &31,753 &18,947 \\ \midrule \textbf{WSD}& & & & & \\ SE2 &2,282 &1,066 &517 &445 &254 \\ SE3 &1,850 &900 &588 &350 &12 \\ SE07 &455 &159 &296 &- &- \\ SE13 &1,644 &1,644 &- &- &- \\ SE15 &1,022 &531 &251 &160 &80 \\ ALL &7,253 &4,300 &1,652 &955 &346 \\ \midrule \midrule \textbf{WiC} &Instances &Nouns &Verbs &\multicolumn{2}{c}{Unique Words} \\ \midrule Training &5,428 &2,660 &2,768 &\multicolumn{2}{c}{1,256} \\ Dev &638 &396 &242 &\multicolumn{2}{c}{599} \\ Test &1,400 &826 &574 &\multicolumn{2}{c}{1,184} \\ \bottomrule \end{tabular}} \caption{The statistics of the training and evaluation datasets. SemCor is used for training. SemEval (SE07, SE13, SE15) and Senseval (SE2, SE3) datasets are used for the WSD task, whereas the WiC dataset is used for the sense discrimination task.} \label{tbl:statistics} \vspace{-5mm} \end{table} To project contextualised and static word embeddings to a common space, we set $\mat{W} \in \R^{300 \times 1024}$. To reduce the memory footprint and the number of trainable parameters (and thereby overfitting), we constrain the sense-specific matrices $\mat{A}_i \in \R^{300 \times 300}$ to be diagonal.
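To make the training procedure concrete, the following is a minimal PyTorch-style sketch of the objective in \eqref{eq:loss} with diagonal $\mat{A}_i$ (each stored as a vector) and the filter $\mat{W}$; this is an illustrative sketch rather than our released implementation, and names such as \texttt{cdes\_loss} and the minibatch variables are assumptions.
\begin{verbatim}
# Sketch of Eq. (2): ||W c(u,t) - f(A_i g(u))||^2 with diagonal A_i.
# static_emb: (B, 300) rows g(u); ctx_emb: (B, 1024) rows c(u,t);
# sense_ids: (B,) integer sense indices. All inputs are precomputed.
import torch
import torch.nn.functional as F

p, q, n_senses = 300, 1024, 10000                # toy sizes
W = torch.nn.Parameter(torch.rand(p, q))         # filter into static space
A = torch.nn.Parameter(torch.rand(n_senses, p))  # one diagonal per sense

def cdes_loss(static_emb, ctx_emb, sense_ids):
    pred = F.gelu(A[sense_ids] * static_emb)  # f(A_i g(u)), elementwise
    target = ctx_emb @ W.t()                  # W c(u,t)
    return ((target - pred) ** 2).sum(dim=1).mean()

optimiser = torch.optim.Adam([W, A], lr=1e-4)
# for g_b, c_b, s_b in minibatches:  # batch size 64 (see below)
#     optimiser.zero_grad(); cdes_loss(g_b, c_b, s_b).backward()
#     optimiser.step()
\end{verbatim}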
We initialise all elements of $\mat{W}$ and the $\mat{A}_i$s uniformly at random in $[0,1]$. We use Adam as the optimiser and set the minibatch size to $64$, with an initial learning rate of 1E-4. All hyperparameter values were tuned using a randomly selected subset of the training data set aside as a validation dataset. The t-SNE visualisations in the paper are produced with \texttt{sklearn.manifold.TSNE} using n\_components=2, init=\emph{pca}, perplexity=3, n\_iter=1500 and metric=\emph{cosine}. All experiments were conducted on a machine with a single Titan V GPU (12 GB RAM), an Intel Xeon 2.60 GHz CPU (16 cores) and 64 GB of RAM. Overall, training time is less than 3 days on this machine. \subsection{Word Sense Disambiguation (WSD)} \label{sec:WSD} WSD is a fundamental task in NLP, which aims to identify the exact sense of an ambiguous word in a given context~\cite{navigli2009word}. To evaluate the proposed sense embeddings, we conduct a WSD task using the evaluation framework proposed by~\newcite{raganato2017word}, which includes the all-words English WSD datasets: Senseval-2 (SE2), Senseval-3 task 1 (SE3), SemEval-07 task 17 (SE07), SemEval-13 task 12 (SE13) and SemEval-15 task 13 (SE15). We used the framework's official scoring scripts to avoid any discrepancies in the scoring methodology. As described in Section~\ref{sec:sense}, the sense of a word in a context is predicted by the 1-NN method. Table~\ref{tbl:wsd} shows the WSD results. The Most Frequent Sense (MFS) baseline selects the most frequent sense of a word in the training corpus and has proven to be a strong baseline~\cite{McCarthy:2007}. \newcite{scarlini2020more} apply the method of \newcite{Elmo} with BERT on SemCor+OMSTI~\cite{taghipour2015one} to obtain the SemCor+OMSTI$_{BERT}$ baseline. ELMo $k$-NN uses ELMo embeddings to predict the sense of a word following the nearest-neighbour strategy. Specifically, they first obtain ELMo embeddings for all words in SemCor sentences, and average the embeddings for each sense. At test time, they run ELMo on the given test sentence containing the ambiguous word and select the sense with the highest cosine similarity. \newcite{loureiro2019language} repeated this method using BERT~\cite{devlin2019bert} embeddings to propose the BERT $k$-NN baseline. EWISE$_{ConvE}$~\cite{kumar2019zero} learns a sentence encoder for sense definitions by using WordNet relations as well as ConvE~\cite{dettmers2018convolutional}. \newcite{scarlini2020more} report the performance of using BERT base-multilingual-cased (mBERT) instead of BERT large, with MFS fallback. \newcite{hadiwinoto2019improved} integrate a pretrained BERT model with gated linear units (GLU) and layer weighting (LW). GlossBERT~\cite{huang2019glossbert} fine-tunes the pretrained BERT model by jointly encoding contexts and glosses. LMMS~\cite{loureiro2019language} learns sense embeddings by using BERT to generate contextualised embeddings from semantic networks and sense definitions. To perform WSD, they use the $1$-NN method and compare sense embeddings against contextualised embeddings generated by BERT. \newcite{scarlini2020more} augment UKB with SyntagNet's relations~\cite{scozzafava2020personalized} to obtain UKB$_{+Syn}$. SensEmBERT is a knowledge-based approach, which produces sense embeddings by means of BabelNet and Wikipedia. Although SensEmBERT is effective in modelling nominal meanings, it covers only nouns due to the limitations of its underlying resources. SensEmBERT$_{sup}$ is the supervised version of SensEmBERT.
ARES~\cite{scarlini2020more} is a semi-supervised approach for learning sense embeddings by incorporating sense-annotated datasets, unlabelled corpora and knowledge bases. \begin{table}[t] \centering \resizebox{0.5\textwidth}{!}{ \begin{tabular}{lcccccc} \toprule Models &SE2 &SE3 &SE07 &SE13 &SE15 &ALL \\ \midrule MFS &65.6 &66.0 &54.5 &63.8 &67.1 &65.6\\ SemCor+OMSTI$_{BERT}$ &74.0 &70.6 &63.1 &72.4 &75.0 &72.2\\ ELMo $k$-NN &71.5 &67.5 &57.1 &65.3 &69.9 &67.9\\ BERT $k$-NN &76.3 &73.2 &66.2 &71.7 &74.1 &73.5\\ EWISE$_{ConvE}$ &73.8 &71.1 &67.3 &69.4 &74.5 &71.8\\ mBERT $k$-NN + MFS &72.7 &70.1 &62.4 &69.0 &72.0 &70.5\\ BERT$_{GLU+LW}$ &75.5 &73.4 &68.5 &71.0 &76.2 &74.0\\ GlossBERT &77.7 &75.2 &\pmb{76.1} &72.5 &80.4 &77.0 \\ LMMS &76.3 &75.6 &68.1 & 75.1 &77.0 &75.4\\ UKB$_{+Syn}$ &71.2 &71.6 &59.6 &72.4 &75.6 &71.5\\ SensEmBERT &70.8 &65.4 &58.0 &74.8 &75.0 &70.1 \\ SensEmBERT$_{sup}$ &72.2 &69.9 &60.2 &\pmb{78.7} &75.0 &72.8\\ ARES & 78.0 &\underline{77.1} &71.0 &77.3 &\pmb{83.2} &77.9 \\ \midrule \textit{Proposed Method} \\ CDES$_{linear}$ &\pmb{78.4} & 76.9 &71.0 &77.6 & \underline{83.1} &\underline{78.0}\\ CDES$_{ReLU}$ &\underline{78.1} &\underline{77.1} &71.0 &77.5 &\underline{83.1} &\underline{78.0} \\ CDES$_{GELU}$ &\underline{78.1} &\pmb{77.3} &\underline{71.4} &\underline{77.7} &\pmb{83.2} &\pmb{78.1} \\ \bottomrule \end{tabular} } \caption{F1 scores (\%) for English all-words WSD on the test sets of \protect\newcite{raganato2017word}. Bold and underline indicate the best and the second best results, respectively.} \label{tbl:wsd} \end{table} To study the effect of using a nonlinear mapping $f$ between the static and contextualised embedding spaces in \eqref{eq:loss}, we train CDES with linear, ReLU and GELU activations to create, respectively, the CDES$_{linear}$, CDES$_{ReLU}$ and CDES$_{GELU}$ versions. From Table~\ref{tbl:wsd} we see that, among these versions, CDES$_{GELU}$ outperforms the linear and ReLU versions in all datasets, except on SE2 where CDES$_{linear}$ performs best. This result shows that a nonlinear mapping (GELU) is more appropriate for extracting sense-related information from contextualised embeddings. Moreover, we see that the CDES versions consistently outperform all previously proposed sense embeddings, except on SE07 and SE13, where GlossBERT and SensEmBERT$_{sup}$ perform best, respectively. On SE15, the performance of CDES$_{GELU}$ is equal to that of ARES. Overall, CDES$_{linear}$ obtains the best performance on SE2, while CDES$_{GELU}$ performs best on SE3, SE15 and ALL. This result provides empirical support for our working hypothesis that contextualised embeddings produced by NLMs encode much information beyond sense-related information, which must be filtered out using $\mat{W}$. CDES is able to accurately extract the sense-specific information from contextualised embeddings generated by a pretrained NLM to create sense-specific versions of pretrained sense-agnostic static embeddings. \subsection{Words in Context (WiC)} \label{sec:wic} \newcite{Pilehvar:2019} introduced the WiC dataset for evaluating sense embedding methods. For a particular word $u$, WiC contains pairs of sentences, ($t_{1}$, $t_{2}$), where the same (\emph{positive}) or different (\emph{negative}) senses of $u$ can occur. An accurate sense embedding method must be able to discriminate the different senses of an ambiguous word. The problem is formalised as a binary classification task and classification accuracy is reported as the evaluation metric.
A method that assigns the same vector to all of the senses of a word would report a chance-level (i.e. $50\%$) accuracy on WiC. Similar to Section~\ref{sec:WSD}, we first determine the sense-specific embeddings of $u$, $\vec{s}_{i}(u)$ and $\vec{s}_{j}(u)$, for the senses of $u$ used in $t_{1}$ and $t_{2}$, respectively. We then train a binary logistic regression classifier on the official train split of WiC, where we use as features the cosine similarities between the two vectors in each of the following six pairs, comparing sense and contextualised embeddings across the two sentences (a minimal sketch of this classifier is given below): ($\vec{s}_{i}(u)$, $\vec{s}_{j}(u)$), ($\vec{\zeta}(u,t_{1})$, $\vec{\zeta}(u,t_{2})$), ($\vec{s}_{i}(u)$, $\vec{\zeta}(u,t_{1})$), ($\vec{s}_{j}(u)$, $\vec{\zeta}(u,t_{2})$), ($\vec{s}_{i}(u)$, $\vec{\zeta}(u,t_{2})$) and ($\vec{s}_{j}(u)$, $\vec{\zeta}(u,t_{1})$). In particular, we do not fine-tune the static or contextualised embeddings that are used as inputs by CDES on WiC, because our goal is to extract sense-related information already present in the pretrained embeddings. \begin{table}[t] \centering \small \begin{tabular}{lc} \toprule Models &Accuracy \% \\ \midrule \textit{Static Embeddings} \\ GloVe~\cite{pennington2014glove} & 50.9\\ \midrule \textit{Contextualised Embeddings} \\ ELMo~\cite{peters2018deep} &57.7\\ ELMo-weighted~\cite{ansell2019elmo} &61.2\\ BERT-large~\cite{devlin2019bert} &65.5\\ RoBERTa~\cite{liu2019roberta} &69.9\\ KnowBERT-W+W~\cite{peters2019knowledge} &70.9\\ SenseBERT-large~\cite{levine-etal-2020-sensebert} &\underline{72.1} \\ BERT$_{ARES}$~\cite{scarlini2020more} &\pmb{72.2} \\ \midrule \textit{Static Sense Embeddings} \\ MUSE~\cite{lee2017muse} &48.4 \\ LMMS~\cite{Loureiro2019LIAADAS} &67.7\\ LessLex~\cite{colla-etal-2020-lesslex} &59.2 \\ CDES$_{linear}$ &69.0 \\ CDES$_{ReLU}$ &68.6 \\ CDES$_{GELU}$ &68.8\\ \bottomrule \end{tabular} \caption{Performance on WiC. Bold and underline respectively indicate the best and the second best results.} \label{tbl:wic-dev} \end{table} \begin{figure*}[t] \centering \begin{minipage}[t]{0.33\linewidth} \centering \includegraphics[width=2.0in, height=2.0in]{figures/bank_sense_glove.png} \label{fig:side:a} \end{minipage}% \begin{minipage}[t]{0.33\linewidth} \centering \includegraphics[width=2.0in, height=2.0in]{figures/bank_sense_ares.png} \end{minipage} \begin{minipage}[t]{0.33\linewidth} \centering \includegraphics[width=2.0in, height=2.0in]{figures/bank_sense_linear_CDES.png} \end{minipage} \caption{t-SNE visualisations of the nearest neighbours of \emph{bank} corresponding to the two senses \emph{financial institution} (in red) and \emph{sloping land} (in blue) are shown for GloVe, ARES and CDES embeddings. Sense labels of synonyms are omitted to avoid cluttering.} \label{fig:tsne_bank} \end{figure*} In Table~\ref{tbl:wic-dev}, we report the classification accuracies on WiC for different types of embeddings: static word embeddings (GloVe), contextualised embeddings generated by NLMs (ELMo, ELMo-weighted, BERT-large, RoBERTa and KnowBERT), sense-aware contextualised models (SenseBERT-large and BERT$_{ARES}$), and static sense embeddings (MUSE, LMMS and LessLex). Due to space limitations we omit the details of these embeddings. From Table~\ref{tbl:wic-dev} we see that SenseBERT-large and BERT$_{ARES}$ obtain better performance than the other embeddings. All the CDES variants outperform previous static sense embedding learning methods.
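To make the feature construction concrete, the following is a minimal sketch of the WiC classifier using scikit-learn; the variable names (\texttt{s\_i}, \texttt{s\_j}, \texttt{zeta1}, \texttt{zeta2}) and the helper function are illustrative assumptions rather than our exact implementation.
\begin{verbatim}
# Logistic regression over six cosine-similarity features for WiC.
import numpy as np
from sklearn.linear_model import LogisticRegression

def cos(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def wic_features(s_i, s_j, zeta1, zeta2):
    """Six cosine features for one (t1, t2) sentence pair."""
    return [cos(s_i, s_j), cos(zeta1, zeta2),
            cos(s_i, zeta1), cos(s_j, zeta2),
            cos(s_i, zeta2), cos(s_j, zeta1)]

# X_train: (n_pairs, 6) feature matrix; y_train: 0/1 same-sense labels
# clf = LogisticRegression().fit(X_train, y_train)
# accuracy = clf.score(X_test, y_test)
\end{verbatim}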
However, MUSE\footnote{\url{https://github.com/MiuLab/MUSE}} does not assign sense labels to its sense embeddings, as LMMS, LessLex and CDES do. Among the CDES variants, CDES$_{linear}$ performs best and is closely followed by the GELU and ReLU variants. Although the CDES variants do not surpass the current SoTA methods, such as SenseBERT-large and BERT$_{ARES}$, on WiC, unlike CDES these methods fine-tune on the WiC train data and/or use more complex classifiers with multiple projection layers, compared to the single logistic regression over six features used by CDES.\footnote{BERT$_{ARES}$ and SenseBERT use respectively 2048 and 1024 features for sense prediction in WiC.} More importantly, the results from both the WSD and WiC experiments support our claim that contextualised embeddings encode word-sense-related information that can be extracted and injected into sense-insensitive static word embeddings via (non)linear projections, to create sense-sensitive versions of the sense-insensitive static embeddings. \subsection{Visualisation of Sense Embeddings} \label{sec:nonlinear} To visualise the embeddings corresponding to the different senses of an ambiguous word, we consider \emph{bank}, which has the two distinct senses \emph{financial institution} and \emph{sloping land}. We randomly select $5$ synonyms for each sense of \emph{bank} from the WordNet and project their sense/word embeddings using t-SNE in Figure~\ref{fig:tsne_bank}. Compared to GloVe, we see that words with related meanings are projected onto coherent clusters by ARES and CDES. This indicates that sense embeddings can correctly distinguish the senses of polysemous words, unlike static word embeddings. Overall, we see that CDES produces better separated clusters than both GloVe and ARES. \subsection{Nearest Neighbours of Sense Embeddings} \label{sec:NN} \begin{table*}[t!] \centering \small \begin{tabular}{p{2.7cm}p{2.7cm}p{2.7cm}p{2.7cm}p{2.7cm}} \toprule \multicolumn{5}{p{16cm}}{\textbf{Sentence 1:} The \colorbox{cyan}{banks} which held the mortgage on the old church declared that the interest was considerably in arrears, and the real estate people said flatly that the land across the river was being held for an eventual development for white working people who were coming in, and that none would be sold to colored folk.} \\ \midrule GloVe & BERT & LMMS & SenseBERT & CDES\\ \midrule mortgage & mortgage & mortgage & mortgage & mortgage \\ interest & interest & church & real & real estate \\ estate & held & sell & old & sell \\ river & church & interest & land & interest \\ real & river & real estate & interest & church \\ \midrule \midrule \multicolumn{5}{p{16cm}}{\textbf{Sentence 2:} Through the splash of the rising waters, they could hear the roar of the river as it raged through its canyon, gnashing big chunks out of the \colorbox{red}{banks}.} \\ \midrule GloVe & BERT & LMMS & SenseBERT & CDES\\ \midrule mortgage & river & river & splash & river \\ interest & waters & canyon & land & water \\ estate & chunks & land & out & rise \\ river & splash & folk & through & canyon \\ real & canyon & church & chunks & folk \\ \bottomrule \end{tabular} \caption{Nearest neighbours computed using the word/sense embeddings of \emph{bank} in two sentences.} \label{tbl:NN} \end{table*}
An accurate sense embedding method must be able to represent an ambiguous word with different embeddings, reflecting the senses expressed by that word in different contexts. To understand how the sense embedding of a word varies across contexts, we compute the nearest neighbours of an ambiguous word using its sense embedding. Table~\ref{tbl:NN} shows two sentences from SemCor containing \emph{bank}, where in Sentence 1 \emph{bank} takes the \emph{financial institution} sense, and in Sentence 2 the \emph{sloping land (especially the slope beside a body of water)} sense. We compute the sense embedding of \emph{bank}, given each sentence as the context, using different methods, and compute the top 5 nearest neighbours, shown in descending order of their cosine similarity scores with the sense embedding of \emph{bank} in each sentence.
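Such neighbour lists can be produced with a simple cosine-similarity ranking over candidate embeddings; the following is a minimal sketch, in which the function and variable names are illustrative assumptions.
\begin{verbatim}
# Rank candidate words by cosine similarity to the sense embedding
# of the target word in a given context.
import numpy as np

def nearest_neighbours(target_vec, cand_vecs, cand_words, k=5):
    """cand_vecs: (n, d) matrix whose rows are candidate embeddings."""
    t = target_vec / np.linalg.norm(target_vec)
    C = cand_vecs / np.linalg.norm(cand_vecs, axis=1, keepdims=True)
    sims = C @ t                   # cosine similarities, shape (n,)
    top = np.argsort(-sims)[:k]    # indices of the k most similar
    return [(cand_words[i], float(sims[i])) for i in top]
\end{verbatim}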
GloVe, which is sense- and context-insensitive, uses the same vector to represent \emph{bank} in both sentences, resulting in the same set of nearest neighbours, which is a mixture of finance- and riverbank-related words. On the other hand, BERT, which is context-sensitive but not sense-specific, returns different sets of nearest neighbours in the two cases. In particular, we see that finance-related nearest neighbours such as \emph{mortgage} and \emph{interest} are selected for the first sentence, whereas riverbank-related nearest neighbours such as \emph{water} and \emph{canyon} are selected for the second. However, BERT does not provide sense embeddings, and some neighbours such as \emph{river} appear in both sets, because \emph{river} appears in the first sentence although it is not related to \emph{bank} there. SenseBERT~\cite{levine-etal-2020-sensebert} disambiguates word senses at the coarse-grained level of WordNet's supersenses. We see that SenseBERT correctly detects words such as \emph{mortgage} and \emph{interest} as neighbours of \emph{bank} in the first sentence, and \emph{splash} and \emph{land} in the second. We see that \emph{land} appears as a nearest neighbour in both sentences, although it is more related to the \emph{sloping land} sense than the \emph{financial institution} sense of \emph{bank}. LMMS selects \emph{church} as a nearest neighbour for both sentences, despite it being irrelevant to the second. On the other hand, CDES correctly detects \emph{church} for the first sentence and not for the second. Overall, CDES correctly lists words related to the \emph{financial institution} sense, such as \emph{mortgage}, \emph{real estate} and \emph{interest}, for the first sentence, and words related to the \emph{sloping land} sense, such as \emph{river}, \emph{water} and \emph{canyon}, for the second sentence. \section{Related Work} \newcite{reisinger2010multi} proposed multi-prototype embeddings to represent word senses, which were extended by \newcite{huang2012improving} by combining both local and global contexts. Both methods use clustering to group the contexts of a word related to the same sense. Although the number of senses depends on the word, these methods assign a fixed number of senses to all words. To overcome this limitation, \newcite{neelakantan2014efficient} proposed a non-parametric model, which estimates the number of senses dynamically per word. Even though clustering-based methods are able to assign multi-prototype embeddings to a word, they still suffer from the fact that the trained embeddings are not associated with any sense inventory~\cite{Camacho_Collados_2018}. In contrast, knowledge-based approaches learn sense embeddings by extracting sense-specific information from external sense inventories, such as WordNet and BabelNet. \newcite{chen2014unified} extended word2vec~\cite{mikolov2013distributed} to learn sense-specific embeddings associated with WordNet~\cite{miller1998wordnet} \emph{synsets}. \newcite{rothe2015autoextend} used the semantic relations in WordNet to embed words and their senses into a common vector space. \newcite{iacobacci2015sensembed} use the sense definitions from BabelNet and perform word sense disambiguation (WSD) to obtain sense-specific contexts. Recently, contextualised embeddings generated by NLMs have been used to create sense embeddings. \newcite{loureiro2019language} construct sense embeddings by taking the average over the contextualised embeddings of the sense-annotated tokens from SemCor.
\newcite{scarlini2020sensembert} propose a knowledge-based approach for constructing BERT-based embeddings of senses by means of the lexical-semantic information in BabelNet and Wikipedia. CDES, proposed in this paper, extends this line of work: we extract sense-related information from an NLM and inject it into a static word embedding to create a sense-specific version of the latter. Moreover, we follow prior work~\cite{scarlini2020more,loureiro2019language} and incorporate contexts from external resources, such as Wikipedia, WordNet and SyntagNet, representing different senses of a word to enhance the sense embeddings learnt from sense-labelled corpora. \section{Conclusion} We proposed CDES, a method which generates sense embeddings by extracting the sense-related information from contextualised embeddings. CDES integrates the gloss information from a semantic network as well as information from an external corpus to tackle the sense-coverage problem. Evaluations on multiple benchmark datasets related to the WSD and WiC tasks show that CDES learns accurate sense embeddings, and reports results comparable to the current SoTA. All experiments reported in the paper are limited to the English language, and we plan to extend the proposed method to learn multilingual sense embeddings in our future work. \section{Experimental Settings and Hyperparameters} We use Adam~\cite{Kingma:ICLR:2015} as the optimiser and set the minibatch size to $64$ with an initial learning rate of 1E-4. GloVe is used as the pretrained static word embedding, with $300$-dimensional vectors, whereas the BERT-large model is used as the pretrained contextualised word embedding, with $1024$-dimensional vectors. In order to map the contextualised embeddings and static embeddings into a common vector space, we set $\mat{W} \in \R^{300 \times 1024}$. To reduce the memory footprint and the number of trainable parameters (and thereby overfitting), we constrain the sense-specific matrices $\mat{A}_i \in \R^{300 \times 300}$ to be diagonal matrices. The $\mat{W}$ and $\mat{A}_{i}$ matrices are randomly initialised following Xavier initialisation~\cite{Glorot:AISTAT:2010}. Following the work of~\newcite{wiedemann2019does}, we set $k=3$ for the WSD task, which means that we consider three potential senses when conducting the WSD task. For the WiC task we set $k=1$ for all comparisons. All the above hyperparameter values were tuned using a randomly selected subset of the training data as validation data. For visualisation, we use \texttt{sklearn.manifold.TSNE} to perform t-SNE, setting n\_components=2, init=`pca', perplexity=3, n\_iter=1500 and metric=`cosine'.
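For reference, a minimal call reproducing these t-SNE settings is shown below; the embedding matrix \texttt{X} is assumed to be precomputed, and in recent scikit-learn versions the \texttt{n\_iter} argument is named \texttt{max\_iter}.
\begin{verbatim}
# t-SNE projection with the settings reported above.
# X: (n_words, dim) matrix of sense/word embeddings (precomputed).
from sklearn.manifold import TSNE

tsne = TSNE(n_components=2, init='pca', perplexity=3,
            n_iter=1500, metric='cosine')
coords = tsne.fit_transform(X)   # (n_words, 2) points for plotting
\end{verbatim}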
\section{Introduction} Competition between growth and fragmentation is a common phenomenon for a structured population. It arises for instance in the contexts of cell division (see, among many others, \cite{adimy, basse, Bekkal1, Bekkal2, Doumic, farkas, GW2, MetzDiekmann, PTou}), polymerization (see \cite{biben, destaing}), telecommunication (see \cite{Baccelli}) or neuroscience (see \cite{PPS}). It is also a mechanism which rules the proliferation of prion proteins (see \cite{CL1, Greer, LW}). These proteins are responsible for spongiform encephalopathies and appear in the form of aggregates in infected cells. Such polymers grow by attaching non-infectious monomers and converting them into infectious ones. On the other hand, they increase their number by splitting. To describe such phenomena, we write the following integro-differential equation, \begin{equation}\label{eq:temporel}\left \{ \begin{array}{l} \displaystyle\dfrac{\partial}{\partial t} u(x,t) + \dfrac{\partial}{\partial x} \big(\tau(x) u(x,t)\big) + \beta(x) u(x,t) = 2 \int_{x}^{\infty} \beta(y) \kappa (x,y) \, u(y,t) \, dy, \qquad x \geqslant0, \\ \\ u(x,0)=u_0(x), \\ \\ u(0,t)=0. \end{array}\right.\end{equation} The function $u(x,t)$ represents the quantity of individuals (cells or polymers) of structured variable (size, protein content, etc.) $x$ at time $t.$ These individuals grow (\emph{i.e.}, polymers aggregate monomers, or cells increase by nutrient uptake, for instance) at the rate $\tau(x).$ Equation \eqref{eq:temporel} also takes into account the fragmentation of a polymer (or the division of a cell) of size $y$ into two smaller polymers of sizes $x$ and $y-x.$ This fragmentation occurs at a rate $\beta(y)$ and produces an aggregate of size $x$ at the rate $\kappa(x,y).$ Equation \eqref{eq:temporel} is a particular case of the more general one \begin{equation}\label{eq:general}\dfrac{\p}{\p t} u(x,t) + \dfrac{\p}{\p x} \big(\tau(x) u(x,t)\big) + [\beta(x)+\mu(x)] u(x,t) = n \int_{x}^{\infty} \beta(y) \kappa (x,y) \, u(y,t) \, dy, \qquad x \geqslant x_0,\end{equation} with the boundary condition $u(x_0,t)=0$ (see \cite{Banasiak, CL1, LW}). Here, polymers are broken on average into $n>1$ smaller ones by the fragmentation process, there is a death term $\mu(x)\geq0$ representing degradation, and there is a minimal size of polymers $x_0$ which can be positive. This more general model is biologically and mathematically relevant in the case of prion proliferation and is used in \cite{CL2, CL1, Greer, LW} with a coupling to an ODE. Our results remain true for this generalization. A fundamental tool to study the asymptotic behaviour of the population when $t\to\infty$ is the existence of eigenelements ($\lb,\U,\phi$) solution of the equation \begin{equation} \label{eq:eigenproblem} \left \{ \begin{array}{l} \displaystyle \f{\p}{\p x} (\tau(x) \U(x)) + ( \beta (x) + \lb) \U(x) = 2 \int_x^\infty \beta(y)\kappa(x,y) \U(y) dy, \qquad x \geqslant0, \\ \\ \tau\U(x=0)=0 ,\qquad \U(x)\geq0, \qquad \int_0^\infty \U(x)dx =1, \\ \\ \displaystyle -\tau(x) \f{\p}{\p x} (\phi(x)) + ( \beta (x) + \lb) \phi(x) = 2 \beta(x) \int_0^x \kappa(y,x) \phi(y) dy, \qquad x \geqslant0, \\ \\ \phi(x)\geq0, \qquad \int_0^\infty \phi(x)\U(x)dx =1. \end{array} \right.
\end{equation} For the first equation (the equation on $\U$) we are looking for ${\mathcal D}'$ solutions, defined as follows: $\U\in L^1(\mathbb{R}^+)$ is a ${\mathcal D}'$ solution if $\forall\varphi\in{\mathcal C}^\infty_c(\mathbb{R}^+),$ \begin{equation}\label{eq:D'eigenproblem} \displaystyle -\int_0^\infty\tau(x)\U(x)\p_x\varphi(x)\,dx + \lb\int_0^\infty\U(x)\varphi(x)\,dx = \int_0^\infty\beta(x)\U(x)\Bigl(2\int_0^\infty\varphi(y)\kappa(y,x)\,dy-\varphi(x)\Bigr)\,dx. \end{equation} Concerning the dual equation, we are looking for a solution $\phi\in W^{1,\infty}_{loc}(0,\infty)$ such that the equality holds in $L^1_{loc}(0,\infty),$ \emph{i.e.} almost everywhere. When such elements exist, the asymptotic growth rate for a solution to \eqref{eq:temporel} is given by the first eigenvalue $\lb$ and the asymptotic shape is given by the corresponding eigenfunction $\U.$ More precisely, it is proved for a constant fragmentation rate $\beta$ that $u(x,t)e^{-\lb t}$ converges exponentially fast to $\rho\U(x)$ where $\rho=\int u_0(y)dy$ (see \cite{LP,PR}). For more general fragmentation rates, one can use the dual eigenfunction $\phi$ and the so-called ``General Relative Entropy'' method introduced in \cite{MMP1,BP}. It provides similar results, but without the exponential convergence, namely that $$\int_0^\infty \bigl|u(y,t)e^{-\lambda t}-\langle u_0,\phi\rangle{\mathcal U}(y)\bigr|\phi(y)\,dy\underset{t\to\infty}{\longrightarrow}0$$ where $\langle u_0,\phi\rangle=\int u_0(y)\phi(y)dy$ (see \cite{MMP1,MMP2}). The eigenvalue problem can also be used in nonlinear cases, such as prion proliferation equations, where there is a quadratic coupling of Equation \eqref{eq:temporel} or \eqref{eq:general} with a differential equation. In \cite{CL2, CL1, Engler, Pruss} for instance, the stability of steady states is investigated. The use of entropy methods in the case of nonlinear problems remains, however, a challenging and widely open field (see \cite{PTum} for a recent review). \ Existence and uniqueness of eigenelements have already been proved for general fragmentation kernels $\kappa(x,y)$ and fragmentation rates $\beta(x),$ but with very particular polymerization rates $\tau(x),$ namely constant ($\tau\equiv1$ in \cite{BP}), homogeneous ($\tau(x)=x^\mu$ in \cite{M1}) or with compact support ($Supp\,\tau=[0,x_M]$ in \cite{Doumic}). The aim of this article is to consider more general $\tau,$ as \cite{CL1, Silveira} suggest. Indeed, there is no biological justification for considering specific shapes of $\tau$ in the case when $x$ represents a size (mass or volume) or some structuring variable and not the age of a cell (even in this last case it is not so clear that $\f{dx}{dt}=1,$ since biological clocks may exhibit time distortions). For instance, for the prion proteins, the fact that small aggregates are only weakly infectious (see \cite{Lenuzza,Silveira}) leads us to include the case of rates vanishing at $x=0.$ Considering fully general growth rates is thus indispensable to take into account biological or physical phenomena in their full diversity. The proof of \cite{BP} can be adapted to non-constant rates that are still positive and bounded ($0<m<\tau(x)<M$). The paper \cite{M1} gives results for $\tau(0)=0,$ but for a very restricted class of shapes for $\tau.$ The paper \cite{Doumic} gives results for $\tau$ with general shape in the case where there is also an age variable (integration in age then allows one to recover Problem \eqref{eq:temporel}), but requires compact support and regular parameters.
Here we consider polymerization rates that can vanish at $x=0,$ with general shapes and little regularity required of the parameters ($\tau,\ \beta$ and $\kappa$). From a mathematical viewpoint, relaxing as far as possible the assumptions on the rates $\tau,\kappa,\beta,$ as we have done in this article, also leads to a better understanding of the intrinsic mechanisms driving the competition between growth and fragmentation. \begin{theorem}[Existence and Uniqueness]\label{th:eigenelements} Under assumptions \eqref{as:kappa1}-\eqref{as:betatauinf}, there exists a unique solution $(\lb,\U,\phi)$ (in the sense we have defined before) to the eigenproblem \eqref{eq:eigenproblem} and we have $$\lb>0,$$ $$x^\al\tau\U\in L^p(\mathbb{R}^+),\quad\forall \al\geq-\gamma,\quad\forall p\in[1,\infty],$$ $$x^\al\tau\U\in W^{1,1}(\mathbb{R}^+),\quad\forall \al\geq0,$$ $$\exists k>0\ s.t.\ \f{\phi}{1+x^k}\in L^\infty(\mathbb{R}^+),$$ $$\tau\f{\p}{\p x}\phi\in L_{loc}^\infty(\mathbb{R}^+).$$ \end{theorem} The remainder of this paper is devoted to stating precisely the assumptions and proving this theorem. It is organized as follows: in Section \ref{se:coefficients} we describe the assumptions and give some examples of interesting parameters. In Section \ref{se:proof} we prove Theorem \ref{th:eigenelements} using \emph{a priori} bounds on weighted norms, and then we give some consequences and perspectives in Section \ref{se:csq}. The proofs of the technical lemmas and theorems can be found in the Appendix. \ \section{Coefficients}\label{se:coefficients} \subsection{Assumptions} For all $y\geq0,\ \kappa(\cdot,y)$ is a nonnegative measure with support included in $[0,y].$ We define $\kappa$ on $(\mathbb{R}_+)^2$ as follows: $\kappa(x,y)=0\ \text{for}\ x>y.$ We assume that for every continuous function $\psi,$ the map $f_\psi:y\mapsto\int\psi(x)\kappa(x,y)\,dx$ is Lebesgue measurable.\\ The natural assumptions on $\kappa$ (see \cite{Greer} for the motivations) are that polymers can split only into two pieces, which is taken into account by \begin{equation}\label{as:kappa1}\int\kappa(x,y) dx = 1.\end{equation} So $\kappa(\cdot,y)$ is a probability measure and $f_\psi\in L^\infty_{loc}(\mathbb{R}^+).$ The conservation of mass imposes \begin{equation}\label{as:kappa2}\int x\kappa(x,y) dx = \frac y 2,\end{equation} a property that is automatically satisfied for a symmetric fragmentation (\emph{i.e.} $\kappa(x,y)=\kappa(y-x,y)$) thanks to \eqref{as:kappa1}. For the more general model \eqref{eq:general}, assumption \eqref{as:kappa2} becomes $\int x\kappa(x,y) dx = \frac y n$ to preserve mass conservation.\\ We also assume that the second moment of $\kappa$ is less than the first one \begin{equation}\label{as:kappa3}\int\f{x^2}{y^2} \, \kappa(x,y) dx \leq c < 1/2\end{equation} (it becomes $c<1/n$ for model \eqref{eq:general}). We refer to the Examples for an explanation of the physical meaning.
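As a concrete check of these assumptions, the uniform kernel $\kappa(x,y)=\f1y\mathbf{1}_{0\leq x\leq y}$ (used repeatedly in the Examples below) satisfies \eqref{as:kappa1}, \eqref{as:kappa2} and \eqref{as:kappa3} with $c=1/3$: $$\int_0^y\f{dx}{y}=1,\qquad\int_0^y x\,\f{dx}{y}=\f y2,\qquad\int_0^y\f{x^2}{y^2}\,\f{dx}{y}=\f13<\f12.$$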
\ For the polymerization and fragmentation rates $\tau$ and $\beta,$ we introduce the set $${\mathcal P}:=\bigl\{f\geq0\,:\,\exists\mu,\nu\geq0,\ \limsup_{x\to\infty}x^{-\mu}f(x)<\infty\ \text{and}\ \liminf_{x\to\infty}x^\nu f(x)>0\bigr\}$$ and the space $$L^1_0:=\bigl\{f,\ \exists a>0,\ f\in L^1(0,a)\bigr\}.$$ We consider \begin{equation}\label{as:betatauspace}\beta\in L^1_{loc}(\mathbb{R}^{+*})\cap{\mathcal P},\qquad \exists\al_0\geq0\ s.t.\ \tau\in L^\infty_{loc}(\mathbb{R}^{+},x^{\al_0}dx)\cap{\mathcal P}\end{equation} satisfying \begin{equation}\label{as:taupositivity}\forall K\ \text{compact of}\ (0,\infty),\ \exists m_K>0\quad s.t.\quad\tau(x)\geq m_K\ \text{for}\ a.e.\ x\in K\end{equation} (if $\tau$ is continuous, assumption \eqref{as:taupositivity} simply says that $\tau(x)>0$ for all $x>0$) and \begin{equation}\label{as:betasupport}\exists b\geq0,\quad Supp\,\beta=[b,\infty).\end{equation} Assumption \eqref{as:betasupport} is necessary to prove uniqueness and existence for the adjoint problem. \ To avoid shattering (the formation of zero-size polymers, see \cite{Banasiak,LW}), we assume \begin{equation}\label{as:kappatau}\exists\, C>0,\gamma\geq0\quad s.t.\qquad\int_0^x\kappa(z,y)\,dz\leq \min\Bigl(1,C\Bigl(\f x y\Bigr)^\gamma\Bigr)\qquad\text{and}\qquad\f{x^\gamma}{\tau(x)}\in L^1_0,\end{equation} which implicitly links $\tau$ to $\kappa,$ and also \begin{equation}\label{as:betatau0}\f{\beta}{\tau}\in L^1_0.\end{equation} On the other hand, to avoid forming infinitely long polymers (the gelation phenomenon, see \cite{EscoMischler1,EscoMischler2}), we assume \begin{equation}\label{as:betatauinf}\lim_{x\rightarrow +\infty}\f{x\beta(x)}{\tau(x)}=+\infty.\end{equation} \begin{remark}\label{rk:kappa} In the case when \eqref{as:kappatau} is satisfied for some $\gamma>0,$ assumption \eqref{as:kappa3} is automatically fulfilled (see Lemma \ref{lm:kappa} in the Appendix). \end{remark} \ \subsection{Examples} First we give some examples of coefficients which do or do not satisfy our previous assumptions.\\ For the fragmentation kernel, we first check assumptions \eqref{as:kappa1} and \eqref{as:kappa2}. They are satisfied for self-similar measures, namely $\kappa(x,y)=\f1y\kappa_0(\f xy),$ with $\kappa_0$ a probability measure on $[0,1]$ symmetric about $1/2.$ Now we exhibit some examples of $\kappa_0.$ \ \noindent{\bf General mitosis:} a cell of size $x$ divides into a cell of size $rx$ and one of size $(1-r)x$ (see \cite{M2}) \begin{equation}\label{ex:kappar}\kappa_0^r=\f12(\delta_{r}+\delta_{1-r})\qquad\text{for}\qquad r\in[0,1/2].\end{equation} Assumption \eqref{as:kappatau} is satisfied for any $\gamma>0$ when $r\in(0,1/2].$ So \eqref{as:kappa3} is also fulfilled, thanks to Remark \ref{rk:kappa}. The particular value $r=1/2$ leads to equal mitosis ($\kappa(x,y)=\delta_{x=\f y2}$).\\ The case $r=0$ corresponds to the renewal equation ($\kappa(x,y)=\f12(\delta_{x=0}+\delta_{x=y})$). In this case, we cannot strictly speak of mitosis, because the sizes of the daughters are $0$ and $x.$ It appears when $x$ is the age of a cell and not its size. This particular case is precisely the one that we want to avoid with assumption \eqref{as:kappa3}; it can also be studied separately with different tools (see \cite{PTum} for instance). For such a fragmentation kernel, assumption \eqref{as:kappatau} is satisfied only for $\gamma=0,$ and the moments $\int z^k\kappa_0(z)dz$ are equal to $1/2$ for all $k>0,$ so \eqref{as:kappa3} does not hold true.
However, if we consider a convex combination of $\kappa_0^0$ with another kernel such as $\kappa_0^r$ with $r\in(0,1/2],$ then \eqref{as:kappatau} remains false for any $\gamma>0$ but \eqref{as:kappa3} is fulfilled. Indeed, we have for $\rho\in(0,1)$ $$\int z^2(\rho\kappa_0^0(z)+(1-\rho)\kappa_0^r(z))\,dz=\f\rho2+\f{1-\rho}2(r^2+(1-r)^2)=\f12(1-2r(1-r)(1-\rho))<\f12.$$ \noindent{\bf Homogeneous fragmentation:} \begin{equation}\label{ex:kappaal}\kappa_0^\al(z)=\f{\al+1}{2}(z^\al+(1-z)^\al)\qquad\text{for}\qquad\al>-1.\end{equation} This gives another class of fragmentation kernels, namely kernels in $L^1$ (unlike the mitosis case). The choice $\gamma=1+\al>0$ works for \eqref{as:kappatau}, and so \eqref{as:kappa3} is fulfilled. It shows that our assumptions allow fragmentation at the ends of the polymers (called depolymerization, see \cite{Lenuzza}, when $\al$ is close to $-1$), provided it is not the extreme case of the renewal equation.\\ Uniform repartition ($\kappa(x,y)=\f1y\mathbf{1}_{0\leq x\leq y}$) corresponds to $\al=0$ and is also included. \ This last case of uniform repartition is useful because it provides us with explicit formulas for the eigenelements. For instance, we can consider the following two examples. \ \noindent{\bf First example:} $\,\tau(x)=\tau_0,\ \beta(x)=\beta_0x.$\\ In this case, widely used in \cite{Greer}, the eigenelements exist and we have $$\lb=\sqrt{\beta_0\tau_0},$$ $$\U(x)=2\sqrt{\f{\beta_0}{\tau_0}}\Bigl(X+\f{X^2}2\Bigr)e^{-X-\f{X^2}2},\quad\text{with}\ X=\sqrt{\f{\beta_0}{\tau_0}}x,$$ $$\phi(x)=\f12(1+X).$$ \noindent{\bf Second example:} $\ \tau(x)=\tau_0x.$\\ For any $\beta$ for which eigenelements exist, we have $$\lb=\tau_0\qquad \text{and}\qquad \phi(x)=\f x{\int y\,\U(y)\,dy}.$$ For instance, when $\beta(x)=\beta_0x^n$ with $n\in\mathbb{N}^*,$ the eigenelements exist, we can compute $\U$ and $\phi,$ and we obtain the formulas in Table \ref{tab:examples}. In this table we can notice that $\U(0)>0,$ but the boundary condition $\tau\U(0)=0$ is fulfilled. \begin{table}[h] \begin{center} \begin{tabular}{|c|c|c|c|} \hline $n=1$&$\lb=\tau_0$&$\U(x)=\f{\beta_0}{\tau_0}e^{-\f{\beta_0}{\tau_0}x}$&$\phi(x)=\f{\beta_0}{\tau_0}x$\\ \hline $n=2$&$\lb=\tau_0$&$\U(x)=\sqrt{\f{2\beta_0}{\pi\tau_0}}e^{-\f12\f{\beta_0}{\tau_0}x^2}$&$\phi(x)=\sqrt{\f{\pi\beta_0}{2\tau_0}}x$\\ \hline $n$ & $\lb=\tau_0$&$\U(x)=\Bigl(\f{\beta_0}{n\tau_0}\Bigr)^{\f1n}\f n{\Gamma(\f1n)}e^{-\f1n\f{\beta_0}{\tau_0}x^n}$&$\phi(x)=\Bigl(\f{\beta_0}{n\tau_0}\Bigr)^{\f1n}\f {\Gamma(\f1n)}{\Gamma(\f2n)}x$\\ \hline \end{tabular} \caption{\label{tab:examples}The example $\tau(x)=\tau_0x,\ \beta(x)=\beta_0x^n$ with uniform repartition $\kappa(x,y)=\f1y\mathbf{1}_{0\leq x\leq y}.$ The table gives the eigenelements solution to \eqref{eq:eigenproblem}.} \end{center} \end{table} Now we turn to non-existence cases. Let us consider constant fragmentation $\beta(x)=\beta_0$ with an affine polymerization rate $\tau(x)=\tau_0+\tau_1x,$ and any fragmentation kernel $\kappa$ which satisfies assumptions \eqref{as:kappa1}-\eqref{as:kappa2}. We notice that \eqref{as:betatauinf} is not satisfied, and we look at two instructive cases. \ \noindent{\bf First case:} $\ \tau_0=0.$\\ In this case assumption \eqref{as:betatau0} does not hold true. Assume that there exists $\U\in L^1(\mathbb{R}^+)$ solution of \eqref{eq:eigenproblem} with the estimates of Theorem \ref{th:eigenelements}.
Integrating the equation on $\U$ and using \eqref{as:kappa1}, we obtain $\lb\int\U=\int\beta\U,$ hence $\lb=\beta_0;$ but multiplying the equation by $x$ before integrating, and using \eqref{as:kappa2}, we get $\lb\int x\U=\int\tau\U,$ hence $\lb=\tau_1.$ We conclude that eigenelements cannot exist if $\tau_1\neq\beta_0.$\\ Moreover, if we take $\kappa(x,y)=\f1y\mathbf{1}_{0\leq x\leq y},$ then a formal computation shows that any solution to the first equation of \eqref{eq:eigenproblem} belongs to the plane $\mathrm{Span}\{x^{-1},x^{-\f{2\beta_0}{\tau_1}}\}.$ So, even if $\beta_0=\tau_1$, there does not exist an eigenvector in $L^1.$ \ \noindent{\bf Second case:} $\ \tau_0>0.$\\ In this case \eqref{as:betatau0} holds true, but the same integrations as before lead to $$\int x\U(x)\,dx=\f{\tau_0}{\beta_0-\tau_1}.$$ So there cannot exist any eigenvector $\U\in L^1(x\,dx)$ for $\tau_1\geq\beta_0.$ \ \section{Proof of the main theorem}\label{se:proof} The proof of Theorem \ref{th:eigenelements} is divided as follows. We begin with a result concerning the positivity of the \emph{a priori} existing eigenvectors (Lemma \ref{lm:positivity}). We then define, in Section \ref{subsec:truncated}, a regularized and truncated problem for which we know that eigenelements exist (see Appendix \ref{se:KR} for a proof using the Krein-Rutman theorem), and we choose it such that the related eigenvalue is positive (Lemma \ref{lm:lambdapositivity}). In Section~\ref{subsec:estim}, we give a series of estimates that allow us to pass to the limit in the truncated problem and thus prove existence for the original eigenproblem \eqref{eq:eigenproblem}. The positivity of the eigenvalue $\lb$ and the uniqueness of the eigenelements are proved in the last two subsections. \subsection{A preliminary lemma} Before proving Theorem \ref{th:eigenelements}, we give a preliminary lemma, useful to prove uniqueness of the eigenfunctions.
\begin{lemma}[Positivity]\label{lm:positivity} Consider $\U$ and $\phi$ solutions to the eigenproblem \eqref{eq:eigenproblem}.\\ We define $\displaystyle m:=\inf_{x,y}\bigl\{x\,:\,(x,y)\in Supp\,\beta(y)\kappa(x,y)\bigr\}.$ Then we have, under assumptions \eqref{as:kappa1}, \eqref{as:kappa2}, \eqref{as:taupositivity} and \eqref{as:betasupport}, $$Supp\,\U=[m,\infty)\qquad\text{and}\qquad\tau\U(x)>0\quad\forall x>m,$$ $$\phi(x)>0\quad\forall x>0.$$ If additionally $\f1\tau\in L^1_0,$ then $\phi(0)>0.$ \end{lemma} \begin{remark} If $Supp\,\kappa=\{(x,y)\,:\,x\leq y\},$ then $m=0$, and Lemma \ref{lm:positivity} and Theorem \ref{th:eigenelements} can be proved without the connectedness condition \eqref{as:betasupport} on the support of $\beta.$ \end{remark} \begin{proof} Let $x_0>0$ and define $F:x\mapsto \tau(x)\U(x)e^{\int_{x_0}^x\f{\lb+\beta(s)}{\tau(s)}ds}.$ We have that \begin{equation}\label{Upositivity}F'(x)=2e^{\int_{x_0}^x\f{\lb+\beta(s)}{\tau(s)}ds}\int\beta(y)\kappa(x,y)\U(y)\,dy\geq0.\end{equation} So, as soon as $\tau\U$ becomes positive, it remains positive for larger $x.$ \ We define $a:=\inf\{x\,:\,\tau(x)\U(x)>0\}.$ We first prove that $a\leq \f b2.$ For this we integrate the equation on $[0,a]$ to obtain $$\int_0^a\int_a^\infty\beta(y)\kappa(x,y)\U(y)\,dydx=0,$$ $$\int_a^\infty\beta(y)\U(y)\int_0^a\kappa(x,y)\,dxdy=0.$$ Thus for almost every $y\geq\max(a,b),\ \int_0^a\kappa(x,y)\,dx=0.$ As a consequence we have $$1=\int\kappa(x,y)\,dx=\int_a^y\kappa(x,y)\,dx\leq\f1a\int x\kappa(x,y)\,dx=\f y{2a},$$ thanks to \eqref{as:kappa1} and \eqref{as:kappa2}, and this is possible only if $b\geq2a.$ \ Assume by contradiction that $m<a.$ Integrating \eqref{eq:eigenproblem} multiplied by $\varphi,$ we have, for all $\varphi\in{\mathcal C}_c^\infty$ such that $Supp\,\varphi\subset[0,a],$ \begin{equation}\label{eq:positivity}\int\int\varphi(x)\beta(y)\kappa(x,y)\U(y)\,dydx=0.\end{equation} By definition of $m,$ and using the fact that $m<a,$ there exists $(p,q)\in(m,a)\times(b,\infty)$ such that $(p,q)\in Supp\,\beta(y)\kappa(x,y).$ But we can choose $\varphi$ positive such that $\varphi(p)\U(q)>0,$ and this contradicts \eqref{eq:positivity}. So we have $m\geq a.$\\ To conclude, we notice that on $[0,m],\ \U$ satisfies $$\p_x(\tau(x)\U(x))+\lb\U(x)=0.$$ So, thanks to the condition $\tau(0)\U(0)=0$ and assumption \eqref{as:taupositivity}, we have $\U\equiv0$ on $[0,m],$ so $m=a$ and the first statement is proved. \ For $\phi,$ we define $G(x):=\phi(x)e^{-\int_{x_0}^x\f{\lb+\beta(s)}{\tau(s)}ds}.$ We have that \begin{equation}\label{eq:phipositivity}G'(x)=-2e^{-\int_{x_0}^x\f{\lb+\beta(s)}{\tau(s)}ds}\f{\beta(x)}{\tau(x)}\int_0^x\kappa(y,x)\phi(y)\,dy\leq0,\end{equation} so, as soon as $\phi$ vanishes, it remains zero afterwards. Therefore $\phi$ is positive on an interval $(0,x_1)$ with $x_1\in\mathbb{R}_+^*\cup\{+\infty\}.$ Assuming that $x_1<+\infty,$ and using that $x_1>a=m$ because $\int\phi(x)\U(x)dx=1,$ we can find $X\geq x_1$ such that $$\int_{x_1}^XG'(x)\,dx=-2\int_{x_1}^X\int_0^{x_1}e^{-\int_{x_0}^x\f{\lb+\beta(s)}{\tau(s)}ds}\phi(y)\f{\beta(x)}{\tau(x)}\kappa(y,x)\,dy\,dx<0.$$ This contradicts the fact that $\phi(x)=0$ for $x\geq x_1,$ and we have proved that $\phi(x)>0$ for $x>0.$\\ If $\f1\tau\in L^1_0,$ we can take $x_0=0$ in the definition of $G,$ and so $\phi(0)>0$ or $\phi\equiv0.$ The fact that $\phi$ is positive ends the proof of the lemma. \qed \end{proof} \ \subsection{Truncated problem} \label{subsec:truncated} The proof of the theorem is based on uniform estimates on the solution to a truncated equation.
Let $\eta,\ \delta,\ R$ be positive numbers and define $$\tau_\eta(x)=\left\{\begin{array}{ll}\eta&0\leq x\leq\eta\\ \tau(x)&x\geq\eta. \end{array}\right.$$ Then $\tau_\eta$ is lower bounded on $[0,R]$ thanks to \eqref{as:taupositivity} and we denote $\mu=\mu(\eta,R):=\inf_{[0,R]}\tau_\eta.$ The existence of eigenelements $(\lb_\eta^\delta,\U_\eta^\delta,\phi_\eta^\delta)$ for the following truncated problem when $\delta R<\mu$ is standard (see Theorem \ref{th:KreinRutman} in the Appendix). \begin{equation}\label{eq:truncated} \left \{ \begin{array}{l} \displaystyle \f{\p}{\p x} (\tau_\eta(x) \U_\eta^\delta(x)) + ( \beta(x) + \lb_{\eta}^\delta) \U_\eta^\delta(x) = 2 \int_x^R\beta(y)\kappa(x,y) \U_\eta^\delta(y)\,dy,\qquad 0<x<R, \\ \\ \tau_\eta\U_\eta^\delta(x=0)=\delta,\qquad \U_\eta^\delta(x)>0 , \qquad \int \U_\eta^\delta(x)dx =1, \\ \\ \displaystyle -\tau_\eta(x) \f{\p}{\p x} \phi_\eta^\delta(x) + ( \beta(x) + \lb_\eta^\delta) \phi_\eta^\delta(x) - 2 \beta(x) \int_0^x\kappa(y,x) \phi_\eta^\delta(y)\,dy = \delta\phi_\eta^\delta(0),\qquad 0<x<R, \\ \\ \phi_\eta^\delta(R)=0, \qquad \phi_\eta^\delta(x)>0 , \qquad \int \phi_\eta^\delta(x)\U_\eta^\delta(x)dx =1. \end{array} \right. \end{equation} The proof of Theorem \ref{th:eigenelements} requires $\lb_\eta^\delta>0.$ To enforce this, we take $\delta R=\f\mu2$ and we consider $R$ large enough to satisfy the following lemma. \begin{lemma}\label{lm:lambdapositivity} Under assumptions \eqref{as:kappa1}, \eqref{as:betatauspace} and \eqref{as:betatauinf}, there exists $R_0>0$ such that for all $R>R_0,$ if we choose $\delta=\f\mu{2R},$ then we have $\lb_\eta^\delta>0.$ \end{lemma} \begin{proof} Assume by contradiction that $\lb_\eta^\delta\leq0$ for some $R>0$ with $\delta=\f\mu{2R}$ (we drop the indices $\eta$ and $\delta$ in the computation for readability). Then, integrating between $0$ and $x>0,$ we obtain \begin{eqnarray*} 0&\geq&\lb\int_0^x\U(y)\,dy\\ &=&\delta-\tau(x)\U(x)-\int_0^x\beta(y)\U(y)\,dy+2\int_0^x\int_z^R\beta(y)\kappa(z,y)\U(y)\,dy\,dz\\ &=&\delta-\tau(x)\U(x)+\int_0^x\beta(y)\U(y)\,dy+2\int_x^R\Bigl(\int_0^x\kappa(z,y)\,dz\Bigr)\beta(y)\U(y)\,dy\\ &\geq&\delta-\tau(x)\U(x)+\int_0^x\beta(y)\U(y)\,dy. \end{eqnarray*} Consequently $$\tau(x)\U(x)\geq\delta+\int_0^x\f{\beta(y)}{\tau(y)}\tau(y)\U(y)\,dy$$ and, thanks to Gr\"onwall's lemma, $$\tau(x)\U(x)\geq\delta e^{\int_0^x\f{\beta(y)}{\tau(y)}dy}.$$ But assumption \eqref{as:betatauinf} ensures that for all $n\geq0,$ there is $A>0$ such that $$\f{\beta(x)}{\tau(x)}\geq\f nx,\qquad\forall x\geq A,$$ and thus we have $$\tau(x)\U(x)\geq\delta \Bigl(\f xA\Bigr)^n,\quad\forall x\geq A.$$ Since $\tau\in{\mathcal P}$ (see \eqref{as:betatauspace}), we can choose $n$ large enough that $\int_A^R\f{(x/A)^n}{\tau(x)}\,dx\geq cR^2$ for some $c>0$ and all $R$ large enough, and then $$1=\int_0^R\U(x)\,dx\geq\int_A^R\U(x)\,dx\geq\delta\int_A^R\f{(x/A)^n}{\tau(x)}\,dx\geq\f{\mu c}2R,$$ which is a contradiction as soon as $R$ is large enough; so Lemma \ref{lm:lambdapositivity} holds for some $R_0>0.$ \qed \end{proof} \ \subsection{Limit as $\delta\to0$ for $\U_\eta^\delta$ and $\lb_\eta^\delta$} \label{subsec:estim} Fix $\eta$ and let $\delta\rightarrow0$ (then $R\to\infty$ since $\delta R=\f\mu2$).
\paragraph{\it First estimate: $\lb_\eta^\delta$ upper bound.} Integrating equation \eqref{eq:truncated} between $0$ and $R,$ we find $$\lb_\eta^\delta\leq\delta+\int\beta(x)\U_\eta^\delta(x)\,dx,$$ and the idea is then to prove a uniform estimate on $\int\beta\U_\eta^\delta.$ For this we begin by bounding the higher moments $\int x^\al\beta\U_\eta^\delta$ for $\al\geq\max{(2,\al_0+1)}=:m$ (this exponent $m$ is unrelated to the $m$ of Lemma \ref{lm:positivity}).\\ Let $\al\geq m;$ according to \eqref{as:kappa3} we have $$\int\f{x^\al}{y^\al}\kappa(x,y)\,dx\leq\int\f{x^2}{y^2}\kappa(x,y)\,dx\leq c<\f12.$$ Multiplying the equation on $\U_\eta^\delta$ by $x^\al$ and integrating on $[0,R]$ (the boundary terms have favorable signs), we obtain for all $A\geq\eta$ \begin{eqnarray*} \int x^\al\bigl((1-2c)\beta(x)\bigr)\U_\eta^\delta(x)\,dx&\leq&\al\int x^{\al-1}\tau_\eta(x)\U_\eta^\delta(x)\,dx\\ &=&\al\int_{x\leq A}x^{\al-1}\tau_\eta(x)\U_\eta^\delta(x)\,dx+\al\int_{x\geq A}x^{\al-1}\tau(x)\U_\eta^\delta(x)\,dx\\ &\leq&\al A^{\al-1-\al_0}\sup_{x\in(0,A)}{\{x^{\al_0}\tau(x)\}}+\omega_{A,\al}\int x^\al\beta(x)\U_\eta^\delta(x)\,dx, \end{eqnarray*} where $\omega_{A,\al}$ is a positive number chosen to have $\al\tau(x)\leq\omega_{A,\al} x\beta(x),\ \forall x\geq A.$ Thanks to \eqref{as:kappa3} and \eqref{as:betatauinf}, we can choose $A_\al$ large enough to have $\omega_{A_\al,\al}<1-2c.$ Thus we find \begin{equation}\label{eq:L1bound1}\forall\al\geq m,\,\exists A_\al:\ \forall\eta,\delta>0,\quad\int x^\al\beta(x)\U_\eta^\delta(x)\,dx\leq\f{\al {A_\al}^{\al-1-\al_0}\sup_{(0,A_\al)}{\{x^{\al_0}\tau(x)\}}}{1-2c-\omega_{A_\al,\al}}:=B_\al.\end{equation} The next step is to prove the same estimates for $0\leq\al<m,$ and for this we first give a bound on $\tau_\eta\U_\eta^\delta.$ We fix $\rho\in(0,1/2)$ and define $x_\eta>0$ as the unique point such that $\int_0^{x_\eta}\f{\beta(y)}{\tau_\eta(y)}dy=\rho.$ Such a point exists because $\beta$ is nonnegative and locally integrable, and $\tau_\eta$ is positive. Thanks to assumption \eqref{as:betatau0}, we know that $x_\eta\underset{\eta\to0}{\longrightarrow}x_0,$ where $x_0>0$ satisfies $\int_0^{x_0}\f{\beta(y)}{\tau(y)}dy=\rho,$ so $x_\eta$ is bounded: $0<\underline x\leq x_\eta\leq\overline x.$ Then, integrating \eqref{eq:truncated} between $0$ and $x\leq x_\eta,$ we find \begin{eqnarray*} \tau_\eta(x)\U_\eta^\delta(x)&\leq&\delta+2\int_0^{x}\int\beta(y)\U_\eta^\delta(y)\kappa(z,y)\,dy\,dz\\ &\leq&\delta+2\int\beta(y)\U_\eta^\delta(y)\,dy\\ &=&\delta+2\int_0^{x_\eta}\beta(y)\U_\eta^\delta(y)\,dy+2\int_{x_\eta}^\infty\beta(y)\U_\eta^\delta(y)\,dy\\ &\leq&\delta+2\sup_{(0,x_\eta)}\{\tau_\eta\U_\eta^\delta\}\int_0^{x_\eta}\f{\beta(y)}{\tau_\eta(y)}\,dy+\f2{x_\eta^m}\int_0^\infty y^m\beta(y)\U_\eta^\delta(y)\,dy\\ &\leq&\delta+2\rho\sup_{(0,x_\eta)}\{\tau_\eta\U_\eta^\delta\}+\f2{x_\eta^m}B_m.
\end{eqnarray*} Consequently, if we consider $\delta\leq1$ for instance, we obtain \begin{equation}\label{eq:Linfbound1}\sup_{x\in(0,x_\eta)}\tau_\eta(x)\U_\eta^\delta(x)\leq\f{1+2B_m/\underline x^m}{1-2\rho}:=C,\end{equation} so $\tau_\eta\U_\eta^\delta$ is uniformly bounded in a neighborhood of zero.\\ Now we can prove a bound $B_\al$ for $x^\al\beta\U_\eta^\delta$ in the case $0\leq\al<m.$ Thanks to the estimates \eqref{eq:L1bound1} and \eqref{eq:Linfbound1} we have, splitting the integral at $x_\eta,$ \begin{eqnarray}\label{eq:L1bound2} \int x^\al\beta(x)\U_\eta^\delta(x)\,dx&=&\int_0^{x_\eta}x^\al\beta(x)\U_\eta^\delta(x)\,dx+\int_{x_\eta}^Rx^\al\beta(x)\U_\eta^\delta(x)\,dx\nonumber\\ &\leq&\overline x^{\,\al}\sup_{(0,x_\eta)}\{\tau_\eta\U_\eta^\delta\}\int_0^{x_\eta}\f{\beta(y)}{\tau_\eta(y)}\,dy+\underline x^{\,\al-m}\int_{x_\eta}^Rx^m\beta(x)\U_\eta^\delta(x)\,dx\nonumber\\ &\leq&C\rho\overline x^{\,\al}+B_m\underline x^{\,\al-m}:=B_\al. \end{eqnarray} Combining \eqref{eq:L1bound1} and \eqref{eq:L1bound2} we obtain \begin{equation}\label{eq:L1bound3}\forall\al\geq0,\,\exists B_\al:\ \forall\eta,\delta>0,\quad\int x^\al\beta(x)\U_\eta^\delta(x)\,dx\leq B_\al,\end{equation} and finally we bound $\lb_\eta^\delta$ \begin{equation}\label{eq:lambdaupperbound}\lb_\eta^\delta\leq\delta +\int\beta\U_\eta^\delta\leq\delta +B_0.\end{equation} So the family $\{\lb_\eta^\delta\}_\delta$ belongs to a compact interval and we can extract a converging subsequence \mbox{$\lb_\eta^\delta\underset{\delta\to0}{\longrightarrow}\lb_\eta.$} \ \paragraph{\it Second estimate: $W^{1,1}$ bound for $x^\al\tau_\eta\U_\eta^\delta,\ \al\geq0.$} We use the estimate \eqref{eq:L1bound3}. First we give an $L^\infty$ bound for $\tau_\eta\U_\eta^\delta$ by integrating \eqref{eq:truncated} between $0$ and $x$ \begin{equation}\label{eq:Linfbound2}\tau_\eta(x)\U_\eta^\delta(x)\leq\delta+2\int_0^R\beta(y)\U_\eta^\delta(y)\,dy\leq1+2B_0:=D_0.\end{equation} Then we bound $x^\al\tau_\eta\U_\eta^\delta$ in $L^1$ for $\al>-1.$ Assumption \eqref{as:betatauinf} ensures that there exists $X>0$ such that\\ $\tau(x)\leq x\beta(x),\ \forall x\geq X,$ so we have for $R>X$ \begin{eqnarray*} \int x^\al\tau_\eta(x)\U_\eta^\delta(x)\,dx&\leq&\sup_{(0,X)}\{\tau_\eta\U_\eta^\delta\}\int_0^X x^\al\,dx+\int_X^Rx^{\al+1}\beta(x)\U_\eta^\delta(x)\,dx\\ &\leq&D_0\f{X^{\al+1}}{\al+1}+B_{\al+1}:=C_\al. \end{eqnarray*} Finally \begin{equation}\label{eq:L1bound4} \forall\al>-1,\,\exists C_\al:\ \forall\eta,\delta>0,\quad\int x^\al\tau_\eta(x)\U_\eta^\delta(x)\,dx\leq C_\al, \end{equation} and we also have that $x^\al\U_\eta^\delta$ is bounded in $L^1$ because $\tau\in{\mathcal P}$ (see assumption \eqref{as:betatauspace}).\\ A consequence of \eqref{eq:L1bound3} and \eqref{eq:L1bound4} is that $x^\al\tau_\eta\U_\eta^\delta$ is bounded in $L^\infty$ for all $\al\geq0.$ We already have \eqref{eq:Linfbound2}, and for $\al>0$ we multiply \eqref{eq:truncated} by $x^\al,$ integrate on $[0,x]$ and obtain $$ x^\al\tau_\eta(x)\U_\eta^\delta(x)\leq\al\int_0^R y^{\al-1}\tau_\eta(y)\U_\eta^\delta(y)\,dy+2\int_0^R y^\al\beta(y)\U_\eta^\delta(y)\,dy\leq\al C_{\al-1}+2B_\al:=D_\al, $$ which immediately gives \begin{equation}\label{eq:Linfbound3} \forall\al\geq0,\,\exists D_\al:\ \forall\eta,\delta>0,\quad\sup_{x>0}x^\al\tau_\eta(x)\U_\eta^\delta(x)\leq D_\al.
\end{equation} To conclude, we use the fact that the coefficients and $\U_\eta^\delta$ are nonnegative and find, by the product rule, for $\al\geq0$ $$\int\bigl|\f\p{\p x}(x^\al\tau_\eta(x)\U_\eta^\delta(x))\bigr|dx\leq\al\int x^{\al-1}\tau_\eta(x)\U_\eta^\delta(x)\,dx+\int x^\al\bigl|\p_x(\tau_\eta(x)\U_\eta^\delta(x))\bigr|\,dx$$ \begin{equation}\label{eq:W11bound}\hspace{4cm}\leq\al\int x^{\al-1}\tau_\eta(x)\U_\eta^\delta(x)\,dx+\lb_\eta^\delta\int x^\al\U_\eta^\delta(x)\,dx+3\int x^\al\beta(x)\U_\eta^\delta(x)\,dx \end{equation} and all the terms on the right-hand side are uniformly bounded thanks to the previous estimates.\\ \ Since the family $\{x^\al\tau_\eta\U_\eta^\delta\}_\delta$ is bounded in $W^{1,1}(\mathbb{R}^+)$ for all $\al\geq0,$ and since $\tau_\eta$ is positive and belongs to ${\mathcal P},$ we can extract from $\{\U_\eta^\delta\}_\delta$ a subsequence which converges in $L^1(\mathbb{R}^+)$ as $\delta\to0.$ Passing to the limit in equation \eqref{eq:truncated} we find that \begin{equation}\label{eq:truncated2}\left\{\begin{array}{l}\displaystyle\f{\p}{\p x} (\tau_\eta(x) \U_\eta(x))+(\beta(x)+\lb_{\eta})\U_\eta(x) = 2 \int_x^\infty\beta(y)\kappa(x,y) \U_\eta(y)\,dy,\\ \\ \U_\eta(0)=0,\quad\U_\eta(x)\geq0,\quad\int\U_\eta=1,\end{array}\right.\end{equation} with $\lb_\eta\geq0.$ \ \subsection{Limit as $\eta\to0$ for $\U_\eta$ and $\lb_\eta$} All the estimates \eqref{eq:L1bound1}-\eqref{eq:W11bound} remain true for $\delta=0.$ So we still know that the family $\{x^\al\tau_\eta\U_\eta\}_\eta$ belongs to a compact set of $L^1,$ but not necessarily $\{\U_\eta\}_\eta,$ because in the limit $\tau$ can vanish at zero. We need one more estimate to study the limit $\eta\to0.$ \paragraph{\it Third estimate: $L^\infty$ bound for $x^\al\tau_\eta\U_\eta,\ \al\geq-\gamma$.} We already know that $x^\al\tau_\eta\U_\eta$ is bounded for $\al\geq0.$ So, to prove the bound, it only remains to prove that $x^{-\gamma}\tau_\eta\U_\eta$ is bounded in a neighborhood of zero. Let us define $f_\eta:x\mapsto\sup_{(0,x)}\tau_\eta\U_\eta.$ If we integrate \eqref{eq:truncated2} between $0$ and $x'<x,$ we find $$\tau_\eta(x')\U_\eta(x')\leq2\int_0^{x'}\int\beta(y)\U_\eta(y)\kappa(z,y)\,dy\,dz\leq2\int_0^x\int\beta(y)\U_\eta(y)\kappa(z,y)\,dy\,dz$$ and so for all $x$ $$f_\eta(x)\leq2\int_0^x\int\beta(y)\U_\eta(y)\kappa(z,y)\,dy\,dz.$$ We consider $x_\eta$ and $\underline x$ defined in the first estimate and, using \eqref{as:kappatau} and \eqref{as:betatau0}, we have for all $x<x_\eta$ \begin{eqnarray*} f_\eta(x)&\leq&2\int_0^x\int\beta(y)\U_\eta(y)\kappa(z,y)\,dy\,dz\\ &=&2\int\beta(y)\U_\eta(y)\int_0^x\kappa(z,y)\,dz\,dy\\ &\leq&2\int_0^\infty\beta(y)\U_\eta(y)\min\Bigl(1,C\Bigl(\f x y\Bigr)^\gamma\Bigr)\,dy\\ &=&2\int_0^x\beta(y)\U_\eta(y)\,dy+2C\int_x^{x_\eta}\beta(y)\U_\eta(y)\Bigl(\f x y\Bigr)^\gamma \,dy+2C\int_{x_\eta}^\infty\beta(y)\U_\eta(y)\Bigl(\f x y\Bigr)^\gamma \,dy\\ &=&2\int_0^x\f{\beta(y)}{\tau_\eta(y)}\tau_\eta(y)\U_\eta(y)\,dy+2Cx^\gamma\int_x^{x_\eta}\f{\beta(y)}{\tau_\eta(y)}\f{\tau_\eta(y)\U_\eta(y)}{y^\gamma}\,dy+2C\int_{x_\eta}^\infty\beta(y)\U_\eta(y)\Bigl(\f x y\Bigr)^\gamma \,dy\\ &\leq&2f_\eta(x)\int_0^{x_\eta}\f{\beta(y)}{\tau_\eta(y)}\,dy+2Cx^\gamma\int_x^{x_\eta}\f{\beta(y)}{\tau_\eta(y)}\f{f_\eta(y)}{y^\gamma}\,dy+2C\|\beta\U_\eta\|_{L^1}\f{x^\gamma}{x_\eta^\gamma}.
\end{eqnarray*} We set ${\mathcal V}_\eta(x)=x^{-\gamma}f_\eta(x)$ and, since $\int_0^{x_\eta}\f{\beta}{\tau_\eta}=\rho,$ we obtain $$(1-2\rho){\mathcal V}_\eta(x)\leq K+2C\int_x^{x_\eta}\f{\beta(y)}{\tau_\eta(y)}{\mathcal V}_\eta(y)\,dy,$$ with $K:=2C\|\beta\U_\eta\|_{L^1}x_\eta^{-\gamma}\leq2CB_0\,\underline x^{\,-\gamma}.$ Hence, using Gr\"onwall's lemma, we find that $\displaystyle{\mathcal V}_\eta(x)\leq\f {Ke^{\f{2C\rho}{1-2\rho}}}{1-2\rho}$ and consequently \begin{equation}\label{eq:0Linfbound}x^{-\gamma}\tau_\eta(x)\U_\eta(x)\leq\f {Ke^{\f{2C\rho}{1-2\rho}}}{1-2\rho}:=\wt C,\quad\forall x\in[0,\underline x].\end{equation} \ This last estimate allows us to bound $\U_\eta$ by $\wt C\,\f{x^\gamma}{\tau},$ which is in $L^1_0$ by the assumption \eqref{as:kappatau}. Thanks to the second estimate, we also have that $x^\al\U_\eta$ is bounded in $L^1$ and so, thanks to the Dunford-Pettis theorem (see \cite{Brezis} for instance), $\{\U_\eta\}_\eta$ belongs to an $L^1$-weakly compact set. Thus we can extract a subsequence which converges $L^1$-weakly toward $\U.$ But for all $\e>0,\ \{x^\al\U_\eta\}_\eta$ is bounded in $W^{1,1}([\e,\infty))$ for all $\al\geq1$ thanks to \eqref{eq:W11bound}, and so the convergence is strong on $[\e,\infty).$ Then we write \begin{eqnarray*} \int|\U_\eta-\U|&=&\int_0^\e|\U_\eta-\U|+\int_\e^\infty|\U_\eta-\U|\\ &\leq&2\wt C\int_0^\e\f{x^\gamma}{\tau(x)}+\int_\e^\infty|\U_\eta-\U|. \end{eqnarray*} The first term on the right-hand side is small for $\e$ small because $\f{x^\gamma}{\tau}\in L^1_0,$ and then the second term is small for $\eta$ small because of the strong convergence. Finally $\U_\eta\underset{\eta\to0}{\longrightarrow}\U$ strongly in $L^1(\mathbb{R}^+),$ and $\U$ is a solution of the eigenproblem \eqref{eq:eigenproblem}. \ \subsection{Limit as $\delta,\eta\to0$ for $\phi_\eta^\delta$} We prove uniform estimates on $\phi_\eta^\delta$ which are enough to pass to the limit and prove the result. \paragraph{\it Fourth estimate: uniform $\phi_\eta^\delta$-bound on $[0,A]$.} Let $A>0;$ our first goal is to prove the existence of a constant $C_0(A)$ such that $$\forall\eta,\delta,\qquad\sup_{(0,A)}{\phi_\eta^\delta}\leq C_0(A).$$ We divide the equation on $\phi_\eta^\delta$ by $\tau_\eta$ and integrate between $x$ and $x_\eta$ with $0<x<x_\eta,$ where $x_\eta,$ bounded by $\underline x$ and $\overline x,$ is defined in the first estimate.
Considering $\delta<\f{\mu(1-2\rho)}{\overline x}$ (fulfilled for $R>\f{\overline x}{2(1-2\rho)}$ since $\delta=\f{\mu}{2R}$), we find \begin{eqnarray*} \phi_\eta^\delta(x)&\leq&\phi_\eta^\delta(x_\eta)+2\int_x^{x_\eta}\f{\beta(y)}{\tau_\eta(y)}\int_0^y\kappa(z,y)\phi_\eta^\delta(z)\,dz\,dy+x_\eta\f\delta \mu\phi_\eta^\delta(0)\\ &\leq&\phi_\eta^\delta(x_\eta)+\sup_{(0,x_\eta)}\{\phi_\eta^\delta\}\Bigl(2\int_0^{x_\eta}\f{\beta(y)}{\tau_\eta(y)}\int_0^y\kappa(z,y)\,dz\,dy+x_\eta\f\delta\mu\Bigr) \end{eqnarray*} and we obtain $$\sup_{x\in(0,\underline x)}{\phi_\eta^\delta(x)}\leq\f 1 {1-2\rho-\delta\overline x/\mu}\phi_\eta^\delta(x_\eta).$$ Using the decay of $\phi_\eta^\delta(x)e^{-\int_{\underline x}^x\f{\beta+\lb_\eta^\delta}{\tau_\eta}},$ there exists a constant $C(A)$ such that $$\sup_{x\in(0,A)}{\phi_\eta^\delta(x)}\leq C(A)\phi_\eta^\delta(x_\eta).$$ Noticing that $\int\phi_\eta^\delta(x)\U_\eta^\delta(x)dx =1,$ we conclude $$1\geq\int_0^{x_\eta}\phi_\eta^\delta(x)\U_\eta^\delta(x)dx\geq \phi_\eta^\delta(x_\eta)\int_0^{x_\eta}e^{-\int_x^{x_\eta}\f{\beta+\lb_\eta^\delta}{\tau_\eta}}\U_\eta^\delta(x)\,dx,$$ so, as $x_\eta\to x_0$ and $\int_0^{x_0}\U(x)dx>0$ (thanks to Lemma \ref{lm:positivity} and because $x_0>b\geq a$), we have \begin{equation}\label{eq:phi0bound}\sup_{(0,A)}{\phi_\eta^\delta}\leq C_0(A).\end{equation} \ \paragraph{\it Fifth estimate: uniform $\phi_\eta^\delta$-bound on $[A,\infty)$.} Following an idea introduced in \cite{PR}, we notice that the equation in \eqref{eq:truncated} satisfied by $\phi_\eta^\delta$ is a transport equation and therefore satisfies the maximum principle (see Lemma \ref{lm:supersolution} in the Appendix). It thus remains to build a supersolution $\overline\phi,$ positive at $x=R,$ to conclude that $\phi_\eta^\delta(x)\leq\overline\phi(x)$ on $[0,R].$ We cannot do this on all of $[0,R],$ but only on a subinterval $[A_0,R].$ So we begin with an auxiliary function $\overline\vp(x)=x^k+\theta$ with $k$ and $\theta$ positive numbers to be determined. We have to check that on $[A_0,R]$ $$-\tau(x)\f{\p}{\p x}\overline\vp(x)+(\lb_\eta^\delta+\beta(x))\overline\vp(x)\geq 2\beta(x)\int\kappa(y,x)\overline\vp(y)\,dy+\delta\phi_\eta^\delta(0),$$ {\it i.e.} $$-k\tau(x)x^{k-1}+(\lb_\eta^\delta+\beta(x))\overline\vp(x)\geq\Bigl(2\theta+2\int\kappa(y,x)y^k\,dy\Bigr)\beta(x)+\delta\phi_\eta^\delta(0).$$ For $k\geq2,$ we know that $\int\kappa(y,x)\f{y^k}{x^k}\,dy\leq c<1/2,$ so it is sufficient to prove that there exists $A_0>0$ such that \begin{equation}\label{eq:sursolution1}-k\tau(x)x^{k-1}+(\lb_\eta^\delta+\beta(x))(x^k+\theta)\geq(2\theta+2cx^k)\beta(x)+\delta C_0(1)\end{equation} holds for all $x>A_0,$ where $C_0$ is defined in \eqref{eq:phi0bound}. For this, dividing \eqref{eq:sursolution1} by $x^{k-1}\tau(x),$ we see that if we have \begin{equation}\label{eq:sursolution2}(1-2c)\f{x\beta(x)}{\tau(x)}\geq k+\f{2\theta\beta(x)+\delta C_0(1)}{x^{k-1}\tau(x)},\end{equation} then \eqref{eq:sursolution1} holds true.
Thanks to assumptions \eqref{as:betatauspace} and \eqref{as:betatauinf}, we know that there exists $k>0$ such that for any $\theta>0$ there exists $A_0>0$ for which \eqref{eq:sursolution2} is true on $[A_0,+\infty).$ Then we conclude by choosing the supersolution $\overline\phi(x)=\f{C_0(A_0)}\theta\overline\vp(x),$ so that $$\overline\phi(x)\geq\phi_\eta^\delta(x)\quad \text{on} \ [0,A_0],$$ and on $[A_0,R]$ we have \begin{equation}\label{eq:supersol}\left\{\begin{array}{l} -\tau(x)\f{\p}{\p x}\overline\phi(x)+(\lb_\eta^\delta+\beta(x))\overline\phi(x) \geq 2\beta(x)\int_0^x\kappa(y,x)\overline\phi(y)\,dy+\delta\phi_\eta^\delta(0), \\ \\ \overline\phi(R)>0, \end{array}\right.\end{equation} so that $\overline\phi$ is a supersolution to the equation satisfied by $\phi_\eta^\delta.$ Therefore $\phi_\eta^\delta\leq\overline\phi$ uniformly in $\eta$ and $\delta,$ and we get \begin{equation}\label{eq:phibound}\exists k,\theta,C\ s.t.\ \forall\eta,\delta,\quad\phi_\eta^\delta(x)\leq C(x^k+\theta).\end{equation} \ Equation \eqref{eq:truncated} and the fact that $\phi_\eta^\delta$ is uniformly bounded in $L^\infty_{loc}(\mathbb{R}^+)$ immediately give that $\p_x\phi_\eta^\delta$ is uniformly bounded in $L^\infty_{loc}(\mathbb{R}^+,\tau(x)dx),$ and so in $L^\infty_{loc}(0,\infty)$ thanks to \eqref{as:taupositivity}. \ Then we can extract a subsequence of $\{\phi_\eta^\delta\}$ which converges in ${\mathcal C}^0_{loc}(0,\infty)$ toward $\phi.$ Now we check that $\phi$ satisfies the adjoint equation of \eqref{eq:eigenproblem}. We consider the terms of \eqref{eq:truncated} one after another.\\ First, $(\lb_\eta^\delta+\beta(x))\phi_\eta^\delta(x)$ converges to $(\lb+\beta(x))\phi(x)$ in $L^1_{loc}.$\\ For $\p_x\phi_\eta^\delta,$ we have an $L^\infty$ bound on each compact subset of $(0,\infty),$ so it converges $L^\infty$-weak$*$ toward $\p_x\phi.$\\ The last term remains, and we write, for all $x>0,$ $$\int_0^x\kappa(y,x)(\phi_\eta^\delta(y)-\phi(y))\,dy\leq\|\phi_\eta^\delta-\phi\|_{L^\infty(0,x)}\underset{\eta,\delta\to0}{\longrightarrow}0.$$ The fact that $\int\phi\U=1$ comes from the $L^\infty\!-\!L^1$ convergence when written as $$1=\int\phi_\eta^\delta(x)\U_\eta^\delta(x)\,dx=\int\f{\phi_\eta^\delta(x)}{1+x^k}(1+x^k)\U_\eta^\delta(x)\,dx\longrightarrow\int\f{\phi(x)}{1+x^k}(1+x^k)\U(x)\,dx=\int\phi\U.$$ \bigskip\bigskip At this stage we have found $(\lb,\U,\phi)\in\mathbb{R}^+\times L^1(\mathbb{R}^+)\times{\mathcal C}(\mathbb{R}^+),$ a solution of \eqref{eq:eigenproblem}. The estimates announced in Theorem \ref{th:eigenelements} also follow from these uniform estimates. It remains to prove that $\lb>0$ and the uniqueness. \ \subsection{Proof of $\lb>0$} We prove a little more, namely that \begin{equation}\label{eq:lowerbound}\lb\geq\f12\sup_{x\geq0}\{\tau(x)\U(x)\}. \end{equation} We integrate the first equation of \eqref{eq:eigenproblem} between $0$ and $x$ and find \begin{eqnarray*} 0\leq\lb\int_0^x\U(y)\,dy&=&-\tau(x)\U(x)-\int_0^x\beta(y)\U(y)\,dy+2\int_0^x\int_z^\infty\beta(y)\kappa(z,y)\U(y)\,dy\,dz\\ &\leq&-\tau(x)\U(x)+2\int_0^\infty\int_z^\infty\beta(y)\kappa(z,y)\U(y)\,dy\,dz\\ &=&-\tau(x)\U(x)+2\int_0^\infty\beta(y)\U(y)\,dy\\ &=&-\tau(x)\U(x)+2\lb, \end{eqnarray*} where the last equality uses $\lb=\int\beta\U,$ obtained by integrating the equation over $(0,\infty).$ Hence $2\lb\geq \tau(x)\U(x)$ for all $x,$ and \eqref{eq:lowerbound} is proved; since $\U\not\equiv0,$ the right-hand side of \eqref{eq:lowerbound} is positive, so $\lb>0.$ \ \subsection{Uniqueness} We follow the idea of \cite{M1}. Let $(\lb_1,\U_1,\phi_1)$ and $(\lb_2,\U_2,\phi_2)$ be two solutions to the eigenproblem \eqref{eq:eigenproblem}.
First we have \begin{eqnarray*} \lb_1\int\U_1(x)\phi_2(x)\,dx&=&\int\Bigl(-\p_x(\tau(x)\U_1(x))-\beta(x)\U_1(x)+2\int_x^\infty\beta(y)\kappa(x,y)\U_1(y)\,dy\Bigr)\phi_2(x)\,dx\\ &=&\int\Bigl(\tau(x)\p_x\phi_2(x)-\beta(x)\phi_2(x)+2\beta(x)\int_0^x\kappa(y,x)\phi_2(y)\,dy\Bigr)\U_1(x)\,dx\\ &=&\lb_2\int\U_1(x)\phi_2(x)\,dx \end{eqnarray*} and then $\lb_1=\lb_2=\lb$ because $\int\U_1\phi_2>0$ thanks to Lemma \ref{lm:positivity}. \\ For the eigenvectors we use the General Relative Entropy method introduced in \cite{MMP1,MMP2}. For $C>0,$ we test the equation on $\U_1$ against $\sgn\bigl(\f{\U_1}{\U_2}-C\bigr)\phi_1,$ $$0=\int\Bigl[\p_x(\tau(x)\U_1(x))+(\lb+\beta(x))\U_1(x)-2\int_x^\infty\beta(y)\kappa(x,y)\U_1(y)\,dy\,\Bigr]\sgn\Bigl(\f{\U_1}{\U_2}(x)-C\Bigr)\phi_1(x)\,dx.$$ Differentiating $\Bigl|\f{\U_1}{\U_2}(x)-C\Bigr|\tau(x)\U_2(x)\phi_1(x),$ we find $$\begin{array}{l}\displaystyle \int\p_x(\tau(x)\U_1(x))\sgn\Bigl(\f{\U_1}{\U_2}(x)-C\Bigr)\phi_1(x)\,dx=\int\p_x\Bigl(\Bigl|\f{\U_1}{\U_2}(x)-C\Bigr|\tau(x)\U_2(x)\phi_1(x)\Bigr)\,dx\\ \\ \displaystyle\hspace{1cm}+\int\p_x(\tau(x)\U_2(x))\f{\U_1}{\U_2}(x)\sgn\Bigl(\f{\U_1}{\U_2}(x)-C\Bigr)\phi_1(x)\,dx-\int\Bigl|\f{\U_1}{\U_2}(x)-C\Bigr|\p_x(\tau(x)\U_2(x)\phi_1(x))\,dx \end{array}$$ and then $$\begin{array}{l}\displaystyle \int\p_x(\tau(x)\U_1(x))\sgn\Bigl(\f{\U_1}{\U_2}(x)-C\Bigr)\phi_1(x)\,dx=\\ \\ \displaystyle\hspace{2cm}2\int\Bigl|\f{\U_1}{\U_2}(x)-C\Bigr|\Bigl[\int_0^x\beta(x)\kappa(y,x)\U_2(x)\phi_1(y)\,dy-\int_x^\infty\beta(y)\kappa(x,y)\U_2(y)\phi_1(x)\,dy\Bigr]\,dx\\ \\ \displaystyle\hspace{4cm}+2\int\int_x^\infty\beta(y)\kappa(x,y)\U_2(y)\,dy\,\f{\U_1}{\U_2}(x)\sgn\Bigl(\f{\U_1}{\U_2}(x)-C\Bigr)\phi_1(x)\,dx\\ \\ \displaystyle\hspace{6cm}-\int(\lb+\beta(x))\f{\U_1}{\U_2}(x)\sgn\Bigl(\f{\U_1}{\U_2}(x)-C\Bigr)\U_2(x)\phi_1(x)\,dx, \end{array}$$ \ that is, $$\begin{array}{l}\displaystyle \int\p_x(\tau(x)\U_1(x))\sgn\Bigl(\f{\U_1}{\U_2}(x)-C\Bigr)\phi_1(x)\,dx=\\ \\ \displaystyle\hspace{2cm}2\int\int\beta(y)\kappa(x,y)\Bigl[\Bigl|\f{\U_1}{\U_2}(y)-C\Bigr|-\Bigl|\f{\U_1}{\U_2}(x)-C\Bigr|\Bigr]\U_2(y)\phi_1(x)\,dxdy\\ \\ \displaystyle\hspace{4cm}+2\int\int_x^\infty\beta(y)\kappa(x,y)\U_2(y)\,dy\,\f{\U_1}{\U_2}(x)\sgn\Bigl(\f{\U_1}{\U_2}(x)-C\Bigr)\phi_1(x)\,dx\\ \\ \displaystyle\hspace{6cm}-\int(\lb+\beta(x))\f{\U_1}{\U_2}(x)\sgn\Bigl(\f{\U_1}{\U_2}(x)-C\Bigr)\U_2(x)\phi_1(x)\,dx.
\end{array}$$ \ So $$\begin{array}{l}\displaystyle 0=2\int\int\beta(y)\kappa(x,y)\Bigl[\Bigl|\f{\U_1}{\U_2}(y)-C\Bigr|-\Bigl|\f{\U_1}{\U_2}(x)-C\Bigr|\Bigr]\U_2(y)\phi_1(x)\,dxdy\\ \\ \displaystyle\hspace{4cm}+2\int\int_x^\infty\beta(y)\kappa(x,y)\U_2(y)\,dy\,\f{\U_1}{\U_2}(x)\sgn\Bigl(\f{\U_1}{\U_2}(x)-C\Bigr)\phi_1(x)\,dx\\ \\ \displaystyle\hspace{6cm}-2\int\int_x^\infty\beta(y)\kappa(x,y)\U_1(y)\,dy\,\sgn\Bigl(\f{\U_1}{\U_2}(x)-C\Bigr)\phi_1(x)\,dx \end{array}$$ \ and therefore $$0=\int\int\beta(y)\kappa(x,y)\U_2(y)\Bigl|\f{\U_1}{\U_2}(y)-C\Bigr|\Bigl[1-\sgn\Bigl(\f{\U_1}{\U_2}(x)-C\Bigr)\sgn\Bigl(\f{\U_1}{\U_2}(y)-C\Bigr)\Bigr]\phi_1(x)\,dxdy.$$ \ Hence $\Bigl[1-\sgn\Bigl(\f{\U_1}{\U_2}(x)-C\Bigr)\sgn\Bigl(\f{\U_1}{\U_2}(y)-C\Bigr)\Bigr]=0$ on the support of $\kappa(x,y)$ for all $C;$ thus $\f{\U_1}{\U_2}(x)=\f{\U_1}{\U_2}(y)$ on the support of $\kappa(x,y),$ and \begin{equation}\label{eq:uniqueness}\p_x\f{\U_1}{\U_2}(x)=\f2{\tau(x)}\int\beta(y)\kappa(x,y)\Bigl(\f{\U_1}{\U_2}(y)-\f{\U_1}{\U_2}(x)\Bigr)\f{\U_2(y)}{\U_2(x)}\,dy=0,\end{equation} so $\displaystyle\f{\U_1}{\U_2}$ is constant; the normalization $\int\U_1=\int\U_2=1$ then gives $\U_1=\U_2.$ \ We can prove in the same way that $\phi_1=\phi_2,$ even if we can have $\U\equiv0$ on $[0,m]$ with $m>0.$ Indeed, in this case we know that $\beta\equiv0$ on $[0,m],$ and so $$\phi_i(x)=\phi_i(0)e^{\int_0^x\f{\lb}{\tau(s)}ds}\quad\forall x\in[0,m],\ i\in\{1,2\}.$$ \ \section{Conclusion, Perspectives}\label{se:csq} We have proved the existence and uniqueness of eigenelements for the aggregation-fragmentation equation \eqref{eq:temporel} under assumptions on the parameters that are as general as possible, in order to cover the widest variety of biological or physical models. This gives access to the asymptotic behaviour of the solution through the General Relative Entropy principle. \ A natural continuation of this work is to study the dependency of the eigenvalue $\lb$ on the parameters $\tau$ and $\beta$ (see \cite{M2}). For instance, our assumptions allow $\tau$ to vanish at zero, which is a necessary condition to ensure that $\lb$ tends to zero when the fragmentation tends to infinity. Such results give valuable information on the qualitative behaviour of the solution. \ Another possible extension of the present work is to prove the existence of eigenelements in the case of time-periodic parameters, using Floquet theory, and then to compare the new eigenvalue $\lb_F$ with the time-independent one $\lb$ (see \cite{Lepoutre}). Such studies can help to choose the right strategy in order to optimize, for instance, the total mass $\int x u(t,x) dx$ in the case of prion proliferation (see \cite{CL1}) or, on the contrary, to minimize the total population $\int u(t,x) dx$ in the case of cancer therapy (see \cite{Lepoutre, Clairambault}). \ Finally, this eigenvalue problem could be used to recover some of the equation parameters such as $\tau$ and $\beta$ from the knowledge of the asymptotic profile of the solution, as introduced in \cite{DPZ, PZ} in the case of symmetric division ($\tau=1$ and $\kappa=\delta_{x=\frac{y}{2}}$), by the use of inverse problem techniques. The method of \cite{PZ} has to be adapted to our general case, in order to model prion proliferation for instance, or to recover the aggregation rate $\tau$; this is another direction for future research. \vspace{1cm} {\bf Acknowledgment} The authors warmly thank Beno\^it Perthame for his valuable help and corrections. \newpage{\noindent\LARGE\bf Appendix}
\section{Introduction} \label{intro} \textbf{Legislation Networks} were first introduced in 2015 \cite{MKoniaris2017} and further discussed in \cite{NSakhaee2016} \cite{NSakhaee2017}. These networks are essential to explore the relationship between legislation and societies' evolution \cite{NSakhaee2016}. There are many obvious benefits from studying Legislation Networks \cite {NSakhaee2016} \cite{NSakhaee2017} \cite{PZhang2007} \cite{Fowler2007}, but building these networks is not always a straightforward task, as only a few legislation systems provide machine-readable documents or structured databases \cite {EURlex} \cite{Metalex}. The majority of legislation systems supply legal documents in human-readable format. For example, the New Zealand Parliamentary Counsel Office provides machine-readable XML files \cite{PCO} only for the currently active Acts, which constitute around ten percent of the entire set of Acts. All other historic documents are scanned and supplied by a third-party institute in Portable Document Format (PDF) \cite{NZLII}. To extract information from legal documents, the first step is the conversion of images into text if the text is not available. This concept is well studied as \textbf{optical character recognition (OCR)} \cite{OCR}, and several techniques and tools have been developed to convert typewritten or handwritten images to text. OCR is the first step of the proposed framework, and for the case study we selected ABBYY FineReader \cite{ABBYY}. \textbf{Information Extraction (IE)} involves locating and extracting specific information from text \cite{IE0}. Information Extraction assumes that each individual text file contains one or more entities that are similar to those in other text documents but differ in the details \cite{IE2}. IE approaches in the legal domain are considerably different from those in other knowledge areas because of two main characteristics of legal texts: legal documents exhibit a wide range of internal structure, and they often have a significant amount of manual editorial value added. One of the earliest information retrieval approaches for legal materials, based on searching inside the document, was proposed in 1978 \cite{IE1}. Later works mainly used supervised learning techniques to retrieve the required data from legal texts, but with a substantial error \cite{Cheng2009} \cite{Textnailing2017}. The proposed framework of this study uses several IE tasks, which are described later in this section. \textbf{Named entity recognition (NER)} is one of the main sub-tasks of IE. The goal of this task is to find each occurrence of a named entity in the text \cite{NER1}. Entities usually include people, locations, quantities, and organizations, but also more specific items such as the names of genes and proteins \cite{NER2}, the names of college courses \cite{NER3}, and drugs \cite{NER4}. In the New Zealand legislation corpus, entities could be the names of legislative documents such as Acts, Regulations, Bills, Orders, or Case-Laws \cite{NSakhaee2016}. In the case study, which is discussed in \autoref{Application}, the main required entities inside the text documents are the names of New Zealand Acts. The main traditional NER algorithm that identifies and classifies the named entities is statistical sequence modeling \cite{NER1}, but there are more modern approaches based on combinations of lists, rules, and supervised machine learning \cite{NER5}.
To extract the required information for a Legislation Network, there are clear rules to identify the named entities, and the classification of the entities is not needed. Therefore the second NER approach is more appropriate, and it is discussed further for the proposed framework. The next IE task used in our study is to detect the relationships that exist among the recognized entities. This task is called \textbf{relation extraction (RE)} \cite{NER1}. The earliest algorithm for relation extraction is the use of lexico-syntactic patterns \cite{RE1}. This algorithm is still valid and widely used, but other algorithms were introduced later, such as supervised learning \cite{NER1} and bootstrapping \cite{boot}\cite{boot1}. Considering that legislation texts are well structured, it is assumed that there is a large collection of previously annotated material that can define the rules for classifiers. \textbf{Approximate string matching} techniques find items in a database when there may be a spelling mistake or other error in the keyword \cite{ASM1}. This is becoming a more relevant issue for fast-growing knowledge areas such as information retrieval \cite{ASM4}. Various techniques have been studied to address the identity uncertainty of objects, and they are briefly reviewed in this study. These techniques can be distance based, token based, or a hybrid of the distance-based and token-based models.\\~ Damerau-Levenshtein metrics are the main approximate string matching techniques based on distance functions between two strings \cite{ASM2} \cite{ASM3}. The most famous function in this category is \textit{edit-distance}, defined as the minimum number of changes required to convert one string into the other \cite{ASM3}. Several alternative distance functions to the edit-distance have been proposed, such as $q$-grams and maximal matches \cite{ASM5}.\\~ The next set of techniques are token based: probabilistic object identification methods adapted for string matching tasks \cite{FSM1}\cite{FSM2} \cite{ASM1}. \textit{Jaccard similarity} and \textit{cosine similarity} are common token-based measures widely used in the information retrieval community \cite{FSM1}. Hybrid techniques combine distance-based and token-based string matching measures, such as Jaro-Winkler \cite{FSM4}. All of these string matching algorithms have been further developed through filtering and bit-parallelism approaches.\\~ The fastest algorithms use a combination of filters to discard most of the text by focusing on the potential matches. Hybrid models significantly improve precision and recall, reducing the error to a range between $0.1$ and $0.2$ \cite{ASM4}. Network inference requires highly accurate data \cite{Error1}. For this study various string matching techniques were examined for the Legislation Network and, comparing the results, a hybrid model of \textit{Jaccard similarity} and \textit{edit-distance} is used, as described in the next section. \textbf{The main contribution of this study} is the proposed Information Extraction framework, which engages several processes and enables researchers to access the network information contained in historic documents. This framework makes it possible to study the Legislation Network as a dynamic graph. In this paper the case study covers all Acts in the New Zealand legislation corpus, including historic, expired, repealed and consolidated Acts, as at the end of September 2018. This comes to a set of 23870 PDF files, of which about $87\%$ are in scanned image format.
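The OCR step can be scripted. Purely as an illustration (the case study used the commercial ABBYY FineReader tool, so the open-source stack below is an assumption rather than the actual pipeline), a minimal Python sketch of this step is:
\begin{verbatim}
# Hypothetical stand-in for the OCR step using the open-source
# Tesseract engine; the case study itself used ABBYY FineReader.
# Requires the pdf2image and pytesseract packages plus a local
# Tesseract installation.
from pdf2image import convert_from_path
import pytesseract

def pdf_to_text(pdf_path):
    """Render each page of a scanned PDF and OCR it to plain text."""
    pages = convert_from_path(pdf_path, dpi=300)
    return "\n".join(pytesseract.image_to_string(page) for page in pages)

raw_text = pdf_to_text("example_act_1860.pdf")  # hypothetical file name
\end{verbatim}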
\autoref{sample} shows a sample image of an average-quality scanned PDF document. The proposed framework suggests a high-performance procedure to derive network information from such poor-quality documents. In the following sections, examples and experimental results are used to illustrate the framework, its performance, and its potential applications. In this section a summary of the required Information Extraction processes and methodologies is discussed. In the next section the proposed framework is presented and various examples are explained. Then the case study analysis and the application of the proposed framework are examined. Next, a number of experiments are designed and studied to evaluate the accuracy of the extracted information and to study the robustness of the Legislation Network. The study finishes with a quick review of the novelty and importance of discovering the time-varying behaviour of the Legislation Network. \begin{figure} \centering \includegraphics[scale=.7]{docexample.pdf} \caption{Married Women Property Protection Act 1860} \label{sample} \end{figure} \section{The proposed Information Extraction framework} In this section, the Information Extraction framework to build the Legislation Network is discussed. \autoref{IEF} depicts an overview of the proposed framework.\\~ The process starts with the conversion of non-machine-readable files to text using available \textit{OCR} tools. This step is relatively straightforward, but can be time-consuming considering the number of documents in the study. As mentioned earlier, in the case study the tool ABBYY FineReader \cite{ABBYY} is used. The average accuracy of this step is just above 80 percent, which implies the need for a typo-analysis step, discussed in Section \ref{Approximate String Matching}. \subsection{Text Canonicalization} The next step in the proposed framework is \textit{text canonicalization} \cite{canon}. Several tasks are required to convert all of the text files into a uniform format, so that the rules can be defined more easily while running the Information Extraction tasks. The text canonicalization step can be implemented via different approaches depending on the text style and language. In this paper, some of the common tasks are suggested, and two potentially required tasks are described.\\~ In the case study the designed system converts all letters to \textit{lowercase}. This brings a level of consistency across the text documents and the Information Extraction rules. In the experiments, the system also replaces \textit{special characters} with generic tags in the text. The only characters which are not replaced are parentheses, as they are often used in the titles of legislation. The other suggested generic text canonicalization task is to replace multiple spaces with one space. \begin{wrapfigure}{l}{5.5cm} \includegraphics[scale=.7]{IEProcess-3.jpg} \caption{Legal Text IE Framework} \label{IEF} \end{wrapfigure} Apart from the general text canonicalization steps, there are other potential corrections that shape the text into a better input for the Information Extraction process. The first is to remove text margins that OCR mistakenly merges into the main body of text. \autoref{sample} includes examples of these margins that might impact the Information Extraction rules and result in errors. As an example in the case study, the phrase \textit{short title} is often used in the margin, and OCR merges it into the nearest part of the text.
This might impact the named entity recognition task, so the system removes this phrase, and many of its possible misspelled forms, from the text. The next recommended text canonicalization step is to resolve misspelling issues for the keywords used in the Information Extraction process. As an example, in this study Acts are the main entities, and the rule to recognize them uses the keyword \textit{act}; so the system corrects some of the possible misspelled forms of the word \textit{act}. \begin{table}[h] \centering{}\protect\caption{OCR and Text canonicalization result comparison} \label{tab:canonicalization} \adjustbox{max height=\dimexpr\textheight-6.5cm\relax, max width=\textwidth}{ \begin{tabular}{l l l} \hline OCR & I$\sim$ The Short Title' of this' Act shall be, the ((Married Short TItle,'Vomen's 'Property'Protectio\pounds Aot, 1860." \\ \hline Text canonicalization & I the short title of this act shall be the ((married vomens propertyprotectio act 1860\\ \hline \end{tabular} } \end{table} To better explain this step, \autoref{tab:canonicalization} gives an example referring to the third paragraph of the \autoref{sample} image. As can be seen, text canonicalization converts the text to a simpler uniform structure and prepares it for the next steps of \textit{named entity recognition} and \textit{relation extraction}. Typo resolution is not expected at this step; it is covered in the last step via Approximate String Matching. \subsection{Named Entity Recognition and Relation Extraction} As explained, the text canonicalization step normalizes the text files to a uniform format and prepares them for the in-depth information extraction steps. To extract the network node information, a combined Named Entity Recognition approach is suggested which engages rules and supervised learning. To identify the rules, a sample set of the documents should be reviewed. The sample size is not necessarily large, but a stratified sampling approach is suggested to eliminate the impact of time-period style and authors' writing styles. \autoref{Entities} shows examples of the entities in the case study based on the Act recognition rules. Acts are the main part of the New Zealand legislation system and, as explained before, the case study only considers Acts. \begin{table}[h] \centering{}\protect\caption{Entities, types and examples} \label{Entities} \adjustbox{max height=\dimexpr\textheight-6.5cm\relax, max width=\textwidth}{ \begin{tabular}{ l l l l } \hline Type & Tag & Sample & Canonicalized text \\ \hline Year & YR & 1860 & the short title of this act shall be the ((married vomens propertyprotectio act 1860 \\ Act & ACT & married vomens propertyprotectio act 1860 & the short title of this act shall be the ((married vomens propertyprotectio act 1860\\ \hline \end{tabular} } \end{table} In the case study a stratified sampling method is used, and the strata are five different time periods spanning more than 200 years. A total of 55 text files are reviewed, and several clear rules engaging a set of keywords and lists are built to identify the named entities. \autoref{NERrules} provides examples of the rules in each stratum, where $y$ represents the year in which the Act commenced; a minimal code sketch of the canonicalization pass and one such rule follows.
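To make these steps concrete, the following is a minimal Python sketch of the canonicalization pass together with one entity-recognition rule. The regular expression is an illustrative reconstruction of the $1850<y<1900$ rule of \autoref{NERrules}, and the misspelling table is a toy subset; neither is the exact pattern set used in the case study.
\begin{verbatim}
# Illustrative canonicalization plus one NER rule; the patterns
# are hypothetical reconstructions, not the case-study rules.
import re

FIXES = {"aot": "act", "aet": "act"}   # toy misspelling table

def canonicalize(text):
    text = text.lower()                         # lowercase pass
    text = re.sub(r"[^a-z0-9() ]", " ", text)   # drop specials, keep ()
    words = [FIXES.get(w, w) for w in text.split()]
    return " ".join(words)                      # also collapses spaces

# Rule sketch for the 1850 < y < 1900 stratum:
# "the [short title] of this act [shall be] the [act name] [year]"
ACT_RULE = re.compile(
    r"the short title of this act shall be the \(*([a-z() ]+? act) (\d{4})")

line = canonicalize("The Short Title of this Act shall be, "
                    "the ((Married Vomens PropertyProtectio Aot, 1860.")
print(ACT_RULE.search(line).groups())
# -> ('married vomens propertyprotectio act', '1860')
\end{verbatim}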
\begin{table}[h] \centering{}\protect\caption{Examples of the Named Entity Recognition rules} \label{NERrules} \adjustbox{max height=\dimexpr\textheight-6.5cm\relax, max width=\textwidth}{ \begin{tabular}{ l l l l } \hline Stratum & Keyword example & Rule example & Sample document \\ \hline $y<1850$ & \textit{ordinance} & an [\textit{keyword}] to \textit{any phrase} of [\textit{act name}] [date] & Police Magistrates Act 1841 \\ $1850<y<1900$ & \textit{short title}, \textit{shall be} & the [\textit{keyword}] of this act [\textit{keyword}] the [\textit{act name}] [\textit{year}] & Customs Tariff Act 1873 \\ $1900<y<1950$ & \textit{amend}, \textit{consolidate} & an act to [\textit{keyword}] \textit{any phrase} of the [\textit{act name}] [date] & Mining Act 1926 \\ $1950<y<2000$ & \textit{meaning}, \textit{section} & same [\textit{keyword}] as in [\textit{keyword}] [any number] of the [\textit{act name}] [\textit{year}] & Copyright Act 1962 \\ $2000<y$ & act & this [\textit{keyword}] is the [\textit{act name}] [\textit{year}] & Social Security Act 2018\\ \hline \end{tabular} } \end{table} Alongside recognizing the entities, a rule-based Relation Extraction approach is suggested to extract the network edge information, considering that legislation texts are contextually structured. To identify the rules, this study suggests using the same sample set that was used for the Named Entity Recognition. From the case study it is observed that the style of writing legislation has changed considerably over time, so the sampling approach is very important to minimize the impact of the various text styles. Reviewing the sample files shows a large collection of previously annotated material that can define the rules for relation classifiers. For the case study, as explained, a total of 55 text files are reviewed, and several classifier rules engaging a set of keywords are built to identify the relations between the named entities. \autoref{Relations} summarizes the entity relation list for the case study and provides examples. \begin{table}[h] \centering{}\protect\caption{Relations example} \label{Relations} \adjustbox{max height=\dimexpr\textheight-6.5cm\relax, max width=\textwidth}{ \begin{tabular}{ l l l l } \hline Relation & Type & Canonicalized text & Sample document \\ \hline Title & TIT & the short title of this act shall be the ((married vomen propertyprotectio act 1860 & Married Women Property Protection Act 1860 \\ Citation & CIT & within the meaning of section 5 of the companies act 1993 & Trade Marks Act 2002 \\ Amendment & AMD & section 25.1b amended, by section 5.2 of the trade marks amendment act 2005 & Trade Marks Act 2002\\ Partial Repeal & PRP & section 5(1) repealed, by section 4(8) of the trade marks amendment act 2011 & Trade Marks Act 2002\\ Repeal & FRP & acts repealed. 1860, No. 9.the married vyomens pfoperty protection act, 1860. & Married Women Property Protection Act 1880\\ \hline \end{tabular} } \end{table} This suggested process can be generalized to any other Legislation Network building exercise, considering that legislation texts are coherently structured, so there is always a large collection of previously annotated material that can define the rules for entity recognition and relation classifiers. \subsection{Approximate String Matching}\label{Approximate String Matching} Named entity recognition identifies the Acts, and relation extraction recognizes the relationships between them.
So these two steps result in an initial version of the node list and the edge list of the intended Legislation Network. However, testing this network shows that the extracted data is of poor quality, with an average error rate of 12 percent\footnote{To estimate this error rate, a cluster sampling method is used to randomly choose ten sets of 30 entities. By manually checking the samples, the rate of incorrectly matched entities is observed.}, so another step is required to resolve typos and imperfect entities. This poor-quality data implies the need for the approximate string matching step. To run this step two main components are required: the matching technique and the correct reference pattern. \autoref{ASMexample} provides an example which shows the first match as the output of the proposed approximate string matching technique. \begin{table}[h] \centering{}\protect\caption{Approximate string matching example} \label{ASMexample} \adjustbox{max height=\dimexpr\textheight-6.5cm\relax, max width=\textwidth}{ \begin{tabular}{ l l l } \hline Technique & Extracted entity & First match \\ \hline Hybrid Model & married vomens propertyprotectio act 1860 & married women property protection act 1860 \\ \hline \end{tabular} } \end{table} As mentioned earlier, after implementing different approximate string matching techniques, a hybrid model of \textit{Jaccard} and \textit{edit-distance} is designed and proposed. \begin{figure}[h] \centering \includegraphics[scale=.5]{ASM.pdf} \caption{Precision and Recall comparison of the approximate string matching techniques} \label{ASMPic} \end{figure} Algorithm \ref{ASM} shows the proposed hybrid model, and \autoref{ASMPic} compares the results of the hybrid model with the Edit-Distance and Jaccard techniques in terms of precision and recall of the approximate string matching step. To run this comparison, a stratified sampling technique is used with different time periods being the groups\footnote{Time periods: before 1800, 1800-1850, 1850-1900, 1900-1950, 1950-2000, 2000-2018}. \begin{algorithm} \caption{Approximate String Matching of Legislation}\label{ASM} \begin{algorithmic}[1] \Procedure{Legislation Name Matching}{} \State $string1 \gets \textit{Extracted legislation name}$ \State $\textit{masterlist} \gets \text{Open }\textit{Legislation Title Master List}$ \State $j \gets \textit{1}$ \State $\textit{tline} \gets \text{The first line of }\textit{masterlist}$ \State $GetOut \gets \textit{0}$ \While {$<GetOut = 0> \AND <tline \neq 0>$} \State $string2 \gets \textit{tline}$ \State $m(j) \gets \textit{Jaccard(string1, string2)}$ \State $n(j) \gets \textit{EditDistance(string1, string2)}$ \If {$<m(j) \geq 0.5> \OR <n(j) = 0> $} \State $GetOut \gets 1$ \EndIf \State $j \gets j+1$ \State $\textit{tline} \gets \text{The next line of }\textit{masterlist}$ \EndWhile \State \textbf{close} $\textit{masterlist}$ \State $[x1 , I1] \gets \textit{max(m)}$ \State $[y1 , I2] \gets \textit{min(n)}$ \If {$y1 \leq 5$} \State $ match \gets I2$ \ElsIf {$x1 > 0$} \State $ match \gets I1$ \EndIf \EndProcedure \end{algorithmic} \end{algorithm} The graph in \autoref{ASMPic} shows the error rates in each time sample of documents for each approximate string matching model. For example, for the documents commenced prior to 1850, the first marker point on each graph line shows the false-negative error and the precision of the corresponding method.
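For readers who prefer executable code, the following is a minimal Python rendering of Algorithm \ref{ASM} under simplifying assumptions: the master list is an in-memory list of canonicalized titles, and the early-exit branch of the pseudocode is omitted for clarity.
\begin{verbatim}
# Sketch of the hybrid Jaccard / edit-distance matcher of
# Algorithm 1 (simplified: no early exit, in-memory master list).
def jaccard(a, b):
    """Token-based Jaccard similarity of two titles."""
    sa, sb = set(a.split()), set(b.split())
    return len(sa & sb) / len(sa | sb)

def edit_distance(a, b):
    """Classic dynamic-programming Levenshtein distance."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,                 # deletion
                           cur[j - 1] + 1,              # insertion
                           prev[j - 1] + (ca != cb)))   # substitution
        prev = cur
    return prev[-1]

def best_match(extracted, master_list):
    """Prefer a near-exact edit-distance hit, else best Jaccard hit."""
    dists = [edit_distance(extracted, t) for t in master_list]
    sims = [jaccard(extracted, t) for t in master_list]
    if min(dists) <= 5:                 # threshold from Algorithm 1
        return master_list[dists.index(min(dists))]
    if max(sims) > 0:
        return master_list[sims.index(max(sims))]
    return None

master = ["married women property protection act 1860",
          "trade marks act 2002"]
print(best_match("married vomens propertyprotectio act 1860", master))
# -> married women property protection act 1860
\end{verbatim}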
Returning to \autoref{ASMPic}: as can be seen, samples from this oldest group of Acts show a higher error rate regardless of the approximate string matching method, and Edit-Distance performs slightly better than the Jaccard index on the old documents. In summary, the proposed hybrid model performs significantly better than the other two methods for documents of all ages, with a false-negative error of less than two percent and an average precision of more than 98 percent. In the case study, the pattern used for the approximate string matching step is the list of all NZ Acts provided by NZLII \cite{NZLII}. Without access to such a master list, typo resolution could be considerably more time-consuming. Approximate string matching considerably improves the quality of the extracted information, resulting in a reliable edge list and node list. Later in this study the evaluation of the final extracted data set and the robustness of the network are discussed. The robustness study proves the value of a high-performing approximate string matching technique, which improves the data quality significantly. \section{Application}\label{Application} The proposed Information Extraction framework resolves the historic data limitation of previous studies \cite{NSakhaee2016}\cite{NSakhaee2017} and results in a large and reliable dynamic network data set, called LegiNet, which is available at \cite{dataverse}. This dynamic \cite{DynamicGraph} and complex network has a very interesting range of characteristics and behaviours. To maintain the subject consistency of this paper, more in-depth analysis of the network behaviours is deferred to future studies. In this section, generic network science characteristics of the case-study network are discussed, and an overall view of the evolution of the structure and of node importance is presented. \autoref{NewvsOld} compares the network produced by the Information Extraction process with the earlier versions of the network, which were built by parsing the limited available XML resources. As illustrated, the network size and structure change significantly compared with the earlier versions. \begin{table}[h] \centering{}\protect\caption{NZ Legislation Network, this study versus previous studies} \label{NewvsOld} \resizebox{\textwidth}{!}{\begin{tabular}{ l l l l l l l} \hline Network & Nodes & Edges & Average degree & Average CC\footnote{The average clustering coefficient (CC) is calculated based on the assumption that the network is directed, using the approach discussed in \cite{NSakhaee2016}} & Average path-length & Network type\\ \hline This study & 16385 & 137751 & 8.407 & 0.216 & 4.873 & dynamic\\ Previous studies & 3856 & 33884 & 8.878 & 0.39 & 3.569 & one snapshot\\ \hline \end{tabular}} \end{table} \autoref{graph} and \autoref{measures} capture the overall evolution of the Legislation Network in New Zealand from 1267 to the second quarter of 2018. To visualize the data, a network force-directed approach is used, as sketched below.
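Purely as an illustration of this visualization step, a minimal NetworkX/matplotlib sketch for one decade snapshot follows; the edge-list file name and format are assumptions, not the pipeline actually used.
\begin{verbatim}
# Hypothetical sketch: load a decade snapshot of the citation
# network and draw it with a force-directed (spring) layout.
# Assumes an edge list "acts_2010s.csv" with "source,target" rows.
import csv
import matplotlib.pyplot as plt
import networkx as nx

G = nx.DiGraph()
with open("acts_2010s.csv") as f:
    for source, target in csv.reader(f):
        G.add_edge(source, target)

pos = nx.spring_layout(G, seed=42)   # Fruchterman-Reingold placement
nx.draw(G, pos, node_size=8, width=0.2, arrows=False)
plt.savefig("snapshot_2010s.png", dpi=300)
\end{verbatim}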
\begin{figure} \begin{center} \begin{minipage}{0.3\linewidth} \includegraphics[width=0.82\linewidth]{1267-1839.jpg} \subcaption{1267-1839}\label{a1} \end{minipage}% \begin{minipage}{0.3\linewidth} \includegraphics[width=0.82\linewidth]{1840s.jpg} \subcaption{1840s} \end{minipage}% \begin{minipage}{0.3\linewidth} \includegraphics[width=0.82\linewidth]{1850s.jpg} \subcaption{1850s}\label{c1} \end{minipage}% \vspace{-\parskip} \begin{minipage}{0.3\linewidth} \includegraphics[width=0.82\linewidth]{1860s.jpg} \subcaption{1860s}\label{d1} \end{minipage}% \begin{minipage}{0.3\linewidth} \includegraphics[width=0.82\linewidth]{1870s.jpg} \subcaption{1870s} \end{minipage}% \begin{minipage}{0.3\linewidth} \includegraphics[width=0.82\linewidth]{1900s.jpg} \subcaption{1880-1909} \end{minipage}% \vspace{-\parskip} \begin{minipage}{0.3\linewidth} \includegraphics[width=\linewidth]{1910.jpg} \subcaption{1910s} \end{minipage}% \begin{minipage}{0.3\linewidth} \includegraphics[width=\linewidth]{1920.jpg} \subcaption{1920s} \end{minipage}% \begin{minipage}{0.3\linewidth} \includegraphics[width=0.9\linewidth]{1950.jpg} \subcaption{1930-1959} \end{minipage}% \vspace{-\parskip} \begin{minipage}{0.3\linewidth} \includegraphics[width=\linewidth]{1970.jpg} \subcaption{1960-1979} \end{minipage}% \begin{minipage}{0.3\linewidth} \includegraphics[width=.97\linewidth]{2000.jpg} \subcaption{1980-2009} \end{minipage}% \begin{minipage}{0.3\linewidth} \includegraphics[width=\linewidth]{current.jpg} \subcaption{2010s} \end{minipage}% \end{center} \caption{Overview of the network structure evolution}\label{graph} \end{figure} In the layouts in \autoref{graph}, each node is placed according to its connections to the other nodes. As can be seen, references between the Acts first appear in the 1840s, but the data set visually looks like a graph from the 1850s onwards, and it gets denser from the 1870s. \begin{table}[h] \centering{}\protect\caption{Overview of the network measures evolution} \label{measures} \resizebox{\textwidth}{!}{\begin{tabular}{ l l l l l l l l l l l l l} \hline Time & 1267-1839 & 1840s & 1850s & 1860s & 1870s & 1880-1909 & 1910s & 1920s & 1930-1959 & 1960-1979 & 1980-2009 & 2010s\\ \hline Number of nodes & 28 & 148 & 315 & 939 & 1945 & 4712 & 5473 & 6292 & 8622 & 11940 & 15524 & 16199\\ Number of edges & 0& 12 & 64 & 1252 & 4756 & 14851 & 18767 & 24538 & 44859 & 70683 & 121019 & 130969\\ Average degree & 0 & 0.081 & 0.203 & 1.333 & 2.445 & 3.152 & 3.429 & 3.900 & 5.203 & 5.920 & 7.796 & 8.085\\ Average path length & 0 & 1 & 1.046 & 2.605 & 3.592 & 8.061 & 7.514 & 8.301 & 6.164 & 5.554 & 5.051 & 4.927\\ Directed CC & 0 & 0 & 0.001 & 0.007 & 0.12 & 0.13 & 0.133 & 0.143 & 0.161 & 0.193 & 0.213 & 0.212\\ Small-world\footnote{The small-world sigma $\sigma$ is calculated by comparing the clustering coefficient and average path length of each network to those of 50 equivalent random networks with the same average degree, as suggested by \cite{Sigma}} $\sigma$ & NA & 0 & 0.447 & 1.165 & 15.587 & 75.173 & 82.519 & 80.122 & 24.239 & 16.131 & 195.837 & 208.084\\ \hline \end{tabular}} \end{table} As can be seen in \autoref{measures}, the graphs show some small-world properties from the 1860s, with $\sigma>1$, and the small-world property of the graphs is significant from the 1970s, compared with 50 random graphs. As illustrated, overall the network gets denser and the average degree grows. More significant clusters are observed during the most recent decades, as can be seen in \autoref{graph}.
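The small-world $\sigma$ reported in \autoref{measures} can be reproduced with standard tooling. A hedged sketch, reusing the graph $G$ loaded in the previous sketch, is given below; note that \texttt{nx.sigma} requires a connected undirected graph and generates its own random reference graphs (the footnote of \autoref{measures} used 50 of them), and that the computation is slow on large snapshots.
\begin{verbatim}
# Small-world sigma = (C/C_rand)/(L/L_rand) on the largest
# connected component of the undirected version of G.
U = G.to_undirected()
core = U.subgraph(max(nx.connected_components(U), key=len))
print(nx.sigma(core, niter=5, nrand=50))
# Katz prestige, used for the centrality rankings below, is
# available as nx.katz_centrality (alpha must stay below the
# reciprocal of the largest adjacency eigenvalue).
\end{verbatim}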
The clusters observed above could be the outcome of housekeeping activities such as edge and node removal, or the result of a maturing referencing approach in the legal drafting process. Both of these hypotheses should be examined in future studies. \begin{figure} \begin{center} \begin{minipage}{0.45\linewidth} \includegraphics[width=\linewidth]{1267-1869chart.pdf} \centering\includegraphics[width=0.6\linewidth]{1267-1869words.png} \subcaption{1860s}\label{a2} \end{minipage}% \begin{minipage}{0.45\linewidth} \includegraphics[width=\linewidth]{1870-1909chart.pdf} \centering\includegraphics[width=0.6\linewidth]{1870-1909words.png} \subcaption{1870-1909}\label{b2} \end{minipage}% \vspace{-\parskip} \begin{minipage}{0.45\linewidth} \includegraphics[width=.95\linewidth]{1910-1959chart.pdf} \centering\includegraphics[width=0.6\linewidth]{1910-1959words.png} \subcaption{1910-1959}\label{c2} \end{minipage}% \begin{minipage}{0.45\linewidth} \includegraphics[width=\linewidth]{1960-1979chart.pdf} \centering\includegraphics[width=0.6\linewidth]{1960-1979words.png} \subcaption{1960-1979}\label{d2} \end{minipage}% \vspace{-\parskip} \begin{minipage}{0.45\linewidth} \includegraphics[width=\linewidth]{1980-2009chart.pdf} \centering\includegraphics[width=0.6\linewidth]{1980-2009words.png} \subcaption{1980-2009}\label{e2} \end{minipage}% \begin{minipage}{0.45\linewidth} \includegraphics[width=\linewidth]{2010-2018chart.pdf} \centering\includegraphics[width=0.6\linewidth]{2010-2018words.png} \subcaption{2010-2018}\label{f2} \end{minipage}% \end{center} \caption{Time evolution of the top ten legislation and the top subjects in the top twenty legislation}\label{chart} \end{figure} Based on the network structure information provided in \autoref{measures} and \autoref{graph}, six different time periods are chosen for the centrality evolution analysis. \autoref{chart} captures the time evolution of the top 10 nodes and the most frequent words\footnote{To find the frequent words, the Textalyzer Python module is used. Frequent prepositions, conjunctions and articles are excluded from the analysis.} in the top 20 nodes, based on the Katz prestige centrality measure. As mentioned earlier, prior to the 1860s the graphs do not show significant small-world properties. The visual presentation in \autoref{a1} to \autoref{c1} also suggests that the network can be considered a random graph during this period. So the Katz centrality distribution is nearly uniform in these time periods, and they are excluded from \autoref{chart}. \autoref{a2} shows the most important nodes, giving the impression that \textbf{Land} was the most important legal subject at that time. In the next selected time period the network shows small-world properties and, as can be seen in \autoref{b2}, the centrality measure shows a higher kurtosis, with the word \textbf{Council} being the most frequent topic in the legal domain. Similarly, the other graphs reflect the changes in the network structure and highlight the relationship between the laws and the socio-economic requirements of the country. In the current decade, with new sets of legislation being introduced and referencing older documents, the centrality measure has increased compared with the previous decade, and the prominent legal topics show a change which could be a good reflection of society's needs. \section{Evaluation and Robustness} In this section the performance of the proposed framework is discussed.
As explained in the previous section, the main goal of the study is to extract the information needed to build Legislation Network. The framework jointly applies Named Entity Recognition, Relation Extraction, and Approximate String Matching to extract the network's node and edge information. In this section the proposed framework is evaluated and the related errors are estimated. The familiar metrics of precision and recall are used to evaluate the system. High \textbf{precision} means that the framework returns substantially more relevant results than irrelevant ones, while high \textbf{recall} means that the process returns most of the relevant results. At the end of this section the impact of the identified errors on the network structure is explained, and the robustness is assessed. \subsection{Error Estimation, Precision, and Recall} In the proposed Information Extraction process, Named Entity Recognition is combined with Approximate String Matching to recognize, validate and optimize the entities (the nodes and the edges). \autoref{Error} illustrates the occurrence of the false-positive and false-negative errors in this process and helps in studying the robustness of the network. \begin{figure}[h] \centering \includegraphics[scale=1]{Error_diagram.jpg} \caption{Error Diagram} \label{Error} \end{figure}

A type \textit{I} error $\alpha_1$ occurs when the Entity Recognition process finds an entity but the Approximate String Matching process fails to find the correct match. On one side, this issue contributes to the false-positive error because it adds invalid entities to the output; these invalid entities impact the accuracy of the node list and the edge list of Legislation Network. To estimate $\bar\alpha_1$ in the case study, a cluster sampling method is used to randomly choose ten sets of 30 entities. By manually checking the samples, the rate of incorrectly matched entities is observed. A Kolmogorov-Smirnov test suggests that the estimated error $\bar\alpha_1$ has a normal distribution with the parameters in \autoref{errorrates}. $\beta_1$ also occurs when the approximate string matching system picks a wrong match for an entity. This issue contributes to the false-negative error because entities that are wrongly matched to other entities are missing from the data set. The estimation methodology and the estimated value of $\bar\beta_1$ are the same as those of $\bar\alpha_1$, as indicated in \autoref{errorrates}. $\beta_2$ measures the Information Extraction rules' performance: if the rules fail to recognize entities, those entities are missed, resulting in another type of false-negative error. The estimation process for $\bar\beta_2$ is different from the previous two errors, and it is harder to address. For the case study a sample set of 30 text files is randomly chosen using the cluster sampling method. Then all of the extracted entities for each document are compared to the actual entities in a human-involved process. The list of missing entities is categorized into two parts: those caused by a typo, and those caused by insufficient rules to recognize the entity. The rate of missing entities caused by weak or missing rules is calculated for each document and denoted by $\bar\beta_2$.
The Kolmogorov-Smirnov results across all of the 30 documents show that $\bar\beta_2$ has a normal distribution with the parameters in \autoref{errorrates}.\\~ \begin{table}[h] \centering{}\protect\caption{Errors, sensitivity and specificity} \label{errorrates} \renewcommand{\arraystretch}{2} \begin{tabular}{ l l l l l | l l } \hline Measure & $\bar\alpha_1$ & $\bar\beta_1$ & $\bar\beta_2$ & $\bar\beta_3$ & $\bar\alpha$ & $\bar\beta$\\ \hline $\mu$ & 0.0160 & 0.0160 & 0.0012 & 0.0007 & 0.0160 & 0.0179\\ $\sigma$ & 0.0012 & 0.0012 & 0.0001 & 0.0001 &0.0012 & 0.0012\\ \hline \end{tabular} \renewcommand{\arraystretch}{1} \end{table}

$\beta_3$ addresses the error that occurs when typos prevent entities from being recognized. The estimation process is very similar to that of $\bar\beta_2$: a sample of 30 text files is collected, and the rate of missing entities caused by OCR typos is calculated for each document and denoted by $\bar\beta_3$. The Kolmogorov-Smirnov results for the selected documents show that $\bar\beta_3$ has a normal distribution with the parameters in \autoref{errorrates}. In the sample it is observed that the typos that cause entity recognition failure are only numeric typos. For example, OCR might convert 1987 to l987 by misreading the number 1 as the letter l; the Information Extraction rules then fail to recognize l987 as a year, so the entity is missed.\\~ As \autoref{Error} shows, $\alpha_1$ is the only false-positive error, so it alone contributes to the \textbf{overall false positive error} of the system. \autoref{errorrates} captures $\bar\alpha$, assuming that $\bar\alpha$ estimates the overall type \textit{I} error. To estimate the \textbf{overall false negative error} of the system, $\beta_1$, $\beta_2$, and $\beta_3$ are considered as mutually exclusive events; from \autoref{Error} it is clear that the pairwise intersections of these errors are empty, so the overall rate is estimated by the sum $\bar\beta = \bar\beta_1 + \bar\beta_2 + \bar\beta_3$. \autoref{errorrates} shows $\bar\beta$, the estimated value for the overall false negative, or type \textit{II}, error.\\~ To calculate the Precision and Recall, \autoref{precision} and \autoref{recall} are used. \small \begin{flalign}\label{precision} \text{Precision} &= \frac{\text{True Positive}}{\text{True Positive}+\text{False Positive}}= \frac{1-\bar\alpha-\bar\beta}{1-\bar\beta} \end{flalign} \begin{flalign}\label{recall} \text{Recall} &= \frac{\text{True Positive}}{\text{True Positive}+\text{False Negative}}= \frac{1-\bar\alpha-\bar\beta}{1-\bar\alpha} \end{flalign} \normalsize Referring to the above equations and \autoref{errorrates}, a Precision of 98.37\% and a Recall of 98.18\% are obtained. These outcomes indicate the high performance of the proposed Information Extraction framework, which results in high data reliability of the output Legislation Network. As explained in section \ref{Approximate String Matching}, the proposed hybrid Approximate String Matching technique substantially reduces the errors. It is worth mentioning that at the earlier stages of the study, using classic string matching techniques, the error rates were considerably higher and the accuracy of the network was questionable. A time-consuming examination process involving manual checks was applied to develop the hybrid model, which resulted in high precision and recall. The improvement required substantial effort and time, but resulted in accuracy and confidence in Legislation Network studies.
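Both computations above are straightforward to reproduce. In the following minimal sketch, the per-cluster error rates are hypothetical stand-ins for the manual-check results; only the $\bar\alpha$ and $\bar\beta$ means are taken from \autoref{errorrates}.

\begin{verbatim}
from statistics import mean, stdev
from scipy import stats

# Hypothetical per-cluster error rates from a manual check of ten samples.
alpha1_samples = [0.015, 0.017, 0.016, 0.014, 0.018,
                  0.016, 0.017, 0.015, 0.016, 0.016]
mu, sd = mean(alpha1_samples), stdev(alpha1_samples)
print(stats.kstest(alpha1_samples, "norm", args=(mu, sd)))  # high p-value:
                                                            # normality fits

# Precision and Recall from the overall estimates of the table.
alpha_bar, beta_bar = 0.0160, 0.0179
precision = (1 - alpha_bar - beta_bar) / (1 - beta_bar)  # = 0.9837
recall = (1 - alpha_bar - beta_bar) / (1 - alpha_bar)    # = 0.9818
\end{verbatim}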
\subsection{Robustness} With a coherent understanding of the errors, it is important to study the robustness of the network to those errors. The robustness study demonstrates the importance of data accuracy, which supports the value of the proposed hybrid model for approximate string matching. In this section, the diameter and three major centrality measures are used to study the network robustness. To understand the diameter robustness of the network, attack and failure analysis is required. As discussed earlier, Legislation Network in general shows scale-free characteristics, so a reasonable error tolerance of the network is expected under random failures, but vulnerability is expected under attacks \cite{robust2}. To study the network robustness to node failures, a fraction $f$ of nodes is removed at random and the diameter $d$ of the network is recalculated. To study the network robustness to attacks, a fraction $f$ of the largest nodes\footnote{Based on their connectivity (total degree)} is removed and the change to the diameter $d$ is observed. The results of both failure and attack are captured in \autoref{Robust1}. The observed tolerance to failures and vulnerability to attacks shows that connectivity is provided by a few highly connected nodes, while the majority of nodes have only a few edges. \begin{figure}[h] \centering \includegraphics[scale=.4]{Robust.png} \caption{Changes of the network diameter $d$ as the function of fraction $f$}\label{Robust1} \end{figure}

As can be seen, the vulnerability to attacks appears immediately after removing a small fraction $f = 0.3\%$ of the highly connected nodes. This attack scenario is highly unlikely in Legislation Network considering the high Precision and Recall of the proposed data extraction process. As discussed in previous studies \cite{NSakhaee2016} \cite{NSakhaee2017}, the most relevant centrality measure for Legislation Network is the Katz second prestige measure. In recent studies, the reliability of different centrality measures against network manipulation has been addressed \cite{CRobust2} \cite{CRobust}, but Katz prestige centrality is not much discussed. In this paper the robustness of the Katz centrality, betweenness centrality, and degree centrality of Legislation Network against edge deletion errors is studied. To address the robustness, four major measures of accuracy proposed in \cite{CRobust2} and \cite{CRobust} are used: Top 1, Top 3, Top 10 percent, and the Pearson correlation, comparing the centrality measures between the true network and the manipulated network. \begin{figure}[h] \begin{minipage}{0.35\linewidth} \includegraphics[width=\linewidth]{KatzCentrailtyRobust.png} \subcaption{Katz Prestige Centrality} \end{minipage}% \begin{minipage}{0.35\linewidth} \includegraphics[width=\linewidth]{BetweennessRobust.png} \subcaption{Betweenness Centrality} \end{minipage}% \begin{minipage}{0.35\linewidth} \includegraphics[width=\linewidth]{DegreeRobust.png} \subcaption{In-degree Centrality} \end{minipage}% \caption{Robustness of the top nodes as the function of fraction of manipulated edges \textit{f}} \label{centralityrobust} \end{figure}

The error level is taken as a percentage value from the set $\{1\%, 5\%, 10\%, 20\%\}$, relative to the number of manipulated edges from the original true network. A sketch of these robustness experiments is given below.
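The following minimal sketch (again, not the code used in this study) illustrates both the failure/attack diameter experiment and the edge-deletion test; it assumes an undirected copy of the network for the component and diameter computations, and default networkx parameters for the Katz centrality.

\begin{verbatim}
import random
import networkx as nx
from scipy.stats import pearsonr

def diameter_after_removal(G, f, attack=False):
    """Remove a fraction f of nodes (random failure or highest-degree attack)
    and return the diameter of the largest remaining component."""
    H = G.copy()
    k = int(f * H.number_of_nodes())
    if attack:  # remove the k most connected nodes (total degree)
        victims = [n for n, _ in sorted(H.degree, key=lambda d: d[1],
                                        reverse=True)[:k]]
    else:       # random failure
        victims = random.sample(list(H.nodes), k)
    H.remove_nodes_from(victims)
    core = H.subgraph(max(nx.connected_components(H), key=len))
    return nx.diameter(core)

def katz_correlation(G, f):
    """Randomly delete a fraction f of edges and correlate Katz centrality
    between the manipulated network and the original one."""
    H = G.copy()
    H.remove_edges_from(random.sample(list(H.edges),
                                      int(f * H.number_of_edges())))
    c0, c1 = nx.katz_centrality_numpy(G), nx.katz_centrality_numpy(H)
    nodes = list(G.nodes)
    return pearsonr([c0[v] for v in nodes], [c1[v] for v in nodes])
\end{verbatim}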
\autoref{centralityrobust} shows the results of the different centrality measures as a function of the fraction of manipulated edges $f$. For each fraction level, the test is repeated 100 times, and the graphs show the average over all sampled sets. \autoref{pearson} shows the Pearson correlation between the node centralities in the manipulated network and the original network when 10\% of the edges are randomly deleted. \begin{table}[h] \centering{}\protect\caption{Node centrality Pearson correlation between the manipulated network and the original network} \label{pearson} \adjustbox{max height=\dimexpr\textheight-6.5cm\relax, max width=\textwidth}{ \begin{tabular}{ l l l l } \hline Measure & Katz prestige centrality & Betweenness centrality & Degree centrality\\ \hline Significance (p-value) & $2.2\times10^{-16}$ & $0.001$ & $0.003$ \\ Correlation & $0.939$ & $0.948$ & $0.67$ \\ \hline \end{tabular} } \end{table}

The pattern and level of robustness of the three centrality measures considered in this paper are not as similar as suggested in \cite{CRobust2}. In-degree centrality shows more fragility compared to the betweenness and Katz measures. This difference could be related to the network topology, as suggested by \cite{CRobust}. The results also confirm the findings of \cite{CRobust} \cite{CRobust2} that accuracy declines monotonically with increasing error. As can be seen in the graph, the Katz centrality is fairly robust to edge deletion when less than 20 percent of the network structure is touched. The graphs indicate a moderate fragility when the network structure is heavily manipulated. For example, the removal of 20 percent of the edges noticeably impacts the in-degree centrality; however, in more than 90 percent of these extreme samples, the top 1 node in the manipulated network is a member of the top ten percent of nodes in the original graph. The results imply that centrality measures on Legislation Network are quite robust under small amounts of error (5 percent or less) and somewhat fragile under larger data errors. The reliability of the network information is therefore very important for in-depth network studies. As explained earlier, the precision and recall of the proposed Information Extraction process are above 98 percent, so it is reasonable to compute centrality measures when studying Legislation Network. \section{Conclusion} This study focused on \textbf{time} as a very important attribute in understanding and analyzing legislation. Legislation Network has been discussed in recent years, but the importance of having access to historic legislation has received little attention. This paper underlined the value of studying legislation as dynamic networks, and proposed a new Information Extraction process to build a highly accurate Legislation Network. The performance of the data extraction framework was examined, compared to previous studies, and shown to be considerably high. This work contributes to the literature on network Information Extraction from old documents, and emphasizes the value and applications of the dynamic Legislation Network. The proposed process can be used not only in the legal domain but also in various research areas involving documented knowledge, facts, and cases. Analyzing a dynamic Legislation Network is a novel approach to understanding the underlying process behind the generation of laws, and to studying the behaviour, culture and growth of societies.
This subject is very interesting but mathematically involved, so it will be addressed in a separate study.
\section{Introduction} \blfootnote{Authors contributed equally to this work.} The digital transformation's impact on professional practices has led to a rapid evolution of in-demand job skills, making it difficult for enterprises and workers to track these changes. Manual evaluation is becoming exceedingly complex and time-consuming, justifying the need for an automatic evaluation of these changes \cite{bakhshi_future_2017}. One way to study these changes in job ads is automatic skill recognition \cite{squicciarini_demand_2021}. However, job offer data is not easy to access, even on job offer web platforms, mainly due to intellectual property issues. Moreover, the annotation needed to achieve good skill recognition performance through supervised machine learning is another complex and costly task. Some datasets offering job descriptions are accessible online, such as the \verb|mycareersfuture| public dataset \cite{bhola_retrieving_2020}. However, as \cite{khaouja2021survey} reported, very few public annotated datasets exist, and none are in French. As contributions, in this article, we propose \guillemet{\href{https://github.com/iid-ulaval/FIJO-dataset}{\color{red}French Insurance Job Offer (FIJO)}}, a free and public non-annotated and annotated dataset, to facilitate research in this domain. This dataset focuses on soft skills, which describe the way employees work alone and with others, instead of hard skills, which represent more formal knowledge used at work \cite{lyu2021soft}. We also explore the training of a token-wise NER French skill detection algorithm in the field of insurance with state-of-the-art algorithms\footnote{All code used to obtain these results will be available on GitHub \href{https://github.com/iid-ulaval/FIJO-code}{\color{red}here}.}. The rest of this paper is structured as follows. First, we give a brief overview of the literature on skill detection in \autoref{sec:relatedwork}. Second, we present FIJO in \autoref{sec:data}, including how we constructed the corpus, along with some statistics and analysis of the dataset. Third, we present our skill detection algorithms, the training settings and our results in \autoref{sec:skilldetection}. Finally, we draw some concluding remarks in \autoref{sec:conclusion}. \section{Related work} \label{sec:relatedwork} The first approach to recognizing skills inside a job offer uses statistical techniques. It is a commonly used approach in many pieces of work, such as \cite{malherbe_bridge_2016}, who detect hard and soft skills in job offers by matching a list of keywords in the text against a skill database. Their skill database uses knowledge from external databases, namely DBPedia and StackOverflow. Other works \cite{sodhi_content_2010, gardiner_skill_2018, squicciarini_demand_2021} do not rely on skill databases but use content analysis to detect the presence of certain words or concepts in offers. Approaches using machine learning to detect skills in job offers have received much attention recently \cite{khaouja2021survey}. The skill recognition problem has been modeled using topic modeling through Latent Dirichlet Allocation \cite{gurcan_big_2019}, text classification with CNN and LSTM \cite{sayfullina2018learning}, or NER with LSTM \cite{jia_representation_2018} and transformer-based models \cite{tamburri_dataops_2020}. However, these pieces of work mainly focus on specialized skills (e.~g.
IT skills \cite{gurcan_big_2019}), focus only on soft skills \cite{sayfullina2018learning}, or are applied to English job ads only. The survey on skill identification by \cite{khaouja2021survey} concludes that very few datasets are available online. Most recent works do not release their dataset, nor do they mention the reasons for the non-publication. For example, \cite{cerioli20205} uses content analysis and 5 million non-annotated job ads to determine whether or not testing software is a standard in the IT industry; the lack of publication is possibly due to intellectual property constraints from their industrial partner. However, a recent public dataset released by \cite{bhola_retrieving_2020} focuses on extracting identified hard skills in job ads. The dataset consists of 20,298 job ads, where each ad includes nearly 20 hard skills on average. A unique skill term corresponds to a unique class; thus, the overall dataset includes 2,548 skill classes. For example, \guillemet{Microsoft Word} is one skill, and \guillemet{Microsoft Excel} is another. This dataset lacks annotations for soft skills, which have been more in demand than hard skills over the past decade \cite{lyu2021soft}. \vspace{-1em} \section{Data} \label{sec:data} FIJO was created in partnership with four Canadian insurance companies. The dataset consists of non-annotated and annotated French job ads published by them between 2009 and 2020, along with their metadata (e.~g. date of publication). Each job offer's text was manually extracted and semi-manually cleaned using the following procedure: removal of carriage returns inside incomplete sentences (due to bullet point text) and of multiple carriage returns, removal of bullet point characters, normalization of apostrophe punctuation characters, and removal of whitespace at the beginning or end of a sentence. In order to protect the interests of the companies to whom the published data belongs, we chose to de-identify the job ads before making them publicly available. This process consists of three steps. First, we used regular expressions to substitute the different variations of the companies' names and the email addresses present in the offers. Next, a spaCy French pre-trained NER model (\verb|fr_core_news_lg|) was used to identify potential names and locations to help with the following step. Finally, a manual check was conducted on each offer to substitute the following elements: names, locations, postal addresses and miscellaneous elements that could help identify the companies (i.~e. products, department names). \autoref{tab:substitution tags} describes the substitution tags employed. \begin{table} \centering \begin{tabular}{lc} \toprule Substitution tag & Description \\ \midrule <anon\_name> & A person's name \\ <anon\_location> & A postal address or a city \\ <anon\_company> & One of the companies' names \\ <anon\_misc> & An element that can help identify one of the companies \\ \bottomrule \end{tabular} \caption{Substitution tags used for de-identification} \label{tab:substitution tags} \end{table} \vspace{-1em} \subsection{Dataset Statistics} The dataset is composed of 867 de-identified French job ads. As shown in \autoref{fig:histofferlen}, job ad lengths vary greatly, with an average length of 300.97 tokens and a standard deviation of 119.78 tokens\footnote{Punctuation marks are counted as tokens.
We do so since our pre-processing procedure does not include punctuation removal, lemmatization, or stemming.}. We can also observe that a few offers (16) are outliers with a length of more than 572 tokens. Moreover, \autoref{tab:nonannotatedstatistics} presents statistics of the dataset, where the lexical richness corresponds to the ratio of a job offer's number of unique words over the vocabulary cardinality, without removing or normalizing stop words \cite{van2007comparing}. We can see that the lexical richness is relatively low, which means the offers are quite similar in terms of vocabulary. \begin{table} \centering \begin{tabular}{lclclc} \toprule Average \# of Words & 300.97 & \# of Words & 260,942 & Average \# of Sentences & 20.66\\ \# of Unique Words & 5,931 & Average Sentence Length & 14.57 & Average Lexical Richness & 0.023\\ \bottomrule \end{tabular} \caption{Unannotated dataset statistics} \label{tab:nonannotatedstatistics} \end{table} \begin{figure} \centering \begin{minipage}{.4\textwidth} \centering \captionsetup{width=.9\linewidth} \includegraphics[width=\linewidth]{stats/sentence_len_trunc.pdf} \caption{Ads length by words} \label{fig:histofferlen} \end{minipage}% \begin{minipage}{.6\textwidth} \centering \captionsetup{width=.9\linewidth} \includegraphics[width=\linewidth]{donnee_anot/donneeannotee.png} \caption{Example of a French annotation where each color refers to a class (i.~e. skill)} \label{fig:annotatonexample} \end{minipage} \end{figure}

\subsection{Annotated Dataset} To learn to identify soft skills inside ads, 47 offers were annotated sentence by sentence, for a total of 499 annotations. Our annotation process consists of creating a skills reference, which defines the skills used for the annotations, and randomly selecting 47 offers to be annotated by a domain expert. Annotation was conducted with non-overlapping sentence entities, each annotated individually; however, the overall job offer was given to the annotator for context. Each entity contains at least one word and, at most, the complete sentence. Based on the skill groups of the \href{http://catalogue.iugm.qc.ca/GED_IUG/109311392759/Referentielcomp.pdf}{\textit{AQESSS}} public skills repository and the one used by our insurance partners, which are based on the commercial \href{https://www.kornferry.com/}{Korn Ferry} and \href{https://humance.ca/}{SPB} repositories, a set of four skill classes has been identified, namely \guillemet{Thoughts}, \guillemet{Results}, \guillemet{Relational} and \guillemet{Personal}\footnote{The tags are written in French in the dataset, namely \guillemet{Pensée}, \guillemet{Résultats}, \guillemet{Relationnel} and \guillemet{Personnel}.}. The number of classes has been limited to four mainly because, in general, learning algorithms are known to struggle with a large number of tags \cite{JMLR:v15:gupta14a}, but also because of the possible confusion between skills during the annotation process (see \autoref{sec:limitations}). \autoref{fig:annotatonexample} presents an example of a sentence annotation. \subsection{Annotated Dataset Statistics} First, as shown in \autoref{fig:ent_nb}, our annotated portion of FIJO consists of 932 entities distributed unevenly between the four classes. The class with the most entities is \guillemet{Thoughts} with 317 entities, followed by \guillemet{Personal} with 297 and \guillemet{Relational} with 216. The class with the fewest entities is \guillemet{Results} with 102 entities.
Second, as illustrated in \autoref{fig:ent_len}, our entities are on average 9.6 tokens long with a standard deviation of 7.14 tokens. Their length ranges from a single token to 50 tokens, but 50\% are below 8 tokens. Moreover, Table~\ref{tab:annotatedstatistics} presents statistics of the dataset. We have a similar average number of words and sentences as the non-annotated dataset; however, our annotated dataset uses fewer words and even fewer unique words. Finally, \autoref{fig:stopwords} presents the number of occurrences of stop words inside an entity text (\blueemph{blue}) or outside of an entity (\rougeemph{red}). It shows that some stop words are over-represented in entities, such as \guillemet{\textit{de}} and \guillemet{\textit{des}}, which appear mostly in annotations since long skills tend to contain a high number of stop words. \begin{figure} \centering \begin{minipage}{.45\textwidth} \centering \captionsetup{width=.9\linewidth} \begin{tabular}{lclc} \toprule Av. \# of & & \# of &\\\midrule Words & 270.26 & Words & 12,702 \\ Sentences & 16.28 & Unique Words & 1,902\\\bottomrule \end{tabular} \captionof{table}{Annotated dataset statistics} \label{tab:annotatedstatistics} \end{minipage} \begin{minipage}{.49\textwidth} \centering \captionsetup{width=.9\linewidth} \includegraphics[width=\textwidth]{stats/entities_numbers.pdf} \caption{Number of entities per class} \label{fig:ent_nb} \end{minipage} \begin{minipage}{.49\textwidth} \centering \captionsetup{width=.9\linewidth} \centering \includegraphics[width=\textwidth]{stats/entities_len.pdf} \caption{Distribution of entity lengths in tokens} \label{fig:ent_len} \end{minipage} \begin{minipage}{0.45\textwidth} \centering \captionsetup{width=\linewidth} \centering \includegraphics[width=\textwidth]{stats/stopwords.pdf} \caption{Number of times a stop word occurs in an entity (\blueemph{blue}) or out of an entity (\rougeemph{red})} \label{fig:stopwords} \end{minipage} \vspace{-1em} \end{figure}

\vspace{-0.5em} \subsection{Dataset Limitations} \label{sec:limitations} We have identified three limitations of our dataset: unbalanced entity classes, lexical overlapping and the difficulty of soft skill identification. First, as illustrated in \autoref{fig:ent_nb}, our annotated dataset is composed of an unbalanced number of classes where two classes are more represented than the other two. When class imbalance exists in training data, a classification algorithm will typically over-classify the majority group (\guillemet{Thoughts}) due to its increased prior probability. As a result, instances belonging to the minority group (\guillemet{Results}) will likely be misclassified more often than those belonging to the majority group \cite{johnson2019survey}. Second, \autoref{fig:pca_tf-idf} illustrates the 2-dimensional PCA of the TF-IDF score of each entity's text after stop word removal and lemmatization, separated by class. It shows that we can separate the terms (and centroids) present in skills from those that are not. For example, the \guillemet{Personal} (purple) centroid (upper left) and the \guillemet{Thoughts} (red) centroid (lower left) can be distinctly separated from each other and from the other two centroids. Such separation may make it easier to discern the two cases, since these skills use specific terms that are less common in other skill texts. For example, the word \guillemet{\textit{collaborer}} (collaborate) appears only in \guillemet{Personal} entities. By contrast, terms that are well distributed among the different groups may be more challenging.
For example, the word \guillemet{\textit{atteindre}} (achieve) occurs in all four classes. However, we can also see that some terms are quite close to each other, possibly leading to a more difficult distinction between the four classes' word terminologies (a sketch of this TF-IDF and PCA analysis is given at the end of this subsection). \begin{figure} \centering \captionsetup{width=\linewidth} \includegraphics[width=0.6\textwidth]{stats/PCAvocab.png} \caption{2-dimensional\protect\footnotemark{} PCA of the TF-IDF separated by entity groups with the removal of stop words and lemmatization of terms} \label{fig:pca_tf-idf} \end{figure} \footnotetext{The third dimension is not represented here due to the difficulty of rendering it in a non-interactive figure. However, we would like to clarify that using the third dimension separates the \guillemet{Relational} and \guillemet{Results} classes much more clearly than two dimensions alone.}

Finally, soft skill identification is not an easy task, as mentioned by \cite{squicciarini_demand_2021}, and our dataset reflects it. First, the distinction between some skills can be quite confusing, as seen in the example in \autoref{fig:anot_conf1}. This example can be read as \guillemet{Welcoming visitors and responding to their various requests for information} and is tagged as Thoughts; however, a reader might find that such a skill could also represent a Relational one, creating a confusing distinction between some examples. Second, some examples contain two consecutive skills of the same class separated by a coordinating conjunction, as seen in \autoref{fig:anot_conf2}. We can see that the French coordinating conjunction \guillemet{\textit{et}} (and) splits the two Personal entities. However, such a coordinating conjunction is not always used this way; it can also add information, as in \guillemet{\textit{vérifier et contrôler}} (verify and control). Thus, it can be quite challenging to determine whether a subset of a sentence is one skill or two. Finally, it is common to see job ads that list expected soft skills in the same sentence where not all skills belong to the same class. An example of such switching between two entity types is illustrated in \autoref{fig:anot_conf3}: the first token is an entity, followed by another entity of a different type, and the rest of the sentence is another entity of the same class as the first. This kind of \guillemet{squeezing} of two entities sharing the same class around an entity of a different class can be challenging for a NER model. All these limitations justify the fact that, in this article, we apply token-wise approaches to the dataset to start with an easier learning task.
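As an illustration, here is a minimal sketch of the TF-IDF and PCA view above; it is not our exact code, the entity texts are invented, and we assume stop word removal and lemmatization (e.~g. with spaCy) have already been applied.

\begin{verbatim}
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.decomposition import PCA
import matplotlib.pyplot as plt

# Invented entity texts, assumed already stop-word-filtered and lemmatized.
entities = ["collaborer membre équipe",
            "analyser dossier recommander solution",
            "atteindre objectif vente",
            "communiquer client partenaire"]
labels = ["Personnel", "Pensée", "Résultats", "Relationnel"]

vectors = TfidfVectorizer().fit_transform(entities).toarray()
coords = PCA(n_components=2).fit_transform(vectors)  # 2-dimensional view

for (x, y), label in zip(coords, labels):
    plt.scatter(x, y)
    plt.annotate(label, (x, y))
plt.show()
\end{verbatim}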
\begin{figure} \begin{minipage}{\textwidth} \centering \captionsetup{width=\linewidth} \includegraphics[width=0.55\textwidth]{donnee_anot/confusion_class.PNG} \caption{An example of a possible confusion between the classes \guillemet{Thoughts} and \guillemet{Relational} (\guillemet{Welcoming visitors and responding to their various requests for information})} \label{fig:anot_conf1} \end{minipage} \begin{minipage}{.48\textwidth} \centering \captionsetup{width=\linewidth} \centering \includegraphics[width=\textwidth]{donnee_anot/overlap.PNG} \caption{An example of two consecutive skill annotations with the same class that are separated by a coordinating conjunction} \label{fig:anot_conf2} \end{minipage} \hspace{1pt} \begin{minipage}{0.48\textwidth} \centering \captionsetup{width=\linewidth} \centering \includegraphics[width=\textwidth]{donnee_anot/overlap1.PNG} \caption{An example of an entity \guillemet{squeezed} between two other entities of a different class} \label{fig:anot_conf3} \end{minipage} \vspace{-1em} \end{figure}

\section{Skill detection} \label{sec:skilldetection} Since skill detection is a sequence labelling task similar to NER, we approach it using a recurrent neural network, namely a bidirectional long short-term memory (bi-LSTM) network \cite{lstm}. First, we encode each word in a given sequence using FastText's pre-trained French embedding model \cite{Bojanowski2017EnrichingWV}, which produces 300-dimensional word embeddings. Once a sequence is encoded, we feed each of the word embeddings to our \textbf{bi-LSTM}{}, which has a hidden state dimension of 300, and obtain a new representation for each word. The final step in the prediction process is to classify each word using a fully connected network comprised of one linear layer followed by a softmax activation function (a sketch of this tagger is given below). LSTM-based classifiers have proven to be quite efficient for sequence classification tasks; however, these models usually require a large amount of data to guarantee such performance. Therefore, since our annotated dataset contains a limited amount of data, we also use a pre-trained transformer model \cite{transformer} in a transfer learning setting. Our model of choice is CamemBERT \cite{martin-etal-2020-camembert}, a French transformer encoder based on the BERT architecture \cite{DBLP:conf/naacl/DevlinCLT19}. We use it to encode our text sequences, and we employ a fully connected network identical to the one employed with the \textbf{bi-LSTM}{} model in order to accomplish the classification. We experiment with two configurations of this model, one in which CamemBERT's weights are frozen and one in which they are not; we dub these models \textbf{CamemBERT frozen}{} and \textbf{CamemBERT unfrozen}{} respectively. To further investigate the sensitivity of our models to the amount of training data, as well as the transfer learning potential of the pre-trained transformer model, we experiment with different data subset sizes and report the results in \autoref{sec:results}. \subsection{Experiments} We train each of the aforementioned models five times using different random initialization seeds ($[5, 10, 15, 20, 25]$). The models were trained for at most 300 epochs with an initial learning rate of $0.01$ for \textbf{bi-LSTM}{} and \textbf{CamemBERT frozen}{}, and of $0.0001$ for \textbf{CamemBERT unfrozen}{} as suggested by \cite{DBLP:conf/naacl/DevlinCLT19}.
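Before the remaining training details, the following minimal PyTorch sketch illustrates the \textbf{bi-LSTM}{} tagger described above; it is not our exact implementation, it assumes the 300-dimensional FastText vectors have already been looked up, and it uses five output tags (the four skill classes plus \guillemet{O}).

\begin{verbatim}
import torch
import torch.nn as nn

class BiLSTMTagger(nn.Module):
    def __init__(self, emb_dim=300, hidden_dim=300, n_tags=5):
        super().__init__()
        self.lstm = nn.LSTM(emb_dim, hidden_dim,
                            bidirectional=True, batch_first=True)
        self.classifier = nn.Linear(2 * hidden_dim, n_tags)  # one linear layer

    def forward(self, embeddings):         # (batch, seq_len, 300) FastText vectors
        hidden, _ = self.lstm(embeddings)  # (batch, seq_len, 600)
        return self.classifier(hidden).softmax(dim=-1)  # per-token class scores

# One sentence of 12 tokens, embedded beforehand with FastText's French model.
print(BiLSTMTagger()(torch.randn(1, 12, 300)).shape)  # torch.Size([1, 12, 5])
\end{verbatim}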
In addition, a learning rate schedule was applied that decreased the learning rate by a factor of 0.1 after every five epochs without a decrease of the validation cross-entropy loss, and early stopping with a patience of 15 epochs was used to prevent overfitting. Additionally, for the \textbf{CamemBERT unfrozen}{} model, we experimented with the training procedure and hyperparameters proposed in \cite{mosbach2021on} in order to address the possible training instability associated with fine-tuning transformer-based language models. As such, an additional five experiments (using the same random seeds) were run with \textbf{CamemBERT unfrozen}{} by limiting the number of epochs to $20$ and scheduling the learning rate as follows: we start the training with a linear learning rate warmup (i.~e. the learning rate is linearly increased) up to $0.2\mathrm{e}{-5}$ for the first $10\%$ of the epochs, followed by a linear learning rate decay for the rest of the training. We use \textbf{CamemBERT unfrozen warmup}{} to refer to this model. The data was divided using an $80\%-10\%-10\%$ train-validation-test split with simple random sampling, resulting in a total of $400$ training samples. We also experiment with different training data subsets; each subset is composed of the first X samples of the full training set, taken in order, with $\mathrm{X} \in \{50, 100, 150, 200, 250, 300, 350, 400\}$. The models and training procedures were implemented using Poutyne \cite{poutyne}, HuggingFace's Transformers \cite{wolf-etal-2020-transformers} and spaCy \cite{Honnibal_spaCy_Industrial-strength_Natural_2020}. \subsection{Results} \label{sec:results} \begin{table} \centering \captionsetup{width=\linewidth} \resizebox{\textwidth}{!}{\begin{tabular}{ccccc} \toprule Data subset size & \textbf{CamemBERT unfrozen warmup} & \textbf{CamemBERT unfrozen} & \textbf{CamemBERT frozen} & \textbf{bi-LSTM} \\ \midrule 50 & $46.40 \pm 3.72$ & $61.25 \pm 5.23$ & $46.55 \pm 0.59$ & $25.53 \pm 0.58$ \\ 100 & $66.71 \pm 6.86$ & $75.24 \pm 2.13$ & $56.02 \pm 0.22$ & $43.74 \pm 15.64$ \\ 150 & $75.73 \pm 2.39$ & $77.67 \pm 1.54$ & $59.87 \pm 0.28$ & $47.92 \pm 7.72$ \\ 200 & $76.40 \pm 2.57$ & $79.78 \pm 1.12$ & $64.23 \pm 0.21$ & $55.44 \pm 6.43$ \\ 250 & $78.97 \pm 2.03$ & $80.23 \pm 3.82$ & $65.78 \pm 1.64$ & $57.71 \pm 10.65$ \\ 300 & $78.18 \pm 0.78$ & $74.99 \pm 4.76$ & $61.99 \pm 0.22$ & $53.24 \pm 2.85$ \\ 350 & $78.27 \pm 3.73$ & $78.72 \pm 2.42$ & $\mathbf{67.47 \pm 0.21}$ & $56.34 \pm 4.25$ \\ 400 & $\mathbf{80.85 \pm 1.67}$ & $\mathbf{83.69 \pm 1.80}$ & $67.29 \pm 0.23$ & $\mathbf{60.69 \pm 9.23}$ \\ \bottomrule \end{tabular}} \caption{Mean token-wise accuracy and one standard deviation on test data across different seeds for training subsets (bold values correspond to maximum mean accuracy for a model)} \label{tab:results} \vspace{-1em} \end{table}

\autoref{tab:results} presents the mean token-wise accuracy and one standard deviation on the test set for the four trained models, where bold values correspond to the maximum mean accuracy for each model. As expected, fine-tuning the complete CamemBERT model yields the best performance, with both \textbf{CamemBERT unfrozen}{} and \textbf{CamemBERT unfrozen warmup}{} leading the scoreboard. Based on the accuracy, the best single model performance is obtained with \textbf{CamemBERT unfrozen}{}.
However, a McNemar test \cite{mcnemar1947note} on the contingency table in \autoref{tab:contigency}, comparing the two unfrozen models using only the best seed model per approach (i.~e. 20 and 25, respectively), yielded a p-value of $0.5334$. Thus, we cannot conclude any significant difference between the two predictive models. \begin{table} \begin{tabular}{cc|cc} \toprule \multicolumn{2}{c}{\multirow{2}{*}{}} & \multicolumn{2}{c}{\textbf{CamemBERT unfrozen}} \\ \multicolumn{2}{c}{} & Correct & Incorrect \\ \midrule \multirow{2}{*}{\begin{tabular}[c]{@{}c@{}}\textbf{CamemBERT} \\ \textbf{unfrozen warmup}\end{tabular}} & Correct & 1367 & 78 \\ & Incorrect & 87 & 94 \\ \bottomrule \end{tabular} \caption{Contingency table for unfrozen models (token-wise) using only the best seed model per approach (i.~e. 20 and 25, respectively)} \label{tab:contigency} \end{table}

Moreover, the \textbf{CamemBERT unfrozen}{} model seems to suffer from a certain degree of instability, as shown by the consistently high standard deviation. This issue mostly persists when using a learning rate warmup followed by a linear decay as proposed by \cite{mosbach2021on}. \textbf{CamemBERT frozen}{} is the least sensitive to random initialization, while \textbf{bi-LSTM}{} presents the highest sensitivity and the lowest performance. When it comes to training subsets, we can observe that all models perform best with a high amount of data; however, performance is quite close across the 200 to 350 subset size range. We hypothesize that this is due to the data distribution of the training and test sets, since companies use slightly different ways to express skills. For example, one uses a bullet point style to enumerate skills in a pragmatic approach, while another uses a more situational approach that puts skills within context. \autoref{fig:split_stats} shows that both the train and test sets contain skills belonging, in the majority, to one company (\skyblueemph{sky blue}). Two more companies (\sulueemph{light green} and \violetemph{pink}) are present in the test set, while their skills are underrepresented in the training set. Furthermore, \autoref{fig:subset_stats} shows that the dominant company's (\skyblueemph{sky blue}) skills are well represented in most training subsets, including the smaller ones, while augmenting the number of training samples mostly adds skills related to a company that is not present at all in the test set (\pinkemph{red}). This means that the training is quite sensitive to how the data is shuffled because of the limited number of data samples. Therefore, more data would need to be annotated to reach a more balanced dataset, and stratified random sampling should be used instead of the current simple random sampling to better reflect the imbalance when splitting the data.
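Returning to the significance test above, the reported p-value can be reproduced from \autoref{tab:contigency} in a few lines, assuming the continuity-corrected chi-square variant of McNemar's test implemented in statsmodels:

\begin{verbatim}
from statsmodels.stats.contingency_tables import mcnemar

# Rows: CamemBERT unfrozen warmup (correct / incorrect);
# columns: CamemBERT unfrozen (correct / incorrect).
table = [[1367, 78],
         [87,   94]]
result = mcnemar(table, exact=False, correction=True)
print(result.statistic, result.pvalue)  # p ~ 0.53: no significant difference
\end{verbatim}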
\begin{figure} \centering \captionsetup{width=\linewidth} \raisebox{-10ex} { \begin{minipage}{.5\textwidth} \centering \includegraphics[width=\textwidth]{results/Train_set.pdf} \end{minipage} \begin{minipage}{.5\textwidth} \centering \centering \includegraphics[width=\textwidth]{results/Validation_set.pdf} \end{minipage}} \begin{minipage}{.5\textwidth} \centering \centering \includegraphics[width=\textwidth]{results/Test_set.pdf} \end{minipage} \caption{Number of tokens per skill in the annotated dataset (the \guillemet{O} tag means that a token does not belong to a skill) where each colour represents one of our four company partners} \vspace{-1em} \label{fig:split_stats} \end{figure} \begin{figure} \centering \captionsetup{width=\linewidth} \includegraphics[width=1.1\textwidth]{results/subsets.pdf} \caption{Number of tokens per skill in the training subsets where each colour represents one of our four company partners} \label{fig:subset_stats} \end{figure}

Finally, \autoref{tab:acc_per_class} presents the mean token-wise accuracy and one standard deviation per skill class on the test set for \textbf{CamemBERT unfrozen warmup}{} and \textbf{CamemBERT unfrozen}{} with a subset size of $400$. We can see that for both approaches, the best performance occurs on the class with the fewest examples (Results). We hypothesize that this might be due to the \guillemet{simplicity} of those examples, which are shorter than average at around seven tokens and thus possibly more straightforward to classify than the others. Also, we observe a higher variance on both the Thoughts and Personal classes for both our models. These two tags have the highest standard deviation, even though they are the classes with the most examples. This means that the training is quite sensitive to the initialization of the models; because we have a limited number of data samples, minimizing such instability during training is difficult. Therefore, more data would need to be annotated to reach a more stable training for all classes. Furthermore, we can see that there is still room for improvement for most tags. \begin{table} \centering \captionsetup{width=\linewidth} \resizebox{\textwidth}{!}{\begin{tabular}{ccccc|ccccc} \toprule \multicolumn{5}{c}{\textbf{CamemBERT unfrozen warmup}} & \multicolumn{5}{c}{\textbf{CamemBERT unfrozen}} \\ O & Thoughts & Results & Relational & Personal & O & Thoughts & Results & Relational & Personal \\ \midrule $84.50 \pm 4.51$ & $82.72 \pm 7.84$ & $91.21 \pm 0.00$ & $73.41 \pm 4.44$ & $80.77 \pm 4.97$ & $83.30 \pm 3.63$ & $80.97 \pm 6.56$ & $92.31 \pm 4.85$ & $77.73 \pm 3.48$ & $85.21 \pm 9.65$ \\ \bottomrule \end{tabular}} \caption{Mean token-wise accuracy and one standard deviation per skill for \textbf{CamemBERT unfrozen warmup}{} and \textbf{CamemBERT unfrozen}{} on the test set for a subset size of $400$ (the \guillemet{O} tag means that a token does not belong to a skill)} \label{tab:acc_per_class} \end{table}

\subsection{Error Analysis} Using the approach that yielded the highest accuracy (\textbf{CamemBERT unfrozen}{}), we conducted an error analysis on its 24 errors. We found that most of these errors are similar to the cases illustrated in \autoref{fig:anot_conf2}, namely two consecutive skill annotations with the same class separated by a coordinating conjunction. In all those error cases, the NER identified the two skills as a single skill.
Moreover, a few cases (3) were a similar error type where a part of the sentence before a coordinating conjunction is not an entity, as illustrated in \autoref{tab:firstexample}. The figure shows each token's ground truth and prediction, along with the model probabilities (\guillemet{Prob} rows), using the same color scheme as \autoref{fig:pca_tf-idf}, namely \rougeemph{red} for the \guillemet{Thoughts} class, \mauveemph{purple} for \guillemet{Personal}, \blueemph{blue} for \guillemet{Relational}, \orangeemph{orange} for \guillemet{Results}, and \vertemph{green} for a word not in an entity. We can see that not only does the NER predict the wrong class, Personal rather than Thoughts, but it also wrongly predicts that the first part of the sentence, \guillemet{\textit{les dossiers sont de nature courante}}, is a skill. We hypothesize that this is due to two things. First, the words \guillemet{\textit{dossiers}}, \guillemet{\textit{nature}} and \guillemet{\textit{courante}} appear in other Personal examples; for instance, the word \guillemet{\textit{dossiers}} appears 49 times in a Personal entity, which could confuse our model as to whether such a sentence piece is a skill. Second, the coordinating conjunction \guillemet{\textit{et}} plus the determiner \guillemet{\textit{les}} mostly appear within an entity and rarely appear just outside of it (one token before). We argue that our model annotated the overall sentence as a skill in this specific case due to the overwhelming number of such examples. However, the predicted entity class is inconsistent with the sentence's vocabulary distribution: the second part of the sentence is composed of words that only appear in Thoughts examples, such as \guillemet{\textit{décision}}, \guillemet{\textit{analyse}} and \guillemet{\textit{recherche}}. This leads to lower prediction confidence from our NER, where those three words have the lowest probabilities. \begin{figure} \captionsetup{width=\linewidth} \centering \begin{tabular}{l|ccccccccc} \toprule Token & \vertemph{les} & \vertemph{dossiers} & \vertemph{sont} & \vertemph{de} & \vertemph{nature} & \vertemph{courante} & \vertemph{et} & \rougeemph{les} & \rougeemph{décisions} \\ Prob & \mauveemph{0.82} & \mauveemph{0.80} & \mauveemph{0.83} & \mauveemph{0.84} & \mauveemph{0.83} & \mauveemph{0.83} & \mauveemph{0.78} & \mauveemph{0.53} & \mauveemph{0.49} \\\midrule Token & \rougeemph{requièrent} & \rougeemph{un} & \rougeemph{niveau} & \rougeemph{habituel} & \rougeemph{d'} & \rougeemph{analyse} & \rougeemph{et} & \rougeemph{de} & \rougeemph{recherche} \\ Prob & \mauveemph{0.81} & \mauveemph{0.85} & \mauveemph{0.82} & \mauveemph{0.83} & \mauveemph{0.78} & \mauveemph{0.51} & \mauveemph{0.73} & \mauveemph{0.74} & \mauveemph{0.64} \\\bottomrule \end{tabular} \caption{Example of a wrongly predicted sentence using the best seed \textbf{CamemBERT unfrozen}{} model where color represents the skill class (\rougeemph{red} is the \guillemet{Thoughts} class, \mauveemph{purple} is \guillemet{Personal}, \blueemph{blue} is \guillemet{Relational}, \orangeemph{orange} is \guillemet{Results}, and \vertemph{green} is a word not in an entity)} \label{tab:firstexample} \end{figure}

\section{Conclusion} \label{sec:conclusion} This article presents a new public dataset in French, including annotated and non-annotated job offers in the insurance domain.
It aims to support the development of machine learning models that perform automatic skill recognition inside job ads, an increasingly useful task for understanding the evolution of the labour market. The dataset statistics and characteristics reveal limitations that make the learning task challenging. The dataset can be further improved by rebalancing the companies' and skill classes' distributions to make the annotated portion more representative of the non-annotated one. Moreover, the impact of lexical overlapping and of the difficulty of soft skill identification could be lowered by having more experts annotate more job ads. In any case, this dataset will be improved by adding annotations. Despite these limitations, we obtained interesting results with pre-trained models, notwithstanding the size of the dataset, using a token-wise approach. Although the skill-wise problem is closer to our main objective, our preliminary experiments on it with common NLP algorithms led to poor accuracy. Since our results here are token-wise and not skill-wise, it is harder to extract the correct span of skill entities, and consequently we cannot determine the actual number of skills inside the non-annotated dataset. Thus, our work is a first step toward discovering trends in the labour market and studying the evolution of skills. As our next step, our objective is to build models that identify skills instead of tokens. Indeed, our models cannot distinguish two adjacent skills with the same tag; therefore, we need to detect the beginning of each skill inside the text. To achieve that, we plan to use the BIO tagging scheme instead of IO \cite{konkol2015segment}. However, adding a new beginning tag for each skill group would probably reduce the overall accuracy because of the small size of the dataset. At last, improving the cleaning phase by detecting more precisely the usual conjunctions between two skills could be another way to keep the token-wise results while identifying the skills more efficiently. We have made some error analyses, but this kind of analysis is limited by the limited explainability of deep learning models. In the long term, we would like to make our model more explainable, both to help us understand its strengths and weaknesses and to explain results to recruiters and human resources staff so they can adapt their recruitment needs. To do so, we plan to explore counterfactual generation \cite{madaan2021generate, fern2021text}. Finally, we have only explored a few of the possibilities this dataset can offer. Other tasks could include measuring the impact of redaction style (e.~g. long sentences vs. bullet points, different ways of addressing a potential applicant) on performance, the impact of gendered wording \cite{gaucher2011evidence}, the impact of COVID and teleworking on skill requirements \cite{gaffney2021trends}, or even the extension of the dataset to include new tools used by human resources such as social media \cite{ruparel2020influence}. \section*{Acknowledgements} This research was made possible thanks to the support of the Future Skills Centre and four Canadian insurance companies. We wish to thank the reviewers for their comments regarding our work and methodology.
\printbibliography[heading=subbibintoc] \end{document} \section{Introduction} \blfootnote{Authors contributed equally to this work.} The digital transformation's impact on professional practices has led to a rapid evolution of in-demand job skills, making it difficult to track these changes for enterprises and workers. Manual evaluation is becoming exceedingly complex and time-consuming, justifying the need for an automatic evaluation of these changes \cite{bakhshi_future_2017}. One way to study these changes in job ads is automatic skills recognition \cite{squicciarini_demand_2021}. However, job offer data is not easy to access even in job offers web platforms, mainly due to intellectual property issues. Moreover, the annotation needed to achieve good skill recognition performances through supervised machine learning is another complex and costly task. Indeed, many datasets offering job descriptions are accessible online such as the \verb|mycareersfuture| public dataset \cite{bhola_retrieving_2020}. However, as \cite{khaouja2021survey} reported in their article, very few public annotated datasets exist, and none are in French. As contributions, in this article, we propose \guillemet{\href{https://github.com/iid-ulaval/FIJO-dataset}{\color{red}French Insurance Job Offer (FIJO)}}, a free and public non-annotated and annotated dataset, to facilitate research in this domain. This dataset focuses on soft skills, which describe the way employees work alone and with others, instead of hard skills, which represent a more formal knowledge used at work \cite{lyu2021soft}. Also, we explore the training of a token-wise NER French skill detection algorithm in the field of insurance with state-of-the-art algorithms\footnote{All code used to obtain these results will be available on GitHub \href{https://github.com/iid-ulaval/FIJO-code}{\color{red}here}.}. The rest of this paper is structured as follows. Firstly, we begin with a brief overview of the literature on skill detection in \autoref{sec:relatedwork}. Secondly, we will present FIJO in \autoref{sec:data}, including how we constructed our French complexity corpus, some statistics and analysis of the dataset. Third, we will present our skill detection algorithm, the training settings and our results in \autoref{sec:skilldetection}. Finally, we will draw some concluding remarks in \autoref{sec:conclusion}. \section{Related work} \label{sec:relatedwork} The first approach to recognizing skills inside a job offer is using statistical techniques. It is a commonly used approach in many pieces of work such as \cite{malherbe_bridge_2016} who detect hard and soft skills in job offers by matching a list of keywords in the text with a skill database. Their skill database uses external databases knowledge, namely DBPedia and StackOverflow. Other works \cite{sodhi_content_2010, gardiner_skill_2018, squicciarini_demand_2021} do not rely on skills databases but use content analysis to detect the presence of certain words or concepts in offers. Other approaches using machine learning to detect skills in job offers have received lots of attention recently \cite{khaouja2021survey}. The skill recognition problem has been modeled either using topic modeling through Latent Dirichlet Analysis \cite{gurcan_big_2019}, text classification with CNN and LSTM \cite{sayfullina2018learning}, or NER with LSTM \cite{jia_representation_2018} and transformer-based models \cite{tamburri_dataops_2020}. However, these pieces of work mainly focus on specialized skills (e.~g. 
IT skill \cite{gurcan_big_2019}), focus on soft skill \cite{sayfullina2018learning} or are applied on English job ads only. As per \cite{khaouja2021survey}, the conclusion in their survey on skill identification shows that very few datasets are available online. Most of the recent works do not release their dataset nor mention the reasons for the non-publication. For example, \cite{cerioli20205} uses content analysis, and 5 million non-annotated jobs adds to determine whether or not testing software is a standard in the IT industries. The possible reason for the lack of publication by the authors was possibly due to the intellectual property constraint from their industrial partner. However, a recent new public dataset released by \cite{bhola_retrieving_2020} focuses on extracting identified hard skills in job ads. The dataset consists of 20,298 job ads, where each ad includes nearly 20 hard skills on average. A unique skill term corresponds to a unique class. Thus, the overall dataset includes 2,548 skill classes. For example, \guillemet{Microsoft Word} is a skill, and \guillemet{Microsoft Excel} is another skill. This new dataset lacks annotation for soft skills that have been more required than hard skills by enterprises in the past decade \cite{lyu2021soft}. \vspace{-1em} \section{Data} \label{sec:data} FIJO was created in partnership with four Canadian insurance companies. The dataset consists of non-annotated and annotated French job ads published by them, as well as their metadata (e. g. date of publication) between the years 2009 to 2020. Each job offer's text was manually extracted and semi-manually cleaned using the following procedure: removal of carriage return in an incomplete sentence (this is due to bullet point text) and multiple carriage returns, removal of bullet point character, normalization of the apostrophe punctuation characters, and removing of trailing whitespace at the beginning or end of a sentence. In order to protect the interests of the companies to whom the published data belongs, we chose to de-identify the job ads before making them publicly available. This process consists of three steps. Firstly, we used regular expressions to substitute the different variations of the companies' names and email addresses present in the offers. Next, a SpaCy French pre-trained NER model (\verb|fr_core_news_lg|) was used to identify potential names and locations to help with the following step. Finally, a manual check was conducted on each offer to substitute the following elements: names, locations, postal addresses and miscellaneous elements that could help identify the companies (i. e. products, department names). \autoref{tab:substitution tags} describes the substitution tags employed. \begin{table} \centering \begin{tabular}{lc} \toprule Substitution tag & Description \\ \midrule <anon\_name> & A person's name \\ <anon\_location> & A postal address or a city \\ <anon\_company> & One of the companies' names \\ <anon\_misc> & An element that can help identify one of the companies \\ \bottomrule \end{tabular} \caption{Substitution tags used for de-identification} \label{tab:substitution tags} \end{table} \vspace{-1em} \subsection{Dataset Statistics} The dataset is composed of 867 de-identified French job ads. As shown in \autoref{fig:histofferlen}, job ads lengths vary greatly, with an average length of 300.97 and a standard deviation of 119.78 tokens\footnote{Punctuations are computed as a token. 
We do so since our pre-processing procedure does not include punctuation removal, lemmatization or stemming.}. We can also observe that a few offers (16) are outliers with a length of more than 572 tokens. Moreover, \autoref{tab:nonannotatedstatistics} presents statistics of the dataset, where the lexical richness of a job offer corresponds to the ratio of its number of unique words to the vocabulary cardinality, without stop-word removal or normalization \cite{van2007comparing}. We can see that the lexical richness is relatively low, which means the offers are quite similar in terms of vocabulary. \begin{table} \centering \begin{tabular}{lclclc} \toprule Average \# of Words & 300.97 & \# of Words & 260,942 & Average \# of Sentences & 20.66\\ \# of Unique Words & 5,931 & Average Sentence Length & 14.57 & Average Lexical Richness & 0.023\\ \bottomrule \end{tabular} \caption{Unannotated dataset statistics} \label{tab:nonannotatedstatistics} \end{table} \begin{figure} \centering \begin{minipage}{.4\textwidth} \centering \captionsetup{width=.9\linewidth} \includegraphics[width=\linewidth]{stats/sentence_len_trunc.pdf} \caption{Ad lengths in words} \label{fig:histofferlen} \end{minipage}% \begin{minipage}{.6\textwidth} \centering \captionsetup{width=.9\linewidth} \includegraphics[width=\linewidth]{donnee_anot/donneeannotee.png} \caption{Example of a French annotation where each color refers to a class (i.e., a skill)} \label{fig:annotatonexample} \end{minipage} \end{figure} \subsection{Annotated Dataset} To learn to identify soft skills inside ads, 47 offers were annotated at the sentence level, for a total of 499 annotated sentences. Our annotation process consists of creating a skills reference, which defines the skills used for the annotations, and randomly selecting 47 offers to be annotated by a domain expert. Sentences were annotated individually with non-overlapping entities; however, the overall job offer was given to the annotator as context. Each entity contains at least one word and, at most, the complete sentence. Based on the skill groups of the \href{http://catalogue.iugm.qc.ca/GED_IUG/109311392759/Referentielcomp.pdf}{\textit{AQESSS}} public skills repository and the one used by our insurance partners, which are based on the commercial \href{https://www.kornferry.com/}{Korn Ferry} and \href{https://humance.ca/}{SPB} repositories, a set of four skill classes was identified, namely \guillemet{Thoughts}, \guillemet{Results}, \guillemet{Relational} and \guillemet{Personal}\footnote{The tags are written in French in the dataset. Namely, \guillemet{Pensée}, \guillemet{Résultats}, \guillemet{Relationnel} and \guillemet{Personnel}.}. The number of classes was limited to four mainly because learning algorithms are known to be less efficient on a large number of tags \cite{JMLR:v15:gupta14a}, but also because of the possible confusion between skills during the annotation process (see \autoref{sec:limitations}). \autoref{fig:annotatonexample} presents an example of a sentence annotation. \subsection{Annotated Dataset Statistics} First, as shown in \autoref{fig:ent_nb}, our annotated portion of FIJO consists of 932 entities distributed unevenly between the four classes. We can see that the class with the most entities is \guillemet{Thoughts} with 317 entities, followed by \guillemet{Personal} with 297 and \guillemet{Relational} with 216. The class with the lowest number of entities is \guillemet{Results} with 102 entities.
Second, as illustrated in \autoref{fig:ent_len}, our entities are on average 9.6 tokens long, with a standard deviation of 7.14 tokens. Their length ranges from a single token to 50 tokens, but 50\% are below 8 tokens. Moreover, \autoref{tab:annotatedstatistics} presents statistics of the dataset. We have a similar average number of words and sentences as in the non-annotated dataset; however, our annotated dataset uses fewer words and even fewer unique words. Finally, \autoref{fig:stopwords} presents the number of occurrences of stop words inside an entity text (\blueemph{blue}) or outside of an entity (\rougeemph{red}). It shows that some stop words are overly represented in entities, such as \guillemet{\textit{de}} and \guillemet{\textit{des}}, which appear mostly inside annotations, since long skills tend to contain a high number of stop words. \begin{figure} \centering \begin{minipage}{.45\textwidth} \centering \captionsetup{width=.9\linewidth} \begin{tabular}{lclc} \toprule Av. \# of & & \# of &\\\midrule Words & 270.26 & Words & 12,702 \\ Sentences & 16.28 & Unique Words & 1,902\\\bottomrule \end{tabular} \captionof{table}[foo]{Annotated dataset statistics} \label{tab:annotatedstatistics} \end{minipage} \begin{minipage}{.49\textwidth} \centering \captionsetup{width=.9\linewidth} \includegraphics[width=\textwidth]{stats/entities_numbers.pdf} \caption{Number of entities per class} \label{fig:ent_nb} \end{minipage} \begin{minipage}{.49\textwidth} \centering \captionsetup{width=.9\linewidth} \centering \includegraphics[width=\textwidth]{stats/entities_len.pdf} \caption{Distribution of entity lengths in tokens} \label{fig:ent_len} \end{minipage} \begin{minipage}{0.45\textwidth} \centering \captionsetup{width=\linewidth} \centering \includegraphics[width=\textwidth]{stats/stopwords.pdf} \caption{Number of times a stop word occurs inside an entity (\blueemph{blue}) or outside an entity (\rougeemph{red})} \label{fig:stopwords} \end{minipage} \vspace{-1em} \end{figure} \vspace{-0.5em} \subsection{Dataset Limitations} \label{sec:limitations} We have identified three limitations of our dataset: unbalanced entity classes, lexical overlap and the difficulty of soft skill identification. Firstly, as illustrated in \autoref{fig:ent_nb}, our annotated dataset is composed of an unbalanced number of classes, where two classes are more represented than the other two. When class imbalance exists in training data, a classification algorithm will typically over-classify the majority group (\guillemet{Thoughts}) due to its increased prior probability. As a result, the instances belonging to the minority group (\guillemet{Results}) will likely be misclassified more often than those belonging to the majority group \cite{johnson2019survey}. Secondly, \autoref{fig:pca_tf-idf} illustrates the two-dimensional PCA of the TF-IDF scores of each entity's text, after stop-word removal and lemmatization, separated by class. It shows that the terms (and centroids) of some skill classes can be separated from the others. For example, the \guillemet{Personal} (purple) centroid (upper left) and the \guillemet{Thoughts} (red) centroid (lower left) can be distinctly separated from each other and from the other two centroids. Such separation may make these classes easier to discern, since their skills use specific terms that are less common in the other classes' texts. For example, the word \guillemet{\textit{collaborer}} (collaborate) appears only in the \guillemet{Personal} entities. By contrast, terms that are well distributed among the different groups may be more challenging.
For example, the word \guillemet{\textit{atteindre}} (achieve) occurs in all four classes. However, we can also see that some terms are quite close to each other, possibly making the distinction between the four classes' terminologies more difficult. \begin{figure} \centering \captionsetup{width=\linewidth} \includegraphics[width=0.6\textwidth]{stats/PCAvocab.png} \caption{Two-dimensional\protect\footnotemark{} PCA of the TF-IDF scores separated by entity class, after stop-word removal and lemmatization of terms} \label{fig:pca_tf-idf} \end{figure} \footnotetext{The third dimension is not represented here because it is difficult to render in a static figure. However, we would like to clarify that using the third dimension separates the \guillemet{Relational} and \guillemet{Results} classes much more clearly than two dimensions alone.} Finally, soft skill identification is not an easy task, as mentioned by \cite{squicciarini_demand_2021}, and our dataset reflects it. Firstly, some distinctions between skills can be quite confusing, as seen in the example in \autoref{fig:anot_conf1}. This example can be read as \guillemet{Welcoming visitors and responding to their various requests for information} and is tagged as Thoughts. However, a reader might find that such a skill could also represent a Relational one, making the distinction between some examples confusing. Secondly, some examples contain two consecutive skills of the same class separated by a coordinating conjunction, as seen in \autoref{fig:anot_conf2}. We can see that the French coordinating conjunction \guillemet{\textit{et}} (and) splits the two Personal entities. However, such a coordinating conjunction is not always used this way; it can also add information within a single skill, as in \guillemet{\textit{vérifier et contrôler}} (verify and control). Thus, in some cases it can be quite challenging to determine whether a span of a sentence constitutes one skill or two. Finally, it is common to see job ads that list expected soft skills in the same sentence, where the listed skills do not all belong to the same skill class. An example of such switching between two entity types is illustrated in \autoref{fig:anot_conf3}. We can see that the first token is an entity, followed by another entity of a different type, and that the rest of the sentence is a third entity of the same class as the first. This kind of \guillemet{squeezing} of two entities sharing the same class around an entity of a different class can be challenging for a NER model. These limitations justify our choice, in this article, of token-wise approaches, which constitute an easier learning task.
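To make the lexical-overlap analysis above easy to reproduce, the following minimal Python sketch (our illustration, not the code used for \autoref{fig:pca_tf-idf}; the \verb|entities| and \verb|labels| lists are hypothetical stand-ins for the annotated FIJO entities) computes TF-IDF vectors, projects them to two dimensions with PCA and prints one centroid per class:

\begin{verbatim}
import numpy as np
from sklearn.decomposition import PCA
from sklearn.feature_extraction.text import TfidfVectorizer

# Hypothetical pre-processed entity texts; stop-word removal and
# lemmatization (e.g. with spaCy) are assumed to happen upstream.
entities = ["collaborer equipe", "atteindre objectifs fixes",
            "analyser dossiers", "atteindre resultats"]
labels = np.array(["Personnel", "Resultats", "Pensee", "Resultats"])

X = TfidfVectorizer().fit_transform(entities).toarray()
X2 = PCA(n_components=2).fit_transform(X)   # 2-D projection

for cls in sorted(set(labels)):
    centroid = X2[labels == cls].mean(axis=0)
    print(cls, centroid)                    # class centroid in the plane
\end{verbatim}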
\begin{figure} \begin{minipage}{\textwidth} \centering \captionsetup{width=\linewidth} \includegraphics[width=0.55\textwidth]{donnee_anot/confusion_class.PNG} \caption{An example of possible confusion between the classes \guillemet{Thoughts} and \guillemet{Relational} (\guillemet{Welcoming visitors and responding to their various requests for information})} \label{fig:anot_conf1} \end{minipage} \begin{minipage}{.48\textwidth} \centering \captionsetup{width=\linewidth} \centering \includegraphics[width=\textwidth]{donnee_anot/overlap.PNG} \caption{An example of two consecutive skill annotations of the same class separated by a coordinating conjunction} \label{fig:anot_conf2} \end{minipage} \hspace{1pt} \begin{minipage}{0.48\textwidth} \centering \captionsetup{width=\linewidth} \centering \includegraphics[width=\textwidth]{donnee_anot/overlap1.PNG} \caption{An example of an entity \guillemet{squeezed} between two other entities of a different class} \label{fig:anot_conf3} \end{minipage} \vspace{-1em} \end{figure} \section{Skill detection} \label{sec:skilldetection} Since skill detection is a sequence labelling task similar to NER, we approach it using a recurrent neural network, namely a bidirectional long short-term memory (bi-LSTM) network \cite{lstm}. First, we encode each word in a given sequence using FastText's pre-trained French embedding model \cite{Bojanowski2017EnrichingWV}, which produces 300-dimensional word embeddings. Once a sequence is encoded, we feed each of the word embeddings to our \textbf{bi-LSTM}{}, which has a hidden state dimension of 300, and obtain a new representation for each word. The final step in the prediction process is to classify each word using a fully connected network comprised of one linear layer followed by a softmax activation function. LSTM-based classifiers have proven quite efficient for sequence classification tasks. However, these models usually require a large amount of data to reach such performance. Therefore, since our annotated dataset contains a limited amount of data, we also use a pre-trained transformer model \cite{transformer} in a transfer learning setting. Our model of choice is CamemBERT \cite{martin-etal-2020-camembert}, a French transformer encoder based on the BERT architecture \cite{DBLP:conf/naacl/DevlinCLT19}. We use it to encode our text sequence and employ a fully connected network identical to the one used with the \textbf{bi-LSTM}{} model to accomplish the classification. We experiment with two configurations of this model, one in which CamemBERT's weights are frozen and one in which they are not. We dub these models \textbf{CamemBERT frozen}{} and \textbf{CamemBERT unfrozen}{} respectively. To further investigate the sensitivity of our models to the amount of training data, as well as the transfer learning potential of the pre-trained transformer model, we experiment with different data subset sizes and report the results in \autoref{sec:results}. \subsection{Experiments} We train each of the aforementioned models five times using different random initialization seeds ($[5, 10, 15, 20, 25]$). The models were trained for at most 300 epochs with an initial learning rate of $0.01$ for \textbf{bi-LSTM}{} and \textbf{CamemBERT frozen}{} and of $0.0001$ for \textbf{CamemBERT unfrozen}{}, as suggested by \cite{DBLP:conf/naacl/DevlinCLT19}.
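Before turning to the learning rate schedules, the token-level classifier described above can be summarized in a short sketch. This is a minimal PyTorch illustration under our own naming, assuming pre-computed 300-dimensional FastText embeddings as input; it is not the authors' exact implementation:

\begin{verbatim}
import torch
from torch import nn

class BiLSTMTagger(nn.Module):
    def __init__(self, emb_dim=300, hidden=300, n_tags=5):
        super().__init__()                      # 4 skill classes + "O"
        self.lstm = nn.LSTM(emb_dim, hidden, batch_first=True,
                            bidirectional=True)
        self.head = nn.Linear(2 * hidden, n_tags)  # one linear layer

    def forward(self, embeddings):           # (batch, seq_len, 300)
        states, _ = self.lstm(embeddings)    # (batch, seq_len, 600)
        # softmax over the tags of each token; during training the
        # cross-entropy loss would be applied to the raw logits
        return self.head(states).softmax(dim=-1)

probs = BiLSTMTagger()(torch.randn(1, 12, 300))  # dummy 12-token input
print(probs.shape)                               # torch.Size([1, 12, 5])
\end{verbatim}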
In addition, a learning rate schedule was applied that decreased the learning rate by a factor of 0.1 after every five epochs without any decrease of the validation cross-entropy loss, along with early stopping with a patience of 15 epochs to prevent overfitting. Additionally, for the \textbf{CamemBERT unfrozen}{} model, we experimented with the training procedure and hyperparameters proposed in \cite{mosbach2021on} in order to address the possible training instability associated with fine-tuning transformer-based language models. As such, an additional five experiments (using the same random seeds) were run with \textbf{CamemBERT unfrozen}{} by limiting the number of epochs to $20$ and scheduling the learning rate as follows: we start the training with a linear learning rate warmup (i.e., the learning rate was linearly increased) up to $0.2\mathrm{e}{-5}$ for the first $10\%$ of epochs, followed by a linear learning rate decay for the rest of the training epochs. We use \textbf{CamemBERT unfrozen warmup}{} to refer to this model. The training data was divided using an $80\%$--$10\%$--$10\%$ train-validation-test split with simple random sampling, resulting in a total of $400$ training samples. We also experiment with different training data subsets. Each subset is composed of the first $\mathrm{X}$ samples of the full training set, taken in order, with $\mathrm{X} \in \{50, 100, 150, 200, 250, 300, 350, 400\}$. The models and training procedures were implemented using Poutyne \cite{poutyne}, HuggingFace's Transformers \cite{wolf-etal-2020-transformers} and spaCy \cite{Honnibal_spaCy_Industrial-strength_Natural_2020}. \subsection{Results} \label{sec:results} \begin{table} \centering \captionsetup{width=\linewidth} \resizebox{\textwidth}{!}{\begin{tabular}{ccccc} \toprule Data subset size & \textbf{CamemBERT unfrozen warmup} & \textbf{CamemBERT unfrozen} & \textbf{CamemBERT frozen} & \textbf{bi-LSTM} \\ \midrule 50 & $46.40 \pm 3.72$ & $61.25 \pm 5.23$ & $46.55 \pm 0.59$ & $25.53 \pm 0.58$ \\ 100 & $66.71 \pm 6.86$ & $75.24 \pm 2.13$ & $56.02 \pm 0.22$ & $43.74 \pm 15.64$ \\ 150 & $75.73 \pm 2.39$ & $77.67 \pm 1.54$ & $59.87 \pm 0.28$ & $47.92 \pm 7.72$ \\ 200 & $76.40 \pm 2.57$ & $79.78 \pm 1.12$ & $64.23 \pm 0.21$ & $55.44 \pm 6.43$ \\ 250 & $78.97 \pm 2.03$ & $80.23 \pm 3.82$ & $65.78 \pm 1.64$ & $57.71 \pm 10.65$ \\ 300 & $78.18 \pm 0.78$ & $74.99 \pm 4.76$ & $61.99 \pm 0.22$ & $53.24 \pm 2.85$ \\ 350 & $78.27 \pm 3.73$ & $78.72 \pm 2.42$ & $\mathbf{67.47 \pm 0.21}$ & $56.34 \pm 4.25$ \\ 400 & $\mathbf{80.85 \pm 1.67}$ & $\mathbf{83.69 \pm 1.80}$ & $67.29 \pm 0.23$ & $\mathbf{60.69 \pm 9.23}$ \\ \bottomrule \end{tabular}} \caption{Mean token-wise accuracy and one standard deviation on test data across different seeds for training subsets (bold values correspond to the maximum mean accuracy for a model)} \label{tab:results} \vspace{-1em} \end{table} \autoref{tab:results} presents the mean token-wise accuracy and one standard deviation on the test set for the four trained models, where bold values correspond to the maximum mean accuracy for each model. As expected, fine-tuning the complete CamemBERT model yields the best performance, with both \textbf{CamemBERT unfrozen}{} and \textbf{CamemBERT unfrozen warmup}{} leading the scoreboard. Based on the accuracy, the best single model performance is obtained with \textbf{CamemBERT unfrozen}{}.
However, a McNemar test \cite{mcnemar1947note}, computed over the contingency table illustrated in \autoref{tab:contigency} for the two unfrozen models using only the best seed model per approach (i.e., 20 and 25, respectively), yielded a p-value of $0.5334$. Thus, we cannot conclude that there is a significant difference between the two predictive models. \begin{table} \begin{tabular}{cc|cc} \toprule \multicolumn{2}{c}{\multirow{2}{*}{}} & \multicolumn{2}{c}{\textbf{CamemBERT unfrozen}} \\ \multicolumn{2}{c}{} & Correct & Incorrect \\ \midrule \multirow{2}{*}{\begin{tabular}[c]{@{}c@{}}\textbf{CamemBERT} \\ \textbf{unfrozen warmup}\end{tabular}} & Correct & 1367 & 78 \\ & Incorrect & 87 & 94 \\ \bottomrule \end{tabular} \caption{Contingency table for the unfrozen models (token-wise) using only the best seed model per approach (i.e., 20 and 25, respectively)} \label{tab:contigency} \end{table} Moreover, the \textbf{CamemBERT unfrozen}{} model seems to suffer from a certain degree of instability, as shown by its consistently high standard deviation. This issue mostly persists when using a learning rate warmup followed by a linear decay as proposed by \cite{mosbach2021on}. \textbf{CamemBERT frozen}{} is the least sensitive to random initialization, while \textbf{bi-LSTM}{} presents the highest sensitivity and the lowest performance. When it comes to training subsets, we can observe that all models perform best with a high amount of data. However, performance seems quite close across the 200 to 350 subset size range. We hypothesize that this is due to the data distribution of the training and test sets, since companies use slightly different ways to express skills. For example, one uses a bullet-point style to enumerate skills in a pragmatic approach, while another uses a more situational approach that puts skills in context. Indeed, \autoref{fig:split_stats} shows that both the train and test sets contain skills belonging, in the majority, to one company (\skyblueemph{sky blue}). Two more companies (\sulueemph{light green} and \violetemph{pink}) are present in the test set, while their skills are underrepresented in the training set. Furthermore, \autoref{fig:subset_stats} shows that the dominant company's (\skyblueemph{sky blue}) skills are well represented in most training subsets, including smaller-sized ones, while augmenting the number of training samples mostly adds skills related to the company that is not present at all in the test set (\pinkemph{red}). This means that, because of the limited number of data samples, training is quite sensitive to how the data is shuffled. Therefore, more data would need to be annotated to reach a more balanced dataset, and stratified random sampling should be used instead of the current simple random sampling to better reflect the imbalance when splitting the data.
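For reference, the p-value above can be reproduced directly from the discordant counts of \autoref{tab:contigency}; a minimal sketch using the continuity-corrected McNemar statistic (our illustration, assuming this is the variant used):

\begin{verbatim}
from scipy.stats import chi2

# Discordant pairs from the contingency table: warmup correct /
# unfrozen incorrect (78) and warmup incorrect / unfrozen correct (87).
b, c = 78, 87
stat = (abs(b - c) - 1) ** 2 / (b + c)  # continuity-corrected statistic
p = chi2.sf(stat, df=1)
print(f"chi2 = {stat:.4f}, p = {p:.4f}")  # p = 0.5334
\end{verbatim}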
\begin{figure} \centering \captionsetup{width=\linewidth} \raisebox{-10ex} { \begin{minipage}{.5\textwidth} \centering \includegraphics[width=\textwidth]{results/Train_set.pdf} \end{minipage} \begin{minipage}{.5\textwidth} \centering \centering \includegraphics[width=\textwidth]{results/Validation_set.pdf} \end{minipage}} \begin{minipage}{.5\textwidth} \centering \centering \includegraphics[width=\textwidth]{results/Test_set.pdf} \end{minipage} \caption{Number of tokens per skill in the annotated dataset (the \guillemet{O} tag means that a token does not belong to a skill), where each colour represents one of our four company partners} \vspace{-1em} \label{fig:split_stats} \end{figure} \begin{figure} \centering \captionsetup{width=\linewidth} \includegraphics[width=1.1\textwidth]{results/subsets.pdf} \caption{Number of tokens per skill in the training subsets, where each colour represents one of our four company partners} \label{fig:subset_stats} \end{figure} Finally, \autoref{tab:acc_per_class} presents the mean token-wise accuracy and one standard deviation per skill on the test set for \textbf{CamemBERT unfrozen warmup}{} and \textbf{CamemBERT unfrozen}{} for a subset size of $400$. We can see that, for both approaches, the best performance occurs in the class with the fewest examples (Results). We hypothesize that this might be due to the \guillemet{simplicity} of those examples, which are shorter than average at around seven tokens. Thus, these examples are possibly more straightforward than the others, leading to an easier classification. Also, we observe a higher variance on both the Thoughts and Personal classes for both our models. These two tags have the highest standard deviations, even though they are the classes with the most examples. This means that the training is quite sensitive to the initialization of the models. Because we have a limited number of data samples, minimizing such instability during training is more difficult. Therefore, more data would need to be annotated to reach more stable training for all classes. Furthermore, we can see that there is still room for improvement for most tags. \begin{table} \centering \captionsetup{width=\linewidth} \resizebox{\textwidth}{!}{\begin{tabular}{ccccc|ccccc} \toprule \multicolumn{5}{c}{\textbf{CamemBERT unfrozen warmup}} & \multicolumn{5}{c}{\textbf{CamemBERT unfrozen}} \\ O & Thoughts & Results & Relational & Personal & O & Thoughts & Results & Relational & Personal \\ \midrule $84.50 \pm 4.51$ & $82.72 \pm 7.84$ & $91.21 \pm 0.00$ & $73.41 \pm 4.44$ & $80.77 \pm 4.97$ & $83.30 \pm 3.63$ & $80.97 \pm 6.56$ & $92.31 \pm 4.85$ & $77.73 \pm 3.48$ & $85.21 \pm 9.65$ \\ \bottomrule \end{tabular}} \caption{Mean token-wise accuracy and one standard deviation per skill for \textbf{CamemBERT unfrozen warmup}{} and \textbf{CamemBERT unfrozen}{} on the test set for a subset size of $400$ (the \guillemet{O} tag means that a token does not belong to a skill)} \label{tab:acc_per_class} \end{table} \subsection{Error Analysis} Using the approach that yielded the highest accuracy (\textbf{CamemBERT unfrozen}{}), we conducted an error analysis of its 24 errors. We found that most of these were similar to the case illustrated in \autoref{fig:anot_conf2}, namely two consecutive skill annotations of the same class separated by a coordinating conjunction. The NER identified the two skills as a single skill in all those error cases.
Moreover, some cases (3) were of a similar error type, where a part of the sentence before a coordinating conjunction is not an entity, as illustrated in \autoref{tab:firstexample}. The figure presents each token's ground truth and prediction (\guillemet{Prob} rows), along with the model probabilities, using the same color scheme as \autoref{fig:pca_tf-idf}, namely \rougeemph{red} for the \guillemet{Thoughts} class, \mauveemph{purple} for \guillemet{Personal}, \blueemph{blue} for \guillemet{Relational}, \orangeemph{orange} for \guillemet{Results}, and \vertemph{green} for a word not in an entity. We can see that not only does the NER wrongly predict the class, Personal rather than Thoughts, but it also wrongly predicts that the first part of the sentence, \guillemet{\textit{les dossiers sont de nature courante}}, is also a skill. We hypothesize that this is due to two things. First, the presence of the words \guillemet{\textit{dossiers}}, \guillemet{\textit{nature}} and \guillemet{\textit{courante}}, which appear in other Personal examples. For instance, the word \guillemet{\textit{dossiers}} appears 49 times in a Personal entity, which could confuse our model as to whether such a sentence piece is a skill. Second, the presence of the coordinating conjunction \guillemet{\textit{et}} plus the determiner \guillemet{\textit{les}}, which mostly appear within an entity and rarely appear just outside of it (one token before). We argue that our model annotated the overall sentence as a skill in that specific case due to the overwhelming number of such examples. However, the entity class prediction is inconsistent with the sentence's vocabulary distribution. The second part of the sentence is composed of words that only appear in Thoughts examples, such as \guillemet{\textit{décision}}, \guillemet{\textit{analyse}} and \guillemet{\textit{recherche}}. This leads to lower prediction confidence of our NER, where those three words have the lowest probabilities. \begin{figure} \captionsetup{width=\linewidth} \centering \begin{tabular}{l|ccccccccc} \toprule Token & \vertemph{les} & \vertemph{dossiers} & \vertemph{sont} & \vertemph{de} & \vertemph{nature} & \vertemph{courante} & \vertemph{et} & \rougeemph{les} & \rougeemph{décisions} \\ Prob & \mauveemph{0.82} & \mauveemph{0.80} & \mauveemph{0.83} & \mauveemph{0.84} & \mauveemph{0.83} & \mauveemph{0.83} & \mauveemph{0.78} & \mauveemph{0.53} & \mauveemph{0.49} \\\midrule Token & \rougeemph{requièrent} & \rougeemph{un} & \rougeemph{niveau} & \rougeemph{habituel} & \rougeemph{d'} & \rougeemph{analyse} & \rougeemph{et} & \rougeemph{de} & \rougeemph{recherche} \\ Prob & \mauveemph{0.81} & \mauveemph{0.85} & \mauveemph{0.82} & \mauveemph{0.83} & \mauveemph{0.78} & \mauveemph{0.51} & \mauveemph{0.73} & \mauveemph{0.74} & \mauveemph{0.64} \\\bottomrule \end{tabular} \caption{Example of a wrongly predicted sentence using the best seed \textbf{CamemBERT unfrozen}{} model, where colors represent the skill classes (\rougeemph{red} is the \guillemet{Thoughts} class, \mauveemph{purple} is \guillemet{Personal}, \blueemph{blue} is \guillemet{Relational}, \orangeemph{orange} is \guillemet{Results}, and \vertemph{green} is a word not in an entity)} \label{tab:firstexample} \end{figure} \section{Conclusion} \label{sec:conclusion} This article presents a new public dataset in French, including annotated and non-annotated job offers in the insurance domain.
It aims to support the development of machine learning models that perform automatic skill recognition inside job ads, an increasingly useful task for understanding the evolution of the labour market. The dataset's statistics and characteristics expose limitations that can make it challenging to perform the learning task well. The dataset can be further improved by rebalancing the companies' and skill classes' distributions to make the annotated portion more representative of the non-annotated one. Moreover, the impact of lexical overlap and of the difficulty of soft skill identification could be lowered by allowing more experts to annotate more job ads. In any case, this dataset will be improved by adding annotations. Despite these limitations and the size of the dataset, we have obtained interesting results with pre-trained models and a token-wise approach. Although the skill-wise problem is closer to our main objective, our preliminary experiments on it with common NLP algorithms led to poor accuracy. Since our results here are token-wise and not skill-wise, it is harder to extract the correct span of skill entities, and consequently, we cannot estimate the actual number of skills inside the non-annotated dataset. Thus, our work is a first step toward discovering some trends in the labour market and studying the evolution of skills. As our next step, our objective is to build efficient models identifying skills instead of tokens. Indeed, our models cannot distinguish two skills with the same tag if they are next to each other; we therefore need to detect the beginning of each skill inside the text. To achieve that, we plan to use the BIO tagging scheme instead of IO \cite{konkol2015segment}. However, adding a new beginning tag for each skill group would probably reduce the overall accuracy because of the small size of the dataset. Lastly, improving the cleaning phase by detecting more precisely the usual conjunctions between two skills could be another way to keep the token-wise results while identifying the skills more efficiently. We have made some error analyses, but this kind of analysis is constrained by the limited explainability of deep learning models. In the long term, we would like to make our model more explainable, both to help us understand its strengths and weaknesses and to explain results to recruiters and human resources staff, allowing them to adapt their recruitment needs. To do so, we plan to explore counterfactual generation \cite{madaan2021generate, fern2021text}. Finally, we have only explored a few of the possibilities this dataset can offer. Other tasks that could be performed include measuring the impact of redaction style (e.g., long sentences vs. bullet points, different ways of addressing a potential applicant) on performance, the impact of gendered wording \cite{gaucher2011evidence}, the impact of COVID and teleworking on skill requirements \cite{gaffney2021trends}, or even the extension of the dataset to include new tools used by human resources such as social media \cite{ruparel2020influence}. \section*{Acknowledgements} This research was made possible thanks to the support of the Future Skills Centre and four Canadian insurance companies. We wish to thank the reviewers for their comments regarding our work and methodology. \printbibliography[heading=subbibintoc] \end{document}
\section{Introduction} \label{sec:introduction} Recent years have seen the development of several techniques to control isolated neutral molecules in the gas-phase. Molecular beams of polar molecules can be dispersed with strong inhomogeneous electric fields, producing pure samples of individual conformers, cluster stoichiometries or even single quantum-states~\cite{Filsinger:PRL100:133003, Filsinger:ACIE48:6900, Trippel:PRA86:033202, Nielsen:PCCP13:18971, Horke:ACIE53:11965, Meerakker:CR112:4828, Chang:IRPC34:557}. We can, furthermore, control the alignment and orientation of complex gas-phase molecules in space~\cite{Holmegaard:PRL102:023001, Filsinger:JCP131:064309, Trippel:MP111:1738, Stapelfeldt:RMP75:543}, allowing one to extract molecular-frame information, such as nuclear or electronic structures, from these samples~\cite{Bisgaard:Science323:1464, Holmegaard:NatPhys6:428}. In combination with the technological developments in free-electron laser (FEL) ultrafast x-ray sources, now providing millijoule-level pulses of hard x-rays with sub-100~fs pulse durations, these control techniques open up the potential to image isolated biomolecules and particles with femtosecond temporal and picometer spatial resolution~\cite{Seibert:Nature470:78, Neutze:Nature406:752, Boll:PRA88:061402, Kuepper:PRL112:083002}. The realization of these experiments crucially depends on a high-density source of intact molecules in the gas-phase, ready for further manipulation and experiments. While for many small stable compounds this is easily achieved using thermal vaporization and seeding into a molecular beam, this approach is not feasible for thermally labile or non-volatile species -- such as most larger biochemically relevant molecules, and biological species in general. These samples therefore require the development of gentle vaporization techniques that still produce a pure and high-density sample of molecules in the gas-phase. Furthermore, technical requirements for central-facility experiments, such as a well-defined and fixed interaction point and capabilities for long uninterrupted measurement times, need to be fulfilled. One approach to achieve relatively dense ensembles of labile neutral molecules is laser-induced acoustic desorption (LIAD), which was introduced over 30 years ago~\cite{Lindner:AnalChem57:895} but has received relatively little attention since. What sets LIAD apart from other laser-based vaporization techniques, such as laser desorption~\cite{DeVries:ARPC58:585}, is that it avoids any direct interaction between the desorption laser and the molecular sample, making this technique applicable to light-sensitive and labile compounds. The basic principle of LIAD is that samples are deposited on one side of an opaque substrate -- often a thin metal foil -- while the other side of this substrate is irradiated with a laser pulse. This laser pulse induces acoustic and thermal waves within the substrate, which travel through the material and lead to desorption of molecules on the front side. The physical mechanism behind this desorption process is currently very poorly understood; even the nature of the desorption process (thermal, acoustic, stress-induced) is not clearly established and, furthermore, is highly dependent on the employed substrate and sample preparation method~\cite{Zinovev:AnalChem79:8232}. Nonetheless, the LIAD technique has been used in a number of mass spectrometry studies~\cite{Golovlev:ijmsip169:69, Peng:ACIE45:1423, Nyadong:AnalChem84:7131}.
Notably, the Kenttämaa group coupled LIAD to a Fourier transform ion cyclotron mass spectrometer~\cite{Perez:IJMS198:173, Shea:AnalChem79:2688, Shea:AnalChem78:6133} and a quadrupole linear time-of-flight mass spectrometer~\cite{Habicht:AnalChem82:608, Gao:JASMS22:531}. They used this source to study peptides and large organic compounds up to mass $\ordsim500$~u. Recently, the LIAD methodology has also been applied to study the dynamics of intact amino acids on the femtosecond and attosecond timescale using ion-yield and photoelectron spectroscopy~\cite{Calvert:PCCP14:6289, Belshaw:JPCL3:3751, Calegari:Science346:336}. In a seminal paper in 2006, Peng \etal showed the applicability of LIAD to significantly larger systems and particles, successfully desorbing viruses, bacteria and cells and storing them in a quadrupole ion trap for precise mass measurements~\cite{Peng:ACIE45:1423, Zhang:AnalChem88:5958}. The Campbell group furthermore established a closely related technique, termed ``laser-induced forward transfer'', for the gentle vaporization of large nanoparticles~\cite{Bulgakov:JOSAB31:C15, Goodfriend:APPA122:154}. Here, we present our new LIAD-source setup, designed for use in central facilities. It allows for prolonged measurement times through automatic sample replenishment, whilst keeping the interaction point fixed. This is realized through the use of a long metal tape as the LIAD substrate, which is constantly forwarded -- akin to an old-fashioned cassette tape -- to provide fresh sample. A reproducible layer of molecules is prepared on this foil by spraying aerosolized samples onto the band. This technique yields a stable and reproducible signal for many hours of measurement time. As a test system we use the amino acid phenylalanine and characterize the produced molecular plume using strong-field ionization, evaluating the number density, spatial extent and temporal distribution. By convoluting the initial plume temporal distribution with a Maxwell-Boltzmann velocity distribution, the forward velocity and the translational temperature in the moving frame were derived. While the velocity does not increase with desorption laser intensity, the translational temperature does increase and, furthermore, we observe enhanced fragmentation. These observations are consistent with a previously proposed desorption model based on surface stress between the foil band and islands of deposited molecules~\cite{Zinovev:AnalChem79:8232}. \section{Experimental method} \label{sec:Experimental method} \begin{figure} \centering \includegraphics{Taper_setup} \caption{LIAD setup with sample delivery based on a rotating tape drive. A tape platform holds a long metal tape with sample applied on the front surface. A UV desorption laser irradiates the foil from the back, desorbing molecules. These are then ionized by a femtosecond laser beam and detected using a time-of-flight mass spectrometer. See supplementary information for further details.} \label{fig:setup} \end{figure} A schematic of our new LIAD setup is shown in \autoref{fig:setup}; further details regarding the setup and sample preparation are given in the supplementary information. Briefly, sample is deposited on the front side of a tantalum foil band of 10~\um thickness and 10~mm width, while the back side is irradiated with a pulsed desorption laser. We use tantalum as a substrate due to its very high melting point of $3290$~K and hence its ability to withstand higher desorption laser intensities.
During data collection the foil band is constantly moved across the desorption laser spot to provide fresh sample, as further discussed below. In order to create a stable coverage of sample on the foil, we used a gas-dynamic virtual nozzle (GDVN)~\cite{DePonte:JPD41:195505, Beyerlein:RSI86:125104} to create an aerosol and deposit it on the foil, where it sticks and rapidly dries out. Full details of the sample preparation and deposition process, including details regarding sample concentration, spray rate, speed of the foil band, and an estimate of the total deposited material, are given in the supplementary information. Molecules are desorbed using $\ordsim8$~ns duration laser pulses at 355~nm, focused to a 300~\um (FWHM) spot on the foil. Desorbed molecules are strong-field ionized by 40~fs pulses from a Ti:Sapphire laser, with typical intensities of $4\times10^{13}$~W/cm$^{2}$. Produced cations are detected by a conventional linear time-of-flight mass spectrometer (TOF-MS), with a typical mass resolution $m/\Delta{m}>1000$. \section{Results and Discussion} \label{sec:results} \subsection{Characterizing LIAD by strong-field ionization} \label{sec:SFI} We characterize the desorbed molecular plume using strong-field ionization (SFI) from a focused femtosecond Ti:Sapphire laser as a universal probe~\cite{Calvert:PCCP14:6289, Teschmit:JCP147:144204}. \begin{figure} \centering \includegraphics[width=\linewidth]{mass_spectrum} \caption{Mass spectrum of phenylalanine; (a) recorded using LIAD and SFI with a femtosecond laser beam and (b) reference spectrum for electron impact ionization~\cite{NIST:webbook:2017}. The intensity in both spectra is normalized to the dominant mass peak at 74~u.} \label{fig:tof} \end{figure} The observed TOF-MS of phenylalanine (PA) is shown in \autoref{fig:tof} and compared to a literature spectrum obtained using electron-impact ionization (EI)~\cite{NIST:webbook:2017}. Both spectra are normalized to the most abundant fragment ion at mass 74~u, corresponding to loss of a benzyl-radical fragment. It is evident that both ionization schemes strongly induce fragmentation; however, we note that using SFI a significant contribution from intact PA is observed at 165~u, which could be further enhanced using shorter duration laser pulses~\cite{Calvert:PCCP14:6289}. We observe no evidence for the production of larger clusters of PA, and hence attribute this channel to desorption of intact PA monomers. Furthermore, we observe an additional fragmentation peak at 28~u in the SFI data, corresponding to CNH$_2$, \eg, C-NH$_2^+$ or HC=NH$^+$, fragment ions, which is absent in the EI mass spectrum. These spectra clearly demonstrate the production of intact PA following desorption from the foil band. We do not observe the emission of any tantalum atoms or clusters, which would easily be ionized by the SFI probe, since the ionization potential of tantalum is lower than that of PA. This indicates that the desorption laser does not penetrate through the foil band nor ablate metal from the foil by other means. \begin{figure} \centering \includegraphics{shot-to-shot_scan} \caption{(a) Parent ion yield as a function of desorption laser shot without sample replenishment. Data have been averaged over 30~shot-wide intervals (horizontal bars); the solid line corresponds to a power-law fit. (b) Parent ion signal as a function of desorption laser shot while moving the foil band at 50~\um/s.
The blue line corresponds to single-shot measurements, red markers correspond to data averaged over 50 shots, showing a standard deviation below 10\%.} \label{fig:stability} \end{figure} To assess the depletion of sample from the foil and determine the required moving speed for sample replenishment, we measured the parent ion yield as a function of the number of desorption laser shots onto the same spot. The resulting abundances are shown in \subautoref{fig:stability}{a}, where the solid line represents a power-law fit of the form $y = A\times{}x^n$, with an exponent of $n = -0.68\pm0.03$. We observe a rapid decay of signal, reaching around 10\% after 330 desorption laser shots. Similar power-law behavior has previously been observed and rationalized with the existence of several isolated desorption centers on the foil~\cite{Zinovev:AnalChem79:8232}. This is consistent with our observation of many large crystalline islands (see supplementary information), many of which fall within the desorption laser spot size. During further data collection the foil band is continuously moved at 50~\um/s, corresponding to a movement to a new sample spot every $\ordsim120$ desorption laser shots. The corresponding shot-to-shot signal stability for the moving foil band is shown in \subautoref{fig:stability}{b}. The signal exhibits large fluctuations with a single-shot standard deviation of 70\% of the mean value. No long-term drift of the overall signal levels is observed. Averaging over 50 desorption laser shots reduces the standard deviation to below 10\%, as indicated by the red markers and error bars in \autoref{fig:stability}. Further data points in this manuscript are typically averaged over 1200 desorption laser shots, resulting in a standard deviation of $\ordsim2.5$\%. \subsection{Molecular plume properties} \label{sec:plume} In the following we investigate the spatial extent, density, velocity, and translational temperature of the ``plume'' of molecules desorbed from the foil band. We estimate absolute number densities from ion counting measurements and the known interaction volume as defined by our ionization laser. In \subautoref{fig:spatial_profile}{a} we show the measured number density of parent ions in the center of the desorbed plume as a function of distance from the foil band. We note that the shown densities are lower limits, since their calculation assumes an ionization efficiency of 1 for SFI and considers the measured intact parent ions only, such that any fragmentation induced by the SFI probe will reduce the derived density. The obtained densities exhibit approximately an inverse-square-law behavior with distance from the foil, since the expansion along the laser propagation direction is not reflected in the measurements due to the large Rayleigh length of the ionization laser ($z_R\approx38$~mm). We note that the data point closest to the foil band for the measurement at 0.64~J/cm$^2$ shows a significantly lower than expected density, which we can only explain with a lower density of molecules attached to the desorption foil band for this measurement, due to some instability during the aerosolization process. \begin{figure} \centering \includegraphics{spatial_profiles} \caption{(a) Parent ion density as a function of distance from the foil band, showing inverse-square-law behavior. (b) Transverse profile of the molecular plume at three distances from the foil band.
Gray shading corresponds to the measured acceptance of the TOF spectrometer, such that the measurement at 4.5~mm does not represent the actual spatial extent of the plume, but the limits of the experimental acceptance. Solid lines correspond to Gaussian fits to the data.} \label{fig:spatial_profile} \end{figure} We assess the spatial extent of the plume, \ie, the transverse profile, by translating the ionization laser in height along the $y$-axis (\autoref{fig:setup}), across the plume of molecules. This is shown in \subautoref{fig:spatial_profile}{b} for three distances between the foil band surface and the interaction point. The initial profile close to the foil band is very narrow, with a FWHM of $\ordsim0.6$~mm after 0.5~mm of free flight. The plume then rapidly spreads out, reaching a FWHM of around 2~mm after 2.5~mm of propagation, and within 4.5~mm of free flight the extent of the plume exceeds the spatial acceptance of the TOF ion optics (indicated by the gray shading in \subautoref{fig:spatial_profile}{b}), such that no accurate data can be measured at larger separations. This rapid spreading of the plume is consistent with the fast drop in density observed as the distance between the foil band and the interaction point is increased, \subautoref{fig:spatial_profile}{a}, and indicates rapid diffusion of the molecular plume in space following desorption from a well-defined spot set by the desorption laser profile. \begin{figure} \centering \includegraphics{temporal_profiles} \caption{(a) Normalized temporal profiles of intact parent ions following desorption with 0.8~J/cm$^2$, at different distances from the foil. Solid lines correspond to a fit with a Maxwell-Boltzmann distribution convoluted with the desorption time distribution. (b) Normalized temporal profiles of intact parent ions for different desorption laser intensities and otherwise identical settings, obtained at $z=6.5$~mm. While the most probable velocity is approximately constant, the larger desorption laser fluence leads to a much broader velocity distribution.} \label{fig:temporal_profiles} \end{figure} To investigate the longitudinal extent and velocity of the plume of desorbed molecules we measure mass spectra as a function of delay between the desorption and ionization lasers, and at different distances from the foil band. Results for the intact-parent-ion yield following desorption with a fluence of 0.8~J/cm$^2$ are shown in \subautoref{fig:temporal_profiles}{a}. Similar data for other desorption fluences are shown in the supplementary information. It is very clear that even when the interaction point is very close to the foil band a broad temporal profile is observed, lasting several tens of \us, much broader than the 8~ns duration of the desorption-laser pulse. At larger distances from the foil band these distributions widen considerably more, demonstrating that during free flight through the vacuum chamber the plume spreads out also in the longitudinal direction. We identify two physical origins for the observed profiles and their temporal evolution: (i) the desorption process itself, which does not release molecules at one instant in time, but with a certain temporal and kinetic energy distribution, and (ii) the propagation of molecules in free flight with a certain finite translational velocity distribution.
Whereas (i) contains information about the physical desorption mechanism from the foil, the translational velocity spread from (ii) corresponds to the translational temperature in the moving frame of the molecules. In order to accurately fit the measured data, one needs to convolute the initial desorption time distribution from the foil band with the Maxwell-Boltzmann free-flight propagation. Since so far no quantitative model is available to describe this desorption process accurately, we take the experimental data measured closest to the foil band, \ie, at 0.5~mm, as a measure of the initial desorption time distribution and numerically convolute this with the Maxwell-Boltzmann model of the free-flight propagation. Details of this convolution procedure and the Maxwell-Boltzmann model are given in the supplementary information. We then perform a global fit of the data for all propagation distances $l$ simultaneously using a common temperature $T$ and offset velocity $v_{0,z}$, while we introduce only a single linear scaling parameter for the different data sets. The latter essentially accounts for the drop in intensity along the probed center-line of the plume. The results of this fit for a desorption laser fluence of 0.8~J/cm$^2$ are shown as solid lines in \subautoref{fig:temporal_profiles}{a}; data for other fluences are provided in the supplementary information. The obtained translational temperatures and forward velocities are summarized in \autoref{tab:fitting}. \begin{table} \centering \caption{Measured translational velocities and temperatures in the moving frame for different desorption laser fluences.} \label{tab:fitting} \begin{tabular}{*{3}{c}} \hline\hline Desorp. Fluence ~(J/cm$^2$) & ~~$T$~(K) ~~& ~~$v_{0,z}$~(m/s)~~ \\ \hline 0.32 & $594$ & $233$ \\ 0.48 & $679$ & $234$ \\ 0.64 & $715$ & $265$ \\ 0.80 & $758$ & $224$ \\ \hline\hline \end{tabular} \end{table} \begin{figure*}[t] \centering \includegraphics{fs_power_ns_power_scan} \caption{Ion-yield (a, c) and fragment-to-parent ratios (b, d) as a function of ionization laser intensity (a, b) and desorption laser intensity (c, d). Color coding for all graphs is given in panel c; see \autoref{fig:tof} for the assignment of the mass peaks. Solid lines correspond to power-law fits.} \label{fig:intensity} \end{figure*} We observe a strong, nearly linear, dependence of the translational temperature of the molecular plume on the fluence of the desorption laser. Even at the lowest fluence used, a translational temperature of nearly 600~K is obtained. In the current experimental setup using SFI we cannot measure the internal (vibrational or rotational) temperature directly. However, given the large density of states in systems such as phenylalanine, and the microsecond timescales of the desorption process, we can assume a large degree of thermalization between the different degrees of freedom. Thus, the measured translational temperatures can be considered a good indicator of the internal temperature of desorbed molecules. Unlike the temperature, the observed forward velocity appears to be approximately constant for the different desorption laser fluences. The slightly elevated velocity for the measurement at 0.64~J/cm$^2$ could be due to instabilities in the sample preparation for this measurement, as mentioned above. Similar observations of identical forward velocities have been reported previously~\cite{Zinovev:AnalChem79:8232, Shea:AnalChem79:2688}.
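For concreteness, one plausible form of the convolution model used in these fits is the following sketch (our illustration; the exact kernel employed is detailed in the supplementary information): the signal at distance $l$ is modeled as
\begin{equation*}
 S(t;l) = \int_0^{t} S_0(\tau)\, K(t-\tau;\, l-l_0)\, \mathrm{d}\tau, \qquad K(t;d) \propto \frac{d}{t^{2}}\, \exp\!\left(-\frac{m\left(d/t - v_{0,z}\right)^{2}}{2 k_\mathrm{B} T}\right),
\end{equation*}
where $S_0$ is the temporal profile measured at $l_0=0.5$~mm and $K$ is the arrival-time distribution obtained from a shifted one-dimensional Maxwell-Boltzmann velocity distribution via the substitution $v=d/t$ (with Jacobian $|\mathrm{d}v/\mathrm{d}t|=d/t^{2}$); $T$ and $v_{0,z}$ are the global fit parameters of \autoref{tab:fitting}.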
The invariability of the forward velocity with desorption laser fluence suggests that it is determined by material properties of the substrate and the molecular sample. \subautoref{fig:temporal_profiles}{b} shows the yield of intact parent ions as a function of desorption-laser-to-ionization-laser delay for different desorption fluences. While the peaks of the distributions overlap in time, the distributions are significantly broader for higher fluences. These observations fully support our finding of a constant translational velocity, but increasing translational temperature, as the desorption laser fluence is increased (\emph{vide supra}). \subsection{Molecular fragmentation} \label{sec:fragmentation} To what extent the observed fragmentation is due to the desorption or the SFI process is hard to assess from the mass spectra in \autoref{fig:tof} alone. In order to disentangle these contributions, we collect mass spectra for different ionization and desorption laser intensities. \subautoref{fig:intensity}{a} shows the ion yield for the PA parent and the three dominant fragment ions as a function of ionization laser intensity, with all ion channels showing a steep increase with increasing laser intensity. These data were fit with a power-law dependence of the form $A\times{}x^n$. \subautoref{fig:intensity}{b} further shows the ratio of fragment-to-parent ion abundances for the three dominant fragment ions, \ie, comparing the relative abundances of the two respective channels. We observe only a very slight increase in fragmentation as the laser intensity increases, in good agreement with previous studies suggesting that SFI-induced fragmentation is very sensitive to the employed pulse duration, but not the intensity~\cite{Calvert:PCCP14:6289}. \subautoref{fig:intensity}{c} shows the dependence of ion yields on the intensity of the desorption laser and \subautoref{fig:intensity}{d} the corresponding fragment-to-parent ratios. The overall measured ion abundances are again well described by a power-law fit and show a steep increase for higher intensities, especially noticeable for fragment ions. This is confirmed by the fragment-to-parent ratios, which also significantly increase with laser intensity, indicating enhanced fragmentation. Thus, the desorption-laser interaction clearly induces fragmentation, either directly during the desorption process or thereafter, but prior to ionization, \ie, as molecules travel through the vacuum chamber toward the interaction point. To test the latter, we recorded mass spectra at different distances behind the foil band, changing the laser-laser delay such that we always probe the highest-density part of the molecular plume, \ie, we follow the center of the plume as it travels through the vacuum chamber. \begin{figure} \centering \includegraphics[width=1\linewidth]{fragmentation} \caption{(a) Fragment-to-parent ratio recorded at the peak of the molecular plume density for different distances behind the foil. No significant increase in fragmentation is observed as the plume travels through the vacuum chamber. (b) Fragment-to-parent ratio throughout the molecular plume recorded 6.5~mm behind the foil.
Molecules desorbed shortly after the arrival of the desorption laser show significantly higher fragmentation than molecules desorbed later.} \label{fig:fragmentation} \end{figure} These data are shown in \subautoref{fig:fragmentation}{a}, collected for distances of 0.5--10.5~mm between the foil band and the interaction point, which corresponds to flight times of around 0--50~\us. Over this distance we observe no significant increase in fragmentation, indicating that fragmentation occurs on much faster timescales, \ie, most likely during the desorption process itself, either while molecules are still attached to the metal substrate or very shortly after desorption into the gas-phase. We now consider the distribution of fragments within a single plume coming from the foil band, \ie, whether the fragmentation changes depending on which part of the plume is observed. This is shown in \subautoref{fig:fragmentation}{b}, where we plot the fragment-to-parent ratio for the most abundant molecular fragment ion as a function of desorption-laser-to-ionization-laser delay for a fixed distance of 6.5~mm from the foil band. We observe an initial peak in the fragment-to-parent ratio at the onset of desorption, \ie, in the ``front'' part of the molecular plume, which then decreases on a timescale of tens of microseconds. These timescales are consistent with thermal processes; in particular, we associate the observed distribution with the rapid heating of the foil band by the nanosecond laser pulse, causing increased fragmentation, followed by slow dissipation of the thermal energy, \ie, cooling down of the front surface and, hence, reduced fragmentation. Further evidence that the fragmentation occurs during the desorption process and that it is of a thermal nature comes from the comparison of the fragment-to-parent ratios throughout the plume for different desorption laser fluences, also shown in \subautoref{fig:fragmentation}{b}. These clearly show that the highest degree of fragmentation occurs for the most intense desorption laser pulse. This is also consistent with the higher translational temperatures derived for these conditions. Once the foil band cools down, \ie, at longer desorption-laser-to-ionization-laser delays, the fragment-to-parent ratio approaches an asymptotic value independent of the initial desorption conditions. \subsection{Nature of the desorption process} Several possible mechanisms have been suggested in the literature for the underlying physical processes occurring in LIAD~\cite{Lindner:IJMSIP103:203, Golovlev:APPL71:852, Zinovev:AnalChem79:8232, Goodfriend:APPA122:154}. It is important to note that the experimental conditions for the different published LIAD-based molecule sources are very different; pulsed~\cite{Calvert:PCCP14:6289, Zinovev:AnalChem79:8232} and continuous~\cite{Calegari:Science346:336, Calegari:IEEESTQE21:1} desorption lasers are used, and sample preparation methods vary greatly, from the thick sample layer of $\sim$500~nmol/cm$^2$ used here~\cite{Calegari:IEEESTQE21:1, Borton:AnalChem85:5720}, to intermediate thicknesses of tens of nmol/cm$^2$~\cite{Shea:AnalChem78:6133, Shea:AnalChem79:2688}, to near-monolayer coverage in other studies~\cite{Zinovev:AnalChem79:8232}. As such, we do not aim to provide a general model for the LIAD mechanisms, but seek to explain our observations and compare these with previous studies where applicable.
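One number worth keeping in mind for the following discussion is the transit time of a longitudinal acoustic wave through the substrate. As a rough sketch, assuming a longitudinal sound velocity in tantalum on the order of $c_\mathrm{s}\approx4100$~m/s (a ballpark literature value, not a quantity measured in this work),
\begin{equation*}
 t_\mathrm{transit} = \frac{d}{c_\mathrm{s}} \approx \frac{10\times10^{-6}~\mathrm{m}}{4100~\mathrm{m/s}} \approx 2.4~\mathrm{ns},
\end{equation*}
consistent with the $\ordsim2$~ns travel time quoted below.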
One of the suggested desorption mechanisms, and indeed the origin of the term ``acoustic desorption''~\cite{Lindner:IJMSIP103:203, Golovlev:APPL71:852}, is the direct momentum transfer from a shock wave induced by the desorption laser in the foil band to the sample molecules. Our data firmly rule out this mechanism for our molecule source. We observe a slow rise in molecular signal on the order of $\ordsim10~\us$, see \autoref{fig:temporal_profiles}, which is not compatible with molecules being ``shaken off'' by an impulse traveling through the foil, as this should lead to a sharp, sudden onset of signal as the impulse reaches the front surface, followed by an immediate drop as the impulse is reflected on the surface. Additionally, one might expect to observe a periodic revival of signal as the impulse bounces back and forth within the metal foil. We observe no evidence for this behavior. Furthermore, the travel time for a mechanical wave through a 10~\um tantalum foil is approximately 2~ns~\cite{Rigg:JPCS500:032018} (cf.\ the numerical sketch below), significantly shorter than the delay we observe between the desorption laser impacting on the foil and molecules being desorbed. A purely acoustic desorption mechanism would, furthermore, not explain the observed increase in fragmentation for increased desorption laser fluences. Similar observations have been made previously for a pulsed LIAD setup, and the ``shake-off'' mechanism was similarly discredited~\cite{Zinovev:AnalChem79:8232}. The other conceptually simple mechanism is a purely thermal one: the incident laser pulse heats the material from the backside, this thermal energy then diffuses to the front of the foil, where it heats the deposited molecules until they eventually desorb. However, the observation that the velocity and, therefore, the kinetic energy of desorbed molecules is independent of the incident desorption laser power, and thus of the surface temperature, is not compatible with a purely thermal desorption model. Instead, it indicates that the kinetic energy is determined by material properties of the foil substrate and/or the molecular sample. This observation, along with the increase in translational temperature in the moving frame, is consistent with a desorption model proposed by Zinovev \etal~\cite{Zinovev:AnalChem79:8232}. They explain the LIAD process by the introduction of surface stress between the substrate and the molecular sample -- located in isolated islands on the substrate -- due to the acoustic and/or thermal wave created by the desorption laser. This surface stress can lead to elastic deformation, decomposition, and cracking of sample islands on the foil band and, eventually, to desorption of molecules. In this conceptual model, the kinetic energy transferred to a desorbing molecule is independent of the total incident laser power and instead depends on the intrinsic characteristics of a given sample island and substrate. A higher laser fluence leads to the introduction of more surface stress and the formation of more cracks and deformation sites, leading to an increase in molecular signal, but does not influence the amount of kinetic energy per molecule. At the same time, we note that the higher substrate temperatures reached for higher desorption laser fluences will also heat up the deposited sample molecules through thermal conduction, leading to internally hotter molecules, increased fragmentation, as well as higher translational temperatures.
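For transparency, the following minimal sketch reproduces the two order-of-magnitude estimates used in this discussion: the acoustic transit time through the foil and the kinetic energy per desorbed molecule. The longitudinal sound velocity of tantalum and the mean molecular velocity entering the sketch are assumed, illustrative values, not fitted results of this work.
\begin{verbatim}
# Back-of-the-envelope checks for the desorption-mechanism discussion.
# Assumed values: longitudinal sound velocity of tantalum ~4100 m/s and
# an illustrative mean molecular velocity of 240 m/s.
AMU = 1.66053907e-27      # kg
EV  = 1.602176634e-19     # J

d_foil  = 10e-6           # foil thickness (m)
c_sound = 4100.0          # assumed longitudinal sound velocity in Ta (m/s)
print(f"acoustic transit time: {d_foil / c_sound * 1e9:.1f} ns")   # ~2.4 ns

m_PA  = 165.19 * AMU      # mass of phenylalanine (kg)
v     = 240.0             # illustrative mean forward velocity (m/s)
E_kin = 0.5 * m_PA * v**2
print(f"kinetic energy per molecule: {E_kin / EV * 1e3:.0f} meV")  # ~49 meV
\end{verbatim}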
While it is difficult to theoretically model the amount of energy transferred to each desorbed molecule, Zinovev \etal provide a simple formula to estimate the energy per analyte molecule based on material properties and thermal stress theory~\cite{Zinovev:AnalChem79:8232}. Based on this, we estimate 25--100~meV of energy per molecule for temperature differences of $\Delta{}T=100$--200~K.\footnote{We evaluated the energy release per molecule based on the known physical constants for anthracene~\cite{Bondi:JAP37:4643}, since data for PA were not available. The thermal expansion coefficient of the film is assumed to be $2.8\times10^{-4}~\mathrm{K}^{-1}$.} This is well within the range of the measured kinetic energy per molecule, which is, based on the average velocity observed, around 50~meV. Thus, our data fully support the proposed surface-stress model. \section{Conclusion} \label{sec:Conclusion} We presented an advanced LIAD source for the preparation of gas-phase samples of labile molecules, designed for use at central-facility light sources such as free-electron lasers. It features a prolonged continuous measurement time through automatic sample replenishment, as well as a fixed interaction point. Uniform sample preparation on the long substrate was achieved using an aerosol spraying method based on thin liquid jets. We have characterized the new source using phenylalanine as a sample molecule and SFI as a universal probe method. We observe a significant fraction of intact molecules being desorbed from the foil, with number densities around $2\times10^9$~cm$^{-3}$ close to the foil band. Due to fragmentation processes induced by the probe, this should be treated as a lower limit. The molecular plume rapidly spreads out spatially from the point of desorption, leading to a corresponding drop in density. The plume forward translational velocity and temperature in the moving frame are derived by convolving a Maxwell-Boltzmann velocity distribution with the initial temporal profile near the foil band. The forward velocity, and hence kinetic energy, of molecules desorbed from the foil does not depend on the desorption laser intensity. In contrast to this, the translational temperature clearly increases with increasing desorption intensity. We investigated the fragmentation processes and observed increased fragmentation at higher desorption laser intensity, consistent with the translational temperature behavior. Furthermore, we show that the amount of fragmentation depends on the time of desorption from the foil: shortly after the laser pulse, molecules are observed to be hottest, and they subsequently cool down on thermal timescales (tens of \us) as the substrate itself cools down. These observations fully support the previously proposed surface-stress model of the LIAD process. Our characterization measurements show that our new source produces a stable high-density signal of intact molecules in the gas phase. With automatic sample replenishment, it provides very long continuous measurement times. The produced molecular plume is well suited for further gas-phase experiments and manipulation, and work is currently underway towards integrating this source into a buffer-gas-cooling setup for the production of cold molecules~\cite{Hutzler:CR112:4803}, which can then be further manipulated using electric fields~\cite{Chang:IRPC34:557}.
One could also envision making use of this desorption technique for the entrainment of molecules into supersonic beams, similar to matrix-assisted laser desorption approaches~\cite{Teschmit:JCP147:144204}. \section{Supporting Information} The supporting information contains details regarding: \begin{itemize} \itemsep=0pt \item Experimental setup \item Sample preparation and deposition \item Derivation of Maxwell-Boltzmann velocity distributions \item Temporal profiles at different desorption intensities \end{itemize} \begin{acknowledgments} In addition to DESY, this work has been supported by the European Research Council under the European Union's Seventh Framework Programme (FP7/2007-2013) through the Consolidator Grant COMOTION (ERC-614507-Küpper), by the excellence cluster ``The Hamburg Center for Ultrafast Imaging -- Structure, Dynamics and Control of Matter at the Atomic Scale'' of the Deutsche Forschungsgemeinschaft (CUI, DFG-EXC1074), and by the Helmholtz Gemeinschaft through the ``Impuls- und Vernetzungsfond''. Z.\,H.\ gratefully acknowledges a scholarship of the Joachim-Herz-Stiftung and support by the PIER Helmholtz Graduate School. \end{acknowledgments} \vspace*{1em} \bibliographystyle{achemso}
\section{Results} \label{sec:inference} \begin{table*} \caption{{t-statistic (p-value $\leq$ 0.05: *; $\leq$ 0.5: **; $>$ 0.5: ***) and Cohen's-d (95\% confidence interval not overlapping with zero: *) of activity-level and statistical features for personality traits in DPQ and MCPQ-R:} Night (12am-6am), Morning (6am-12pm), and Afternoon (12pm-6pm) are denoted by N, M, and A, respectively; t-statistic and Cohen's-d values are sorted in descending order, with the highest value of each trait in boldface. Results for activity-level features (left) and statistical features (right) are shown separately. The standard notations min, max, mean, median, and std denote the minimum, maximum, mean, median, and standard deviation of the signal, respectively. Other notations include: acceleration -- the acceleration value calculated in Section~\ref{subsec:data_processing_pipeline}, acc -- accelerometer, gyro -- gyroscope, x,y,z -- axes of the accelerometer and gyroscope, \% -- percentage of time spent doing a particular activity, ecdf -- empirical cumulative distribution function.} \label{tab:tstatistics} \resizebox{1.02\textwidth}{!}{% \begin{tabular}{l l r l r | l r l r} & \multicolumn{4}{c}{\cellcolor[HTML]{FFFFFF}\textbf{Activity-Level}} & \multicolumn{4}{c}{\cellcolor[HTML]{FFFFFF}\textbf{Statistical}} \\ \cmidrule{2-9} & \multicolumn{8}{c}{\cellcolor[HTML]{FFFFFF}\textbf{DPQ}} \\ \cmidrule{2-9} & & \textit{t-statistic} & & \textit{Cohen's-d} & & \textit{t-statistic} & & \textit{Cohen's-d} \\ % % \multirow{5}{1.9cm}{\textbf{Fearfulness}} & \textbf{sedentary \% (A)} & \textbf{(-) 3.55*} & \textbf{sedentary \% (A)} & \textbf{0.88*} & \textbf{gyro z histogram\_5 (N)} & \textbf{(+) 17.92*} & \textbf{gyro z histogram\_5 (N)} & \textbf{4.46*} \\ & light \% (N) & (-) 3.12* & light \% (N) & 0.78* & gyro z histogram\_5 (M) & (+) 13.07* & gyro z histogram\_5 (M) & 3.25* \\ & acceleration std (M) & (+) 2.46* & acceleration std (M) & 0.61* & gyro z histogram\_6 (N) & (+) 12.86* & gyro z histogram\_6 (N) & 3.21* \\ & sedentary \% (M) & (-) 2.43* & sedentary \% (M) & 0.60* & gyro z zero\_crossing\_rate (N) & (+) 9.94* & gyro z zero\_crossing\_rate (N) & 2.47* \\ & sleep \% (A) & (+) 2.37* & acceleration std (A) & 0.59* & gyro z histogram\_6 (M) & (+) 9.56* & gyro z histogram\_6 (M) & 2.38* \\ \arrayrulecolor{Gray} \cmidrule{2-9} % % \multirow{5}{1.9cm}{\textbf{Aggression Towards People}} & \textbf{light \% (M)} & \textbf{(+) 1.74**} & \textbf{light \% (M)} & \textbf{0.43} & \textbf{gyro y histogram\_8 (N)} & \textbf{(-) 4.44*} & \textbf{gyro y histogram\_8 (N)} & \textbf{1.11*} \\ & light \% (A) & (+) 1.31** & light \% (A) & 0.32 & gyro y histogram\_4 (N) & (+) 4.40* & gyro y histogram\_4 (N) & 1.09* \\ & sleep \% (M) & (-) 1.16*** & sleep \% (M) & 0.29 & acc x histogram\_5 (M) & (-) 4.38* & gyro y histogram\_8 (M) & 1.09* \\ & moderate-vigorous \% (N) & (-) 1.08*** & moderate-vigorous \% (N) & 0.27 & gyro y histogram\_8 (M) & (-) 4.35* & acc x histogram\_5 (M) & 1.09* \\ & light \% (N) & (+) 0.72*** & light \% (N) & 0.17* & gyro y histogram\_7 (M) & (-) 4.33* & gyro y histogram\_7 (M) & 1.08* \\ \arrayrulecolor{Gray} \cmidrule{2-9} % % \multirow{5}{1.9cm}{\textbf{Excitability}} & \textbf{sedentary \% (A)} & \textbf{(+) 2.05*} & \textbf{sedentary \% (A)} & \textbf{0.51*} & \textbf{gyro z histogram\_5 (M)} & \textbf{(-) 5.86*} & \textbf{gyro z histogram\_5 (M)} & \textbf{1.45*} \\ & acceleration std (A) & (-) 1.50** & acceleration std (A) & 0.37 & gyro z histogram\_5 (N) & (-) 5.85* & gyro z histogram\_5 (N) & 1.45* \\ & light \%
(M) & (-) 1.44** & light \% (M) & 0.36 & gyro z zero\_crossing\_rate (M) & (-) 5.67* & gyro z zero\_crossing\_rate (M) & 1.41* \\ & sedentary \% (M) & (+) 1.32** & sedentary \% (M) & 0.33 & gyro z zero\_crossing\_rate (N) & (-) 5.63* & gyro z zero\_crossing\_rate (N) & 1.40* \\ & acceleration max (A) & (-) 1.27* & acceleration max (A) & 0.32 & gyro y zero\_crossing\_rate (N) & (-) 5.54* & gyro y zero\_crossing\_rate (N) & 1.37* \\ \arrayrulecolor{Gray} \cmidrule{2-9} % % \multirow{5}{1.9cm}{\textbf{Responsiveness to Training}} & \textbf{light \% (M)} & \textbf{(-) 2.17*} & \textbf{light \% (M)} & \textbf{0.52*} & \textbf{acc x mean (M)} & \textbf{(+) 4.06*} & \textbf{acc x mean (M)} & \textbf{0.99*} \\ & light \% (A) & (-) 1.76** & light \% (A) & 0.42 & gyro x ecdf\_\_perc\_0 (M) & (+) 3.87* & gyro x histogram\_8 (A) & 0.97* \\ & sleep \% (M) & (+) 1.52** & sleep \% (M) & 0.38 & gyro z ecdf\_\_perc\_0 (M) & (+) 3.81* & gyro x histogram\_7 (A) & 0.95* \\ & light \% (N) & (-) 1.08** & acceleration std (A) & 0.27 & gyro z median (M) & (+) 3.73* & gyro z histogram\_8 (M) & 0.95* \\ & acceleration std (A) & (+) 1.06** & light \% (N) & 0.26 & gyro z median (N) & (+) 3.71** & gyro x ecdf\_\_perc\_0 (N) & 0.94* \\ \arrayrulecolor{Gray} \cmidrule{2-9} % % \multirow{5}{1.9cm}{\textbf{Aggression Towards Animals}} & \textbf{sedentary \% (M)} & \textbf{(+) 2.80*} & \textbf{sedentary \% (M)} & \textbf{0.69*} & \textbf{acc y neighbourhood\_peaks (M)} & \textbf{(+) 5.43*} & \textbf{acc y neighbourhood\_peaks (M)} & \textbf{1.35*} \\ & sleep \% (M) & (-) 2.61* & sleep \% (M) & 0.65* & acc z neighbourhood\_peaks (M) & (+) 5.20* & acc z neighbourhood\_peaks (M) & 1.29* \\ & acceleration mean (M) & (+) 2.33* & acceleration mean (M) & 0.58* & gyro y zero\_crossing\_rate (M) & (+) 5.13* & gyro y zero\_crossing\_rate (M) & 1.27* \\ & acceleration median (M) & (+) 2.32* & acceleration median (M) & 0.58* & gyro x neighbourhood\_peaks (M) & (+) 5.06** & gyro x neighbourhood\_peaks (M) & 1.25* \\ & acceleration median (N) & (+) 2.32* & acceleration median (N) & 0.58* & gyro x ecdf\_0 (M) & (-) 4.99* & gyro z ecdf\_percentile\_count\_0 (M) & 1.24* \\ \arrayrulecolor{black} \cmidrule{2-9} & \multicolumn{8}{c}{\cellcolor[HTML]{FFFFFF}\textbf{MCPQ-R}} \\ \arrayrulecolor{black} \cmidrule{2-9} & & \textit{t-statistic} & & \textit{Cohen's-d} & & \textit{t-statistic} & & \textit{Cohen's-d} \\ % % \multirow{5}{1.9cm}{\textbf{Extraversion}} & \textbf{acceleration min (M)} & \textbf{(+) 2.34*} & \textbf{acceleration min (M)} & \textbf{0.60*} & \textbf{gyro x histogram\_4 (N)} & \textbf{(-) 4.86*} & \textbf{gyro x histogram\_4 (N)} & \textbf{1.24*} \\ & acceleration max (A) & (+) 2.19* & acceleration max (A) & 0.56* & gyro x zero\_crossing\_rate (N) & (-) 4.23* & gyro x zero\_crossing\_rate (N) & 1.08* \\ & acceleration min (A) & (+) 2.09* & acceleration min (A) & 0.54* & acc y histogram\_8 (M) & (-) 3.71* & acc y histogram\_1 (A) & 0.94* \\ & acceleration min (N) & (+) 1.96** & acceleration min (N) & 0.50* & acc y histogram\_1 (A) & (+) 3.68* & acc y histogram\_8 (M) & 0.92* \\ & acceleration mean (N) & (+) 1.95** & acceleration mean (N) & 0.50* & gyro y auc (M) & (-) 3.45* & gyro y auc (M) & 0.86* \\ \arrayrulecolor{Gray} \cmidrule{2-9} % % \multirow{5}{1.9cm}{\textbf{Motivation}} & \textbf{sedentary \% (M)} & \textbf{(+) 3.22*} & \textbf{sedentary \% (M)} & \textbf{0.80*} & \textbf{gyro z histogram\_7 (M)} & \textbf{(-) 4.45*} & \textbf{gyro z histogram\_7 (M)} & \textbf{1.10*} \\ & acceleration std (M) & (-) 3.11* &
acceleration std (M) & 0.76* & gyro z histogram\_6 (M) & (-) 4.25** & gyro z histogram\_6 (M) & 1.06* \\ & sleep \% (A) & (-) 2.58* & acceleration min (M) & 0.65* & gyro z histogram\_9 (N) & (-) 4.18** & gyro z median (M) & 1.05* \\ & acceleration min (M) & (+) 2.57* & sleep \% (A) & 0.64* & gyro z median (M) & (-) 4.18* & gyro z median (N) & 1.05* \\ & sleep \% (M) & (-) 2.40* & sleep \% (M) & 0.60* & gyro z histogram\_7 (A) & (-) 4.18* & gyro z median (A) & 1.04* \\ \arrayrulecolor{Gray} \cmidrule{2-9} % % \multirow{5}{1.9cm}{\textbf{Training Focus}} & \textbf{light \% (A)} & \textbf{(-) 1.88**} & \textbf{light \% (A)} & \textbf{0.46} & \textbf{gyro x histogram\_4 (M)} & \textbf{(+) 4.74*} & \textbf{gyro x histogram\_4 (M)} & \textbf{1.17*} \\ & acceleration std (M) & (-) 1.80** & acceleration std (M) & 0.44 & acc x histogram\_5 (M) & (+) 4.71* & acc x histogram\_5 (M) & 1.17* \\ & moderate-vigorous \% (M) & (-) 1.69** & moderate-vigorous \% (M) & 0.41 & acc x mean (M) & (+) 4.56* & acc x mean (M) & 1.12* \\ & sedentary \% (A) & (-) 1.59** & sedentary \% (A) & 0.40 & acc x median (M) & (+) 3.98* & acc x median (M) & 0.98* \\ & light \% (M) & (-) 1.59** & light \% (M) & 0.39 & gyro y negative\_turning\_points (M) & (+) 3.88* & gyro y negative\_turning\_points (M) & 0.96* \\ \arrayrulecolor{Gray} \cmidrule{2-9} % % \multirow{5}{1.9cm}{\textbf{Amicability}} & \textbf{sleep \% (N)} & \textbf{(-) 3.80*} & \textbf{sleep \% (N)} & \textbf{0.97*} & \textbf{gyro y zero\_crossing\_rate (N)} & \textbf{(-) 9.79*} & \textbf{gyro y zero\_crossing\_rate (N)} & \textbf{2.42*} \\ & sleep \% (M) & (-) 3.44* & sleep \% (M) & 0.87* & gyro y zero\_crossing\_rate (M) & (-) 9.15* & gyro y zero\_crossing\_rate (M) & 2.30* \\ & sedentary \% (M) & (+) 3.29* & light \% (M) & 0.84* & gyro y histogram\_4 (M) & (-) 9.03* & gyro y histogram\_4 (M) & 2.22* \\ & light \% (M) & (+) 3.20* & sedentary \% (M) & 0.83* & gyro y histogram\_4 (N) & (-) 8.89* & gyro y histogram\_4 (N) & 2.21* \\ & acceleration min (M) & (+) 3.17* & acceleration min (M) & 0.83* & gyro y histogram\_5 (M) & (-) 7.82* & gyro y histogram\_5 (M) & 1.94* \\ \arrayrulecolor{Gray} \cmidrule{2-9} % % \multirow{5}{1.9cm}{\textbf{Neuroticism}} & \textbf{acceleration std (M)} & \textbf{(+) 3.02*} & \textbf{acceleration std (M)} & \textbf{0.75*} & \textbf{gyro z histogram\_8 (N)} & \textbf{(+) 6.19*} & \textbf{gyro z histogram\_8 (N)} & \textbf{1.53*} \\ & moderate-vigorous \% (M) & (+) 2.72* & acceleration min (M) & 0.67* & gyro z histogram\_7 (N) & (+) 5.80* & gyro z histogram\_7 (N) & 1.44* \\ & acceleration min (M) & (-) 2.69* & moderate-vigorous \% (M) & 0.67* & gyro z histogram\_9 (N) & (+) 5.78* & gyro z histogram\_9 (N) & 1.43* \\ & acceleration max (A) & (-) 2.49* & acceleration max (A) & 0.62* & gyro z histogram\_9 (M) & (+) 5.53* & gyro z histogram\_9 (M) & 1.37* \\ & acceleration min (A) & (-) 2.44* & acceleration min (A) & 0.61* & gyro z ecdf\_percentile\_1 (M) & (+) 5.43* & gyro z ecdf\_percentile\_1 (M) & 1.35* \\ \arrayrulecolor{black} \hline \end{tabular} } \end{table*} \begin{table*} \caption{{Inference results for each personality trait in DPQ and MCPQ-R for different types of models:} Mean ($\bar{S}$) and standard deviation ($S_\sigma$) of the area under the receiver operating characteristic curve (AUC), computed over five iterations. Results are presented as $\bar{S} (S_\sigma)$, where $S$ is the AUC score, and the highest-performing model is marked in bold text.
SVM: Support Vector Machines; L-GBM: Light Gradient Boosting Machine; NB: Naive Bayes; RF: Random Forest.} \label{tab:accuracies_model_types} \resizebox{\textwidth}{!}{% \begin{tabular}{c c c c c c} & \multicolumn{5}{c}{\cellcolor[HTML]{FFFFFF}\textbf{DPQ}} \\ \arrayrulecolor{black} \cmidrule{2-6} & \multicolumn{1}{c}{\cellcolor[HTML]{FFFFFF}\textbf{Fearfulness}} & \multicolumn{1}{c}{\cellcolor[HTML]{FFFFFF}\textbf{Aggression Towards People}} & \multicolumn{1}{c}{\cellcolor[HTML]{FFFFFF}\textbf{Excitability}} & \multicolumn{1}{c}{\cellcolor[HTML]{FFFFFF}\textbf{Responsiveness to Training}} & \multicolumn{1}{c}{\cellcolor[HTML]{FFFFFF}\textbf{Aggression Towards Animals}} \\ \arrayrulecolor{Gray} \hline \textbf{Baseline} & .50 (.00) & .50 (.00) & .50 (.00) & .50 (.00) & .50 (.00) \\ \arrayrulecolor{Gray} \hline \textbf{SVM} & .71 (.02) & .66 (.05) & .59 (.03) & \textbf{.72 (.06)} & .63 (.09) \\ \textbf{L-GBM} & .76 (.09) & \textbf{.68 (.06)} & .61 (.08) & .66 (.03) & .59 (.10) \\ \textbf{NB} & .71 (.05) & .63 (.08) & \textbf{.62 (.02)} & .65 (.03) & .59 (.04) \\ \textbf{RF} & \textbf{.78 (.07)} & .65 (.08) & .62 (.08) & .70 (.09) & \textbf{.68 (.06)} \\ \arrayrulecolor{Gray} \hline & \multicolumn{5}{c}{\cellcolor[HTML]{FFFFFF}\textbf{MCPQ-R}} \\ \arrayrulecolor{black} \cmidrule{2-6} & \multicolumn{1}{c}{\cellcolor[HTML]{FFFFFF}\textbf{Extraversion}} & \multicolumn{1}{c}{\cellcolor[HTML]{FFFFFF}\textbf{Motivation}} & \multicolumn{1}{c}{\cellcolor[HTML]{FFFFFF}\textbf{Training Focus}} & \multicolumn{1}{c}{\cellcolor[HTML]{FFFFFF}\textbf{Amicability}} & \multicolumn{1}{c}{\cellcolor[HTML]{FFFFFF}\textbf{Neuroticism}} \\ \arrayrulecolor{Gray} \hline \textbf{Baseline} & .50 (.00) & .50 (.00) & .50 (.00) & .50 (.00) & .50 (.00) \\ \arrayrulecolor{Gray} \hline \textbf{SVM} & \textbf{.64 (.01)} & .70 (.02) & .73 (.06) & .67 (.03) & .62 (.09) \\ \textbf{L-GBM} & .62 (.07) & .69 (.11) & .70 (.08) & .71 (.05) & .70 (.08) \\ \textbf{NB} & .64 (.11) & .72 (.05) & .81 (.02) & .70 (.08) & .70 (.02) \\ \textbf{RF} & .59 (.07) & \textbf{.76 (.07)} & \textbf{.89 (.13)} & \textbf{.74 (.08)} & \textbf{.73 (.11)} \\ \arrayrulecolor{Gray} \hline \end{tabular}% } \end{table*} \begin{table*} \caption{{Random Forest inference results for each personality trait in DPQ and MCPQ-R for different types of features:} Mean ($\bar{S}$) and standard deviation ($S_\sigma$) of the area under the receiver operating characteristic curve (AUC), computed over five iterations. Results are presented as $\bar{S} (S_\sigma)$, where $S$ is the AUC score, and the highest-performing model is marked in bold text.
ACT: activity-level features; STAT: statistical features; DEM: dog demographic attributes, including sex, weight, age, training rating, and whether neutered; O-INFO: dog owner's sex and personality traits.} \label{tab:accuracies_feature_types} \resizebox{\textwidth}{!}{% \begin{tabular}{c c c c c c} & \multicolumn{5}{c}{\cellcolor[HTML]{FFFFFF}\textbf{DPQ}} \\ \cmidrule{2-6} & \multicolumn{1}{c}{\cellcolor[HTML]{FFFFFF}\textbf{Fearfulness}} & \multicolumn{1}{c}{\cellcolor[HTML]{FFFFFF}\textbf{Aggression Towards People}} & \multicolumn{1}{c}{\cellcolor[HTML]{FFFFFF}\textbf{Excitability}} & \multicolumn{1}{c}{\cellcolor[HTML]{FFFFFF}\textbf{Responsiveness to Training}} & \multicolumn{1}{c}{\cellcolor[HTML]{FFFFFF}\textbf{Aggression Towards Animals}} \\ \arrayrulecolor{Gray} \hline \textbf{B1} & .50 (.00) & .50 (.00) & .50 (.00) & .50 (.00) & .50 (.00) \\ \textbf{B2: O-INFO} & .54 (.10) & .48 (.15) & .40 (.18) & .52 (.11) & .49 (.09) \\ \textbf{B3: DEM} & .57 (.08) & .49 (.06) & .48 (.08) & .55 (.07) & .50 (.12) \\ \arrayrulecolor{Gray} \hline \textbf{G1: ACT} & .67 (.05) & .47 (.02) & .60 (.06) & .53 (.08) & .56 (.05) \\ \textbf{G2: STAT} & .55 (.05) & .47 (.13) & .25 (.09) & .58 (.07) & .53 (.04) \\ \textbf{G3: ACT+DEM} & \textbf{.80 (.11)} & .63 (.13) & .47 (.12) & .61 (.15) & .67 (.05) \\ \textbf{G4: STAT+DEM} & .63 (.08) & .57 (.11) & .33 (.11) & \textbf{.70 (.04)} & .59 (.04) \\ \textbf{G5: ACT+STAT} & .78 (.07) & .65 (.08) & \textbf{.62 (.08)} & .70 (.09) & \textbf{.68 (.06)} \\ \textbf{G6: ACT+STAT+DEM} & .61 (.03) & \textbf{.65 (.04)} & .51 (.09) & .61 (.09) & .51 (.06) \\ \hline & \multicolumn{5}{c}{\cellcolor[HTML]{FFFFFF}\textbf{MCPQ-R}} \\ \arrayrulecolor{black} \cmidrule{2-6} & \multicolumn{1}{c}{\cellcolor[HTML]{FFFFFF}\textbf{Extraversion}} & \multicolumn{1}{c}{\cellcolor[HTML]{FFFFFF}\textbf{Motivation}} & \multicolumn{1}{c}{\cellcolor[HTML]{FFFFFF}\textbf{Training Focus}} & \multicolumn{1}{c}{\cellcolor[HTML]{FFFFFF}\textbf{Amicability}} & \multicolumn{1}{c}{\cellcolor[HTML]{FFFFFF}\textbf{Neuroticism}} \\ \arrayrulecolor{Gray} \hline \textbf{B1} & .50 (.00) & .50 (.00) & .50 (.00) & .50 (.00) & .50 (.00) \\ \textbf{B2: O-INFO} & .43 (.02) & .52 (.04) & .56 (.03) & .47 (.10) & .51 (.11) \\ \textbf{B3: DEM} & .44 (.05) & .54 (.08) & .56 (.04) & .49 (.09) & .52 (.10) \\ \arrayrulecolor{Gray} \hline \textbf{G1: ACT} & .41 (.03) & .63 (.06) & .47 (.06) & .41 (.04) & .57 (.02) \\ \textbf{G2: STAT} & .64 (.06) & .36 (.11) & .34 (.04) & .71 (.05) & .69 (.09) \\ \textbf{G3: ACT+DEM} & .40 (.09) & \textbf{.77 (.09)} & \textbf{.89 (.08)} & .38 (.07) & .47 (.02) \\ \textbf{G4: STAT+DEM} & \textbf{.63 (.04)} & .33 (.08) & .43 (.06) & .70 (.08) & .67 (.09) \\ \textbf{G5: ACT+STAT} & .59 (.07) & .76 (.07) & .89 (.13) & \textbf{.74 (.08)} & \textbf{.73 (.11)} \\ \textbf{G6: ACT+STAT+DEM} & .62 (.04) & .33 (.08) & .43 (.07) & .63 (.06) & .70 (.09) \\ \arrayrulecolor{Gray} \hline \end{tabular}% } \end{table*} \subsection{Activity Level and Statistical Features Discriminating Dog Personality}\label{subsec:statistical_analysis} The statistical features showed high $t$-statistics, low $p$-values, and high Cohen's-$d$ values for a majority of dog personality traits on both questionnaires (Table~\ref{tab:tstatistics}). Most of these features were derived from the gyroscope and were captured during the morning. The activity-level features also showed high Cohen's-$d$ values, but not as high as the statistical ones.
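To make the quantities reported in Table~\ref{tab:tstatistics} concrete, the following minimal Python sketch computes the $t$-statistic (Welch's test) and Cohen's-$d$ for a single sensed feature split into high and low trait classes; the feature values shown are hypothetical stand-ins, not data from our study.
\begin{verbatim}
# Minimal sketch: effect size of one sensed feature for a high/low trait
# split (hypothetical example data; mirrors the quantities in Table 1).
import numpy as np
from scipy.stats import ttest_ind

# feature values (e.g., "sedentary % (A)") for dogs in the high and low
# class of a personality trait (e.g., Fearfulness, median split)
high = np.array([0.61, 0.55, 0.58, 0.64, 0.52, 0.60])
low  = np.array([0.49, 0.44, 0.51, 0.42, 0.47, 0.45])

t, p = ttest_ind(high, low, equal_var=False)   # Welch's t-test

# Cohen's d with pooled standard deviation
n1, n2 = len(high), len(low)
sd_pooled = np.sqrt(((n1 - 1) * high.std(ddof=1) ** 2 +
                     (n2 - 1) * low.std(ddof=1) ** 2) / (n1 + n2 - 2))
d = (high.mean() - low.mean()) / sd_pooled

print(f"t = {t:.2f} (p = {p:.3f}), Cohen's d = {d:.2f}")
\end{verbatim}
In our tables, a starred Cohen's-$d$ additionally requires its 95\% confidence interval not to overlap zero, which can be obtained, for example, by bootstrapping $d$ over the dogs in each class.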
\begin{itemize}[leftmargin=*,align=left] \item DPQ: For \emph{Fearfulness}, the percentage of time spent in sedentary activity in the afternoon had a Cohen's-d of 0.88, and the amount of time spent doing light activity at night had a Cohen's-d of 0.78, both of which indicate a large effect size. This means that a dog's sedentary activity in the afternoon and light activity at night are both informative of high vs. low levels of fearfulness. These were, in fact, the highest Cohen's-d values obtained for any activity-level features. In general, the lowest Cohen's-d values were obtained for \emph{Aggression Towards People}. For that trait, the light activity percentage at night, in the morning, and in the afternoon had Cohen's-d values of 0.17, 0.43, and 0.32, respectively, while the moderate-vigorous activity level during the night had a Cohen's-d of 0.27. These features had effect sizes above small but low reliability (the 95\% confidence interval crossed zero in many cases), so no association could be drawn. \item MCPQ-R: Across all traits, the statistical features had $t$-statistic and Cohen's-$d$ values comparable to those obtained for DPQ. However, the activity-level features had higher Cohen's-d values for MCPQ-R than for DPQ, showing a better capability to discriminate the high vs. low class for MCPQ-R traits. In particular, statistical features capturing acceleration (i.e., dog movements throughout the day) had Cohen's-d values above 0.5, suggesting a link between Extraversion and high levels of activity, a finding in line with previous work~\cite{carrier2013exploring}. \end{itemize} These results suggest that both activity-level and statistical features have discriminative power to various degrees, allowing us to draw conclusions on which features are associated with dog personality (\textbf{RQ\textsubscript{2}}). \begin{figure*} \captionsetup{labelfont=normalfont} \centerline{\includegraphics[width=0.99\linewidth]{figures/dpq_nma_results_v6.png}} \caption{{Area under the receiver operating characteristic curve (AUC) score comparison for DPQ traits with models that used features from:} night (N); morning (M); afternoon (A); and all time periods (N+M+A). The feature-type combination (G1-G6 from Table~\ref{tab:accuracies_feature_types}) that provided the AUC score is marked in white at the bottom of each bar.} \label{fig:nma_dpq_results} \end{figure*} \begin{figure*} \captionsetup{labelfont=normalfont} \centerline{\includegraphics[width=0.99\linewidth]{figures/mcpqr_nma_results_v6.png}} \caption{{Area under the receiver operating characteristic curve (AUC) score comparison for MCPQ-R traits with models that used features from:} night (N); morning (M); afternoon (A); and all time periods (N+M+A). The feature-type combination (G1-G6 from Table~\ref{tab:accuracies_feature_types}) that provided the AUC score is marked in white at the bottom of each bar.} \label{fig:nma_mcpqr_results} \end{figure*} \subsection{Predicting Dog Personality} Table~\ref{tab:accuracies_model_types} shows the classification results for four model types predicting dog personality from both activity-level and statistical features. The columns show the performance for each personality trait, while the rows show model types. The baseline for all experiments is 0.5, as the testing sets were balanced~\cite{meegahapola2022sensing}. For DPQ, the best AUCs were obtained with Random Forest (in the range of 0.62-0.80), and likewise for MCPQ-R (in the range of 0.63-0.89).
The AUC was highest for \emph{Training Focus} with a score of 0.89 using Random Forest, and lowest for \emph{Excitability} with a score of 0.59 using SVM. In general, the AUC scores for the DPQ traits were lower than those for MCPQ-R, meaning that our sensed features are better at classifying MCPQ-R traits (in line with the results in Section~\ref{subsec:statistical_analysis}). Given that Random Forest models showed the best performance for the majority of inferences, for brevity, we only present that model's results in the remainder of this section. Table~\ref{tab:accuracies_feature_types} shows the performance for various feature-type combinations, as well as two additional baseline models using: \emph{a)} the dog owner's sex and personality (B2: O-INFO), and \emph{b)} dog demographics (i.e., sex, weight, age, training rating, and whether neutered) (B3: DEM). Models that used activity-level features alone performed with AUC scores in the range of 0.47-0.67 for DPQ, and 0.41-0.63 for MCPQ-R. Statistical features performed worse than activity-level features, with AUC scores in the range of 0.25-0.58 for DPQ, and 0.34-0.71 for MCPQ-R. This suggests that, while being more interpretable, activity-level features also offer higher predictive accuracies. When adding dog demographics to the models, their performance increased by considerable margins. The best performance for most dog personality traits was obtained when either one or both sensed feature types (activity-level and statistical features) were combined with demographic features. Overall, DPQ traits had AUC scores in the range of 0.62-0.80, with two traits above 0.70, while MCPQ-R traits had scores in the range of 0.63-0.89, with four traits having scores above 0.70. These results show that a combination of sensed features is predictive of dog personality traits with reasonable AUC scores above 0.70. When the same features were computed at different times of day, they contributed differently to the predictive power. Figure~\ref{fig:nma_dpq_results} (DPQ) and Figure~\ref{fig:nma_mcpqr_results} (MCPQ-R) show varying performances for different time-period-specific models (night, morning, or afternoon) compared to generic models that used the features computed throughout the whole day. For DPQ, models that used morning features were the best for predicting \emph{Fearfulness} and \emph{Excitability}, afternoon features were the best for \emph{Responsiveness to Training} and \emph{Aggression Towards Animals}, and finally, night features were the best for \emph{Aggression Towards People}. For MCPQ-R, models that used period-specific features did not yield better results---with the exception of \emph{Motivation}, which yielded an AUC of 0.90 with night features. These results suggest that it would be better to use period-specific models for DPQ, and generic models for MCPQ-R. Overall, these results suggest that a specific combination of features works best (i.e., using activity levels, statistical features, and demographics together), and that the same feature computed at different times of day contributes differently to predictive power, with period-specific models yielding higher accuracy when predicting DPQ traits (\textbf{RQ\textsubscript{3}}). \begin{figure*} \captionsetup{labelfont=normalfont} \centerline{\includegraphics[width=0.9\linewidth]{figures/owner_ratings_2x_v3.png}} \caption{Response distribution from dog owners about features they would like to see in future dog wearables.
Most of them mentioned measurements of aspects that are hard to quantify (e.g., psychological and behavioral states as proxies for a dog's health, feelings, mood, stress, and personality).} \label{fig:owner_responses} \end{figure*} \subsection{\textbf{Follow-up Survey with Dog Owners}}\label{subsec:qualitative_analysis} Dog owners answered a series of open-ended questions (in a free-text form) about the usability of Patchkeeper and a series of Likert-scale questions about the desirability of features in future dog monitoring wearables (Figure~\ref{fig:owner_responses}). We structured their responses into two sections: \emph{Usability of Patchkeeper} and \emph{Future dog monitoring wearables.} As a convention, for dog owners' quotes, we use the letter \textbf{P} (indicating a human participant) followed by the dog ID from Table~\ref{tab:participants} (e.g., \textbf{P1} is the owner of Dog\#1). \noindent \paragraph{Usability of the Patchkeeper device.} A majority of dog owners had a good experience using the device. \textbf{P5} mentioned that \emph{``it [Patchkeeper] was easy to use and charge''}, and \textbf{P7} stated that \emph{``dog didn't mind wearing it''}. While the device was overall easy to use, owners had mixed feelings about its battery life. One said that \emph{``battery life is too short [...] charging it every day was a hassle''} (\textbf{P9}), while another stated that \textit{``battery lasted a full day without an issue''} (\textbf{P2}). The split opinions about battery life, however, are very subjective and might be driven by the charging patterns of the owners' other devices. For example, for a person who uses a smartwatch (e.g., Apple or Samsung Galaxy watches) that needs to be charged daily, Patchkeeper's battery life might seem fair. However, for people who are accustomed to devices with longer battery lives (e.g., AmazFit, Fitbit, or Garmin), Patchkeeper's battery life would be perceived to be much shorter. In terms of developing tools for dog monitoring, owners provided a range of answers, including health monitoring and understanding their pet better. For health monitoring, \textbf{P12} mentioned that \emph{``such devices can be useful to catch dog problems early [...] like maybe breathing too fast constantly, or cardiac problems''}. As for understanding their pet better, \textbf{P3} put it nicely: \textit{``as dogs cannot speak [...] a device that allows my dog to `speak' and `express her feelings' is worth everything''}. \noindent \paragraph{Future dog monitoring wearables.} Dog owners expressed their opinion about a set of features for future dog wearable devices. They provided answers on a Likert scale of 1 to 7 (1 – strongly not preferred; 3 – not preferred; 5 – preferred; 7 – strongly preferred), and their responses are summarized in Figure~\ref{fig:owner_responses}. Most dog owners (66.67\%) strongly preferred features that would allow them to monitor their dog's health. The second most desirable feature was to understand how dogs feel when they meet other dogs. Moreover, even though it was sixth in terms of the mean score, 58.33\% of dog owners (7 out of 12) gave a rating of seven out of seven for a feature that measured the personality of their dogs. This shows that dog owners were polarized regarding knowing their dog's personality. Another feature that received a very high rating was the ability of a wearable to track the mood and stress of dogs.
In fact, most of the preferred features were about measures that are hard to quantify (e.g., psychological and behavioral states as proxies for a dog's health, feelings, mood, stress, and personality). \section{Introduction} \label{sec:introduction} When it comes to dog adoption, breed may not be the only important factor to consider~\cite{bradshaw2021impact, personality_over_breed}, as humans tend to favor, for example, a pet's looks (e.g., attractiveness~\cite{chersini2018dog} based on poses and facial areas~\cite{isgate2018makes, hecht2015seeing}) and perceived human-directed sociability~\cite{lamb2021role}. However, according to a study from the Animal Farm Foundation~\cite{animal_foundation}, one in every four pets chosen based on breed (or looks) ends up in shelters and rescues. By contrast, personality traits tend to offer a more comprehensive behavioral description of a dog, which is consistent over time and context~\cite{gartner2015pet}. Not only could dog personality assessment reduce the number of owner-dog mismatches, but it could also put an end to (unfortunate) cases in which dogs end up crowded into shelters or are destroyed by authorities when expelled from their homes~\cite{corsetti2021different}. In fact, a few dog agencies and shelters are already experimenting with the use of dog personality traits for matching dogs with future owners~\cite{nhhumane2022, knowsley2021meet}. Further, like humans, dogs also need different levels and types of companionship, activities, and emotional connection depending on their inherent personality traits~\cite{burgesspetcare2021tailor, knowsley2021meet}. It is therefore extremely important to identify a set of activities that `work' for a dog, and to find the right companion dogs for socializing~\cite{caroline2022do}, not least because inadequate socialization may escalate the pet's fear levels and may lead to aggression~\cite{shibashake2017do}. Dog personality assessment is typically done through observational assessments by experts or through psychological scales administered to the dog's owner~\cite{cox2020understanding, jones2008development}. The former is expensive, time-consuming, and requires highly specialized facilities; the latter is time-consuming, prone to biases, and requires input from someone who already knows the dog very well~\cite{jones2005temperament, jones2008development, rowan1992shelters}. That is why we set out to computationally assess dog personality in everyday settings (as opposed to highly specialized facilities or laboratory settings) with wearables. In the wearable sensing literature, studies used devices for monitoring dog activity~\cite{weiss2013wagtag, ladha2013dog, chambers2021deep, byrne2018predicting}, detecting pruritic behaviors (i.e., scratching, head shaking)~\cite{griffies2018wearable}, and tracking breathing patterns~\cite{cotur2022bioinspired}.
This stream of research recently inspired the fast-growing market of pet wearables~\cite{zamansky2019log}, with a number of consumer-grade platforms readily available, such as FitBark\footnote{\url{https://www.fitbark.com/}} (location, activity, and sleep tracking), PetPace\footnote{\url{https://petpace.com/}} (vital signs and behavior tracking), and PitPat\footnote{\url{https://www.pitpat.com/}} (activity tracking with gamified social elements). Similar to how passive sensing of human personality drives the design of personalized apps~\cite{khwaja2019modeling}, sensor-based modeling of dog behavior through activity trackers has the potential to benefit both dogs and owners~\cite{zamansky2019log}. It has been found to increase owners' motivation to engage in mutual physical activities with their dogs and to raise human awareness of animals' needs~\cite{jones2014use}. However, computational personality assessment techniques for dogs are non-existent. Therefore, we set out to develop and test an automatic way of operationalizing dog personality through passively sensed data from wearables. In so doing, we made three sets of contributions: \begin{itemize}[leftmargin=*,align=left] \item We developed a wearable device, called ``Patchkeeper'', which can be easily strapped onto a dog's chest (Section~\ref{sec:patchkeeper}). The device is equipped with accelerometer and gyroscope sensors. Since its processing pipeline was initially developed for wearable data obtained from human subjects, we conducted a validation study of our device and the pipeline on dogs, together with a consumer-grade dog activity monitor. We found that our device is capable of determining four activity levels: moderate-vigorous activity with an accuracy of 92\%; light and sedentary activity with an accuracy of 96\%; and sleep with an accuracy of 98\%. \item We launched a data collection campaign to recruit dog owners whose pets participated in a one-week study. The campaign was launched on four social media platforms (i.e., posts were made on Twitter, Facebook, Instagram, and NextDoor) and was also spread via word of mouth (Section~\ref{sec:animal_study}), resulting in a total of 22 dogs being successfully recruited and monitored for one week (i.e., the entire period of study). Dog owners answered two validated questionnaires (the Dog Personality Questionnaire (DPQ)~\cite{jones2008development} and the Refined Monash Canine Personality Questionnaire (MCPQ-R)~\cite{ley2009refinement}), and provided self-reports about their dog's activities (e.g., images were taken when walking the dog). Using the passively sensed data, we developed a data processing pipeline and extracted two types of features: (a) activity-level features (e.g., \% of sleep in the morning, \% of sedentary activity in the afternoon) and (b) statistical features (e.g., acceleration histogram) (Section~\ref{sec:dataset}). We statistically analyzed the extracted features along with the self-reports from the two questionnaires and found that both types of features could discriminate dog personality traits (e.g., high or low fearfulness), with features capturing dog activity between 6am and 12pm (morning) being more informative for personality trait inferences than features capturing activity in the rest of the day. This is expected, as most dogs are most active and full of energy in the morning after a night's sleep, which was reflected in the signal captured by our device's sensors.
\item We set up an inference task to predict dog personality traits using both activity-level and statistical features (Section~\ref{sec:statistical_analysis}). Our models achieved AUC scores in the range of 0.63-0.90 with a time-window-based setup (i.e., using the same features computed at different times of the day) (Section~\ref{sec:inference}). Interestingly, statistical features (e.g., acceleration histogram) were more informative than activity-level features (e.g., time spent sedentary); yet, despite explaining more variance in personality traits, the former set of features is less interpretable than the latter, opening up the need for Explainable AI in this kind of wearable too. When it comes to the usability of dog monitoring wearables, dog owners had split opinions about battery life (some found a day of battery life to be sufficient, while others expressed the opposite). For the development of future monitoring wearables, the majority of dog owners stressed their immense value, echoing a dog owner's statement: ``as dogs cannot speak, a device that allows my dog to `speak' and `express her feelings' is worth everything''. \end{itemize} \section{Conclusion} \label{sec:conclusion} We built a device called ``Patchkeeper'' that can be strapped on a dog's chest and that measures its activity through an accelerometer and a gyroscope. We experimented with the device on 12 dogs and collected sensor activity data for a week, along with dog personality test results. By matching these two datasets, we trained machine learning classifiers that predicted dog personality from activity data. We found that a combination of activity-level features (describing the activity as sleeping, sedentary, light, or moderate-vigorous) and statistical features (describing temporal and statistical aspects of the time-series accelerometer and gyroscope data) extracted from sensor data, together with dog demographics, worked best. We also found that the same feature computed at different times of day contributed differently to predictive power, with morning features being predictive of fearfulness and excitability; afternoon features of responsiveness to training and aggression towards animals; and night features of aggression towards people and motivation. \section{Discussion} \label{sec:discussion} \subsection{Summary of the Results} In summary, our results showed that features captured from the inertial measurement unit, along with dog demographic features, are predictive of dog personality traits with reasonable AUC scores in the range of 0.62-0.89 (Table~\ref{tab:accuracies_feature_types}) in a leave-K-dogs-out setting, with K = 4 (around 33\% of the data was used for testing). In addition, as shown in Table~\ref{tab:accuracies_model_types}, we found that random forest classifiers performed best for the majority of inferences. However, results in Figure~\ref{fig:nma_dpq_results} and Figure~\ref{fig:nma_mcpqr_results} showed that using separate models for a particular time of day (i.e., night, morning, or afternoon) and different feature groups for each model (i.e., different combinations of activity-level, statistical, and demographic features) led to increased performance.
In particular, we observed an increased performance across all five personality traits under DPQ (AUC scores of 0.89 for \emph{Fearfulness}; 0.75 for \emph{Aggression towards People}; 0.65 for \emph{Excitability}; 0.78 for \emph{Responsiveness to Training}; and 0.75 for \emph{Aggression towards Animals}), while the performance gains for MCPQ-R (AUC scores of 0.63 for \emph{Extraversion}; 0.90 for \emph{Motivation}; 0.89 for \emph{Training Focus}; 0.74 for \emph{Amicability}; and 0.73 for \emph{Neuroticism}) were only visible for one personality trait, that is, Motivation. This highlights that capturing data from a certain period of the day provides better predictive power for certain personality traits, while for other traits, using all available features was a better option. \subsection{Implications} Our work has both theoretical and practical implications. From a theoretical standpoint, our work adds empirical evidence to the growing body of research on animal personality~\cite{gosling2008personality}. We corroborated previous findings suggesting that more extraverted dogs are associated with higher levels of activity (measured through our device's accelerometer sensor)~\cite{carrier2013exploring}, while amicable dogs engage in light activity (as previous work found~\cite{carrier2013exploring}). When it comes to aggression towards other animals, we found a moderate association with light activity. While previous work associated aggression with higher activity levels, it did so by studying Siberian Husky dogs~\cite{wan2013drd}, a breed not well represented in our sample. Our models also showed high performance for inferring fearfulness. Even though activity levels and fearfulness were not directly linked in previous literature, a possible explanation lies in the relationship between activity levels and negative emotions (or stress), which in turn have been associated with fearfulness~\cite{beaudet1994predictive, rayment2015applied, jones2014use}. Overall, these results corroborate previous findings in the literature. At the same time, our study provides a fine-grained empirical analysis of the relationship between personality traits and activity levels, extending our theoretical understanding. Beyond animal personality research, our work contributes to the Animal-Computer Interaction literature, including recruitment techniques for in-the-wild studies and the use of pre-trained models for animal activity monitoring. In this work, we argue that dog monitoring could go beyond typical activity-level recognition to capture hard-to-quantify psychological aspects such as dog personality. Turning to our data processing pipeline, we showed that state-of-the-art wearable processing pipelines tailored to humans transferred, to a great extent, to our animal study. In a way, this might seem obvious because accelerometer data capture motion. However, only when data from dogs were properly scaled and processed (using a 1G scale) did the pipeline and the inferred activities start to work (\S\ref{subsec:activity-level}). This finding holds great promise for future research in dog activity recognition. As for recruitment techniques, it was evident that dog recruitment required an element of trust. Word of mouth and recruiting in physical proximity through NextDoor turned out to be the best techniques. Traditional techniques such as mailing lists, distributing leaflets, and posting on social media (Twitter, Facebook, Instagram) worked to a lesser extent.
Another challenge was to retain participants. Given the battery life of 24 hours (even though it is similar to consumer-grade wearables such as Apple Watches\footnote{around 18 hours of battery: \href{https://www.apple.com/uk/apple-watch-series-7/}{https://www.apple.com/uk/apple-watch-series-7/}}), dog owners found it time-consuming and cumbersome to charge the device daily. From a practical perspective, our findings speak to both dog owners and shelters. A practical application would be a `dog health app' that tracks a dog's behavior patterns over time and detects its personality or psychological aspects, such as the pet's valence, arousal, and stress levels. Such an app could increase dog owners' awareness of their pet's health and allow them to take proactive actions (e.g., walk the pet to reduce its stress levels). In addition to dog health monitoring, our work can be used for dog socializing. Future platforms could offer owners the ability to receive personalized recommendations for their pets. For example, an owner could subscribe to a service wherein tailored pet social activities are recommended, or their pet is matched with another `like-minded' pet. Similar to how dating apps allow like-minded individuals to match, such a platform could offer the same experiences for dog-dog social matching. Finally, dog shelters could benefit by developing platforms for matching dogs with prospective owners. Currently, matching dogs to owners based on personality is difficult for both small and large shelters because of the sheer amount of effort needed to characterize dog personality. On the one hand, using experts to do personality assessments requires specialized facilities and money. On the other hand, psychological scales are time-consuming, prone to biases, and require someone who knows the dog very well. In contrast, letting dogs wear a device like Patchkeeper for a week (or a few days) and obtaining a data-driven personality assessment could be immensely useful. \subsection{Limitations and Future Work} Our work has several limitations that call for future research. First, we modeled dog personality by considering time-level (i.e., features were extracted by considering the signal for three time periods: night, morning, afternoon) and day-level (i.e., features were extracted by combining the signal of the three time periods) data. This allowed us to obtain several data points of the same dog on different days. In this way, we ensured robustness but also increased the relatively small (N=12 dogs) dataset size for inferences (up to 72 dog days). Second, while we obtained reasonable results with a small sample size, future studies could replicate our methodology with larger sample sizes. However, it is also worth noting that recruiting pet dogs for an in-the-wild study that runs for several days is challenging, and previous work had to resort to similar/lower sample sizes~\cite{hirskyj2021forming, weiss2013wagtag, ladha2013dog}. Additionally, while we enforced the same time schedule for the data collection to obtain comparable results, we acknowledge that different dogs might have different routines. Dogs with similar psychological readings (i.e., personality) but different routines might end up with different physiological readings (i.e., activity levels). Thus, future studies could account for dog routines. Our findings are based on the assumption that dog activity levels serve as a good proxy for activity types.
Given that the range of activity types in dogs is narrower (e.g., walking, eating, sleeping, running~\cite{hussain2022activity}) than that of humans, this was a reasonable assumption. While we prompted dog owners to share activity-type labels (e.g., eating, drinking, running, playing) with supporting pictures and videos, not all owners were compliant, preventing any further analysis. Future studies could attempt to disentangle dog activity types from dog activity levels by building upon our results. Third, while we resorted to previous literature to control for factors that might have influenced our results (e.g., dog demographics such as sex, age, and neutering~\cite{kubinyi2009dog,lofgren2014management}), future studies may well incorporate additional factors such as the size of the dog's living environment, or even the presence of other pets in that environment. Fourth, all the dogs in the study are from the same city in the United Kingdom. Thus, whether these results replicate in other cities or countries remains a subject of future work, especially given that the generalization of mobile sensing-based models across countries is an important topic of interest~\cite{meegahapola2023generalization, meegahapola2021smartphone}. Fifth, data collection occurred during the summer period and might not be generalizable. Therefore, future studies could explore whether our findings generalize to other seasons (i.e., winter, fall, spring), when dog behaviors vary (especially in countries at higher latitudes, where the weather changes drastically between seasons\footnote{\url{https://www.pdsa.org.uk/pet-help-and-advice/pet-health-hub/conditions/seasons-in-dogs}}). Sixth, capturing personality could be done in many ways, and, in this study, we only focused on two commonly used personality measurement questionnaires that are filled in by dog owners and were specifically designed for shelter rehoming. Future work should evaluate these scales in the context of other facilities in the USA, and could also explore other dog personality measurement techniques (e.g., test batteries, expert assessments). \section{Background and Related Work}\label{sec:related_work} Next, we survey various lines of research that our work draws upon, grouped into four main areas: \emph{i)} dog personality research; \emph{ii)} psychological scales for assessing dog personality; \emph{iii)} monitoring dog activity with wearable sensing; and \emph{iv)} activity levels and dog personality. \subsection{Dog Personality Research}\label{subsec:dog_personality_research} Dogs have personality~\cite{jones2008development,ley2009refinement,ley2008personality}, which refers to a set of dog behaviors and traits that are consistent over time and context~\cite{gartner2015pet, gosling2008personality}. These traits stem from the Five-Factor Model of personality, a.k.a. the Big-Five Traits~\cite{costa1992four}. As with personality, temperament is also used in the literature to describe both human and animal behavior. Researchers studying animals and human infants tend to use the term temperament, while those studying human children and adults tend to use the term personality, with the two terms often being used interchangeably~\cite{mccrae2000nature}.
On the one hand, temperament has been defined as the inherited, early-appearing tendencies that continue throughout life and serve as the foundation for personality~\cite{goldsmith1987roundtable, jones2005temperament}, a definition that has not been widely adopted by animal researchers~\cite{gosling2001mice}. On the other hand, personality psychologists often study phenomena including temperament and character traits, attitudes, physical and bodily states, moods, and life stories~\cite{john2000personality}. Therefore, a broad definition includes characteristics of individuals that describe and account for consistent patterns of feeling, thinking, and behaving~\cite{pervin1997}. As the distinction between temperament and personality has not been maintained consistently in the literature, we echo the statement by Jones and Gosling~\cite{jones2005temperament, jones2008development}, that is, \emph{the term ``temperament'' is used whenever possible while the term ``personality'' is more appropriate when, for example, referring to work that explicitly discusses personality research.} Hence, we use the term personality throughout the paper. In the scientific literature, Elliott Humphrey first hinted at the idea of dogs having personality in 1934~\cite{humphrey1934mental}. He described German Shepherd dogs with the traits of jealousy, apport, wildness-tameness, affection, initiative, attentiveness, curiosity, alertness, fighting and protection instincts, willingness to bite humans, confidence, self-right, energy, willingness, and intelligence. Seventy years later, by reviewing more than 50 scientific articles on dog personality, Jones and Gosling~\cite{jones2005temperament} found several inconsistencies, and proposed the first five-factor dog personality instrument, covering the dimensions of reactivity, fearfulness, responsiveness to training, submissiveness, and aggression. Building on Jones and Gosling's seminal work, researchers have incrementally added other dimensions such as calmness, boldness, trainability, and sociability~\cite{kubinyi2009dog}; extraversion, neuroticism, self-assuredness (motivation), training focus, and amicability~\cite{ley2009refinement}; stranger-directed sociability, activity, aggressiveness, and trainability~\cite{mirko2012preliminary}; and playfulness, chase-proneness, curiosity/fearlessness, sociability, and aggressiveness~\cite{svartberg2006breed}. Researchers, however, have split views when it comes to predictors of dog personality. Some studies found that different breeds have similar personalities~\cite{ley2009refinement, schneider2013temperament, svartberg2006breed}, while others reported a lack of evidence for this~\cite{mirko2012preliminary, sinn2010personality}. Two other attributes linked to personality traits are whether the dog is neutered or not (neutering is a surgical procedure to prevent a dog from reproducing) and its sex. Kubinyi et al.~\cite{kubinyi2009dog} found that non-neutered dogs are calmer, while Lofgren et al.~\cite{lofgren2014management} found that neutered female dogs were less excitable and sought lower levels of attention. There is also evidence that older dogs are calmer~\cite{kubinyi2009dog} and show lower levels of fear~\cite{lofgren2014management} compared to their younger counterparts. Hence, as mentioned above, even though not conclusive, there is evidence that static attributes such as sex, age, and neutering could be associated with dog personality~\cite{kubinyi2009dog,lofgren2014management}.
\subsection{Psychological Scales for Assessing Dog Personality}\label{subsec:psychological_scale_dog_personality}

While there are many dog personality measurement questionnaires~\cite{ani11051234, posluns2017comparing}, two widely established and validated psychological scales are: \emph{a)} the Dog Personality Questionnaire (DPQ)~\cite{jones2008development}, and \emph{b)} the Refined Monash Canine Personality Questionnaire (MCPQ-R)~\cite{ley2009refinement}. Next, we explain each scale.

\begin{itemize}[leftmargin=*,align=left]
\item \textbf{DPQ:} Building on the work of Jones and Gosling~\cite{jones2005temperament}, the development of this scale aimed at reducing the time and resources (i.e., trained assessors, money, facilities) needed for dog personality assessment. Amanda Jones started from 1200 dog descriptors (i.e., statements describing dog behavior) identified in the literature and narrowed them down to 360 statements~\cite{jones2008development}. Then, in two studies with over 6000 participants, they narrowed these statements down to 75 items, grouped into five factors: \emph{Fearfulness}, \emph{Aggression towards People}, \emph{Excitability}, \emph{Responsiveness to Training}, and \emph{Aggression towards Animals}. Scores for these traits are derived from a list of statements marked by the dog owner on a Likert scale from 1 to 7 (1: disagree strongly; 7: agree strongly).
\item \textbf{MCPQ-R:} This is the refined version of the original MCPQ questionnaire~\cite{ley2008personality}. The original questionnaire was developed using an adjective-based technique similar to the Big-Five Model of personality~\cite{john1990big}. Ley et al.~\cite{ley2009refinement} revised the original MCPQ in a study with more than 450 participants. This led to the development of MCPQ-R, which consists of five factors: \emph{Extraversion} (perceived energy level of the dog), \emph{Motivation} (perceived persistence in the face of distractions---e.g., begging for food, finding a particular toy), \emph{Training Focus} (perceived trainability of the dog), \emph{Amicability} (perceived tolerance of the dog while being around humans and animals), and \emph{Neuroticism} (perceived nervous or cautious behavior of the dog). To assess these traits, dog owners rate 26 words (e.g., friendly, obedient, hyperactive) that describe their dog's personality by marking each word with the appropriate number from 1 to 6 (1 = really does not describe my dog; 6 = really describes my dog).
\end{itemize}

Even though the two scales come with different constructs, a fair amount of convergence has been observed~\cite{posluns2017comparing} between neuroticism (MCPQ-R) and fearfulness (DPQ); between excitability (DPQ) and extraversion (MCPQ-R); and between responsiveness to training (DPQ) and training focus (MCPQ-R). Other widely used questionnaires exist, such as the Canine Behavioral Assessment and Research Questionnaire (C-BARQ)~\cite{hsu2003development}; however, recent research suggested that C-BARQ is not suitable for general research use because it was designed to identify specific dog behavioral problems~\cite{cox2020understanding}. Hence, in the current study, we focused on the DPQ and MCPQ-R questionnaires, which capture a total of ten personality traits (factors).

\subsection{Dog Monitoring with Wearable Sensing}

Dog tracking and activity detection have gained much popularity due to advancements in sensor technology~\cite{hussain2022activity}, which led to a number of commercial dog monitoring products (e.g., FitBark, PetPace, PitPat).
However, tying wearable sensing to behavioral tests (like dog personality in our case) is just starting to gain traction. In Animal-Computer Interaction research, prior studies focused on systems that facilitate better communication and interaction between dogs and owners~\cite{hirskyj2021forming} as well as among dogs~\cite{hirskyj2019internet}. Personality and dog behavior have also been studied in the context of games such as spin-the-bottle~\cite{cox2020understanding}; that work concluded that dogs' preferences for human involvement were likely attributable to subtle differences in personality traits or prior training experiences. Brugarolas et al.~\cite{brugarolas2015wearable} developed a non-invasive wearable sensor system for measuring dogs' vital signs using electrocardiogram (ECG), photoplethysmogram (PPG), and inertial measurement units (IMU). In a longitudinal study monitoring puppies' cardiac changes, Foster et al.~\cite{foster2020preliminary} developed machine learning models for predicting puppies' Behavior Checklist (BCL) scores (including changes in energy and smoothness of movement, vocalization, tongue flicking, use of coping strategies, body language, and changes in responsiveness to the handler), achieving up to 90\% accuracy. Weiss et al.~\cite{weiss2013wagtag} developed WagTag, which infers three dog activity levels (i.e., walk, run, and minimal), and concluded that personal models for predicting activity levels are better than universal models. Ladha et al.~\cite{ladha2013dog} also demonstrated that 17 dog activities (e.g., barking, running, chewing, digging) can be inferred with an accuracy of 70\% from a collar-worn wearable with accelerometers. More recently, Chambers et al.~\cite{chambers2021deep} used deep-learning models to infer dog activities with a collar-worn accelerometer, and showed that activities such as eating and drinking could be inferred with high accuracy, while behaviors such as licking, petting, rubbing, and sniffing were harder to identify. Beyond activity tracking, Griffies et al.~\cite{griffies2018wearable} used wearables to detect pruritic behaviors (i.e., scratching, head shaking). In a laboratory study with over 360 dogs, they showed that algorithms could be trained to infer head shaking and scratching with sensitivities over 70\% and specificities over 90\%. Wearable devices have also been used to monitor dog breathing patterns with reasonable accuracy~\cite{cotur2022bioinspired}.

\subsection{Dog Activity Levels and Personality Traits}\label{subsec:actvity_levels_personality}

Prior work in animal-computer interaction and canine behavior has highlighted certain relationships between personality traits and activity levels. For example, previous studies found that more extroverted dogs showed higher activity levels in the park~\cite{carrier2013exploring} and higher energy levels~\cite{gosling2003dog}, with significantly greater proportions of time spent with other dogs. Amicable dogs showed frequent behaviors indicative of play (high activity level), while neurotic dogs showed higher frequencies of hunched posture (low activity level)~\cite{carrier2013exploring}. Hence, extraversion, amicability, and neuroticism (traits that come from MCPQ-R) can be directly linked to activity levels. Further, even though not directly studied, prior studies linked psychological aspects such as fearfulness and aggression (corresponding to the three DPQ traits of fearfulness, aggression towards people, and aggression towards animals) to activity levels.
For example, in domestic dogs, it has been found that a higher degree of impulsivity correlates with high activity levels~\cite{wan2013drd}, poor attention span~\cite{vas2007measuring}, and human-directed aggression~\cite{rayment2015applied, peremans2003estimates}. Further, previous studies linked activity levels to negative emotions and stress~\cite{beaudet1994predictive, rayment2015applied, jones2014use}, which, in turn, can be seen as the roots of fearfulness and aggression~\cite{beaudet1994predictive}. Moreover, neuroticism has been directly linked to activity levels in some studies~\cite{carrier2013exploring} and has been observed to converge with fearfulness in others~\cite{posluns2017comparing}, providing evidence that activity levels could be indirectly informative of fearfulness. Studies have also found that excessively high or low activity levels are predictive of successful dog training (i.e., trainability and certain levels of fearfulness~\cite{weiss2002selecting}; traits captured by DPQ and MCPQ-R).

In summary, previous wearable sensing literature explored aspects such as monitoring dog activity, detecting pruritic behavior, and tracking breathing patterns. While previous literature explored a few aspects concerning the relationship between dog activity and personality traits, this relationship still represents an under-explored area. Our study aims to partly fill this gap by exploring the relationship between ten personality traits captured from two canine personality questionnaires and dog activity.

\section{Research Questions} \label{sec:rqs}

We set out to explore whether dog personality can be automatically inferred from wearable data in everyday settings by answering three questions:

\begin{itemize}[wide, labelwidth=!, labelindent=0pt]
\item[\textbf{RQ\textsubscript{1}:}] Which dog activity-level and statistical features can be extracted from wearable data?
\item[\textbf{RQ\textsubscript{2}:}] Which dog activity-level and statistical features are associated with dog personality?
\item[\textbf{RQ\textsubscript{3}:}] To what extent are activity-level, statistical, and demographic features predictive of dog personality?
\end{itemize}

\section{Patchkeeper} \label{sec:patchkeeper}

\begin{figure*} \captionsetup{labelfont=normalfont} \centerline{\includegraphics[width=\linewidth]{figures/device_diagram.png}} \caption{\emph{A:} The Patchkeeper device is attached to an elastic adjustable band. The device comes with two lights: a \emph{green light} indicating whether the device is ON; and a \emph{red light} indicating whether the device is in charging mode. \emph{B:} The band can be strapped on the dog's chest. } \label{fig:device_and_dogs} \end{figure*}

Patchkeeper (Figure~\ref{fig:device_and_dogs}a) is a wearable device developed at {Nokia Bell Labs} for behavioral monitoring of both humans and animals. It contains a photoplethysmography (PPG) sensor, an electrocardiogram (ECG) sensor, an accelerometer, a gyroscope, and a microphone. In the current study, only the inertial measurement unit (IMU) sensors (i.e., accelerometer and gyroscope) were used; the PPG, ECG, and microphone were not used due to dog hair and privacy concerns (more details in Section~\ref{subsec:device_sensors_used}). The IMU sensor is a BMI160 from Bosch Sensortec\footnote{\url{https://www.bosch-sensortec.com/products/motion-sensors/imus/bmi160/}}. It is a small, low-power, low-noise 16-bit chip designed for mobile applications.
It provides highly accurate gyroscope and accelerometer data in real time. The IMU's sampling rate was set to 50 samples per second, which struck the right balance between obtaining reasonably fine-grained data for our analysis and meeting storage capacity requirements. The microcontroller unit (MCU) is an nRF52840 from Nordic Semiconductor\footnote{\url{https://www.nordicsemi.com/products/nrf52840}}, which contains a 64 MHz Cortex-M4 processor with a floating point unit (FPU). All data was saved on a micro-SD card on the printed circuit board (PCB). The device measures 76 x 52 x 15 mm and weighs 56 grams. It contains a 400 mAh lithium polymer battery, which can last more than 24 hours while continuously recording data. The battery takes around two hours to be fully charged, and the device comes with a USB-C charging port for hassle-free charging with any commercially available charger. With a two-hour daily charge, the device runs continuously without any loss of data. The device has a switch with ON and OFF sides marked with red and green colors. For a better user experience, we included different lights on the device (Figure~\ref{fig:device_and_dogs}a): \emph{(i)} a green light flashing every 10 seconds indicates that the device is ON, it is working properly, and data is being recorded; \emph{(ii)} a static red light indicates that the device is fully charged; and \emph{(iii)} a flashing red light indicates an issue with the device or the memory card.

\section{Animal Study} \label{sec:animal_study}

Having developed our custom-made wearable device to collect dog activity data, we conducted a one-week in-the-wild study to understand the link between dog behavior and personality.

\subsection{Materials and Apparatus}\label{subsec:materials}

Each dog owner received a package that fitted in a medium-sized letter envelope and weighed approximately 500 grams. The package contained: a Patchkeeper device, a charging cable, three black elastic straps, a consent form, an information sheet, questionnaires (i.e., DPQ, MCPQ-R, and a post-study questionnaire), and a pre-paid return envelope. Upon completion of the study, the owner shipped back the package using the pre-paid return envelope.

\subsubsection{{Patchkeeper and Elastic Straps.}}\label{subsec:device_sensors_used}

As the device can be used on both human and animal subjects (Section~\ref{sec:patchkeeper}), and given the requirement of continuous monitoring for one week, we decided to deactivate the ECG and PPG sensors, and the audio microphone. ECG and PPG were disabled for two reasons: first, they rely on good skin contact, which is hindered by dog hair; and second, they require additional straps, which would place additional effort on the owners, making it more likely for them to drop out. Audio was also deactivated for privacy reasons: the device would otherwise continuously capture audio throughout the day, potentially recording intimate moments or private conversations as the pet moved around. To ensure that the device would fit various dog sizes, we used an adjustable elastic band that can be strapped to the pet's chest (Figure~\ref{fig:device_and_dogs}b). These are off-the-shelf straps that can be found on Amazon and are comfortable to wear. The device can simply be attached to the strap using a sticky patch. We also considered alternative areas (e.g., neck) to place the device, weighing various aspects.
First, some breeds have a more pronounced dewlap (loose, saggy skin around the neck/throat) than others, whilst the chest is generally not affected in such a way. Second, it has been found that the skin near the axilla (armpit) and ventral abdomen (lower chest/thorax, top of the belly) is significantly thinner than that in the dorsal (top of the dog) areas~\cite{theerawatanasirikul2012histologic}. Third, double-coated breeds (e.g., the golden retriever, Samoyed, and German shepherd included in this study) have coarse guard hairs and dense undercoats, with this being particularly pronounced in the dorsal areas but less so near the axilla and ventral abdomen areas. Taking all these aspects into consideration, the chest area (behind the forelegs) offers the benefit of thinner skin whilst minimizing breed-to-breed variation in our sample (previous work also favored the chest area~\cite{foster2020preliminary}). To ensure that the position of the device did not impact the results, we intentionally used wide straps with 5 cm-wide rubberized features and a strong tensioning force to keep the device firmly attached at a fixed location on the pet's chest. During pilot studies in the design of the device, we estimated an approximate sensor dislocation of $\pm$2 cm, which was sufficient to guarantee a stable sensor location over a long period of use.

\subsubsection{{Questionnaires.}}

Dog owners answered two questionnaires: a Pre-Study Questionnaire (Q1) and a Post-Study Questionnaire (Q2). Q1 was completed before the study and had two sections. The first section captured demographic information of the owner (i.e., age, sexual identity, occupation status, and ethnicity), followed by the Ten-Item Personality Inventory (TIPI)~\cite{gosling2003very}, a 10-item measure of the Big-Five (or Five-Factor Model) dimensions. The second section captured basic information about the dog (i.e., the dog's age, breed, sex, weight, typical activity levels, disease conditions, and whether it was neutered or not). Q2 was completed after the study and had two sections as well. The first section captured user experience and dog owners' perceived utility of wearable platforms for dog monitoring. The user experience of the Patchkeeper was captured on a Likert scale of 1 to 7 (1 = very bad; 7 = very good) with corresponding feedback. In a similar vein, we captured the perceived utility of commercial wearable pet monitoring devices in general on a Likert scale of 1--7 (1 = not very important; 7 = very important) with corresponding feedback. Additionally, we asked dog owners to rate on a scale of 1--7 (1 = strongly not preferred; 7 = strongly preferred) their likelihood of adopting a mobile app that uses Patchkeeper's data for dog monitoring. We provided sample options including: monitoring activity types, identifying when dogs are not in a healthy state, finding a community of dogs with a similar personality, or monitoring the mood and stress of dogs. The second section of Q2 asked owners to complete the Dog Personality Questionnaire (DPQ)~\cite{jones2008development} and the Refined Monash Canine Personality Questionnaire (MCPQ-R)~\cite{ley2009refinement}.
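For illustration, scoring adjective-based scales of this kind reduces to averaging the Likert ratings of the items in each factor. The following minimal Python sketch assumes a hypothetical item-to-factor mapping and that factor scores are item means; the real MCPQ-R mapping and scoring rules are those published by Ley et al.~\cite{ley2009refinement}.

\begin{verbatim}
import numpy as np

# Hypothetical item-to-factor mapping for an MCPQ-R-style scale;
# these adjective lists are placeholders for illustration only.
FACTORS = {
    "Extraversion": ["active", "energetic", "excitable",
                     "hyperactive", "lively", "restless"],
    "Training Focus": ["attentive", "biddable", "intelligent",
                       "obedient", "reliable", "trainable"],
}

# One owner's ratings on a 1-6 Likert scale.
ratings = {"active": 5, "energetic": 6, "excitable": 4,
           "hyperactive": 3, "lively": 5, "restless": 2,
           "attentive": 6, "biddable": 5, "intelligent": 6,
           "obedient": 5, "reliable": 6, "trainable": 6}

# Factor score = mean of that factor's item ratings.
scores = {f: np.mean([ratings[w] for w in words])
          for f, words in FACTORS.items()}
print(scores)  # {'Extraversion': 4.17, 'Training Focus': 5.67}
\end{verbatim}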
\begin{table*} \captionsetup{labelfont=normalfont} \caption{Overview of dog demographics.} \label{tab:participants} \centering
\begin{tabular}{lccccc}
\toprule
\textbf{Dog ID} & \textbf{Breed} & \textbf{Sex} & \textbf{Weight} & \textbf{Neutered?} & \textbf{Birth Year} \\
\midrule
\#1 & Golden Retriever & Female & 30 kg & Yes & 2018 \\
\#2 & Golden Retriever & Male & 35 kg & No & 2021 \\
\#3 & Poodle (Toy) & Male & 8 kg & No & 2020 \\
\#4 & Dalmadoodle -- 75\% Poodle, 25\% Dalmatian & Female & 13 kg & Yes & 2020 \\
\#5 & Golden Retriever & Male & 40 kg & No & 2021 \\
\#6 & Working English Setter & Male & 29 kg & Yes & 2017 \\
\#7 & Boxer & Female & 25 kg & No & 2020 \\
\#8 & Samoyed & Male & 25 kg & No & 2018 \\
\#9 & Cockapoo & Female & 10 kg & Yes & 2011 \\
\#10 & Working English Setter & Female & 31 kg & No & 2016 \\
\#11 & Mixed & Male & 15 kg & Yes & 2021 \\
\#12 & Cavalier King Charles Spaniel & Female & 8.5 kg & Yes & 2019 \\
\bottomrule
\end{tabular}
\end{table*}

\subsubsection{{Information Sheet and Consent Form.}}

The information sheet described the study protocol (Section~\ref{subsec:study_protocol}). The consent form highlighted two aspects: confidentiality and voluntary participation. In terms of confidentiality, the form explained that all data would be kept confidential except in cases where the researchers were legally obligated to report specific incidents (e.g., dog abuse). It further stated that collected phone numbers and email addresses would not be used in any scientific output, and that confidentiality would be preserved by: \emph{a)} assigning code numbers to dog owners in all research documents; and \emph{b)} keeping notes, data, and any other dog owner identifiers on a password-protected hard drive, securely stored at the facilities of {Nokia Bell Labs}. In terms of voluntary participation, the form explained that a signature was required to participate. Additionally, withdrawal from the study was allowed at any time and without giving a reason, even after signing the consent form. Upon withdrawal, all data would be deleted.

\subsubsection{{Pre-Paid Letter Cover and WhatsApp Hotline.}}

To ease dog owners' participation, we included a pre-paid return envelope in the package. Upon completion, they placed all materials received into the return envelope and posted it. To maintain effective communication with the dog owners throughout the study, we used a dedicated WhatsApp number as a hotline. This number was used by the first author to deal with matters related to the study (e.g., being unable to place the strap or charge the device).

\begin{figure*} \captionsetup{labelfont=normalfont} \centerline{\includegraphics[width=\linewidth]{figures/study_protocol.png}} \caption{Our study protocol has three phases. In the \emph{pre-study} phase, dog owners received the study package, including the Patchkeeper device and questionnaires; in the \emph{study} phase, data collection for seven days took place; and, in the \emph{post-study} phase, dog owners returned the package and answered a follow-up survey about their experience using the device.} \label{fig:protocol} \end{figure*}

\subsection{Study Protocol}\label{subsec:study_protocol}

The study protocol had three periods: pre-study, study period, and post-study (Figure~\ref{fig:protocol}).
\subsubsection{Pre-Study.}

Once the dog owners received the package, they familiarized themselves with the device and answered the pre-study questionnaire. During that period, they were encouraged to ask questions via the hotline, and were instructed to fully charge the Patchkeeper and, after each daily full charge, send a picture of the dog with the device turned on (this was a preemptive measure to ensure compliance and, at the same time, guarantee data quality).

\subsubsection{Study Period.}

During the seven-day period, the device captured sensor data between 12am and 6pm (continuously for 18 hours), and it could be charged for two hours between 6pm and 12am. Enforcing the same charging schedule across all dog owners enabled us to obtain comparable data across dogs. Of course, this comes with the caveat that we might miss 2--3 hours of data during evening hours; a drawback that we were willing to accept to ensure high-quality data during the other time slots. In summary, each evening, the dog owners would remove the strap from the dog, turn the device off, fully charge it, turn it back on, and put it back on the dog. Afterward, they would send a message with a picture of the dog wearing the device via the hotline. In the morning, they were again asked to check that the device was working and was correctly positioned around the dog's chest. During all other time periods, no interaction was required from dog owners as the device would automatically capture all data. Furthermore, we encouraged the dog owners to voluntarily send us in-situ self-reports (in the form of images or short video clips) of various dog activities throughout the day.

\subsubsection{Post-Study.}\label{subsec:post_study}

During that period, dog owners answered the post-study questionnaire. They placed all materials and apparatus in the pre-paid package, and shipped it back to the return address. Upon successful completion of the study, dog owners received a \$25 Amazon gift voucher and a report summarizing their dog's activity profile over the seven days of the study.

\subsection{Recruitment}

Recruitment for in-the-wild human studies is typically difficult~\cite{ellard2015finding}, and so it is for animal studies. We employed two techniques that proved successful to varying degrees.

\begin{itemize}[leftmargin=*,align=left]
\item Social media and local communities (Twitter, Facebook, Instagram, and Nextdoor): Twitter and Facebook are used to advertise scientific studies~\cite{whitaker2017use, sibona2012purposive}---neither channel was very successful in this study. In contrast, Instagram posts on profiles dedicated to dogs with thousands of followers were successful to some extent. Finally, Nextdoor\footnote{\url{https://nextdoor.com/}}, a social media site for local communities, was the most successful recruitment strategy (40\% of the dogs were recruited through it). A banner of the study was also shared within the communities of Cambridge Dog Meetup.
\item Word of mouth: One researcher from Nokia Bell Labs, who was not involved in conducting the study, participated in the study with his dog. He spoke to his neighbors about the study, who also signed up. Shortly after, this created a snowball effect (30\% of the dogs were recruited through word of mouth).
\end{itemize}

Through this variety of recruitment techniques, we were able to reach out to 31 dog owners in Cambridge, United Kingdom. Of these, 22 signed up for the study and received the package.
Ten of them withdrew during the study for various reasons: high temperatures, including a heatwave, made it difficult for the dog to wear the strap continuously (2/10); owners went away for summer holidays (2/10); the strap did not hold to the body of the dog due to its curvy shape (1/10); dogs were not in healthy condition (i.e., leg injury after a run, wound on the neck, bug bites) during the time of the experiment (3/10); and dogs did not appear to feel happy about wearing the strap (2/10). This left us with 12 healthy dogs that successfully completed the study. Note that these 12 dogs were all healthy (as reported by their owners), and every morning the first author checked with the owners whether any of the dogs displayed peculiar behavior (e.g., snagging on objects, appearing to feel uncomfortable) due to the wearable. No such incident was reported. However, we had an incident wherein a dog jumped into a body of water, destroying the device. This dog continued the study later with a replacement device. The recruitment took place during the summer period, with starting dates ranging from July to August. The study was approved by Nokia Bell Labs, and the study protocol stated that the collected data would be analyzed for research purposes only. In accordance with GDPR, no researcher involved in the study could track the identities of the dog owners after the end of the study, and all responses were analyzed after anonymization at an aggregated level.

\section{Methodology} \label{sec:statistical_analysis}

Using the extracted features and self-reported personality (Section~\ref{sec:dataset}), we set out to understand which features are associated with dog personality (\textbf{RQ\textsubscript{2}}), and to what extent these features are predictive of personality (\textbf{RQ\textsubscript{3}}). In so doing, we defined our dependent variables, conducted a series of statistical analyses, and developed machine learning classifiers to predict dog personality, which we describe next.

\subsection{Dependent Variables}

The ten personality traits of both DPQ and MCPQ-R (five each) served as our dependent variables for both the statistical analyses and the classification tasks. We binarized each personality trait (i.e., whether a dog scored high or low in a given trait---for example, high or low in fearfulness) by computing the median value of that trait across all dogs in our dataset and labeling each dog as being above or below it. The choice of binary traits was reinforced by previous literature on inferring human personality from mobile data~\cite{khwaja2019modeling, vinciarelli2014survey}.

\subsection{Statistical Analyses}

These analyses allowed us to identify statistically significant features that help discriminate between high and low personality scores for each trait (Table~\ref{tab:tstatistics}). We report the top five features for each personality trait with: \emph{a)} the highest $t$-statistic\footnote{$p$-values~\cite{Greenland2016} are marked with an asterisk (*) after Bonferroni correction~\cite{weisstein2004bonferroni}.}~\cite{Kim2015}, and \emph{b)} the highest Cohen's $d$\footnote{The 95\% confidence intervals~\cite{Lakens2013} overlapping with zero are marked with an asterisk (*).}~\cite{Rice2005}. As a rule of thumb, a Cohen's $d$ of 0.2 indicates a small effect size, 0.5 a medium effect size, and 0.8 a large effect size~\cite{kim2015statistical}. Results are presented in Section~\ref{subsec:statistical_analysis}.
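To make the procedure concrete, the following minimal Python sketch (using NumPy and SciPy, with hypothetical input values) illustrates the median split, a two-sample $t$-test, and the Cohen's $d$ computation described above; it is an illustration of the technique, not the study's actual analysis code, and details such as the choice of Welch's variant may differ.

\begin{verbatim}
import numpy as np
from scipy import stats

def cohens_d(a, b):
    # Pooled-standard-deviation Cohen's d for two independent groups.
    na, nb = len(a), len(b)
    pooled = np.sqrt(((na - 1) * np.var(a, ddof=1)
                      + (nb - 1) * np.var(b, ddof=1)) / (na + nb - 2))
    return (np.mean(a) - np.mean(b)) / pooled

# Hypothetical inputs: one feature value (e.g., % of the morning spent
# sedentary) and one trait score (e.g., Extraversion) per dog.
feature = np.array([0.31, 0.42, 0.28, 0.55, 0.47, 0.39,
                    0.33, 0.52, 0.44, 0.29, 0.50, 0.36])
trait = np.array([4.2, 5.1, 3.8, 5.6, 4.9, 4.4,
                  3.9, 5.3, 4.8, 3.7, 5.0, 4.1])

high = trait >= np.median(trait)  # median split: high vs. low scorers
t, p = stats.ttest_ind(feature[high], feature[~high], equal_var=False)
d = cohens_d(feature[high], feature[~high])
print(f"t = {t:.2f}, p = {p:.3f}, Cohen's d = {d:.2f}")
\end{verbatim}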
\subsection{Classification and Cross-Validation Methods}

In the next set of experiments, we used Python with the Keras~\cite{chollet2015keras} and scikit-learn~\cite{pedregosa2011scikit} frameworks. For dimensionality reduction, we used principal component analysis (PCA)~\cite{abdi2010principal} and retained the components explaining 95\% of the variance (leaving us with 3--5 features to train models, depending on the set of features used---we discuss these features in a subsequent section). All experiments were done with the leave-$k$-dogs-out strategy (in a similar manner to the leave-one/$k$-out setting, which is typical cross-validation for human subjects~\cite{meegahapola2021one}), in which data in the training and testing splits do not come from the same dog. Hence, this setup is subject-independent~\cite{meegahapola2022sensing}. We conducted all experiments with five iterations and $k=4$---that is, in each experiment, four dogs were left out for the testing set such that this set contained instances from both high and low scores of each personality trait. This allowed us to measure the mean and standard deviation of the models' performance across iterations. Given the small sample size, the choice of four dogs was a reasonable one. As the performance metric, we chose the area under the receiver operating characteristic curve (AUC), which is a holistic measure assessing how well a model performs for both classes (i.e., distinguishing high and low traits)~\cite{bradley1997use}. In total, we set up three experiments, and tested four types of models (S1) using combinations of features (S2 and S3).

\begin{itemize}[leftmargin=*,align=left]
\item S1---Model Types: Given the small dataset size (typical in animal studies~\cite{hirskyj2021forming, weiss2013wagtag, ladha2013dog}), we used four types of classifiers: \emph{(a)} Support Vector Machines (SVM)~\cite{noble2006support}, \emph{(b)} Light Gradient Boosting Machine (L-GBM)~\cite{ke2017lightgbm}, \emph{(c)} Naive Bayes (NB)~\cite{webb2010naive}, and \emph{(d)} Random Forest (RF)~\cite{cutler2011random}.
\item S2---Feature Types: As previously mentioned (Section~\ref{subsec:data_processing_pipeline}), we generated two main types of features from the inertial data: activity-level features ({ACT}) and statistical features ({STAT}). The former set of features is typically more interpretable but costly to obtain due to the processing power needed to generate the features, while the latter set of features is not computationally expensive but is less interpretable. In addition, we used demographic attributes ({DEM}) such as the dog's sex, weight, age, neuter status, and training rating as input to the model, because prior work suggested connections between these attributes and dog personality (Section~\ref{sec:related_work}). As the dog owner's personality has been previously linked to the dog's personality and well-being~\cite{kubinyi2009dog, hoglin2021long}, we also used the dog owner's information ({O-INFO}), including their sex and personality traits captured with the Ten-Item Personality Inventory~\cite{gosling2003very}.
\item S3---Time of Day: These features were computed for three time periods of the day (i.e., night, morning, and afternoon). In S1 and S2, we used all the features. As mentioned in Section~\ref{sec:statistical_analysis}, the same feature captured at different times of day may differ in statistical significance.
For example, sleeping time in the morning (M) could be informative for discriminating high and low levels of Extraversion, while sleeping time in the afternoon (A) might not be. Hence, in this set of experiments, we incorporated the time period of the day, and sought to understand whether developing separate models for different time periods yields better performance. For example, if a model trained only with morning features performs better, it would mean that we only need six hours of data from a dog to perform the inference.
\end{itemize}

\section{Dataset} \label{sec:dataset}

Having successfully deployed Patchkeeper in an in-the-wild study and collected more than 1300 hours of accelerometer and gyroscope data, we then applied a processing pipeline to that data.

\begin{figure*} \captionsetup{labelfont=normalfont} \centerline{\includegraphics[width=0.8\linewidth]{figures/validation_study.png}} \caption{Comparison of dog\#3's activity levels over 24 hours, as generated by Patchkeeper (top) and PitPat (bottom).} \label{fig:validation_study} \end{figure*}

\begin{figure*} \captionsetup{labelfont=normalfont} \centerline{\includegraphics[width=0.8\linewidth]{figures/activity_level_plot.png}} \caption{Example of the four activity levels of dog\#1 (i.e., sleep, sedentary, light, and moderate-vigorous) generated from Patchkeeper.} \label{fig:activity_level_diagram} \end{figure*}

\subsection{Data Processing Pipeline}\label{subsec:data_processing_pipeline}

\subsubsection{{Activity-Level Features.}} \label{subsec:activity-level}

This set of features describes dog behaviors derived from accelerometer data, and is interpretable. To extract these features, we used a state-of-the-art data processing pipeline to convert the triaxial data to acceleration~\cite{doherty2017large, willetts2018statistical}. The processing included four steps: \emph{a)} ten-second samples from static sections (no movement) of accelerometer data were obtained to optimize the gain and offset of each of the X, Y, and Z axes to fit a unit gravity sphere~\cite{willetts2018statistical}; \emph{b)} data were re-sampled at 100 Hz using linear interpolation, and acceleration was calculated as the Euclidean norm of the X, Y, and Z axis values; \emph{c)} a fourth-order Butterworth filter was used to remove noise; and \emph{d)} one gravity unit (1G) was subtracted from the data, and the remaining negative values were truncated at zero (a sketch of these steps appears at the end of this subsection). Next, using non-overlapping time windows of 60 seconds, 126 time- and frequency-domain features such as mean, standard deviation, median, minimum, maximum, 25\textsuperscript{th} and 75\textsuperscript{th} percentiles of vector magnitude, kurtosis, and skewness were generated~\cite{willetts2018statistical, doherty2018gwas, walmsley2021reallocation}. Using these features, we applied a pre-trained model based on Hidden Markov Models and Balanced Random Forests~\cite{walmsley2022reallocation} to classify acceleration into four different activity levels: \emph{sleep}, \emph{sedentary}, \emph{light}, and \emph{moderate-vigorous}. These activity levels are in line with prior studies on dog activity levels~\cite{morrison2013associations, ortmeyer2018combining, weiss2013wagtag}. As the data processing pipeline was initially developed for wrist-worn wearables used by humans, we conducted a validation step to ensure transferability to animals. To do so, we used a consumer-grade dog activity monitor called PitPat\footnote{\url{https://www.pitpat.com/}} on two dogs (dog\#3 and dog\#8) for three days.
These two dogs also took part in the larger in-the-wild study. In total, we collected over 120 hours of sensor data from both devices, and a total of 83 self-reports (e.g., the dog is sleeping, running) from dog owners. A comparison of our data processing pipeline's output and PitPat's output over 24 hours is shown in Figure~\ref{fig:validation_study}. In terms of ground truth obtained from PitPat (in total, we analyzed 100 data points), our model performed with an accuracy of: 98\% in detecting sleeping (sections where PitPat showed no activity); 92\% in detecting high-intensity moderate-vigorous activities (sections where PitPat showed a peak in activity levels); and 96\% in detecting sedentary or light activity levels (sections where PitPat showed a medium level of activity). In terms of self-reported ground truth (including pictures), our model was 91\% accurate in determining the 83 activity levels provided by dog owners. This answered our \textbf{RQ\textsubscript{1}}, allowing us to conclude that activity levels can be extracted with accuracies over 90\%.

Having established the reliability of our data processing pipeline, we first computed how long a particular dog had been engaging in activities at different levels (i.e., percentage of time spent in sleep, sedentary, light, and moderate-vigorous activity levels), resulting in four features. We then used the acceleration time series to extract statistical features such as its minimum, maximum, mean, median, and standard deviation, resulting in five features. For simplicity, we call these nine features \emph{activity-level} features throughout the paper.

\subsubsection{{Statistical Features.}}

This set of features was derived from complex associations in the time series of both the accelerometer (x, y, z) and the gyroscope (x, y, z) and, as such, is less interpretable than the activity-level features but computationally less expensive to obtain. To extract these statistical features, we used the tsfel library~\cite{barandas2020tsfel}. The library allowed us to extract 56 features (e.g., min, max, std, mean, median, kurtosis, skewness, absolute energy, zero crossing rate, histogram, and empirical cumulative distribution function) that describe temporal and statistical aspects of the time series nature of the data\footnote{\url{https://tsfel.readthedocs.io/en/latest/descriptions/feature_list.html}}.

\subsubsection{{Unit of Analysis.}}

A typical way of capturing temporal dynamics in HCI and UbiComp studies is to use time windows at different times of day when calculating features~\cite{obuchi2020predicting, wang2020predicting, wang2017predicting, nepal2020detecting, wang2022first, constantinides2018personalized}. A large time window of eight hours, dividing the day into three periods, has been previously used in dog studies, and it has been found, for example, that studying night sleep separately from day sleep provided more meaningful insights about sleeping patterns~\cite{schork2022cyclic, bodizs2020sleep} than studying sleep across the whole day. Drawing from this prior line of work, we resorted to three time windows for our analysis: \emph{a)} night (N): from 12am to 5:59am; \emph{b)} morning (M): from 6am to 11:59am; and \emph{c)} afternoon (A): from 12pm to 5:59pm. For example, at night, a dog could be showing activity levels of 60\% sleeping, 20\% sedentary, 15\% light, and 5\% moderate-vigorous. Hence, for each time window, we extracted a total of 65 features, including the nine activity-level and the 56 statistical features.
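As referenced in Section~\ref{subsec:activity-level}, the following minimal Python sketch illustrates steps \emph{b)}--\emph{d)} of the acceleration pipeline on raw triaxial samples. The 20 Hz low-pass cutoff is our own assumption (the published pipeline's exact filter parameters may differ), and the calibration step \emph{a)} and the pre-trained HMM/Balanced-Random-Forest classifier are omitted.

\begin{verbatim}
import numpy as np
from scipy.signal import butter, filtfilt

def movement_signal(t, xyz, fs=100, cutoff=20):
    """Raw triaxial accelerometer samples (in g) -> movement signal.

    t   : sample timestamps in seconds (possibly irregular)
    xyz : array of shape (n, 3) with calibrated X, Y, Z values
    """
    # b) re-sample onto a regular 100 Hz grid via linear interpolation
    t_new = np.arange(t[0], t[-1], 1.0 / fs)
    res = np.column_stack([np.interp(t_new, t, xyz[:, i])
                           for i in range(3)])
    # ... and take the Euclidean norm of the three axes
    vm = np.linalg.norm(res, axis=1)

    # c) fourth-order low-pass Butterworth filter to remove noise
    b, a = butter(4, cutoff / (fs / 2), btype="low")
    vm = filtfilt(b, a, vm)

    # d) subtract 1 g (gravity) and truncate negative values at zero
    return np.maximum(vm - 1.0, 0.0)
\end{verbatim}

The 60-second non-overlapping windows and the 56 statistical features can then be computed over this signal; tsfel provides ready-made feature configurations for that purpose~\cite{barandas2020tsfel}.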
\begin{figure*}[t] \begin{center} \begin{subfigure}[t]{0.45\textwidth} \centering \includegraphics[width=\textwidth]{figures/dpq_distribution_v3.png} \caption{\centering DPQ} \label{fig:dpq} \end{subfigure} \hfill \begin{subfigure}[t]{0.45\textwidth} \centering \includegraphics[width=\textwidth]{figures/mcpq_distribution_v4.png} \caption{\centering MCPQ-R} \label{fig:mcpqr} \end{subfigure} \caption{Average personality trait scores for the five personality factors measured through DPQ (on a Likert scale of 1--7) and MCPQ-R (on a Likert scale of 1--6), respectively. Scores for MCPQ-R were higher (except Neuroticism), whereas scores for DPQ were more spread out.} \label{fig:personality_scores} \end{center} \vspace{-0.2 in} \end{figure*}

\subsection{Descriptive Statistics of Personality Traits and Activity Levels}

The distributions of personality traits are shown in Figure~\ref{fig:personality_scores}, and a summary of statistics of the recruited dogs is in Table~\ref{tab:participants}. Recruited dogs were over one year old, with a mean age of three years and ten months; 65\% of them were female; and all were small-to-medium sized dogs. In terms of dog personalities, the DPQ factors \emph{Fearfulness}, \emph{Aggression Towards People}, and \emph{Aggression Towards Animals} had average scores below 3.2, whereas \emph{Excitability} and \emph{Responsiveness to Training} had high average scores above 4.8. For MCPQ-R, \emph{Neuroticism} had an average score of 2.67, while the other four factors had average scores at or above 4.38. Overall, the mean scores across the personality dimensions were comparable to previous studies~\cite{ley2009refinement, carrier2013exploring}. Further, Figure~\ref{fig:activity_time_periods} shows the average activity level across all dogs as a percentage of total time ($y$-axis) for different time periods of the day ($x$-axis). The night was predominantly spent sleeping (63.3\%), whereas the morning was predominantly spent in other activity levels such as sedentary (36.5\%), light (13.5\%), or moderate-vigorous (9.3\%).

\begin{figure*} \captionsetup{labelfont=normalfont} \centerline{\includegraphics[width=0.8\linewidth]{figures/percentage_of_time_2x.png}} \caption{Percentage of time spent by all dogs, on average, in each of the four activity levels (sleep, sedentary, light, moderate-vigorous) at three different times of day (night, morning, afternoon). Dogs slept more at night, engaged in sedentary activity in the morning and the afternoon, and engaged in physical activity (light or moderate-vigorous) during the morning.} \label{fig:activity_time_periods} \end{figure*}
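To make the classification setup of Section~\ref{sec:statistical_analysis} concrete, the following minimal Python sketch (using scikit-learn, with randomly generated stand-in data) illustrates the subject-independent evaluation: PCA retaining 95\% of the variance, four held-out dogs per fold, and AUC as the metric. StratifiedGroupKFold is one way to guarantee that the held-out dogs cover both high and low trait scores; our actual implementation, models, and hyperparameters may differ.

\begin{verbatim}
import numpy as np
from sklearn.decomposition import PCA
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import StratifiedGroupKFold
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)

# Stand-in data: 65 features per dog-day, 12 dogs x 7 days each,
# binary label = high/low on one personality trait (median split).
X = rng.normal(size=(84, 65))
dogs = np.repeat(np.arange(12), 7)     # group id: one per dog
y = (dogs % 2).astype(int)             # toy labels, constant per dog

# 3 folds over 12 groups -> 4 dogs held out per fold, label-stratified.
cv = StratifiedGroupKFold(n_splits=3, shuffle=True, random_state=0)
aucs = []
for train, test in cv.split(X, y, groups=dogs):
    model = make_pipeline(
        StandardScaler(),
        PCA(n_components=0.95),        # keep 95% of the variance
        RandomForestClassifier(n_estimators=100, random_state=0))
    model.fit(X[train], y[train])
    aucs.append(roc_auc_score(y[test],
                              model.predict_proba(X[test])[:, 1]))

print(f"AUC = {np.mean(aucs):.2f} +/- {np.std(aucs):.2f}")
\end{verbatim}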
\section{Introduction}

Shockley and Queisser used the detailed balance (DB) formalism to show that the efficiency of a solar cell made from a semiconductor with a single band gap can never exceed 31\% under unconcentrated black-body sunlight \citep{Shockley1961}. Intermediate band (IB) materials -- semiconductors with allowed electronic states deep in the gap, as shown in Figure \ref{fig:bands}a -- enable solar cells to break this limit by absorbing sub-gap photons without harming the voltage of the cell \citep{Luque1997}. In the radiative limit, the maximum efficiency of an intermediate band solar cell (IBSC) at one sun concentration is 47\%, significantly exceeding the Shockley-Queisser limit \citep{Luque1997}. Several intermediate band devices have been demonstrated, but high efficiencies have not been realized due to nonradiative recombination \citep{Okada2015}.

The quantum ratchet (QR) solar cell has been proposed as an improved implementation of the IBSC \citep{Yoshida2012}. The intermediate band QR and conduction band QR implementations are shown in Figure \ref{fig:bands}b-c, respectively. The original idea of an IBQR solar cell is to increase the carrier lifetime of the IB. In the case of the IBQR, carriers relax from the IB to a ratchet band (RB), which can suppress recombination to the valence band (VB). The ratchet also enables improved voltage matching between the subgap transitions and the band-to-band transitions \citep{Pusch2016,Pusch2019}. The CBQR has the ratchet step above the conduction band edge, and an analogous valence band QR (not shown) has the ratchet step below the valence band edge. All three QR designs realize the voltage-matching improvements and can achieve detailed balance maximum efficiencies of 48.5\% at one sun, greater than that of IBSCs. There have, however, been few experimental realizations of QRs, and there are few suggestions for material systems \citep{Vaquero-Stainer2018}.

In both IBSC and QRSC devices, the IB and QR regions are added to standard \emph{pn} junctions in hopes of increasing the current in the device, but if lifetimes are sufficiently short in the IB region, the IBSC or QRSC may even have lower current than the reference \emph{pn} junction. Both IBSCs and QRSCs have an \emph{n-}IB-\emph{p} architecture, implying that holes created at the front of the cell must travel through the IB region to be collected. If hole lifetimes in the IB or QR regions are short, the nonradiative losses in the IB region will exceed the extra current generation, making efficiencies less than for the \emph{pn-}diode solar cell alone \citep{Krich2012,Wilkins2019}.

The electronically-coupled upconverter (ECUC) is a less-studied architecture, which provides the potential to realize the same efficiency as a QRSC while being less sensitive to nonradiative processes \citep{MacDonald2008,Harder2005}. As shown in Figure \ref{fig:bands}d, the ECUC has an \emph{n-p-}IB structure, with the IB region having a larger band gap than the standard semiconductor, unlike in the IBSC and QRSC, where the large band gap $E_{CV}$ can be uniform throughout the device. As with the IBSC and QRSC, the ECUC allows absorption of subgap photons, with the resulting carriers injected into the standard semiconductor. The minority carriers produced by absorption in the \emph{pn} junction never transit the IB region, so the current added from IB absorption can be obtained strictly as an addition, and low-quality upconverter material cannot harm the cell as can occur in the IBSC/QRSC.
However, the ECUC requires more complicated 2D contacts to avoid extracting current from the IB, with one possibility shown in Figure \ref{fig:Schematic_ECUC}. The detailed balance limiting efficiencies for the ECUC have not previously been calculated. In this work, we demonstrate that the QRSC and ECUC are mathematically equivalent in the DB limit, yet the ECUC may be a more practical implementation in actual devices. We show that, as with the QRSC, the ECUC configuration has the potential to exceed IBSC efficiencies at 1 sun. We perform a global optimization showing the maximum efficiencies possible as functions of $E_{g1}$ and $E_{g2}$, and we also consider a case study of an ECUC based on crystalline silicon (c-Si), the most widely used and studied PV material. We show that there is potential to improve on c-Si solar cells using an ECUC.

\begin{figure} \begin{centering} \includegraphics[width=1\columnwidth]{band_diagrams} \par\end{centering} \caption{Band diagrams of (a) IBSC, (b) IBQR, (c) CBQR, and (d) ECUC. The red, green, and blue processes for the ratchets and ECUC are equivalent in detailed balance. \label{fig:bands}} \end{figure}

\begin{figure} \centering{}\includegraphics[width=1\columnwidth]{ecuc_v2}\caption{Schematic of a potential device architecture for the ECUC. \label{fig:Schematic_ECUC}} \end{figure}

\section{Detailed Balance Model}

We use the well-known detailed balance formalism to model the ECUC and QRSC. We first show that, in detailed balance, the ECUC and QRSC are mathematically equivalent; we then use this method to compute the limiting efficiencies for the ECUC. Detailed balance calculations assume that all recombination is radiative, carriers have infinite mobility, and the cell is thick enough to ensure full absorption of photons for each allowed transition. We further assume perfect photon selectivity, with each photon absorbed only by the highest-energy transition energetically permitted, to minimize thermalization losses; this condition is called non-overlapping absorptions and is not required for detailed balance \citep{Luque1997,Krishna2016,Cuadra2004}. Since the carriers have infinite mobility,
\begin{equation} \mu_{CV}=qV_{\text{ext}},\label{eq:mucv} \end{equation}
where $q$ is the elementary charge, $\mu_{CV}$ is the quasi-Fermi level difference between the electrons and holes, and $V_{\text{ext}}$ is the external voltage. We take $q=1$. Another key assumption is that one electron-hole pair is generated/lost for each photon absorbed/emitted. Since all recombination events are assumed to be radiative, this assumption allows the current in the device to be written in terms of the photon fluxes $\phi$ in and out of the device. These fluxes obey the modified Planck spectrum \citep{Wurfel1982}
\begin{align} \phi(E_{\min,AB},&E_{\max,AB},T,\mu_{AB})\\ &=\frac{2F}{h^{3}c^{2}}\int_{E_{\min,AB}}^{E_{\max,AB}}\frac{E^{2}\,dE}{e^{\left(E-\mu_{AB}\right)/kT}-1}, \nonumber \end{align}
where the process between bands $A$ and $B$ absorbs photons with energies between $E_{\min,AB}$ and $E_{\max,AB}$, $T$ is the temperature, $\mu_{AB}$ is the chemical potential difference between carriers in bands $A$ and $B$, $h$ is Planck's constant, $c$ is the speed of light, $k$ is Boltzmann's constant, and $F$ is the geometrical factor denoting the fraction of light incident on the cell.
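For concreteness, this flux integral is straightforward to evaluate numerically. The following minimal Python sketch (our own illustration, not code from this work) computes $\phi$ with energies in eV and fluxes in photons per m$^2$ per second:

\begin{verbatim}
import numpy as np
from scipy.integrate import quad

# Constants with energies expressed in eV
h = 4.135667e-15   # Planck constant (eV s)
c = 2.998e8        # speed of light (m/s)
k = 8.617333e-5    # Boltzmann constant (eV/K)

def photon_flux(e_min, e_max, T, mu=0.0, F=np.pi):
    """Modified Planck photon flux between e_min and e_max (eV).

    T  : temperature of the emitter (K)
    mu : chemical potential of the transition (eV); 0 for the sun
    F  : geometrical factor (pi for emission from the cell)
    """
    integrand = lambda E: E**2 / (np.exp((E - mu) / (k * T)) - 1.0)
    val, _ = quad(integrand, e_min, e_max)
    return 2.0 * F / (h**3 * c**2) * val
\end{verbatim}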
For the sun,
\begin{equation} F_{\text{sun}}=X\cdot\pi\left(\frac{\text{radius of sun}}{\text{distance between earth and sun}}\right)^{2}, \end{equation}
where $X$ is the solar concentration factor, and for emission from the cell,
\begin{equation} F_{\text{cell}}=\pi. \end{equation}
In detailed balance, we have two photon sources: the sun and the cell. We denote the photons absorbed from the sun in transitions between bands $A,B$ by
\begin{equation} \dot{N}_{AB}^{\text{sun}}=\phi\left(E_{\min,AB},E_{\max,AB},T_{s},0\right), \end{equation}
and the photons emitted by the cell in transitions between bands $A,B$ by
\begin{equation} \dot{N}_{AB}^{\text{cell}}=\phi\left(E_{\min,AB},E_{\max,AB},T_{a},\mu_{AB}\right), \end{equation}
where $T_{s}$ is the solar radiation temperature, which we take to be 6000~K, and $T_{a}$ is the ambient temperature, which we take to be 300~K. The current extracted from band $A$ is the difference between absorbed and emitted photons involving band $A$,
\begin{equation} J_{A}=\sum_{B}\pm\left(\dot{N}_{AB}^{\text{sun}}-\dot{N}_{AB}^{\text{cell}}(\mu_{AB})\right), \end{equation}
with the sign depending on whether the $AB$ absorption process creates (+) or destroys (-) carriers in band $A$. For all of the devices, the total current is the net current extracted from either the CB or the VB, which are equal. For an ECUC, the total current is
\begin{equation} J_{C}^{\text{ECUC}}=\dot{N}_{CV}^{\text{sun}}-\dot{N}_{CV}^{\text{cell}}(\mu_{CV})+\dot{N}_{CI}^{\text{sun}}-\dot{N}_{CI}^{\text{cell}}(\mu_{CI}).\label{eq:jc} \end{equation}
We also assume that no current is extracted from the intermediate band, so
\begin{equation} J_{I}^{\text{ECUC}}=0=\dot{N}_{IV}^{\text{sun}}-\dot{N}_{IV}^{\text{cell}}(\mu_{IV})-\dot{N}_{CI}^{\text{sun}}+\dot{N}_{CI}^{\text{cell}}(\mu_{CI}).\label{eq:ji} \end{equation}
Note that the CI processes in Eq.~\ref{eq:ji} enter with a negative sign, as optical absorption from IB to CB removes an IB carrier. With Eqs.~\ref{eq:mucv}, \ref{eq:jc}, and \ref{eq:ji}, and the fact that
\begin{equation} \mu_{CV}=\mu_{CI}+\mu_{IV}, \end{equation}
we can solve for the chemical potentials and compute $J(V)$. These equations are of the same form as in the original IBSC calculation \citep{Luque1997}, but the ECUC has different band gaps in the different regions. Note that the $\mu_{CV}$ terms use $E_{g1}$ as their lower energy threshold.

For an IBQR, we assume the carriers in the IB and RB share a common quasi-Fermi level, so $\mu_{CI}=\mu_{CR}$ \citep{Yoshida2012}. Then, the net current from the CB is
\begin{equation} J_{C}^{\text{IBQR}}=\dot{N}_{CV}^{\text{sun}}-\dot{N}_{CV}^{\text{cell}}(\mu_{CV})+\dot{N}_{CR}^{\text{sun}}-\dot{N}_{CR}^{\text{cell}}(\mu_{CR}),\label{eq:jcqr} \end{equation}
and the net current in the IB is
\begin{equation} J_{I}^{\text{IBQR}}=0=\dot{N}_{IV}^{\text{sun}}-\dot{N}_{IV}^{\text{cell}}(\mu_{CR})-\dot{N}_{CR}^{\text{sun}}+\dot{N}_{CR}^{\text{cell}}(\mu_{CR}).\label{eq:jiqr} \end{equation}
These equations for the ECUC and IBQR are equivalent. As shown in Figure \ref{fig:bands}d, $E_{CI}+E_{IV}=E_{g2}$ for the ECUC. If we choose $E_{CV}$ for the IBQR to equal $E_{g1}$ for the ECUC, then the first two terms in each of Eqs.~\ref{eq:jcqr} and \ref{eq:jiqr} are equal to the corresponding terms in Eqs.~\ref{eq:jc} and \ref{eq:ji}. Further, if $E_{CR}$ for the IBQR equals $E_{CI}$ for the ECUC, and $E_{IV}+E_{CR}$ for the IBQR equals $E_{g2}$ for the ECUC, then the last two terms in each of those equations become equivalent. Therefore, the ECUC equations are equal to the IBQR equations.
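As an illustration of solving these equations numerically, the sketch below (our own, building on the \texttt{photon\_flux} helper and constants defined in the previous sketch) finds $\mu_{CI}$ from Eq.~\ref{eq:ji} by root finding and then evaluates Eq.~\ref{eq:jc}. The absorption windows assume non-overlapping absorptions with $E_I > E_{g2}/2$, and the bracketing tolerances, voltage grid, and example band gaps are our own choices.

\begin{verbatim}
import numpy as np
from scipy.integrate import quad
from scipy.optimize import brentq

Ts, Ta = 6000.0, 300.0                       # sun / ambient (K)
F_sun = np.pi * (6.957e8 / 1.496e11) ** 2    # X = 1
F_cell = np.pi
E_MAX = 10.0       # effectively infinite at these temperatures (eV)

def ecuc_current(V, Eg1, Eg2, Ei):
    """Net CB particle current of an ECUC at external voltage V (q = 1).

    Non-overlapping absorption windows (for Ei > Eg2/2):
    CI: [Eg2 - Ei, Ei), IV: [Ei, Eg1), CV: [Eg1, infinity).
    """
    e_ci, e_iv = Eg2 - Ei, Ei
    mu_cv = V

    def j_ib(mu_ci):
        # J_I = 0 condition: IV absorption balances CI pumping/emission
        mu_iv = mu_cv - mu_ci
        return (photon_flux(e_iv, Eg1, Ts, 0, F_sun)
                - photon_flux(e_iv, Eg1, Ta, mu_iv, F_cell)
                - photon_flux(e_ci, e_iv, Ts, 0, F_sun)
                + photon_flux(e_ci, e_iv, Ta, mu_ci, F_cell))

    # bracket keeps cell emission finite: mu_iv < e_iv and mu_ci < e_ci
    mu_ci = brentq(j_ib, mu_cv - e_iv + 1e-3, e_ci - 1e-3)
    return (photon_flux(Eg1, E_MAX, Ts, 0, F_sun)
            - photon_flux(Eg1, E_MAX, Ta, mu_cv, F_cell)
            + photon_flux(e_ci, e_iv, Ts, 0, F_sun)
            - photon_flux(e_ci, e_iv, Ta, mu_ci, F_cell))

# Usage: maximum power relative to the incident blackbody power,
# with gaps loosely inspired by the c-Si case study below.
P_in = 2 * F_sun / (h**3 * c**2) * quad(
    lambda E: E**3 / (np.exp(E / (k * Ts)) - 1), 1e-3, E_MAX)[0]
volts = np.linspace(0.4, 1.05, 66)
P_out = max(V * ecuc_current(V, 1.12, 1.47, 0.86) for V in volts)
print(f"efficiency ~ {P_out / P_in:.3f}")
\end{verbatim}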
Similarly, if $E_{IV}+E_{RI}=E_{g2}$ for the CBQR, or $E_{IR}+E_{CI}=E_{g2}$ for the valence band QR (VBQR), then the equations also become equivalent to those of the ECUC. Since the equations for the QR and ECUC are identical in detailed balance, the limiting efficiencies are also the same.

Figure \ref{fig:Maximum-ECUC-efficiency} shows the maximum ECUC efficiencies at $X=1$ and at the maximum concentration $X=46200$, at which $F_{\text{sun}}=F_{\text{cell}}=\pi$. The peak efficiencies and band gaps for these cases are shown in Table \ref{tab:global1sun}. The diagonal border at $E_{g1}=E_{g2}$ represents standard IB solar cells, and at one sun concentration (left), the detailed balance efficiency is highest at $E_{g1}\ne E_{g2}$. This result indicates that the ECUC has a higher limiting efficiency than the IBSC, similar to the QR \citep{Yoshida2012}, spectrally-selective reflectors \citep{Strandberg2010}, and overlapping absorptions \citep{Krishna2016}. Therefore, the ECUC can exceed both the IBSC limit and the Shockley-Queisser limit. Figure \ref{fig:Maximum-ECUC-efficiency} shows that there is a wide range of band gaps that can potentially achieve this goal. At full concentration, the highest efficiency lies on the $E_{g1}=E_{g2}$ line, so there is no gain from the ECUC compared to a standard IBSC architecture. Both the ECUC and the IBSC significantly exceed the single-junction efficiency limit, which has motivated interest in combining IBSCs with concentrator systems \citep{Luque2010,Sogabe2014}.

\begin{figure} \begin{centering} \includegraphics[width=1\columnwidth]{effs} \par\end{centering} \caption{Maximum ECUC efficiency in detailed balance, with optimized $E_{I}$, at 1 sun concentration (left) and at full concentration ($X=46200$) (right). The Shockley-Queisser limits of 31\% ($X=1$) and 40.7\% ($X=46200$) are shown with the white contours. Note that an ECUC can only be beneficial if $E_{g1}\protect\leq E_{g2}\protect\leq2E_{g1}$, with the second inequality from the requirement that both sub-gap transitions have energy thresholds below $E_{g1}$. \label{fig:Maximum-ECUC-efficiency}} \end{figure}

\begin{table} \caption{\label{tab:global1sun}Maximum efficiencies with a blackbody spectrum at 1 sun and full concentration. Note that there is a symmetry for $E_{I}$ mirrored below and above $E_{g2}/2$; we take the upper values for $E_{I}$.} \centering{}%
\begin{tabular}{cccccc}
\hline
System & $X$ & $E_{g1}$ (eV) & $E_{g2}$ (eV) & $E_{I}$ (eV) & Efficiency\tabularnewline
\hline
Single-junction & 1 & 1.31 & - & - & 31.0\%\tabularnewline
IB solar cell & 1 & 2.42 & - & 1.49 & 46.8\%\tabularnewline
ECUC solar cell & 1 & 2.08 & 2.36 & 1.42 & 48.5\%\tabularnewline
Single-junction & 46200 & 1.11 & - & - & 40.7\%\tabularnewline
IB solar cell & 46200 & 1.95 & - & 1.24 & 63.2\%\tabularnewline
ECUC solar cell & 46200 & 1.95 & 1.95 & 1.24 & 63.2\%\tabularnewline
\hline
\end{tabular}
\end{table}

\section{Case study: ECUC using c-Si}

In this section, we perform a case study of a potential ECUC using silicon as the front \emph{pn}-diode material, since c-Si is an extremely well-understood material. Adding only an intermediate band to an \emph{n-}IB-\emph{p} c-Si solar cell actually harms the efficiency of the cell, even in the detailed balance limit \citep{Krishna2016}. That failure occurs because of silicon's small band gap and the assumption of non-overlapping absorptions. Figure \ref{fig:Maximum-ECUC-efficiency}, however, shows that even with $E_{g1}$ equal to the band gap of c-Si, 1.12 eV, the ECUC allows considerable improvement over the Shockley-Queisser limit.
First, we study the optimal range of $E_{g2}$ for an ECUC on silicon. Second, we consider an ECUC made of hydrogenated amorphous silicon (a-Si), a higher-band-gap material frequently used for heterojunctions with c-Si, and we search for the best-suited $E_{I}$ for an a-Si upconverter on c-Si.

Figure \ref{fig:Maximum-ECUC-efficiency-1} shows the maximum ECUC efficiency with $E_{g1}=1.12$~eV as a function of $E_{g2}$ and $E_{I}$. The peak efficiencies and band gaps are shown in Table \ref{tab:Maximum-efficiencies-for}. The optimal range of $E_{g2}$ lies approximately between 1.3 and 1.6~eV, with the maximum efficiency at $E_{g2}=1.47\u{eV}$ and $E_{I}$ near $0.9\u{eV}$. As $E_{g2}$ approaches $E_{g1}$, we recover the IBSC efficiency, which is lower than the Shockley-Queisser limit for a device with $E_{g}=1.12$~eV. Note that when $E_{g2}>1.3\u{eV}$, the ECUC improves efficiencies for all values of $E_{I}$. For a large range of band gaps, it is possible to significantly exceed the SQ limit; therefore, there is potential for high-efficiency silicon devices if an ECUC is added.

\begin{figure} \begin{centering} \includegraphics[width=1\columnwidth]{1_eff_eg2_ei} \par\end{centering} \caption{Maximum ECUC efficiency in detailed balance, with $E_{g1}=1.12\protect\u{eV}$ as a function of $E_{g2}$ and $E_{I}$ at 1 sun concentration. The detailed balance efficiency limit for $E_{g}=1.12$~eV is shown with the white contour.\label{fig:Maximum-ECUC-efficiency-1} Note that the data cutoff at the diagonal (black) occurs because the ECUC requires $E_{I}>E_{g2}-E_{g1}$ and $E_{I}<E_{g1}$.} \end{figure}

\begin{table} \caption{\label{tab:Maximum-efficiencies-for}Maximum detailed balance efficiencies for c-Si ($E_{g}=1.12\protect\u{eV}$) at 1 sun concentration. Note that there is a symmetry for $E_{I}$ mirrored below and above $E_{g2}/2$; we take the upper values for $E_{I}$.} \centering{}%
\begin{tabular}{ccccc}
\hline
System & $E_{g1}$ (eV) & $E_{g2}$ (eV) & $E_{I}$ (eV) & Efficiency\tabularnewline
\hline
Single-junction & 1.12 & - & - & 30.2\%\tabularnewline
IB solar cell & 1.12 & - & 0.85 & 29.7\%\tabularnewline
ECUC solar cell & 1.12 & 1.47 & 0.86 & 37.4\%\tabularnewline
\hline
\end{tabular}
\end{table}

A promising upconverter material is amorphous silicon, since its band gap of $E_{g2}=1.55\u{eV}$ falls in the high-efficiency range \citep{Carlson1976}, and a-Si on c-Si devices are routinely made \citep{Mishima2011}. Figure \ref{fig:Maximum-ECUC-efficiency-2} shows the DB efficiency of a device using c-Si and an a-Si ECUC as a function of $E_{I}$. All values of $E_{I}$ between $E_{g2}-E_{g1}=0.43$~eV and $E_{g1}=1.12$~eV give improved efficiencies over the bare c-Si cell. Doping of a-Si is more complicated than in crystalline semiconductors, as dopants can induce local coordination changes and dangling bonds, and the structures vary depending on the deposition method \citep{Carlson1990}. The resulting $E_{I}$ for a dopant in a-Si can thus vary considerably depending on the a-Si deposition and the dopant precursor and pressure \citep{Carlson1990}. This variation could allow tuning of ECUC energy levels, which is not generally possible in crystalline semiconductor:dopant materials. To date, devices based on doped a-Si have generally desired shallow dopants, as in c-Si, so the most-studied dopants are those that produce relatively shallow states in the band gap, to give high conductivities. For an ECUC, optically active midgap states are desirable, which is the opposite of the standard case.
\begin{figure} \begin{centering} \includegraphics[width=1\columnwidth]{cSi_aSi1_ei_eff} \par\end{centering} \caption{Maximum ECUC efficiency vs.~$E_{I}$ for c-Si ($E_{g1}=1.12\protect\u{eV}$) and an a-Si upconverter ($E_{g2}=1.55\protect\u{eV}$). The black dashed line shows the single-junction detailed balance efficiency with $E_{g}=1.12$~eV. The potential dopants are labelled at their respective $E_{I}$. Doping with P is shown with the green dot (optical \citep{Street1984}) and a range of values with the yellow line (electrical activation \citep{Matsuda1980}). Thermal activation energies for B are shown with stars, with red corresponding to doping with BF$_{3}$ \citep{Mahan1983} and purple to B$_{2}$H$_{6}$ \citep{Street1984}. The blue line shows the range of $E_{I}$ from thermal activation for alkali dopants, including Na, K, Rb, and Cs \citep{LeComber1980}. \label{fig:Maximum-ECUC-efficiency-2}} \end{figure} Figure \ref{fig:Maximum-ECUC-efficiency-2} also shows estimated energetic positions for some common dopants in a-Si. The most studied dopants include boron and phosphorus as acceptors and donors, respectively, as in c-Si. Even when a-Si has tetrahedrally coordinated silicon, the bond angle distortions tend to make dopant energy levels lie deeper in the gap than in c-Si \citep{Nichols1987}. As an acceptor, B doping using B$_{2}$H$_{6}$ or BF$_{3}$ gives an electrical activation energy of $E_{I}=0.88-0.91\u{eV}$, with a higher concentration of active dopant states formed from the BF$_{3}$ precursor \citep{Mahan1983,Street1984}. As a donor, P doping using PH$_{3}$ gives optical absorption in a band around $E_{g2}-E_{I}=0.81\u{eV}$ \citep{Street1984}. As can be seen in Fig.~\ref{fig:Maximum-ECUC-efficiency-2}, this energy level appears close to the middle of the band gap, which allows only minimal improvement in these detailed balance calculations. That dip in efficiency for $E_{I}\approx E_{g2}/2$ is an artifact of the non-overlapping absorption condition, as one of the subgap transitions becomes artificially depleted of photons when $E_{I}$ is close to mid-gap. Removing the non-overlapping absorption requirement, which is only a simplification for theoretical analysis, reduces the penalty for IBs at mid-gap \citep{Cuadra2004,Krishna2016}, so this mid-gap $E_{I}$ can still be beneficial for the ECUC. Doping with P has also been shown to produce thermal activation energies ranging from 0.74~eV to 0.27~eV, depending on concentration of the precursor, with higher activation energies at lower doping concentrations \citep{Matsuda1980}. Alkali atoms as donors, including Na, K, Rb, and Cs, have been shown to produce thermal activation energies that are similar to each other, ranging from $0.80\u{eV}$ to $0.20\u{eV}$, again with higher activation energies at lower dopant concentrations \citep{LeComber1980}. We interpret these activation energies to be $E_{g2}-E_{I}$. These values overlap with the optimal efficiency range for a c-Si/a-Si ECUC. A working ECUC must be optically thick for the subgap photons, which requires either a high dopant concentration or a thick absorber layer. If high dopant concentration is required, the alkali dopant energy levels may be less than $E_{g2}-E_{g1}$ and thus outside of the useful energy range. The combination of c-Si and a-Si has great potential to make a working ECUC that can improve the efficiency of c-Si solar cells.
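As a small worked example of this interpretation (ours, using the level positions and the useful window quoted above, with $E_{g1}=1.12\u{eV}$ for c-Si and $E_{g2}=1.55\u{eV}$ for a-Si), the following sketch converts reported donor activation energies into intermediate-level positions $E_I = E_{g2} - E_{\mathrm{act}}$ and checks them against the window $E_{g2}-E_{g1} < E_I < E_{g1}$; the representative activation energies are taken from the works cited above, not a new dataset.
\begin{verbatim}
# Interpreting a-Si donor activation energies as ECUC levels:
# E_I = E_g2 - E_act. Recall the mirror symmetry of E_I about
# E_g2 / 2 noted in the table captions.
E_G1, E_G2 = 1.12, 1.55   # c-Si front cell, a-Si upconverter (eV)

donors = {                 # representative E_act values (eV)
    "P (optical band)":         0.81,  # Street 1984
    "P (thermal, low doping)":  0.74,  # Matsuda 1980
    "P (thermal, high doping)": 0.27,  # Matsuda 1980
    "alkali (low doping)":      0.80,  # LeComber 1980
    "alkali (high doping)":     0.20,  # LeComber 1980
}

# Useful window: both sub-gap transitions must have thresholds
# below E_g1, i.e. E_g2 - E_g1 < E_I < E_g1.
lo, hi = E_G2 - E_G1, E_G1
for name, e_act in donors.items():
    e_i = E_G2 - e_act
    status = "inside" if lo < e_i < hi else "outside"
    print(f"{name:26s} E_I = {e_i:.2f} eV  "
          f"{status} ({lo:.2f}, {hi:.2f}) eV")
\end{verbatim}
Consistent with the discussion above, the low-doping activation energies land inside the useful window, while the high-doping values fall outside it.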
To realize this potential, the energetic position of those defect states and their optical properties must be characterized, both for the common electrical dopants and possibly a much larger range of potential IB-forming dopants. A wide array of elements may be interesting for a-Si-based ECUCs, just as a wide array of dopants may be useful for c-Si-based IBSCs \citep{Sullivan2015}. \section{Conclusions} The ECUC has the potential to improve IB solar cell designs. Its maximum detailed balance efficiency is equal to that of a QRSC, and it may be easier to produce. Though DB calculations do not consider non-radiative processes, they give upper bounds on the efficiency of all photovoltaic devices. At low solar concentration, the ECUC has a higher limiting efficiency than the IBSC. This effect is realized in the c-Si case at one sun, where an IBSC with non-overlapping absorptions cannot improve on a standard single-gap solar cell, but an ECUC permits significantly improved efficiency. At high concentration, the DB efficiency limits of IBSC, ECUC, and QRSC are all the same, with a significant gain compared to a single-junction device. Moving beyond DB, the ECUC architecture allows improved efficiency even with materials having significant nonradiative recombination. It is thus a promising architecture to pursue for near-term development of IB-based devices. The combination of a-Si on c-Si provides a promising platform for developing an ECUC with the potential to significantly improve silicon-based solar cell efficiencies. \begin{acknowledgments} We acknowledge helpful conversations with Daniel MacDonald and Wenjie Yang and support from the Natural Sciences and Engineering Research Council of Canada. \end{acknowledgments}
\section{Abstract} \begin{abstract} Drones will revolutionize 3D modeling. A 3D model represents an accurate reconstruction of an object or structure. This paper explores the design and implementation of \textsc{ares}\xspace, which provides near real-time, accurate reconstruction of 3D models using a drone-mounted LiDAR; such a capability can be useful to document construction or check aircraft integrity between flights. Accurate reconstruction requires high drone positioning accuracy, and, because GPS can be inaccurate, \textsc{ares}\xspace uses SLAM. However, in doing so it must deal with several competing constraints: drone battery and compute resources, SLAM error accumulation, and LiDAR resolution. \textsc{ares}\xspace uses careful trajectory design to find a sweet spot in this constraint space, a fast reconnaissance flight to narrow the search area for structures, and offloads expensive computations to the cloud by streaming compressed LiDAR data over LTE. \textsc{ares}\xspace reconstructs large structures to within 10s of cms and incurs less than 100~ms compute latency. \end{abstract} \section{Appendix} \label{s:appendix} \subsection{Drone compute} \begin{figure}[b] \centering \includegraphics[width=0.7\columnwidth]{figs/fig_tx2_ransac.pdf} \caption{Plane-fitting on a TX2} \label{fig:tx2ransac} \end{figure} We ran a plane-fitting algorithm, RANSAC (a module that we use in our pipeline), on a real-world point cloud trace using a drone compute platform (Jetson TX2). We found that (\figref{fig:tx2ransac}) it takes the TX2, on average, 0.5~seconds to process a single point cloud. Plane-fitting accounts for only 5\% of the entire execution time of our reconstruction pipeline, so the full pipeline would need roughly 10~seconds per point cloud on the TX2. Since the 64-beam LiDAR generates 20 point clouds per second, the TX2 would take 200~seconds to process a single second's worth of LiDAR data. Therefore, we offload computations from the drone to the cloud. \subsection{Point cloud compression} \label{s:app_compression} \textsc{ares}\xspace uses two techniques (\emph{i.e.,}\xspace viewpoint filtering and octree compression) to compress LiDAR point clouds to within 1.2 to 4.0 Mbps and transmit them over LTE. \begin{figure}[t] \centering \includegraphics[width=0.7\columnwidth]{figs/fig_real_world_top_down.pdf} \caption{Top-down view of reconstructed 3D model for a large real-world complex} \label{fig:3d_real_model} \end{figure} \parab{Viewpoint filtering.} The OS1-64 LiDAR has a 360$\degree$ horizontal field-of-view (FoV) and a 45$\degree$ vertical FoV. In a drone-mounted LiDAR (\figref{fig:lidar_orientation}), only a portion of the full 360$\degree$ contains useful information. Beams directed towards the sky, or towards objects beyond LiDAR range, generate \textit{zero returns}. Viewpoint filtering removes these zero returns. In practice, we have found it important to also filter out returns from the body of the drone, as well as returns beyond the nominal range of the LiDAR, since these are erroneous. So, \textsc{ares}\xspace filters all points closer than 5~m and further than 120~m. \parab{Octree compression.} After filtering the point cloud, \textsc{ares}\xspace compresses the retained data using a standard octree compression algorithm~\cite{octree} designed specifically for point clouds (and hence this is better than data-agnostic compression techniques like gzip).
An octree is a three-dimensional tree data structure where each node is a cube that spans a 3D region and has exactly eight children. The dimensions of the cubes at the leaves of the tree determine the \textit{octree resolution}. The numerical precision used to encode point positions determines the \textit{point resolution}. Octree compression efficiently encodes empty leaves or empty tree-internal nodes (those whose descendant leaves are empty). It also performs inter-frame compression (similar to video encoders), efficiently encoding unchanged leaves or internal nodes between two successive point clouds. As we show in \secref{sec:eval}, \textsc{ares}\xspace chooses the octree resolution and point resolution, the two parameters that govern the compressibility of point clouds, to achieve point-cloud transmission rates of 1.2--4~Mbps, well within the range of achievable LTE speeds. \subsection{Implementation Details} \label{s:app_impl} We have implemented \textsc{ares}\xspace using the Point Cloud Library (PCL~\cite{octree}), the Cartographer~\cite{Cartographer} LiDAR SLAM implementation\footnote{We use Cartographer but it can be replaced by other LiDAR SLAM algorithms like LOAM~\cite{zhang2014loam}}, the Boost C++ libraries~\cite{Boost}, and the Robot Operating System (ROS~\cite{ros}). For the recon phase, we used functions from the Point Cloud Library (PCL~\cite{octree}) for plane-fitting, outlier removal and clustering. Our compression and extraction modules also use PCL and are implemented as ROS nodes. The drift detection module uses a Python package for the Umeyama alignment~\cite{grupp2017evo}. Not counting libraries and packages it uses, \textsc{ares}\xspace is 15,500 lines of code. \subsection{Recon Flight} \label{s:app_recon_flight} The goal of the recon flight is to survey the area and find the boundary of the structure as fast as possible. \textsc{ares}\xspace uses a flight trajectory as shown in \figref{fig:recon_traj} in which parallel scans of length $d$ are separated by a scan width $s$. In designing the recon flight, \textsc{ares}\xspace can change the height, speed and LiDAR orientation of the drone. To find the right set of parameters, we performed an exhaustive parameter sweep. \parab{Optimum height for recon.} To find the optimum height for the recon flight, we planned recon trajectories for a 20~m building (within a 300~m x 300~m area) in AirSim at different heights (from 40~m to 90~m). We flew the drone and ran the boundary estimation on the collected \textit{highly compressed} LiDAR point clouds at 10~Hz. For each height, we collected data and ran the boundary detection module five times. Higher flights increase scan width (\figref{fig:lidar_coverage_area}) at the expense of point density. However, \textsc{ares}\xspace's boundary detection algorithm is robust to lower-density point clouds and can accurately estimate the boundary of the building from heights of up to 80~m. \figref{fig:recon_height} shows the 2D boundary detection accuracy, completeness (lower is better) and flight duration (as a proxy for battery usage) as a function of the height of the drone. We find that at 80~m (or 60~m from the building), \textsc{ares}\xspace can jointly optimize for battery efficiency and boundary detection accuracy.
At 80~m, \textsc{ares}\xspace can complete the recon flight in 150~seconds and estimate the boundary to within 2.5~m accuracy and completeness. Beyond 80~m, the scan width and point density decrease, resulting in longer flights and worse (higher) boundary detection accuracy and completeness values. \begin{figure}[t] \centering \includegraphics[width=0.5\columnwidth]{figs/fig_recon_trajectory.pdf} \caption{Recon flight trajectory for \textsc{ares}\xspace.} \label{fig:recon_traj} \end{figure} \begin{figure}[t] \centering \includegraphics[width=0.5\columnwidth]{figs/fig_boundary_estimation_height_vs_battery.pdf} \caption{Finding the right height for boundary detection accuracy and battery efficiency in the recon flight.} \label{fig:recon_height} \end{figure} \parab{Optimum speed for recon.} To find the optimum speed for the recon flight, we planned a recon trajectory for the drone to fly over the same 20~m building at a height of 80~m from the ground. We flew the drone in the planned trajectory at speeds from 1~m/s to 8~m/s and ran boundary detection on the \textit{highly compressed} point clouds at 10~Hz. For each speed, we collected data and ran the boundary detection module five times. \figref{fig:recon_speed} illustrates the effect of drone speed on the boundary detection accuracy, completeness and the flight duration. A higher speed results in lower flight duration but at the expense of boundary detection accuracy and completeness. Even then, \textsc{ares}\xspace robustly extracts the boundary up to 6~m/s. At higher speeds, the overlap between consecutive frames is smaller and hence \textsc{ares}\xspace cannot accurately stitch the frames together. As such, \textsc{ares}\xspace flies the drone at the sweet spot, \emph{i.e.,}\xspace 4~m/s, where the flight duration is approximately 150~seconds and accuracy and completeness are within 2.5~m. \parab{Optimum LiDAR orientation.} LiDAR orientation controls scan width and point cloud overlap. A parallel orientation means larger overlap but smaller scan width $s$. On the other hand, a perpendicular orientation means smaller overlap but larger scan width $s$. Larger scan width $s$ means a smaller flight duration (\figref{fig:recon_traj}). A large overlap means better scan matching accuracy. Since \textsc{ares}\xspace uses GPS for stitching in the recon phase, it is robust to low overlap. Hence, to minimize flight duration, it uses a perpendicular orientation of the LiDAR. We conducted experiments (omitted for brevity) with different orientations of the LiDAR and confirmed that a perpendicular orientation minimizes flight duration without any loss in accuracy/completeness. \begin{figure}[b] \centering \includegraphics[width=0.5\columnwidth]{figs/fig_boundary_estimation_speed_vs_battery.pdf} \caption{Finding the right speed for boundary detection accuracy and battery efficiency in the recon flight.} \label{fig:recon_speed} \end{figure} \parab{Boundary extraction for different buildings.} To show that \textsc{ares}\xspace can accurately extract the 2D boundary of any building, we collected LiDAR traces of a drone flying over five different buildings in AirSim at a height of 80~m and speed of 4~m/s. We collected data over each building five times. Then, we ran boundary detection on the \textit{highly compressed} point clouds at 10~Hz. We summarize the boundary detection accuracy, completeness and the flight duration in \tabref{tab:recon_buildings}. As expected, the flight duration is independent of the underlying building.
For all building types, \textsc{ares}\xspace accurately extracts the boundary to within 2.5~m accuracy and completeness. This shows that \textsc{ares}\xspace's boundary detection generalizes across building shapes. \begin{table}[t] \footnotesize \centering \begin{tabular}{|c|c|c|c|} \hline \begin{tabular}[c]{@{}c@{}}Structure\\ type\end{tabular} & \begin{tabular}[c]{@{}c@{}}Flight \\ duration (s)\end{tabular} & \begin{tabular}[c]{@{}c@{}}Accuracy\\ (m)\end{tabular} & \begin{tabular}[c]{@{}c@{}}Comp.\\ (m)\end{tabular} \\ \hline Star-shaped & 150 & 1.39 & 1.67 \\ \hline H-shaped & 150 & 1.31 & 1.83 \\ \hline Plus-shaped & 150 & 1.35 & 1.55 \\ \hline Pentagon & 150 & 2.58 & 2.58 \\ \hline Rectangular & 150 & 2.50 & 2.53 \\ \hline \end{tabular} \caption{\textsc{ares}\xspace boundary estimation accuracy, completeness and flight duration for different building types using high compression.} \label{tab:recon_buildings} \end{table} \parab{Effect of point cloud compression.} To evaluate the effect of point cloud compression on boundary extraction, we compressed a real-world trace collected over the 70~m~x~40~m~x~20~m building with the four different compression profiles described above. Then, we ran our boundary extraction algorithm on the compressed traces. \tabref{tab:recon_compression} shows that \textsc{ares}\xspace's boundary extraction algorithm is robust to compression. With high compression, \textsc{ares}\xspace brings bandwidth down by a factor of 377 while trading off only 36~cm in accuracy and 24~cm in completeness. With higher bandwidths promised with the emergence of 5G, \textsc{ares}\xspace can achieve the same boundary extraction accuracy as an uncompressed trace. \begin{table}[t] \footnotesize \centering \begin{tabular}{|c|c|c|c|} \hline \begin{tabular}[c]{@{}c@{}}Compression\\ profile\end{tabular} & \begin{tabular}[c]{@{}c@{}}Required \\ bandwidth (Mbps)\end{tabular} & \begin{tabular}[c]{@{}c@{}}Accuracy\\ (m)\end{tabular} & \begin{tabular}[c]{@{}c@{}}Comp.\\ (m)\end{tabular} \\ \hline Uncompressed & 480.0 & 1.09 & 1.09 \\ \hline View-point & 42.7 & 1.09 & 1.09 \\ \hline Lossless & 7.86 & 1.09 & 1.09 \\ \hline Low & 3.80 & 1.09 & 1.10 \\ \hline Medium & 2.50 & 1.13 & 1.07 \\ \hline High & 1.27 & 1.45 & 1.33 \\ \hline \end{tabular} \caption{\textsc{ares}\xspace boundary estimation accuracy and completeness for different levels of compression.} \label{tab:recon_compression} \end{table} \parab{Effect of sub-sampling.} \textsc{ares}\xspace's boundary detection algorithm runs at 10~fps. An Ouster 64-beam LiDAR generates 20 point clouds per second. So, the boundary detection algorithm must be robust to sub-sampling of point clouds. Our evaluations show that, for a drone traveling at 4~m/s, it works well even when using one point cloud every 3 seconds. Because \textsc{ares}\xspace's boundary detection uses GPS for stitching, it does not need overlap between 3D frames. \subsection{Data Collection} \label{s:app_data_coll} In this section, we perform a parameter sensitivity study to find the optimum parameters for running SLAM accurately on \textit{real-world} UAV flights. To do this, we report the positioning error generated by SLAM. Lacking accurate ground truth in the real world, we compare SLAM positions against a GPS trace. Positioning accuracy is directly related to 3D model RMSE because these poses are used to position the 3D point clouds when generating a 3D model. A higher positioning error leads to a higher reconstruction error and vice versa.
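To make this SLAM-vs-GPS comparison concrete, here is a minimal sketch (ours; our implementation uses a Python package for the Umeyama alignment~\cite{grupp2017evo}) that projects GPS fixes with the Mercator projection, rigidly aligns the SLAM trajectory to them, and reports per-segment RMSE, the quantity later used to flag excessive drift in \algoref{algo:imperfection_detection}. Array shapes and the threshold value are illustrative assumptions.
\begin{verbatim}
import numpy as np

def gps_to_mercator(lat, lon, R_earth=6378137.0):
    """Project WGS-84 latitude/longitude (degrees) to spherical
    Mercator x/y coordinates in meters."""
    lam, phi = np.radians(lon), np.radians(lat)
    return R_earth * lam, R_earth * np.log(np.tan(np.pi / 4 + phi / 2))

def umeyama_alignment(src, dst):
    """Least-squares rigid alignment (rotation R, translation t) that
    maps the (N, 3) trajectory src onto dst (Umeyama, 1991)."""
    mu_s, mu_d = src.mean(axis=0), dst.mean(axis=0)
    cov = (dst - mu_d).T @ (src - mu_s) / len(src)
    U, _, Vt = np.linalg.svd(cov)
    S = np.eye(3)
    if np.linalg.det(U) * np.linalg.det(Vt) < 0:
        S[2, 2] = -1.0                 # guard against reflections
    R = U @ S @ Vt
    t = mu_d - R @ mu_s
    return R, t

def segment_rmse(slam, gps, seg_len=50):
    """Align SLAM positions to the GPS trajectory, then compute the
    RMSE over fixed-length segments; large values indicate drift."""
    R, t = umeyama_alignment(slam, gps)
    err = np.linalg.norm(slam @ R.T + t - gps, axis=1)
    return [float(np.sqrt(np.mean(err[i:i + seg_len] ** 2)))
            for i in range(0, len(err), seg_len)]

# Hypothetical usage with time-synchronized (N, 3) position arrays:
# rho = 2.0  # drift threshold (m), illustrative
# drifty = [i for i, r in enumerate(segment_rmse(slam_xyz, gps_xyz))
#           if r > rho]
\end{verbatim}
Because the alignment removes any global offset and rotation, this comparison is sensitive to the \textit{shape} of the trajectories rather than their absolute positions, which is what makes it robust to GPS error.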
\parab{Effect of drone speed.} Because GPS is erroneous, we only draw qualitative conclusions. As \tabref{tab:real_speed_vs_rmse}, taken from our drone traces, shows, slower flights have lower SLAM error than faster ones, and parallel orientations have lower SLAM error than perpendicular ones. \parab{Effect of drone height.} Similarly, SLAM error increases with height and, in real-world traces, the parallel orientation seems to be significantly better than the perpendicular orientation (\tabref{tab:real_height_vs_rmse}). At a distance of 20~m from the surface of the building, the parallel orientation has the minimum positioning error, \emph{i.e.,}\xspace 1.25~m. Beyond 20~m for parallel and 40~m for perpendicular, SLAM loses track completely because of lower point density. \begin{table}[htbp] \footnotesize \centering \begin{tabular}{|c|c|c|} \hline \multirow{2}{*}{LiDAR Orientation} & \multicolumn{2}{c|}{Drone model collection speed (m/s)} \\ \cline{2-3} & 1.5 m/s & 3.0 m/s \\ \hline Parallel & 1.25 & 3.33 \\ \hline Perpendicular & 3.12 & 7.64 \\ \hline \end{tabular} \caption{Positioning errors (m) for parallel and perpendicular LiDAR orientations at different speeds for \textit{real-world traces} at a vertical height of 20~m from the building.} \label{tab:real_speed_vs_rmse} \end{table} \begin{table}[htbp] \footnotesize \centering \begin{tabularx}{0.48\textwidth} {|c|>{\centering\arraybackslash}X|>{\centering\arraybackslash}X|>{\centering\arraybackslash}X|} \hline \multirow{2}{*}{\begin{tabular}[c]{@{}c@{}}LiDAR\\ orientation\end{tabular}} & \multicolumn{3}{c|}{Drone model collection height from building (m)} \\ \cline{2-4} & 20 & 40 & 60 \\ \hline Parallel & 1.25 & 5.41 & $\infty$ \\ \hline Perpendicular & 2.18 & $\infty$ & $\infty$ \\ \hline \end{tabularx} \caption{Positioning errors (m) for different LiDAR orientations at different heights for \textit{real-world traces} at 1~m/s.} \label{tab:real_height_vs_rmse} \end{table} \begin{figure*}[t] \centering \includegraphics[width=\textwidth]{figs/fig_3D_models.pdf} \caption{Reconstructed 3D models at different levels of compression. Top-left: ground truth, top-right: low compression, bottom-left: medium compression, bottom-right: high compression} \label{fig:3d_model} \end{figure*} \section{Conclusions} \label{s:conclusions} In this paper, we have taken a step towards accurate, near real-time 3D reconstruction using drones. Our system, \textsc{ares}\xspace, uses novel techniques for navigating the tension between cellular bandwidths, SLAM positioning errors, and compute constraints on the drone. It contains algorithms for estimating building geometry, for determining excessive SLAM drift, and for recovering from excessive drift. It can achieve reconstruction accuracy to within 10s of centimeters in near real-time, even after compressing LiDAR data enough to fit within achievable LTE speeds. Future work can include using more sophisticated drone battery models, cooperative reconstruction of large campuses using multiple drones, and generalizing further to structures of arbitrary shape. \section{Abstract} \begin{abstract} A 3D model represents an accurate reconstruction of an object. This paper explores the design and implementation of \textsc{ares}\xspace, which provides near real-time, accurate, autonomous reconstruction of 3D models using a drone-mounted LiDAR; such a capability can be useful to document building construction or check aircraft integrity between flights.
Accurate reconstruction requires high drone positioning accuracy, and, because GPS can be inaccurate, \textsc{ares}\xspace uses SLAM. However, in doing so it must deal with several competing constraints: drone battery and compute resources, SLAM error accumulation, and LiDAR resolution. \textsc{ares}\xspace uses careful trajectory design to find a sweet spot in this constraint space and a fast reconnaissance flight to narrow the search area for structures, and offloads SLAM to the cloud by streaming compressed LiDAR data over LTE. It reconstructs buildings to within 10s of cms and incurs less than 100~ms compute latency. \end{abstract} \section{\textsc{ares}\xspace Design} \label{sec:design} \begin{figure}[b] \centering \includegraphics[width=0.6\columnwidth]{figs/overview} \caption{\textsc{ares}\xspace architecture\label{fig:overview}} \end{figure} Because drone-based 3-D reconstruction is a complex multi-dimensional problem (\tabref{tab:challenges}), we have focused on a geometrically regular, but important, subset of structures for reconstruction: \textit{buildings}. As this section will make clear, even this choice poses significant challenges. It also brings out the computation and communication issues in 3-D reconstruction that are the main focus of this paper. In \secref{s:slam-phase}, we discuss what it would take to generalize to other, more complex, structures. \parab{Overview.} To use \textsc{ares}\xspace, a user specifies: (a) an \textit{area of interest}, and (b) a minimum \textit{target point density}. Point density is the number of points per unit area on the surface of a point cloud; this knob controls the quality of the 3D model. In the area of interest, \textsc{ares}\xspace guides a drone to automatically discover buildings, and constructs a 3D model of the buildings in \textit{near real-time} (\emph{i.e.,}\xspace during the drone flight) while \textit{minimizing flight duration} at that given minimum point density. (To a first approximation, drone battery usage increases with flight duration; we have left it to future work to incorporate drone battery models.) \textsc{ares}\xspace splits its functionality across two components: (a) a lightweight subsystem that runs on the drone, and (b) a cloud-based component that discovers buildings, generates drone trajectories, and reconstructs the 3D models on-the-fly. \textsc{ares}\xspace's cloud component (\figref{fig:overview}) generates an efficient \textit{reconnaissance} trajectory over the area of interest to discover the \textit{rooftop geometry} of buildings. Extracting the geometry from LiDAR data can be computationally intensive (\secref{sec:intro}), so \textsc{ares}\xspace \textit{streams compressed point clouds} to a cloud service during flight over a cellular (LTE) connection. The cloud service extracts the geometry, then prepares a more careful \textit{model collection} trajectory that designs a minimal duration flight while ensuring high 3D model accuracy. During this second flight, the drone also streams compressed point clouds, and the cloud service runs SLAM to estimate point cloud poses, and composes the received point clouds into the building's 3D model. Mid-flight, the cloud service may \textit{re-calibrate} the trajectory dynamically to minimize drift accumulation. Below, we first describe model collection (\secref{s:slam-phase}), since that is the most challenging of \textsc{ares}\xspace's components.
We then describe how \textsc{ares}\xspace extracts the rooftop geometry (\secref{s:area-inter-reconn}), then conclude by describing point-cloud compression (\secref{s:compression}). \subsection{Model Collection} \label{s:slam-phase} Given the building geometry (\secref{s:area-inter-reconn}), \textsc{ares}\xspace designs a \textit{model collection trajectory} to capture the 3D model (\secref{sec:intro}) of the building. \parab{What constitutes a good 3D model?} Prior work on 3D reconstruction~\cite{reconstructionmetrics} has proposed two metrics, \textit{accuracy} and \textit{completeness}. Consider a 3D model $M$ and a corresponding ground-truth $M_g$. Accuracy is the Root Mean Square Error (RMSE) of the distance from each point in $M$ to the nearest point in $M_g$. Completeness is the RMSE of the distance from each point in $M_g$ to the nearest point in $M$. If both values are zero, $M$ perfectly matches $M_g$. If $M$ captures all points in $M_g$, but the positions of the points are erroneous, then both accuracy and completeness will be non-zero. If $M$ captures only one point in $M_g$, but positions it correctly, its accuracy is perfect, but completeness is poor. \begin{figure*}[t] \begin{minipage}{0.38\linewidth} \includegraphics[width=\linewidth]{figs/lidar_orient} \caption{Parallel and perpendicular LiDAR orientation\label{fig:lidar_orientation}} \end{minipage} \begin{minipage}{0.32\linewidth} \centering \includegraphics[width=0.7\linewidth]{figs/fig_slam_vs_degree_of_overlap} \caption{Impact of orientation on SLAM positioning error.} \label{fig:overlap_error} \end{minipage} \begin{minipage}{0.29\linewidth} \centering \includegraphics[width=0.6\linewidth]{figs/equidense} \caption{Equi-dense trajectory scan width} \label{fig:equidense} \end{minipage} \end{figure*} \parab{Trajectories, SLAM and 3D reconstruction error.} Compared to a LiDAR mounted on an autonomous vehicle, a drone-mounted LiDAR perceives only a fraction ($\frac{45\degree}{360\degree}$) of the full 3D point cloud. This makes scan matching more difficult for SLAM. Thus, the trajectory of the drone flight can impact 3D model completeness and accuracy, in part because a poorly designed trajectory can increase SLAM error. In designing the drone's trajectory, \textsc{ares}\xspace can control the following parameters: the actual \textit{path} of the drone over the building, its \textit{speed}, its \textit{height}, and the \textit{orientation} of the LiDAR with respect to the ground. We now discuss the qualitative impact of these parameter choices; later (\secref{sec:eval}), we empirically quantify the best parameter choices. \textit{\textbf{Orientation}} impacts accuracy. At a fixed height and speed, a \textit{parallel} orientation of the LiDAR (\figref{fig:lidar_orientation}), in which its scan plane aligns with the drone's direction of motion, results in higher overlap between two successive point clouds than a \textit{perpendicular} orientation, and therefore lower SLAM error and better accuracy. \figref{fig:overlap_error}, obtained using the methodology described in \secref{sec:eval}, quantifies this intuition: different orientations have different degrees of overlap, and as overlap decreases, SLAM's positioning error increases. A parallel orientation (0$\degree$) has the lowest SLAM error because it has the highest visibility lifespan. (\textit{Visibility lifespan}, the time for which a point on the building's surface is visible during flight, is a proxy for overlap; a longer lifespan indicates greater overlap).
\textit{\textbf{Speed}} impacts model accuracy. If the drone flies fast, two successive point clouds will have fewer overlapping points, resulting in errors in SLAM's pose transformations and (therefore) pose estimates (for a reason similar to \figref{fig:overlap_error}), which leads to poor 3D model accuracy. So, \textsc{ares}\xspace must fly as slowly as possible. \textit{\textbf{Height}} impacts both accuracy and completeness. Because LiDAR beams are radial, the higher a drone flies, the less dense the points on the surface of the building. Lower density results in worse completeness. Accuracy is also worse, because the likelihood of matching the same point on the surface between two scans decreases with point density. For instance, the positioning errors for point densities of 2.2 points per m$^{2}$ and 3.0 points per m$^{2}$ are 2.5~m and 1.0~m, respectively (graph omitted for brevity). So, \textsc{ares}\xspace must fly as low as possible. The drone's path must ensure \textit{\textbf{coverage}} of the building's rooftop and sides. Consider a wide building: the drone must fly over it several times to capture all its surfaces. If it flies low, slowly, and at a parallel orientation, the flight duration can be significant. Over long durations, SLAM accumulates \textit{drift}, which can worsen model accuracy and completeness. \textsc{ares}\xspace designs \textit{equi-dense trajectories} to control model completeness, and uses \textit{offline data-driven parameter estimation} to find the choice of speed, height and orientation. To minimize drift accumulation, \textsc{ares}\xspace performs online \textit{drift estimation and re-calibration}. We describe these below. \parab{Equi-dense Trajectories.} An equi-dense trajectory ensures that the resulting model is (a) complete, and (b) captures the building with a point density that is no less than the specified minimum target point density $d$. \parae{Point density depends on LiDAR parameters and height.} The height (more generally, distance for vertical surfaces like the sides of the building) at which a LiDAR flies from a surface governs the average density of points it obtains from that surface; larger heights result in lower \textit{point density}. For a given LiDAR configuration, we can compute the point density as a function of height. For instance, for an Ouster LiDAR with 64 beams, horizontal resolution of 1024, and a vertical field of view of 45$\degree$, to a first approximation, two consecutive beams are at an angular separation of 0.7$\degree$ ($\frac{45}{64}$) and lasers from the same beam are 0.35$\degree$ ($\frac{360}{1024}$) apart. Using geometry, for a surface at a distance $h$ from the drone, we can compute the projection of the LiDAR on that surface. Using this projection, we can compute the point density throughout the whole point cloud. Central regions of the point cloud have much higher density than regions at the extremities.
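The following sketch (ours, a deliberate simplification that assumes an idealized nadir-pointing LiDAR with uniformly spaced beams and a flat surface) illustrates this geometry. It computes the nominal per-revolution point density in rings around nadir and the widest swathe that still meets a target density $d$, the same quantity that \textsc{ares}\xspace's scan-width function (described below) provides.
\begin{verbatim}
import numpy as np

# Idealized OS1-64-style geometry (our simplification): 64 beams
# spread uniformly over a 45-degree fan, 1024 azimuth steps/revolution.
N_BEAMS, N_AZIMUTH, V_FOV = 64, 1024, np.radians(45.0)

def ring_densities(h):
    """Nominal density (points/m^2, per revolution) on a flat surface
    a distance h away, for each annulus between adjacent beams."""
    theta = np.linspace(0.0, V_FOV, N_BEAMS)  # beam angles from nadir
    r = h * np.tan(theta)                     # ring radii on surface
    ring_area = np.pi * (r[1:] ** 2 - r[:-1] ** 2)
    return N_AZIMUTH / ring_area              # one beam's points/ring

def max_scan_width(h, d_target):
    """Widest swathe around nadir whose density stays >= d_target;
    density falls off monotonically away from nadir, so take the
    outermost ring that still meets the target."""
    theta = np.linspace(0.0, V_FOV, N_BEAMS)
    r = h * np.tan(theta)
    ok = np.where(ring_densities(h) >= d_target)[0]
    return 2.0 * r[ok[-1] + 1] if len(ok) else 0.0

for h in (20, 40, 60, 80):                      # d = 4 points/m^2 is
    print(h, round(max_scan_width(h, 4.0), 1))  # an illustrative target
\end{verbatim}
Running this confirms the two trends discussed above: density falls off away from the center of the swathe, and raising the target density $d$ shrinks the usable scan width at any given height.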
\begin{figure*}[t] \centering \begin{subfigure}{0.18\linewidth} \includegraphics{figs/mod_traj1_updown_small.pdf} \caption{The lateral flight} \label{fig:mod_lateral} \end{subfigure} \begin{subfigure}{0.18\linewidth} \includegraphics{figs/mod_traj2_updown_small.pdf} \caption{The longitudinal flight} \label{fig:mod_longitudinal} \end{subfigure} \begin{subfigure}{0.18\linewidth} \includegraphics{figs/mod_traj4_updown_small.pdf} \caption{An alternative flight plan} \label{fig:mod_total} \end{subfigure} \begin{subfigure}{0.18\linewidth} \includegraphics{figs/mod_traj3_updown_small.pdf} \caption{Mid-flight re-calibration} \label{fig:mod_mid_flight} \end{subfigure} \caption{Model collection trajectory design} \label{fig:model_trajectory} \end{figure*} \parae{Coverage depends on height.} The density of points at which a drone captures a surface depends on its height $h$. Given a height $h$, \figref{fig:equidense} shows the coverage of the LiDAR on a given surface. In general, the drone can only capture a subset of this \textit{full} coverage region with a minimum target point density $d$ (shown by the target density region in \figref{fig:equidense}). Now, suppose the drone's trajectory performs a rectilinear scan over the surface, like the one shown in \figref{fig:mod_lateral}. Then, to ensure that \textsc{ares}\xspace captures the entire surface at least at a density $d$, the \textit{scan width} must be equal to or smaller than the width of the target density coverage region (\figref{fig:equidense}). \parae{\textsc{ares}\xspace estimates scan width from LiDAR parameters.} To estimate the width of the target-density coverage region, \textsc{ares}\xspace uses the LiDAR parameters and models the LiDAR geometry to derive a function that returns the scan width for a given target density $d$ and height $h$. It models the beams of the LiDAR as having equal angular separation, so it can compute the points at which these beams intersect with the plane at a height (or distance) $h$ away. Given a target point density $d$, \textsc{ares}\xspace can compute the largest width at this height that will ensure minimum point density $d$. This density guarantee is \textit{nominal}; in practice, LiDARs may drop some reflections if they are noisy~\cite{Lidarsim}. Future work can model this noise for better equi-dense trajectory designs. \parab{Trajectory path planning and parameter selection.} \textsc{ares}\xspace uses a fixed orientation, height and speed for the drone flight; it uses an \textit{offline data-driven} approach to determine these. \parae{Offline simulations to estimate parameters.} As described above, LiDAR orientation, drone height, and speed determine how well SLAM can estimate positions to ensure accuracy and completeness. We could have tried to analytically model the system to derive the optimal parameter choices. Modeling the LiDAR is feasible (as we discuss above); modeling SLAM's feature matching mathematically is much harder. So, we resort to exploring the space of parameters using simulation. Specifically, we use game-engine-driven photorealistic simulators like AirSim~\cite{airsim} to sweep the space of parameters. Then, we validate these results using traces that we capture from our real-world prototype. We discuss this methodology in greater detail in \secref{sec:eval}, where we show that there exists a sweet spot in the parameter space that ensures high accuracy and completeness while minimizing flight duration.
Specifically, \secref{s:data-collection} shows that a parallel orientation, while flying at a distance of 20~m from the surface (or lower, if necessary, to meet the point density constraint) at 1~m/s, gives the best accuracy. \parae{A fixed trajectory.} Given these parameters, the building geometry and scan width, \textsc{ares}\xspace designs a drone flight path. \figref{fig:model_trajectory} describes this for a building shaped like a rectangular solid; \textsc{ares}\xspace supports other building shapes (\secref{sec:eval}). \textsc{ares}\xspace's model collection trajectory starts from an \textit{origin}, a point that defines the origin of the coordinate system for the resulting 3D model. The drone then laterally traverses the building to capture two of its sides (\figref{fig:mod_lateral}). Its flight path extends a distance $\delta$ beyond the building edges to account for errors in building geometry estimation. As it moves to each side laterally, it moves up/down on each face to capture them at the same minimum point density. Then, it returns to the origin, and traverses longitudinally (\figref{fig:mod_longitudinal}). \parae{Avoiding LiDAR rotations.} Why return to the origin?\footnote{Drones use fiducials (\emph{e.g.,}\xspace a drone landing pad with a distinctive design) to identify the origin.} This \textit{loop closure} maneuver is an important technique in SLAM to correct for drift~\cite{slam_part1}. If loop closure were not necessary, we could have designed a trajectory as shown in \figref{fig:mod_total}. However, this trajectory requires a \textit{rotation} of the drone at the dashed segment to ensure that the lateral and longitudinal segments have the same drone orientation. Rotation can significantly increase SLAM drift; \figref{fig:slam_rotation} shows an example in which the green dashed line depicts the actual (ground truth) drone trajectory, and the blue line SLAM's estimated pose. At the bottom-right corner of the trajectory, when the drone rotates, SLAM is completely thrown off. \parab{Drift Estimation and Re-calibration.} \textsc{ares}\xspace uses return-to-origin to re-calibrate SLAM drift. The second, longitudinal, flight starts a new SLAM session; to ``stitch'' the two sessions together, \textsc{ares}\xspace needs to compute a transformation matrix that transforms the coordinate system of the first session to that of the second. \textsc{ares}\xspace uses standard techniques for this. More importantly, \textsc{ares}\xspace designs the longitudinal flight to start close to the origin, which has two benefits: (a) shorter flight time resulting in lower overall energy consumption and (b) less drift accumulation. \parae{Mid-flight re-calibration for accuracy and flight efficiency.} Return-to-origin re-calibration might also be necessary in the middle of one of the flights (\figref{fig:mod_mid_flight}), if the environment is sparse and SLAM tracking fails. To combat this, \textsc{ares}\xspace could have added more loop closure maneuvers in the lateral and longitudinal flights. However, returning to origin is an expensive operation in terms of the drone's battery. Instead, \textsc{ares}\xspace actively monitors drift error and returns to the origin only when needed. In that case, the flight resumes at the point at which it detected excessive drift: the direct path from the origin to that point is always shorter than the initial segment, ensuring that the resumed flight starts with a lower drift.
\parae{Using deviation from GPS trajectory to detect drift.} However, detecting excessive drift is non-trivial, since \textsc{ares}\xspace has no way of knowing when SLAM's position estimates are wrong, because it does not have accurate ground truth. \textsc{ares}\xspace leverages the GPS readings associated with SLAM poses: each sequence of readings gives a \textit{GPS trajectory}, and \textsc{ares}\xspace attempts to find the best possible match between the GPS trajectory (\emph{e.g.,}\xspace the green line in \figref{fig:slam_rotation}) and the SLAM-generated trajectory (the blue line in \figref{fig:slam_rotation}). If there is a significant deviation, \textsc{ares}\xspace assumes there is drift and invokes re-calibration. This approach is robust to GPS errors, since it matches the \textit{shape} of the two trajectories, not their precise positions (\algoref{algo:imperfection_detection}). Specifically, \textsc{ares}\xspace continuously executes 3D SLAM on the stream of compressed LiDAR frames from the drone, and estimates the pose of each frame. It synchronizes the GPS timestamps with the LiDAR timestamps (line 1), then transforms GPS readings using the Mercator projection (line 2). It then aligns the GPS trajectory and the SLAM-generated trajectory using the Umeyama algorithm~\cite{umeyama} to determine the rigid transformation matrices (\emph{i.e.,}\xspace translation and rotation) that best align the SLAM and GPS poses (lines 3-4). \textsc{ares}\xspace partitions trajectories into fixed-length segments and, after alignment, computes the RMSE between the two trajectories in these segments, and uses these RMSE values as an indicator of excessive drift: if the RMSE is greater than a threshold $\rho$ (lines 5-12), \textsc{ares}\xspace invokes return-to-origin. \begin{figure*}[t] \begin{minipage}{0.49\columnwidth} \includegraphics[width=0.9\columnwidth]{figs/fig_slam_rotation.pdf} \caption{Rotation throws off SLAM} \label{fig:slam_rotation} \end{minipage} \begin{minipage}{0.49\columnwidth} \includegraphics[width=0.8\columnwidth]{figs/fig_blimp_generalization.pdf} \caption{To reconstruct other structures (\emph{e.g.,}\xspace a blimp), \textsc{ares}\xspace wraps them in a rectangular solid and plans a model collection trajectory for it.} \label{fig:bounding_box_blimp} \end{minipage} \begin{minipage}{0.49\columnwidth} \centering \includegraphics[width=0.9\columnwidth]{figs/fig_recon_phase_scan_width.pdf} \caption{Coverage and scan widths for different orientations and heights.
\label{fig:lidar_coverage_area}} \end{minipage} \begin{minipage}{0.49\columnwidth} \centering \includegraphics[width=0.6\columnwidth]{figs/fig_boundary_detection.pdf} \caption{\textsc{ares}\xspace's building detector on a real building.} \label{fig:boundary} \end{minipage} \end{figure*} \begin{algorithm}[t] \caption{Detecting Excessive Drift} \label{algo:imperfection_detection} \SetKwFunction{TimeSynchronization}{TimeSynchronization} \SetKwFunction{GPSToMercator}{GPSToMercator} \SetKwFunction{UmeyamaAlignment}{UmeyamaAlignment} \SetKwFunction{TransformTrajectory}{TransformTrajectory} \SetKwFunction{RMSE}{RMSE} \SetKwFunction{IsExcessive}{IsExcessive} \SetKwFunction{Append}{Append} \SetKwInOut{Input}{Input} \SetKwInOut{Output}{Output} \Input{SLAM poses $S$ and GPS tags $G$} \Output{Imperfect regions $I$} \BlankLine $ S^{'}, G^{'} \leftarrow $ \TimeSynchronization {$S$, $G$} \\% time sync poses $ G_{a}^{'} \leftarrow $ \GPSToMercator { $G^{'}$ } \\% convert GPS to mercator $ t_{cw} \leftarrow $ \UmeyamaAlignment { $G_{a}^{'}$, $S^{'}$ } \\% find tcw to align SLAM and GPS $ S_{a}^{'} \leftarrow $ \TransformTrajectory { $S^{'}$, $t_{cw}$ } \\% transform SLAM trajectory using tcw \ForEach { $s_{a-i}^{'}$ in $S_{a}^{'}$, $g_{a-i}^{'}$ in $G_{a}^{'}$ } { $r_{i} \leftarrow$ \RMSE { $s_{a-i}^{'}$, $g_{a-i}^{'}$ } \\ \eIf { \IsExcessive {$r_{i}$} } { $I$.\Append {$g_{a-i}$} } { pass } } \end{algorithm} \parab{Generalizing to other structures.} Some aspects of model collection trajectory design depend on the geometry of the structure whose model we seek to reconstruct. To simplify the discussion and because geometric manipulations are not the focus of this paper, we have chosen rectangular buildings. We demonstrate in \secref{sec:eval} that our approach generalizes to other regular building geometries. It also generalizes to other structures that can be tightly bounded within rectangular solids, such as aircraft fuselages or blimps (\figref{fig:bounding_box_blimp}, \secref{sec:eval}). To accommodate arbitrary solid geometries, we expect that our techniques for generating equi-dense trajectories and in-flight re-calibration, and our conclusions about orientation, height and speed, will apply, but the actual trajectory design (\figref{fig:model_trajectory}) will need to match the shape of the solid. We leave these extensions to future work. \subsection{Estimating Building Geometry} \label{s:area-inter-reconn} Given a region of interest, \textsc{ares}\xspace conducts a reconnaissance (``recon'') flight to determine the boundary of the building's roof. It uses this boundary for trajectory design (\secref{s:slam-phase}). \parab{Goal and requirements.} Accurate model reconstruction requires a low, slow flight, which can be battery-intensive. The recon flight helps \textsc{ares}\xspace scope its model collection to the part of the region of interest that contains the building to reduce battery usage. For instance, if buildings occupy only a small fraction of the surface area of a large campus, a fast recon flight can reduce overall drone battery consumption. So, we require that the recon flight should have as short a duration as possible.
In addition, (a) boundary estimation must not assume prior existence of a 3D model of the building (prior work in the area makes this assumption~\cite{lidarbuildingdetection, lidarimagebuildingdetection}); (b) boundary estimation must be robust to nearby objects like trees that can introduce error; and (c) buildings come in many shapes (\emph{e.g.,}\xspace rectangles, squares, hexagons \emph{etc.}\xspace), so boundary estimation must generalize to these. \parab{The Recon Trajectory.} Similar to model collection, recon uses a rectilinear scan (\figref{fig:mod_lateral}, \figref{fig:recon_traj}). Unlike model collection, however, during recon the drone flies \textit{fast} (4~m/s) and \textit{high} (60~m above the building's roof\footnote{We assume the nominal building heights in an area are known, for example, from zoning restrictions.}), with the LiDAR mounted in a \textit{perpendicular} orientation in order to have the shortest duration flight possible. We justify these parameter choices in \secref{sec:eval}, but \figref{fig:lidar_coverage_area} depicts the intuition for these choices. It shows, for an Ouster-64 LiDAR, the ground coverage area as a function of height. Coverage is highest between 40 and 60~m. At a given height, a perpendicular orientation covers a wider swathe of ground than a parallel orientation; this allows \textsc{ares}\xspace to use a larger scan width $s$ (\figref{fig:lidar_coverage_area}), resulting in a shorter flight. As with model collection, during this flight, \textsc{ares}\xspace streams point clouds to its cloud component, which runs the boundary detection algorithms described below. \parab{Challenges and Overview.} This flight design poses two challenges for boundary detection. First, to detect the building's boundary, it is still necessary to align all point clouds to the same coordinate frame of reference. In recon, \textsc{ares}\xspace cannot use SLAM because fast, high flights can cause SLAM to lose tracking frequently. We show below that, because boundary detection can afford to be approximate, we can use GPS. Second, high and fast flights result in low point density, and boundary detection algorithms must be robust to this. \textsc{ares}\xspace's building boundary detection takes as input the area of interest and outputs the \textit{GPS locations} that constitute the boundary of the building. Model collection uses these outputs (\secref{s:slam-phase}). Boundary detection runs two different algorithms: rooftop \textit{surface extraction}, followed by \textit{boundary estimation}. \parab{Step 1. Surface Extraction.} The cloud component receives GPS-tagged compressed point clouds from the drone. It first uncompresses them, then computes the \textit{surface normal} of every point in the point cloud. A surface normal for a point determines the direction normal to the surface formed by points within a fixed radius of the point. Then, \textsc{ares}\xspace uses RANSAC~\cite{ransac} (a plane-fitting algorithm) to segment the LiDAR points into groups of points that fall onto planes. RANSAC is fast, but is not robust to outliers: (a) it combines into one surface all LiDAR points that satisfy the same planar equation, including disjoint sets of points (\emph{e.g.,}\xspace from trees) at the same height; (b) point clouds can have multiple detected planes (\emph{e.g.,}\xspace building rooftop, ground surface, vehicles \emph{etc.}\xspace), and RANSAC cannot distinguish between these. 
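Before describing how \textsc{ares}\xspace addresses these issues, a minimal sketch of the basic plane-fitting step may help. This is our own illustration of the RANSAC idea, not the PCL implementation the system actually uses: sample three points, form the plane through them, and keep the candidate plane with the most inliers.
\begin{verbatim}
import numpy as np

def ransac_plane(points, n_iters=200, tol=0.1, seed=0):
    """Fit one plane (n . x + d = 0) to an (N, 3) cloud: repeatedly
    sample three points and keep the plane with the most inliers
    (points within tol meters of the plane)."""
    rng = np.random.default_rng(seed)
    best_count, best_plane, best_mask = -1, None, None
    for _ in range(n_iters):
        p0, p1, p2 = points[rng.choice(len(points), 3, replace=False)]
        n = np.cross(p1 - p0, p2 - p0)
        norm = np.linalg.norm(n)
        if norm < 1e-9:               # degenerate (collinear) sample
            continue
        n = n / norm
        d = -n @ p0
        mask = np.abs(points @ n + d) < tol
        if mask.sum() > best_count:
            best_count, best_plane, best_mask = mask.sum(), (n, d), mask
    return best_plane, best_mask

# Hypothetical usage on an (N, 3) numpy array of LiDAR points:
# (n, d), inliers = ransac_plane(cloud_xyz)
# plane_points = cloud_xyz[inliers]
\end{verbatim}
The sketch also makes the two limitations above easy to see: the inlier test accepts any point satisfying the plane equation, regardless of spatial contiguity, and a single call returns only one plane among the many present in a scene.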
To address the first issue, in each plane, \textsc{ares}\xspace uses a statistical outlier filter to remove outlying points that are far from neighboring points in the same plane. Running the statistical outlier filter on every point cloud can be compute-intensive. Therefore, \textsc{ares}\xspace tunes the filter's parameters to find a sweet spot between filtering accuracy and performance. To find the rooftop among the multiple detected planes, \textsc{ares}\xspace uses surface normals to compute surface statistics for each plane (\emph{e.g.,}\xspace plane height, 3D centroid, normal variance \emph{etc.}\xspace). It uses these statistics to find the rooftop in the extracted planes. (As an aside, surface normal computation is computationally intensive but is parallelizable, so we use a GPU to accelerate this, as discussed in \secref{sec:eval}.) Intuitively, the rooftop is a large, uniformly oriented surface (surface normal variance is low) that lies above the ground plane. \textsc{ares}\xspace can eliminate the ground plane by identifying the plane whose distance below the drone matches the drone's height. So, it discards all planes that do not satisfy this definition (this includes planes with high variances and the ground surface). At the end of this step, \textsc{ares}\xspace classifies a single plane as the roof surface. To remove the possibility of false positives, \textsc{ares}\xspace uses majority voting to remove erroneous surface detections; it classifies a plane as the rooftop only if it detects it in multiple consecutive frames. Lastly, even though the outlier filter removes small sets of outliers in planes, it is unable to remove large clusters of points belonging to objects like trees near the building. For this, \textsc{ares}\xspace forms clusters of points based on their spatial relationships such that neighboring points belong to the same cluster. This way, points belonging to different objects form separate clusters. Since the roof is normally a relatively large surface, \textsc{ares}\xspace simply discards smaller clusters. To do this in near real-time, \textsc{ares}\xspace finds the right set of parameters for the clustering algorithm. \parab{Step 2. Estimating the boundary of the building.} In the previous step, \textsc{ares}\xspace obtains parts of the rooftop from each point cloud. In this step, it uses the drone's GPS location to transform each surface to the same coordinate frame of reference, then combines all surfaces into a single point cloud that represents the extracted rooftop of the building. To extract the boundary of the building, it computes the \textit{alpha shape}~\cite{alphashape} of the stitched point cloud. A generalization of a convex hull, an alpha shape is a sequence of piecewise linear curves in 2-D encompassing the point cloud representing the rooftop. This allows \textsc{ares}\xspace to generalize to non-convex shapes as well. Finally, to detect the boundary of multiple buildings, \textsc{ares}\xspace clusters the rooftop point clouds. \figref{fig:boundary} shows results from the building boundary detection algorithm on real data taken from our drone. The green rectangle is the ground truth boundary of the building. The blue points illustrate the drone's recon trajectory and the grey points depict locations where \textsc{ares}\xspace detects a rooftop. Some grey points are beyond the building's boundary because the LiDAR has a wide field of view and can see the rooftop even after it has passed it.
The red points show the GPS-stitched 3D point cloud of the building's rooftop. \subsection{Point-Cloud Compression} \label{s:compression} LiDARs generate voluminous 3D data. For instance, the Ouster OS1-64 LiDAR (which we use in this paper) generates 20 point clouds per second that add up to 480~Mbps, well beyond the capabilities of even future cellular standards. \textsc{ares}\xspace compresses these point clouds to a few Mbps (1.2 to 4.0), using two techniques: viewpoint filtering and octree compression. We describe these in \secref{s:app_compression}. \section{Evaluation} \label{sec:eval} We evaluate (a) \textsc{ares}\xspace's ability to reconstruct 3D models in near real-time, and (b) the accuracy of these 3D models (\secref{s:slam-phase}). We also describe our parameter sensitivity analyses for data collection, and evaluate boundary detection performance. \parab{Implementation.} Not counting external libraries and packages, \textsc{ares}\xspace is 15,500 lines of code (discussion in \secref{s:app_impl}). \parab{Simulations.} We evaluate \textsc{ares}\xspace using a photorealistic simulator, AirSim~\cite{airsim}, that models realistic physical environments using a game engine, then simulates drone flights over these environments and records sensor readings taken from the perspective of the drone. AirSim is widely accepted as a leading simulation platform for autonomous vehicles and drones by manufacturers, academia, and industry. AirSim has a parametrizable model for a LiDAR; we used the parameters for the Ouster OS1-64 in our simulation experiments. \textsc{ares}\xspace generates trajectories for the AirSim drone, then records the data generated by the LiDAR, and processes it to obtain the 3D model. For computing the metrics above, we obtain ground truth from AirSim. To build the ground truth 3D model, we flew a drone equipped with a LiDAR several times over the region of interest in AirSim (using exhaustive flights) and then stitched all the resulting point clouds using ground truth positioning information from AirSim. \parab{Real-world Traces.} In addition, we have collected data from nearly 30 flights (each of about 25 minutes) on an M600Pro drone with an Ouster OS1-64 LiDAR over a commercial complex. For almost all experiments, we evaluated \textsc{ares}\xspace on both \textit{real-world} and simulation-driven traces. Simulation-driven traces give us the flexibility to explore the parameter space more fully (as we show below). However, we use \textit{real-world} traces to validate all these parameter choices and estimate reconstruction accuracy in practice. \parab{Metrics.} In this section, we quantify end-to-end latency, 3D model accuracy and completeness (\secref{s:slam-phase}), and positioning error for a number of experimental scenarios. We also quantify \textsc{ares}\xspace's energy-efficiency (using flight duration as a proxy for drone battery usage) and the computational capabilities of its processing pipeline.
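As a concrete rendering of the accuracy and completeness metrics defined in \secref{s:slam-phase}, the following sketch (ours) computes both as nearest-neighbor RMSEs between the reconstructed and ground-truth point clouds.
\begin{verbatim}
import numpy as np
from scipy.spatial import cKDTree

def accuracy_completeness(model, ground_truth):
    """Accuracy: RMSE of distances from each reconstructed point to
    its nearest ground-truth point. Completeness: the same in the
    other direction. Both inputs are (N, 3) arrays; lower is better."""
    d_m2g, _ = cKDTree(ground_truth).query(model)
    d_g2m, _ = cKDTree(model).query(ground_truth)
    accuracy = float(np.sqrt(np.mean(d_m2g ** 2)))
    completeness = float(np.sqrt(np.mean(d_g2m ** 2)))
    return accuracy, completeness

# Hypothetical usage with two (N, 3) numpy arrays:
# acc, comp = accuracy_completeness(model_xyz, gt_xyz)
\end{verbatim}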
\begin{table}[t] \footnotesize \centering \begin{tabular}{|c|c|c|c|} \hline \begin{tabular}[c]{@{}c@{}}Reconstruction \\ scheme\end{tabular} & \begin{tabular}[c]{@{}c@{}}Accuracy\\ (m)\end{tabular} & \begin{tabular}[c]{@{}c@{}}Comp.\\ (m)\end{tabular} & \begin{tabular}[c]{@{}c@{}}Reconstruction \\ time (s)\end{tabular} \\ \hline \multicolumn{4}{|c|}{ Large building (100~m~x~50~m~x~20~m)} \\ \hline \begin{tabular}[c]{@{}c@{}}Offline-SDF\end{tabular} & $\infty$ & $\infty$ & 3500 \\ \hline \begin{tabular}[c]{@{}c@{}}Offline-TP\end{tabular} & 0.87 & 0.35 & 3900 \\ \hline \begin{tabular}[c]{@{}c@{}}\textsc{ares}\xspace\end{tabular} & \textbf{0.21} & \textbf{0.24} & \textbf{719} \\ \hline \begin{tabular}[c]{@{}c@{}}\textsc{ares}\xspace-raw \end{tabular} & 0.14 & 0.17 & 719 \\ \hline \multicolumn{4}{|c|}{ Small building (50~m~x~50~m~x~20~m)} \\ \hline \begin{tabular}[c]{@{}c@{}}Offline-SDF\end{tabular} & 3.36 & 1.30 & 2400 \\ \hline \begin{tabular}[c]{@{}c@{}}Offline-TP\end{tabular} & 0.62 & 0.43 & 3300 \\ \hline \begin{tabular}[c]{@{}c@{}}\textsc{ares}\xspace\end{tabular} & \textbf{0.25} & \textbf{0.14} & \textbf{656}\\ \hline \begin{tabular}[c]{@{}c@{}}\textsc{ares}\xspace-raw \end{tabular} & 0.21 & 0.09 & 656\\ \hline \end{tabular} \caption{Reconstruction accuracy, completeness (comp.), and time for two buildings using four schemes: (a) offline reconstruction with the shortest-duration flight (Offline-SDF), (b) offline reconstruction with \textsc{ares}\xspace trajectory planning (Offline-TP), (c) \textsc{ares}\xspace, and (d) \textsc{ares}\xspace with raw traces.} \label{tab:e2e_reconstruction_quality_airsim} \end{table} \begin{table}[t] \footnotesize \centering \begin{tabular}{|c|c|c|c|} \hline \begin{tabular}[c]{@{}c@{}}Reconstruction \\ scheme\end{tabular} & \begin{tabular}[c]{@{}c@{}}Bandwidth \\ (Mbps)\end{tabular} & \begin{tabular}[c]{@{}c@{}}Accuracy\\ (m)\end{tabular} & \begin{tabular}[c]{@{}c@{}}Comp.\\ (m)\end{tabular} \\ \hline \begin{tabular}[c]{@{}c@{}}Offline SLAM\end{tabular} & 480 & 2.30 & 1.30 \\ \hline \begin{tabular}[c]{@{}c@{}}Online with GPS\end{tabular} & 3.80 & 1.60 & 0.53 \\ \hline \begin{tabular}[c]{@{}c@{}}\textsc{ares}\xspace\end{tabular} & \textbf{3.80} & \textbf{0.13} & \textbf{0.09} \\ \hline \end{tabular} \caption{Reconstruction quality of a \textit{real-world} 70~x~40~x~20~m building with online and offline approaches, relative to an uncompressed trace.} \label{tab:e2e_reconstruction_quality_real} \end{table} \begin{table}[t] \footnotesize \centering \begin{tabular}{|c|c|c|c|} \hline \begin{tabular}[c]{@{}c@{}}Structure\\ type\end{tabular} & \begin{tabular}[c]{@{}c@{}}Flight \\ duration (s)\end{tabular} & \begin{tabular}[c]{@{}c@{}}Accuracy\\ (m)\end{tabular} & \begin{tabular}[c]{@{}c@{}}Comp.\\ (m)\end{tabular} \\ \hline Blimp & 586 & 0.23 & 0.03 \\ \hline Small rect. & 656 & 0.25 & 0.14 \\ \hline Star-shaped & 686 & 0.37 & 0.12 \\ \hline Large rect.
& 719 & 0.21 & 0.24 \\ \hline Plus-shaped & 1024 & 0.31 & 0.06 \\ \hline H-shaped & 1044 & 0.34 & 0.10 \\ \hline Pentagonal & 1361 & 0.31 & 0.12 \\ \hline \end{tabular} \caption{\textsc{ares}\xspace 3D reconstruction times (recon and model collection) and quality for different structures at low compression.} \label{tab:e2e_reconstruction_vs_building_types} \end{table} \subsection{3D Model Reconstruction} \label{sec:eval:accuracy} \parab{Experiment setup.} To evaluate the end-to-end performance of \textsc{ares}\xspace in building an accurate 3D model in near real-time, we collected and reconstructed the 3D models of two buildings in AirSim: a) a \textit{large} 50~m~x~100~m~x~20~m (L~x~W~x~H) building, and b) a \textit{small} 50~m~x~50~m~x~20~m building. We then compared the reconstruction accuracy of these models with two baseline \textit{offline} approaches (\emph{i.e.,}\xspace approaches that reconstruct the 3D model after the drone lands): a) offline reconstruction\footnote{We assume both offline approaches know the exact location of the building. Without this, SLAM accumulates significant drift and reconstructions are very poor.} with the shortest duration flight (Offline-SDF), and b) offline reconstruction with \textsc{ares}\xspace's trajectory planning (Offline-TP). We calculated the accuracy and completeness of the models generated by these approaches by comparing them against ground truth models generated from AirSim. Lower is better for both accuracy and completeness. For these experiments, \textsc{ares}\xspace uses \textit{compressed point clouds} with bandwidth requirements compatible with LTE speeds today (\emph{i.e.,}\xspace an upload bandwidth of 3.8~Mbps). \textsc{ares}\xspace-raw shows accuracy and completeness if \textsc{ares}\xspace were to use raw point clouds; we study the effect of compression on \textsc{ares}\xspace model reconstruction further in \secref{s:ablation-study}. \parab{\textsc{ares}\xspace builds significantly more accurate models in less time.} As \tabref{tab:e2e_reconstruction_quality_airsim} shows, \textsc{ares}\xspace achieves accuracy and completeness of at most 25~cm for both buildings and reconstructs each building in just 10--12 minutes (the flight duration). For reconstruction quality, \textsc{ares}\xspace does much better than the two baseline approaches for two reasons: a) careful trajectory planning (TP), and b) in-flight re-calibration. Since Offline-SDF does neither, its accuracy and completeness values are very large. To reconstruct the larger (100~x~50~x~20~m) building, the drone must fly longer and so accumulates significant drift (compared to the smaller building), resulting in unusably poor accuracy and completeness (shown by $\infty$). Offline-TP does better because it uses \textsc{ares}\xspace's trajectory planning, but still exhibits worse accuracy and completeness than \textsc{ares}\xspace because it lacks in-flight re-calibration. This shows the importance of a real-time quality feedback signal for reconstruction and highlights why offline reconstruction is not accurate even with uncompressed traces. Even though \textsc{ares}\xspace uses compressed point clouds, with in-flight re-calibration and trajectory planning its models are up to 3.5$\times$ more accurate and complete. If \textsc{ares}\xspace were to use raw traces (\textsc{ares}\xspace-raw) instead, the remaining loss of accuracy and completeness would be attributable to SLAM. Relative to a raw trace, compression accounts for a 4--7~cm difference in accuracy and completeness.
Moreover, \textsc{ares}\xspace reconstructs while the drone is in flight, whereas the two baseline approaches reconstruct offline on uncompressed point clouds, incurring up to 4.6$\times$ higher reconstruction time\footnote{In practice, offline reconstruction will have even higher reconstruction times because we did not include the time to upload data to the cloud.}. To get a visual feel for the degradation resulting from lower accuracy and completeness, consider \figref{fig:3d_model}, which shows the ground-truth model together with the \textsc{ares}\xspace reconstructions. With an accuracy of 0.25~m (using 3.8~Mbps upload bandwidth), the model closely matches the ground truth, but the textured building surface on the right shows some small artifacts. These artifacts arise not because of compression, but because of SLAM imperfections (\secref{s:ablation-study}). \parab{\textsc{ares}\xspace generalizes to different building shapes.} Our results so far, and the descriptions in \secref{sec:design}, have focused on rectangular buildings. \textsc{ares}\xspace can accurately reconstruct a variety of building types, as \tabref{tab:e2e_reconstruction_vs_building_types} shows. For these results, we use \textsc{ares}\xspace's default flight parameters and low compression. Larger buildings (pentagonal, plus-shaped, and H-shaped) have longer flight durations, partly because of their size and because they require two re-calibration steps. Even then, for all buildings, \textsc{ares}\xspace achieves accuracy and completeness of a few tens of centimeters. \parab{\textsc{ares}\xspace generalizes to other types of structures.} To show that \textsc{ares}\xspace can reconstruct other types of 3D structures, \emph{e.g.,}\xspace airplanes, helicopters, \emph{etc.}\xspace, we modeled a real-world blimp~\cite{blimp} (15~m x 60~m x 15~m) in AirSim. \textsc{ares}\xspace encloses such structures within a rectangular solid (\figref{fig:bounding_box_blimp}, \secref{s:data-collection}). In less than 10 minutes (\tabref{tab:e2e_reconstruction_vs_building_types}), \textsc{ares}\xspace reconstructed the blimp with an accuracy of 23~cm and a completeness of just 3~cm. \parab{High accuracy is possible on real-world traces.} Results from our drone flights validate that real-world data can yield comparable performance (\tabref{tab:e2e_reconstruction_quality_real}). In these experiments, we reconstructed the 3D model of a real-world 70~m~x~40~m~x~20~m building. Because we lack a reference ground truth for real-world data, we compare against the 3D model generated from raw, uncompressed traces. Offline reconstruction using SLAM after the drone lands fails completely, for the same reasons mentioned above (\emph{i.e.,}\xspace no trajectory planning and no re-calibration). With GPS, it is possible to do in-flight reconstruction; however, with an accuracy and completeness of 1.60~m and 0.53~m, the resulting 3D models are unusable. With \textsc{ares}\xspace, on the other hand, we can build accurate and complete 3D models, with completeness and accuracy of 9~cm and 13~cm respectively (top-down view of the 3D model in \figref{fig:3d_real_model}).
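The accuracy and completeness numbers above are average nearest-neighbor distances between the reconstructed and reference clouds (\secref{s:slam-phase}); the sketch below shows this computation using a PCL k-d tree. The function name is ours, and the symmetric nearest-neighbor reading of the metric definitions is our illustrative interpretation, not the exact evaluation harness.
\begin{verbatim}
// Sketch of the accuracy/completeness metrics: the average distance
// from each point in one cloud to its nearest neighbor in the other.
// Accuracy uses (reconstruction -> ground truth); completeness swaps
// the arguments.
#include <pcl/point_cloud.h>
#include <pcl/point_types.h>
#include <pcl/kdtree/kdtree_flann.h>
#include <cmath>
#include <vector>

double meanNearestNeighborDistance(
    const pcl::PointCloud<pcl::PointXYZ>::Ptr& from,
    const pcl::PointCloud<pcl::PointXYZ>::Ptr& to) {
  if (from->points.empty()) return 0.0;
  pcl::KdTreeFLANN<pcl::PointXYZ> tree;
  tree.setInputCloud(to);
  std::vector<int> index(1);
  std::vector<float> sqDist(1);
  double sum = 0.0;
  for (const auto& p : from->points) {
    tree.nearestKSearch(p, 1, index, sqDist);
    sum += std::sqrt(sqDist[0]);
  }
  return sum / from->points.size();
}

// double accuracy     = meanNearestNeighborDistance(model, truth);
// double completeness = meanNearestNeighborDistance(truth, model);
\end{verbatim}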
\begin{table}[t] \footnotesize \centering \begin{tabular}{|c|c|c|c|} \hline \begin{tabular}[c]{@{}c@{}}Compression scheme\end{tabular} & \begin{tabular}[c]{@{}c@{}}Low \end{tabular} & \begin{tabular}[c]{@{}c@{}}Medium \end{tabular} & \begin{tabular}[c]{@{}c@{}}High \end{tabular} \\ \hline \hline \begin{tabular}[c]{@{}c@{}}Upload bandwidth (Mbps)\end{tabular} & 3.80 & 2.50 & 1.27 \\ \hline \begin{tabular}[c]{@{}c@{}}Compression time (ms)\end{tabular} & 15.5 & 14.6 & 13.7 \\ \hline \begin{tabular}[c]{@{}c@{}}Network latency (ms)\end{tabular} & 62.9 & 65.3 & 65.3 \\ \hline \begin{tabular}[c]{@{}c@{}}Extraction time (ms)\end{tabular} & 5.05 & 3.07 & 3.03 \\ \hline \begin{tabular}[c]{@{}c@{}}SLAM time (ms)\end{tabular} & 34.5 & 30.8 & 23.9 \\ \hline \begin{tabular}[c]{@{}c@{}}Total processing time (ms)\end{tabular} & 117 & 113 & 106 \\ \hline \end{tabular} \caption{\textsc{ares}\xspace enables real-time 3D reconstruction over LTE. Each row shows \textit{per-frame} latency for that operation.} \label{tab:real_time_experiments} \end{table} \subsection{Performance} \label{s:other-meas-sysn} \parab{Real-time 3D reconstruction over LTE is feasible.} To validate that \textsc{ares}\xspace can collect a 3D model end-to-end in near real-time, we used our implementation to conduct an experiment in which we replayed 15 minutes' worth of \textit{real-world data} on the drone compute (a Jetson TX2), which compressed and streamed point clouds over an LTE connection to a 16-core AWS VM with 64~GB RAM and an Nvidia T4 GPU. (Our experiment only ran model collection; recon also runs in real time, as discussed below.) To compress point clouds, we used three levels of compression (\textit{low}, \textit{medium}, and \textit{high}), corresponding to the following combinations of octree resolution and point resolution (\secref{s:compression}): $(0.25, 0.10)$, $(0.25, 0.25)$ and $(0.50, 0.50)$ (the effect of compression is studied in \secref{s:ablation-study}). In our experiments with our drone, we have found achievable LTE throughput to range from 1--4~Mbps; we chose these compression levels to correspond to this range. (In \secref{s:ablation-study}, we discuss how 5G deployments would alter these conclusions.) At all three compression levels, \textsc{ares}\xspace was able to stream point clouds in real time (\tabref{tab:real_time_experiments}), and the total end-to-end processing time \textit{per frame} is about 110~ms, of which nearly 65~ms is network latency. Thus, \textsc{ares}\xspace builds the 3D model while the drone is in flight, incorporates a frame within about 100~ms of receiving it, and can make available \textit{a complete 3D model of a building about 100~ms} after receiving the last frame!
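Note that the per-frame stages in \tabref{tab:real_time_experiments} overlap: while SLAM processes frame $i$, later stages handle frame $i-1$, so steady-state throughput is governed by the slowest stage rather than by the $\sim$110~ms per-frame sum. The sketch below illustrates this staged hand-off; the stage bodies and types are placeholders, not our pipeline code.
\begin{verbatim}
// Illustrative two-stage hand-off showing why per-frame latency can be
// ~110 ms while throughput still tracks the slowest stage. The stage
// bodies are placeholders, not the ares pipeline implementation.
#include <condition_variable>
#include <mutex>
#include <queue>
#include <thread>

template <typename T>
class Channel {  // minimal thread-safe queue between stages
 public:
  void push(T v) {
    { std::lock_guard<std::mutex> lk(m_); q_.push(std::move(v)); }
    cv_.notify_one();
  }
  T pop() {
    std::unique_lock<std::mutex> lk(m_);
    cv_.wait(lk, [this] { return !q_.empty(); });
    T v = std::move(q_.front());
    q_.pop();
    return v;
  }
 private:
  std::mutex m_;
  std::condition_variable cv_;
  std::queue<T> q_;
};

struct Frame {};  // decompressed point cloud (placeholder)
struct Pose {};   // SLAM pose estimate (placeholder)

int main() {
  Channel<Frame> frames;  // filled by the network receiver at ~20 fps
  Channel<Pose> poses;
  std::thread slam([&] {   // stage 1: LiDAR SLAM, ~35 ms/frame
    for (;;) { frames.pop(); poses.push(Pose{}); }
  });
  std::thread model([&] {  // stage 2: merge into 3D model, ~10 ms/frame
    for (;;) { poses.pop(); }
  });
  slam.join();  // runs until the flight ends (termination elided)
  model.join();
}
\end{verbatim}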
\begin{table}[t] \footnotesize \centering \begin{tabular}{|c|c|c|} \hline \textsc{ares}\xspace component & Sub-component & {\begin{tabular}[c]{@{}c@{}}Per-frame\\execution time\\ (ms)\end{tabular}} \\ \hline \multirow{8}{*}{\begin{tabular}[c]{@{}c@{}}Recon phase\end{tabular}} & 3D frame compression & 13.0 $\pm 1.3$ \\ \cline {2-3} & 3D frame extraction & 3.0 $\pm 0.3$ \\ \cline {2-3} & GPU normal estimation & 76.0 $\pm 82$ \\ \cline {2-3} & RANSAC plane-fitting & 5.0 $\pm 9.0$ \\ \cline {2-3} & Outlier removal & 0.2 $\pm 0.3$ \\ \cline {2-3} & Rooftop detection & 0.005 \\ \cline {2-3} & Rooftop extraction & 6.0 $\pm 5.0$ \\ \cline {2-3} & Rooftop stitching & 3.0 $\pm 2.0$ \\ \cline {2-3} & Total time & 100 $\pm 90.0$ \\ \hline Model collection & LiDAR SLAM & 37.0 $\pm 3.0$ \\ \hline \multicolumn{2}{|c|}{3D Reconstruction} & 10.3 $\pm 2.0$ \\ \hline \end{tabular} \caption{Processing times for \textsc{ares}\xspace components.} \label{tab:system_execution_time_eval} \end{table} \parab{\textsc{ares}\xspace supports full frame rate processing.} We profiled the execution time of each component of \textsc{ares}\xspace on a 15-minute \textit{real-world} trace. Point cloud compression executes on the drone; the other components run on the AWS VM described above. We use the GPU to offload the computation of surface normals for building detection. During recon, point cloud compression takes 13~ms per frame (\tabref{tab:system_execution_time_eval}). Extracting the building geometry requires 100~ms per frame; at these rates, we can sustain about 10~fps, so with a 20~fps LiDAR we process roughly every other frame. Despite this, our building detector is quite accurate (\secref{sec:eval:building}). During model collection, SLAM requires 37~ms per frame, and 3D reconstruction requires about 10~ms (\tabref{tab:system_execution_time_eval}). The former uses 8 cores, so we have been able to run these two components in a pipeline to sustain 20~fps. Thus, a moderately provisioned cloud VM suffices to run \textsc{ares}\xspace at full frame rate, with an end-to-end compute latency of about 100~ms for reconnaissance and 50~ms for model collection. \subsection{Ablation Studies} \label{s:ablation-study} In this section, we explore how \textsc{ares}\xspace's techniques contribute to 3D reconstruction quality and performance. \begin{table}[t] \footnotesize \centering \begin{tabular}{|c|c|c|c|c|c|c|} \hline \multirow{2}{*}{\begin{tabular}[c]{@{}c@{}}Structure \\type\end{tabular}} & \multicolumn{2}{c|}{Flight dur. (s)} & \multicolumn{2}{c|}{Accuracy (m)} & \multicolumn{2}{c|}{Comp. (m)} \\ \cline{2-7} & \begin{tabular}[c]{@{}c@{}}w/o\end{tabular} & \begin{tabular}[c]{@{}c@{}}w\end{tabular} & \begin{tabular}[c]{@{}c@{}}w/o\end{tabular} & \begin{tabular}[c]{@{}c@{}}w\end{tabular} & \begin{tabular}[c]{@{}c@{}}w/o\end{tabular} & \begin{tabular}[c]{@{}c@{}}w\end{tabular} \\ \hline Star-shaped & 613 & 686 & 1.05 & \textbf{0.37} & 0.39 & \textbf{0.12} \\ \hline Small rect. & 573 & 656 & 0.63 & \textbf{0.25} & 0.40 & \textbf{0.14} \\ \hline Large rect. & 694 & 719 & 0.96 & \textbf{0.21} & 0.39 & \textbf{0.24}\\ \hline Plus-shaped & 766 & 1024 & 0.51 & \textbf{0.31} & 0.08 & \textbf{0.06} \\ \hline H-shaped & 866 & 1044 & 1.10 & \textbf{0.34} & 0.27 & \textbf{0.10} \\ \hline Pentagonal & 1062 & 1361 & 1.47 & \textbf{0.31} & 0.42 & \textbf{0.12} \\ \hline \end{tabular} \caption{Flight duration (dur.)
and reconstruction quality for buildings at low compression with (w) and without (w/o) re-calibration.} \label{tab:reconstruction_with_recalibration} \end{table} \begin{table}[t] \centering \footnotesize \begin{tabular}{|c|c|c|c|c|c|} \hline \multicolumn{2}{|c|}{Velocity (m/s)} & 0.5 & 1.0 & 2.0 & \begin{tabular}[c]{@{}c@{}}Exhaustive \\ @~1~m/s\end{tabular} \\ \hline \multicolumn{2}{|c|}{Height (m)} & 30 & 40 & 50 & 40 \\ \hline \multirow{4}{*}{\begin{tabular}[c]{@{}c@{}}Flight\\ duration\\ (s)\end{tabular}} & Recon & 136 & 136 & 136 & - \\ \cline{2-6} & Model coll. & 673 & 343 & 72 & 1520 \\ \cline{2-6} & Re-calib. & 476 & 240 & 272 & - \\ \cline{2-6} & Total time & 1285 & 719 & 480 & 1520 \\ \hline \multicolumn{2}{|c|}{Accuracy (m)} & 0.43 & 0.14 & 0.91 & $\infty$ \\ \hline \multicolumn{2}{|c|}{Completeness (m)} & 0.34 & 0.17 & 0.45 & $\infty$ \\ \hline \end{tabular} \caption{\textsc{ares}\xspace's flight duration for various parameter choices.} \label{tab:battery_usage} \end{table} \parab{Re-calibration helps reduce error.} To show the effect of in-flight re-calibration, we built online 3D models of the six building shapes evaluated above using \textsc{ares}\xspace with (w) and without (w/o) re-calibration in AirSim. In these experiments, we evaluate flight duration and reconstruction quality at low compression (3.8~Mbps upload bandwidth) using the accuracy and completeness metrics. \tabref{tab:reconstruction_with_recalibration} shows that, on average, at the expense of only 18\% (150~seconds) longer flights, re-calibration improves \textsc{ares}\xspace's accuracy by 65\% (65~cm) and completeness by 55\% (20~cm). Larger buildings (plus-shaped, H-shaped, and pentagonal) require longer aerial flights, which accumulate higher drift. This results in relatively more re-calibration flights and hence longer flight durations. Even so, \textsc{ares}\xspace is able to reconstruct these buildings accurately, demonstrating the importance of re-calibration. \parab{Short flight durations can produce accurate models.} \textsc{ares}\xspace strives to reduce drone battery depletion by generating short-duration flights without sacrificing accuracy and completeness. To show that \textsc{ares}\xspace's defaults of 1~m/s speed and 40~m height represent the best point in this tradeoff space, we compare them to a lower, slower flight (30~m, 0.5~m/s) and a faster, higher flight (50~m, 2~m/s). \tabref{tab:battery_usage} shows that, on the large building, the lower, slower flight has a longer trajectory, resulting in more re-calibrations. The resulting model has worse accuracy and completeness; re-calibration can limit drift error, but not reverse it. A faster, higher flight has a slightly shorter trajectory, but the resulting model's accuracy is very poor, because there is less overlap between point clouds at higher speeds (\secref{s:slam-phase}). Finally, \tabref{tab:battery_usage} also shows the benefits of a recon flight: an exhaustive flight that uses the model collection parameters and does not perform recon is more than 2$\times$ longer than \textsc{ares}\xspace's entire flight (and accumulates significant drift, resulting in poor quality 3D models). Results on the small building are qualitatively similar (omitted for brevity). \parab{\textsc{ares}\xspace builds accurate models at low bandwidths.} We explore the impact of compression on accuracy and completeness using (a) a synthetic building in AirSim and (b) \textit{real-world} traces.
In addition to the three compression levels discussed earlier, we compute accuracy and completeness for (a) raw point clouds, (b) viewpoint filtering only, and (c) lossless compression. The first two serve as reference points, while the third explores reconstruction performance under the higher bandwidths that would be available, for example, in 5G deployments. As \tabref{tab:reconstruction_vs_compression} shows, viewpoint filtering alone achieves roughly 10$\times$ compression, and low compression is an order of magnitude more bandwidth-efficient beyond this. Despite this, \textsc{ares}\xspace achieves high quality reconstruction. For the AirSim building, the raw point cloud yields an accuracy of 0.21~m and a completeness of 0.09~m, attributable entirely to SLAM error. Viewpoint filtering does not degrade accuracy since it only omits zero returns. Low compression, with a bandwidth of 3.8~Mbps (easily achievable over LTE and over 100$\times$ more compact than the raw LiDAR output), adds only 4~cm to accuracy and 5~cm to completeness. Medium and high compression have significantly poorer accuracy and completeness. Similar results hold for the other AirSim building, which we omit for brevity. Results from our drone flights validate that \textit{real-world} data of a large building (dimensions in \tabref{tab:reconstruction_vs_compression}) yields comparable performance (\tabref{tab:reconstruction_vs_compression}). Since we lack a reference ground truth for real-world data, we use the 3D model generated from raw traces. With real-world traces, we can build accurate and complete 3D models whose completeness and accuracy are within 9--13~cm of the uncompressed model for low compression, and about 16--23~cm for medium compression. This suggests that highly compressed point clouds do not significantly impact accuracy and completeness.
\begin{table}[t] \footnotesize \centering \begin{tabular}{|c|c|c|c|} \hline \begin{tabular}[c]{@{}c@{}}Compression\\ profile\end{tabular} & \begin{tabular}[c]{@{}c@{}}Bandwidth\\ (Mbps)\end{tabular} & \begin{tabular}[c]{@{}c@{}}Accuracy\\ (m)\end{tabular} & \begin{tabular}[c]{@{}c@{}}Completeness\\ (m)\end{tabular} \\ \hline \multicolumn{4}{|c|}{\textit{Real-world} 70~m~x~40~m~x~20~m large building} \\ \hline \begin{tabular}[c]{@{}c@{}}Raw\end{tabular} & 480.0 & 0.00 & 0.00 \\ \hline \begin{tabular}[c]{@{}c@{}}View-point\end{tabular} & 42.7 & 0.00 & 0.00 \\ \hline \begin{tabular}[c]{@{}c@{}}Lossless\end{tabular} & 7.86 & 0.06 & 0.07 \\ \hline \begin{tabular}[c]{@{}c@{}}Low\end{tabular} & 3.80 & 0.13 & 0.09 \\ \hline \begin{tabular}[c]{@{}c@{}}Medium\end{tabular} & 2.50 & 0.23 & 0.16 \\ \hline \begin{tabular}[c]{@{}c@{}}High\end{tabular} & 1.27 & 0.28 & 0.29 \\ \hline \multicolumn{4}{|c|}{\textit{AirSim} 50~m~x~50~m~x~20~m small building} \\ \hline \begin{tabular}[c]{@{}c@{}}Raw\end{tabular} & 480.0 & 0.21 & 0.09 \\ \hline \begin{tabular}[c]{@{}c@{}}View-point\end{tabular} & 42.7 & 0.21 & 0.09 \\ \hline \begin{tabular}[c]{@{}c@{}}Lossless\end{tabular} & 7.86 & 0.22 & 0.10 \\ \hline \begin{tabular}[c]{@{}c@{}}Low\end{tabular} & 3.80 & 0.25 & 0.14 \\ \hline \begin{tabular}[c]{@{}c@{}}Medium\end{tabular} & 2.50 & 0.66 & 0.21 \\ \hline \begin{tabular}[c]{@{}c@{}}High\end{tabular} & 1.27 & 0.73 & 0.24 \\ \hline \end{tabular} \caption{The impact of compression on accuracy/completeness.} \label{tab:reconstruction_vs_compression} \end{table} \parab{Higher bandwidths provide centimeter-level improvements.} The emergence of 5G promises larger upload bandwidths. However, as \tabref{tab:reconstruction_vs_compression} illustrates, the room for improvement in accuracy and completeness is small. For the AirSim building, the gap in accuracy (completeness) between raw point clouds and low compression is only 4~cm (5~cm); for the \textit{real-world} building, low compression is within 13~cm (9~cm) of the raw trace and within 7~cm (2~cm) of lossless compression. \textit{Lossless} point cloud compression, which requires 7.86~Mbps of bandwidth, comes within 1~cm of the raw point cloud's accuracy and completeness for the AirSim building and within 7~cm for the real-world building. \parab{Lower target density worsens completeness.} To demonstrate that users can use the target density tuning knob to obtain less complete models more quickly, we conducted an experiment with \textsc{ares}\xspace (with re-calibration) at two different densities: 7.5 points per m$^2$ and 1 point per m$^2$. For the former, accuracy and completeness were 0.21~m and 0.14~m; for the latter, 0.68~m and 0.17~m respectively. The lower density flight took 20\% less time. As expected, completeness is worse at lower target densities. Accuracy is also worse at the lower density because two adjacent scan lines have smaller overlap. Put another way, a side benefit of specifying a higher density is higher accuracy from scan line overlap. \subsection{Data Collection} \label{s:data-collection} \textsc{ares}\xspace relies on a careful parameter sensitivity analysis (in both simulation and on \textit{real-world traces}) to determine model collection flight parameters: speed, height, and orientation (\secref{s:slam-phase}). We evaluated SLAM error for every combination of drone speed (0.5~m/s to 3~m/s), distance from the building (10~m to 40~m), and orientation (parallel to perpendicular). We present a subset of these results for space reasons. For these experiments, we use the trajectory described in \figref{fig:mod_total}.
We report the average numbers for each experiment. \begin{figure}[t] \begin{minipage}[b]{0.45\columnwidth} \centering \includegraphics[width=0.99\columnwidth]{figs/fig_parallel_vs_other_orientations.pdf} \caption{SLAM errors for LiDAR orientations.} \label{fig:parallel_vs_other_orientations} \end{minipage} \hspace{0.5cm} \begin{minipage}[b]{0.45\columnwidth} \centering \includegraphics[width=0.99\columnwidth]{figs/fig_parallel_slam_param_study.pdf} \caption{SLAM error for different speeds and building distances.} \label{fig:parallel_slam_vs_height_speed} \end{minipage} \end{figure} \parab{Best choice of orientation is parallel.} \figref{fig:parallel_vs_other_orientations} plots SLAM error as a function of LiDAR orientation (\figref{fig:lidar_orientation}) with respect to the direction of motion. A parallel orientation has the lowest SLAM error (in \figref{fig:parallel_vs_other_orientations}, yaw 0$\degree$ corresponds to parallel and yaw 90$\degree$ to perpendicular), because it has the highest overlap between successive frames; as yaw increases, overlap decreases, resulting in higher SLAM error (\secref{s:slam-phase}). \parab{Best choice of distance is 20~m.} \figref{fig:parallel_slam_vs_height_speed} plots SLAM error as a function of the drone's distance from the building surface for the \textit{parallel} orientation of the LiDAR. Error increases slowly with distance; beyond 20~m from the building, the error exceeds 1~m. Point densities decrease with distance and affect SLAM's ability to track features across frames (\secref{s:slam-phase}). Rather than fly lower, \textsc{ares}\xspace operates at a 20~m distance (or a 40~m height, since in our experiments buildings are 20~m tall) to reduce flight duration. \parab{Best choice of speed is 1~m/s.} Speed impacts SLAM positioning error significantly (\figref{fig:parallel_slam_vs_height_speed}). Beyond 1~m/s, SLAM cannot track frames accurately because of lower overlap between successive frames (\secref{s:slam-phase}). Below 1~m/s, \emph{i.e.,}\xspace at 0.5~m/s, the flight duration is twice that at 1~m/s, which results in accumulated drift error. To achieve accurate reconstruction, \textsc{ares}\xspace flies the drone at 1~m/s. \parab{Real-world traces confirm these observations.} \textit{Real-world traces} (\secref{s:app_data_coll}) validate these parameter choices (\tabref{tab:real_speed_vs_rmse}, \tabref{tab:real_height_vs_rmse}): fly slowly, close to the building, and in a parallel orientation. \subsection{Boundary Detection} \label{sec:eval:building} \parab{Methodology and metrics.} We use two metrics for building boundary estimation: accuracy and completeness. Accuracy is the average (2-D) distance between each point (quantized to 0.1~m) on the predicted boundary and the nearest point on the actual building boundary. Completeness is the average distance between each point on the actual boundary and the nearest point on \textsc{ares}\xspace's predicted boundary. Lower values of accuracy and completeness are better. We use both real-world traces collected from our \textsc{ares}\xspace prototype and synthetic traces from AirSim. To compute ground truth for real-world traces, we pinpointed the building's boundary on Google Maps~\cite{google_maps}. For AirSim, we collected the ground truth from the Unreal engine.
\parab{Boundary detection can run at full frame rate.} \tabref{tab:system_execution_time_eval} shows the time taken by each component of boundary detection, on our \textit{real-world} traces, on a single core of a GPU-equipped desktop. The average processing time per point cloud is 100~ms, dominated by GPU-accelerated surface normal estimation (76~ms). This can sustain 10~fps. However, our LiDAR generates 20~fps, so \textsc{ares}\xspace uses every other frame, without sacrificing accuracy. \parab{Boundary detection is accurate.} To evaluate the accuracy of \textsc{ares}\xspace's boundary extraction, we experimented with 3 \textit{real-world traces} collected over a 70~m x 60~m x 20~m building. On these traces, \textsc{ares}\xspace's average accuracy is 1.42~m and its completeness is 1.25~m, even at the highest compression and when it samples every other frame. \parab{Other results.} We evaluated the robustness of \textsc{ares}\xspace's boundary detection algorithm to different building shapes (\tabref{tab:recon_buildings}), point cloud compression (\tabref{tab:recon_compression}), and point cloud sub-sampling. Furthermore, we performed an extensive parameter study to find the right recon flight parameters, \emph{i.e.,}\xspace speed (\figref{fig:recon_speed}), height (\figref{fig:recon_height}), and orientation. For brevity, we include these results and discussions in the appendix (\secref{s:app_recon_flight}). We summarize two results here. First, recon flights can be short, because boundary detection is insensitive to point density and overlap: the recon flight can use a perpendicular orientation and fly 60~m from the building at 4~m/s. Second, boundary detection tolerates sub-sampling down to one point cloud per second.
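For concreteness, the sketch below condenses the recon-phase geometry extraction evaluated above (normal estimation, normal-consistent RANSAC plane fitting, and a boundary derived from the rooftop plane) using PCL primitives. The thresholds, the convex-hull boundary step, and the omission of rooftop selection among candidate planes (\emph{e.g.,}\xspace by height) are illustrative assumptions, not our tuned implementation.
\begin{verbatim}
// Sketch of recon-phase building-geometry extraction: estimate surface
// normals, fit a plane with RANSAC constrained by normal consistency,
// keep the plane's inliers as the rooftop, and take their 2D hull as
// the building boundary. Thresholds are illustrative.
#include <pcl/point_cloud.h>
#include <pcl/point_types.h>
#include <pcl/ModelCoefficients.h>
#include <pcl/PointIndices.h>
#include <pcl/features/normal_3d.h>
#include <pcl/filters/extract_indices.h>
#include <pcl/search/kdtree.h>
#include <pcl/segmentation/sac_segmentation.h>
#include <pcl/surface/convex_hull.h>

using Cloud = pcl::PointCloud<pcl::PointXYZ>;

Cloud::Ptr extractRooftopBoundary(const Cloud::Ptr& cloud) {
  // Surface normals (GPU-accelerated in our pipeline; CPU here).
  pcl::search::KdTree<pcl::PointXYZ>::Ptr tree(
      new pcl::search::KdTree<pcl::PointXYZ>);
  pcl::PointCloud<pcl::Normal>::Ptr normals(
      new pcl::PointCloud<pcl::Normal>);
  pcl::NormalEstimation<pcl::PointXYZ, pcl::Normal> ne;
  ne.setInputCloud(cloud);
  ne.setSearchMethod(tree);
  ne.setKSearch(30);
  ne.compute(*normals);

  // RANSAC plane fit that also requires normal consistency.
  pcl::SACSegmentationFromNormals<pcl::PointXYZ, pcl::Normal> seg;
  seg.setModelType(pcl::SACMODEL_NORMAL_PLANE);
  seg.setMethodType(pcl::SAC_RANSAC);
  seg.setNormalDistanceWeight(0.1);
  seg.setDistanceThreshold(0.3);
  seg.setInputCloud(cloud);
  seg.setInputNormals(normals);
  pcl::PointIndices::Ptr inliers(new pcl::PointIndices);
  pcl::ModelCoefficients::Ptr coeff(new pcl::ModelCoefficients);
  seg.segment(*inliers, *coeff);

  // Rooftop points, then their 2D convex hull as the boundary.
  Cloud::Ptr roof(new Cloud), boundary(new Cloud);
  pcl::ExtractIndices<pcl::PointXYZ> extract;
  extract.setInputCloud(cloud);
  extract.setIndices(inliers);
  extract.filter(*roof);

  pcl::ConvexHull<pcl::PointXYZ> hull;
  hull.setInputCloud(roof);
  hull.setDimension(2);
  hull.reconstruct(*boundary);
  return boundary;
}
\end{verbatim}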
\section{Introduction} \label{sec:intro} In the coming years, drones will revolutionize several industry segments, ranging from aerial photography to mapping and three-dimensional (3D) reconstruction. Recent estimates put the total market for these drone-based services at 63 billion US dollars by 2025~\cite{drone-market-1}, with 3D reconstruction, the task of generating a digital \textit{3D model} of the environment, accounting for nearly a third of the drone services market~\cite{drone-market-2}. In this paper, we focus on reconstructing the 3D model of large structures, specifically buildings\footnote{Though our evaluations mostly focus on buildings, we show (\secref{sec:eval}) that \textsc{ares}\xspace generalizes to other large structures, \emph{e.g.,}\xspace blimps.}.
A 3D model (\secref{sec:motivation}) captures the three-dimensional geometry of a structure such as a building. Different representations of this geometry exist, the finest of which is the 3D \textit{point cloud}: a collection of points on the surface of an object or structure, where each point has a 3D position and other attributes. 3D reconstruction is useful for a plethora of applications. For instance, 3D models can be used for animations in films and video games, for preserving historical and tourist sites, for archaeology and city planning, and for capturing buildings or rooftops for repair and solar installation. More recently, 3D models have been used as inputs to immersive virtual reality (VR) applications. The dominant approach (\secref{sec:related}) to model collection is \textit{photogrammetry}, a technique which infers 3D models from a sequence of 2D camera images (and the positions associated with these images), captured either using drone-mounted cameras~\cite{7139681, 7989530, 8124461, 8628990} or by humans using ordinary mobile devices~\cite{company_hover}. With the increasing commoditization of LiDAR, 3D model reconstruction can also use drone-mounted LiDARs; prior work~\cite{uav_lidar_1,uav_lidar_2} uses \textit{remote sensing} technologies: specialized unmanned aerial vehicles (UAVs) with long-range LiDAR sensors and expensive GPS equipment. Both of these approaches are compute-intensive, and both compute the 3D model \textit{offline}, \emph{i.e.,}\xspace after the drone has landed. Although most prior research has explored such \textit{offline} reconstruction~\cite{uav_lidar_1,uav_lidar_2,7139681, 7989530, 8124461, 8628990} (reporting accuracies in the centimeter to sub-meter range), industry is starting to treat reconstruction \textit{time} as an important market differentiator. Startups promise to deliver models in hours or within a day~\cite{company_drone_deploy} after drone data collection. This trend has inspired newer, more time-sensitive applications of 3D modeling, such as post-disaster reconstruction for search and rescue missions~\cite{drone_disaster_relief_1, drone_disaster_relief_2, drone_disaster_relief_3, drone_disaster_relief_4, drone_disaster_relief_5}, post-flight integrity checks of airplanes~\cite{company_austrian_airline}, construction site monitoring~\cite{company_drone_deploy}, mapping mines and tunnels~\cite{hovermap, Prometheus}, and documenting interior office spaces~\cite{company_kaarta}. Only one company we know of promises \textit{real-time} reconstruction, and this involves a human operator holding a hand-held kit with a stereo camera to reconstruct interior spaces~\cite{company_kaarta}. \parab{Our focus.} Motivated by these trends, improvements in 3D sensing, and the increasing availability of off-the-shelf drones, we focus on \textit{sub-meter accurate, autonomous, real-time reconstruction of large buildings} using a drone-mounted LiDAR. Unlike the hand-held kit developed by~\cite{company_kaarta}, which requires a human in the loop to navigate the interior space, we aim to develop a completely automated solution given only an approximate location of the building(s) to reconstruct. To achieve these three requirements, \emph{i.e.,}\xspace accuracy, autonomy, and real-time 3D reconstruction, our approach devises algorithms to reconnoiter the specified area, then designs a trajectory to obtain the 3D model. These tasks are challenging for the following reasons.
To achieve sub-meter accuracy, we cannot position 3D point clouds using GPS (it is too inaccurate), so we must use Simultaneous Localization and Mapping (SLAM~\cite{slam_part1}) techniques\footnote{Two startups we know of~\cite{hovermap, Prometheus} also use SLAM for \textit{offline} drone-based reconstruction (\secref{sec:related}).}. However, using SLAM on a drone is particularly challenging because: a) relative to vehicle SLAM, SLAM algorithms on a drone have access to 10$\times$ sparser point clouds; b) at the same time, model collection must be relatively fast because of a drone's limited battery life; and c) both of these exacerbate the drift error accumulation inherent to SLAM algorithms. To achieve autonomous reconstruction, identifying the exact location and boundary of the building is challenging due to the large variety of building shapes and nearby objects such as trees. Lastly, to enable real-time reconstruction, processing point clouds can require compute resources well beyond those on today's drones. \parab{Contributions.} To address these challenges, this paper develops a system called \textsc{ares}\xspace, architected as follows. Given a rectangular region that encompasses a building, a cloud-based service generates energy-efficient drone trajectories to find the building and collect its 3D model. As the drone flies, it streams compressed LiDAR data over LTE; the cloud service runs SLAM in near real-time, adjusts the trajectory to minimize drift, and reconstructs the 3D model on the fly (\secref{sec:design}). To minimize drone battery usage, \textsc{ares}\xspace conducts a fast, energy-efficient \textit{reconnaissance} flight to determine the boundaries of the building, then runs a slower \textit{model collection} flight within the building's boundaries that carefully balances drone battery usage while ensuring high reconstruction accuracy. To achieve this, \textsc{ares}\xspace makes three contributions. \textsc{ares}\xspace's first contribution is the design of \textit{model collection trajectories} that navigate the competing constraints, \emph{i.e.,}\xspace accuracy and battery life (\secref{s:slam-phase}). Model collection uses SLAM, and SLAM error is sensitive to how the LiDAR is mounted, and how fast and at what height the drone flies. Faster, higher flights use less of the drone's battery, but can incur high SLAM error; the converse is true of slower, lower flights. \textsc{ares}\xspace finds a sweet spot in this trade-off space to balance its accuracy goals with drone battery usage. Even so, SLAM can incur \textit{drift} on longer flights. \textsc{ares}\xspace needs to detect excessive drift, and correct for it. It uses a novel algorithm that tracks consistency between GPS traces and SLAM positions to detect excessive drift, then incorporates a \textit{re-calibration} step to correct for it. Even with \textsc{ares}\xspace's efficient \textit{model collection trajectories}, scans over the entire rectangular area can exhaust a drone's battery. To this end, \textsc{ares}\xspace's second contribution is a robust and efficient \textit{building geometry} extraction algorithm (\secref{s:area-inter-reconn}) that helps focus \textit{model collection} efforts only on the building or region of interest. This algorithm, which runs during the reconnaissance flight, works well even with a fast, high flight that can minimize drone flight duration (and hence battery usage).
During the reconnaissance flight, this algorithm extracts the building geometry without constructing the 3D model, since constructing the model from the sparse sensor data of fast flights can introduce significant error. Instead, it relies on detecting planar surfaces by using the consistency of surface normals across points on a plane, then estimates building height and boundary from the plane forming the rooftop. \textsc{ares}\xspace then uses this boundary to plan the \textit{model collection trajectories} described above. Its third contribution is the design of a software pipeline, leveraging GPU acceleration, that ensures that these algorithms can run on the cloud in near real-time (\emph{i.e.,}\xspace at the frame rate of the LiDAR) and with minimal end-to-end processing delays (\secref{sec:eval}). Experiments (\secref{sec:eval}) on a photo-realistic drone simulator (AirSim~\cite{airsim}), validated with data from real-world drone flights, demonstrate that \textsc{ares}\xspace can achieve sub-meter reconstruction accuracy even after compressing the raw sensor data by almost two orders of magnitude. A complete \textsc{ares}\xspace implementation is able to stream compressed point clouds over LTE and deliver a 3D model about 100~ms after flight completion. \textsc{ares}\xspace's choice of trajectories drains the battery least while achieving the accuracy target, and its pipeline can process frames at 20~fps while incurring a processing latency of less than 100~ms. To our knowledge, \textsc{ares}\xspace is the first to demonstrate on-line, cloud-based, autonomous, sub-meter 3D model reconstruction in near real-time. \section{Introduction} \label{sec:intro} \parab{Drone-based 3D reconstruction.} The last few years have seen impressive advances in the commoditization of drones. Today, drones can be equipped with on-board compute, a cellular (LTE) radio, and sophisticated sensors (\emph{e.g.,}\xspace cameras, stereo cameras, and LiDAR). Given this, in the coming years, drones will likely revolutionize aerial photography, mapping, and three-dimensional (3D) reconstruction. Recent estimates put the total market for these drone-based services at 63 billion US dollars by 2025~\cite{drone-market-1}, with \textit{3D reconstruction}, the task of generating a digital \textit{3D model} of the environment, accounting for nearly a third of the drone services market~\cite{drone-market-2}. \parab{What is a 3D model?} The term \textit{3D model} covers a wide range of geometric representations of the surfaces of objects, from coarse-grained approximations (cylinders, cubes, intersections of planes) to more fine-grained representations such as \textit{meshes} (small-scale surface tessellations that capture structural variations). In this paper, we seek to extract a fine-grained \textit{point cloud} of a large structure (\emph{e.g.,}\xspace a building, blimp, or airplane), which consists of dense \textit{points} on the surface of the structure. Each point has an associated 3D position, together with other attributes (depending on the sensor used to generate the point cloud). The point-cloud-based 3D model can generate all other representations. \parab{Applications.} 3D models are used for animations in films and video games, for preserving historical and tourist sites, for archaeology, for city planning, to capture buildings or rooftops for repair and solar installation, and as inputs to immersive virtual reality (VR) applications.
Most prior research has explored \textit{offline} reconstruction~\cite{uav_lidar_1,uav_lidar_2,7139681, 7989530, 8124461, 8628990}, reporting centimeter to sub-meter accuracy. \parab{Towards real-time 3D reconstruction.} The nascent drone-based reconstruction industry is starting to use \textit{time-to-reconstruction} as an important market discriminator. Startups promise to deliver models in hours or within a day~\cite{company_drone_deploy} after drone data collection. This trend has inspired newer, more time-sensitive applications of 3D modeling like (a) post-disaster reconstruction for search and rescue missions~\cite{drone_disaster_relief_1, drone_disaster_relief_2, drone_disaster_relief_3, drone_disaster_relief_4, drone_disaster_relief_5}, (b) construction site monitoring~\cite{company_drone_deploy}, (c) mapping mines, cell-towers and tunnels~\cite{hovermap, Prometheus}, (d) documenting interior office spaces~\cite{company_kaarta}, and (e) perhaps most compelling, between-flight modeling of aircraft fuselages or blimps to determine structural integrity~\cite{company_austrian_airline}. Though startups are starting to promise smaller reconstruction times~\cite{hovermap,Prometheus}, they reconstruct 3D models \textit{offline}, \emph{i.e.,}\xspace after the drone has landed. Only one company we know of so far promises \textit{real-time} reconstruction~\cite{company_kaarta}, employing a human operator holding a hand-held kit with a stereo camera (instead of a drone) to reconstruct interior spaces. \parab{The Goal.} Given this, we want to \textit{accurately} reconstruct large 3D structures in \textit{near real-time}, with \textit{no human intervention}. \parab{Photogrammetry-based reconstruction.} Most existing work (\secref{sec:related}) uses \textit{photogrammetry}, which takes as input a sequence of 2D camera images (and the positions associated with these images), captured either using drone-mounted cameras~\cite{7139681, 7989530, 8124461, 8628990} or by humans using ordinary mobile devices~\cite{company_hover}. Photogrammetry \textit{infers} 3D models using a technique known as multi-view stereo~\cite{furukawa15:_multi_view_stereo}, which combines information from successive images, together with information about local motion, and inferred camera parameters such as the focal length. Because photogrammetry approaches must estimate depth from 2D images, their reconstruction times are relatively large, \emph{i.e.,}\xspace from several hours to days~\cite{company_drone_deploy} for large structures. \parab{LiDAR-based reconstruction.} Unlike photogrammetry, LiDAR-based reconstruction can infer 3D models more directly, because LiDARs, unlike cameras, directly provide depth information. A LiDAR sensor measures distances to surfaces using lasers mounted on a mechanical rotating platform. The lasers and mechanical rotator sit within the enclosure of the LiDAR sensor. With each revolution of the lasers, the sensor returns a point cloud, or a \textit{LiDAR frame}. The point cloud is a set of 3D data points, each corresponding to a distance measurement of a particular position of the surrounding environment from the LiDAR. For instance, an Ouster OS1-64 LiDAR has 64 lasers that scan at 20~Hz, with horizontal and vertical fields of view of 360$\degree$ and 45$\degree$ respectively. To reconstruct a structure, \emph{e.g.,}\xspace a building, it suffices, in theory, to \textit{merge} point clouds captured from different locations around the building.
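To make the merge operation concrete before the next paragraphs elaborate on it, the following minimal sketch (Python with numpy; the poses and clouds are illustrative placeholders, not \textsc{ares}\xspace code) transforms two clouds with known sensor poses into a common frame and takes their union:
\begin{verbatim}
import numpy as np

def to_world(points, R, t):
    # Map an Nx3 array of LiDAR-frame points into the world
    # frame, given the sensor pose (rotation R, translation t).
    return points @ R.T + t

# Hypothetical poses at two capture locations (illustrative).
R1, t1 = np.eye(3), np.array([0.0, 0.0, 40.0])
R2, t2 = np.eye(3), np.array([5.0, 0.0, 40.0])

p1 = np.random.rand(1000, 3)  # stand-ins for two LiDAR frames
p2 = np.random.rand(1000, 3)

# Once both clouds are in one frame, merging is a union.
merged = np.vstack([to_world(p1, R1, t1), to_world(p2, R2, t2)])
\end{verbatim}
The hard part, as the following paragraphs explain, is obtaining these poses precisely.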
To understand what it means to merge point clouds, consider two successive clouds $p$ and $p'$. A point $x$ on the surface of the building may appear both in $p$ and in $p'$. However, since the drone has moved, this point $x$ appears at different positions (relative to the LiDAR) in the two point clouds. If we precisely transform both point clouds to the same coordinate frame of reference, then the union of points in $p$ and $p'$ constitutes part of the 3D model of the building. \parab{Accurate positioning ensures high-quality models.} Merging therefore requires the precise pose of the LiDAR at each location where it captured a point cloud. The accuracy of positioning determines model quality, which two metrics define. \textit{Accuracy}, the average distance between the position of a point in the model and the corresponding point on the surface (see \secref{sec:design} for a more precise definition), clearly relies on accurate positioning. \textit{Completeness}, the degree to which a model captures the entire structure, also relies on accurate positioning, since positioning errors can lead to gaps in the captured model. \begin{figure} \centering \includegraphics[width=0.95\columnwidth]{figs/fig_slam_vs_gps_3d_model.pdf} \caption{GPS-derived (left), SLAM-derived (middle), and \textsc{ares}\xspace-derived (right) models of a large complex on our campus. \label{fig:slam_vs_gps_3d_model}} \end{figure} \parab{Strawman: Using GPS for positioning.} Because drones can carry GPS receivers, GPS can be used to position point clouds. Unfortunately, GPS errors can be several meters in obstructed settings, resulting in poor accuracy and completeness of the 3D model. The left image of \figref{fig:slam_vs_gps_3d_model} shows a 3D model of a building assembled using a drone flight over a commercial building; the building's outline is fuzzy, as are the contours of the trees surrounding the building. The right image shows a 3D reconstruction using the techniques proposed in this paper, which do not use GPS. (All reconstructions in \figref{fig:slam_vs_gps_3d_model} use real data captured from a drone-mounted LiDAR; see \secref{sec:eval}.) High-precision GNSS/RTK receivers can provide more accurate positioning, but they require additional infrastructure, are costly, and can perform poorly in urban environments due to non-line-of-sight signals that degrade accuracy (\emph{e.g.,}\xspace prior work~\cite{Carloc} reports that RTK-enabled receivers can exhibit tens of meters of error in downtown environments). Prior work~\cite{uav_lidar_1,uav_lidar_2} has used expensive GPS receivers in the context of \textit{remote sensing}, using specialized unmanned aerial vehicles (UAVs) with long-range LiDAR sensors (but for \textit{offline} model reconstruction). In contrast, in this paper we consider solutions that employ off-the-shelf technologies: drones, and commodity GPS and LiDAR. \parab{An alternative: Using SLAM for positioning.} LiDAR SLAM (Simultaneous Localization And Mapping~\cite{slam_part1, slam_part2}) algorithms can provide accurate \textit{pose} (position and orientation) estimates, which can be used to position point clouds. These algorithms use scan- or feature-matching techniques to align consecutive LiDAR frames and thereby determine the pose of the drone throughout its flight. For example, scan matching uses techniques~\cite{icp} to find the closest match for each point in the source point cloud $\mathbf{A}$ to a point in the reference point cloud $\mathbf{B}$.
It then estimates the transformation (translation and rotation) that best aligns each point in $\mathbf{A}$ to its corresponding point in $\mathbf{B}$. By repeating this process across consecutive LiDAR frames, SLAM can position each frame in the first frame's coordinate frame of reference. \parab{Challenges.} However, using SLAM with a drone-mounted LiDAR is challenging for the following reasons (\tabref{tab:challenges}). \paraspace\noindent$\blacktriangleright$ Reconstruction quality is critically dependent on the design of the drone \textit{trajectory}. \figref{fig:slam_vs_gps_3d_model} (middle) shows reconstruction using SLAM from a poorly planned drone flight. This reconstruction is visually worse than the GPS-based reconstruction (\figref{fig:slam_vs_gps_3d_model} (left)), because the underlying SLAM algorithm is unable to \textit{track} LiDAR frames (\emph{i.e.,}\xspace it is unable to match points in successive point clouds). \noindent$\blacktriangleright$ SLAM algorithms \textit{accumulate drift}~\cite{slam_part1,slam_part2}, so position estimates can degrade on longer flights. \noindent$\blacktriangleright$ Drones have \textit{limited compute} because they can carry limited payloads. For instance, a DJI M600Pro hexacopter (which we use in this paper) has a maximum payload weight of 5~kg. It carries an A3Pro flight controller that contains three IMUs and three GNSS receivers. We have also mounted an LTE radio and an Ouster OS1-64 LiDAR, as well as a Jetson TX2 board. This compute capability is far from sufficient to run LiDAR SLAM\footnote{With a 64-beam LiDAR, SLAM processes up to 480~Mbps of 3D data.}. On the TX2, plane-fitting~\cite{ransac}, a primitive used in our reconstruction pipeline, takes 0.5~seconds per LiDAR frame (\figref{fig:tx2_plane_fitting}), even though LiDARs generate 20 frames per second, and plane-fitting accounts for only 5\% of the execution time of our entire pipeline (\secref{sec:eval}). \noindent$\blacktriangleright$ Drones have \textit{limited flight endurance}. When fully loaded and starting from a full charge, the M600Pro can fly for approximately 25 minutes. This necessitates careful trajectory planning to minimize flight duration. {\small \begin{table}[t] \centering \footnotesize \begin{tabular}{|l|l|} \hline \textit{\textbf{Challenge}} & \textit{\textbf{Mechanism}} \\ \hline \hline Model accuracy & Model collection trajectory planning \\ \hline Limited duration & Cheap reconnaissance flight\\ \hline Limited compute & Offload compressed point clouds\\ \hline Drift accumulation & Flight re-calibration\\ \hline \end{tabular} \caption{Challenges and contributions} \label{tab:challenges} \end{table} } \parab{Contributions.} This paper presents the design and implementation of a system called \textsc{ares}\xspace, which makes the following contributions to address these challenges (\tabref{tab:challenges}). \textsc{ares}\xspace's first contribution is the design of \textit{model collection trajectories} that navigate the competing constraints, \emph{i.e.,}\xspace accuracy and battery life (\secref{s:slam-phase}). Model collection uses SLAM, and SLAM error is sensitive to how the LiDAR is mounted, and how fast and at what height the drone flies. Faster, higher flights use less of the drone's battery, but can incur high SLAM error; the converse is true of slower, lower flights. \textsc{ares}\xspace finds a sweet spot in this trade-off space to balance its accuracy goals with drone battery usage. Even so, SLAM can incur \textit{drift} on longer flights.
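The next paragraph describes how \textsc{ares}\xspace detects such drift by tracking consistency between GPS and SLAM positions. As a rough illustration of that kind of consistency check (a sketch only, not \textsc{ares}\xspace's actual detector; the window size and threshold below are invented for illustration), one can compare time-aligned SLAM and GPS positions:
\begin{verbatim}
import numpy as np

def drift_suspected(slam_xy, gps_xy, window=100, thresh_m=3.0):
    # slam_xy, gps_xy: Nx2 arrays of time-aligned 2D positions.
    # Flags drift when the mean SLAM-vs-GPS distance over the
    # most recent `window` samples exceeds thresh_m meters.
    # Both parameter values are illustrative, not ARES's.
    recent = np.linalg.norm(slam_xy[-window:] - gps_xy[-window:],
                            axis=1)
    return float(recent.mean()) > thresh_m
\end{verbatim}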
\textsc{ares}\xspace needs to detect excessive drift, and correct for it in \textit{real-time}. It uses a novel algorithm that tracks consistency between GPS traces and SLAM positions to detect excessive drift, then incorporates a \textit{re-calibration} step to correct for it while the drone is \textit{in-flight}. Even with \textsc{ares}\xspace's efficient model collection trajectories, scans over large areas can exhaust a drone's battery. To this end, \textsc{ares}\xspace's second contribution is a robust and efficient \textit{geometry extraction} algorithm (\secref{s:area-inter-reconn}) that helps \textit{focus model collection} only on the structure\footnote{In this paper, we focus on extracting the boundary of a building and leave generalization to future work.} to reconstruct. This algorithm, which runs during a \textit{reconnaissance flight} before model collection, works well even with a fast, high flight that minimizes drone flight duration (and hence battery usage). During the reconnaissance flight, this algorithm extracts the building geometry without constructing the 3D model; it relies on detecting planar surfaces by using the consistency of surface normals across points on a plane, then estimates building height and boundary from the plane forming the rooftop. \textsc{ares}\xspace uses this boundary to plan the \textit{model collection trajectories} described above. \textsc{ares}\xspace's third contribution is the design of a processing pipeline that (a) offloads computation to a cloud server by compressing point clouds on the drone to the point where they can be transmitted over LTE (\secref{s:compression}), and (b) leverages GPU acceleration to process these point clouds in near real-time, at the LiDAR frame rate and with minimal end-to-end processing delay (\secref{sec:eval}). Experiments (\secref{sec:eval}) on real-world drone flights and a photo-realistic drone simulator (AirSim~\cite{airsim}) demonstrate that \textsc{ares}\xspace can achieve 21--30~cm reconstruction accuracy even after compressing the raw sensor data by almost two orders of magnitude. Not only is \textsc{ares}\xspace \textit{faster} than an \textit{offline} approach, it is also \textit{more accurate}, since the latter cannot re-calibrate mid-flight. An experiment with a complete \textsc{ares}\xspace implementation is able to stream compressed point clouds over LTE, reconstruct the 3D model on the fly, and deliver an accurate 3D model about 100~ms after flight completion. \textsc{ares}\xspace's choice of trajectories drains the battery least while achieving the accuracy target, and its pipeline can process frames at 20~fps while incurring a processing latency of less than 100~ms. To our knowledge, \textsc{ares}\xspace is the first to demonstrate on-line, cloud-based, autonomous, sub-meter 3D model reconstruction in near real-time. \section{Background} \label{sec:motivation} \parab{3D Models.} The term \textit{3D model} covers a wide range of geometric representations of the surfaces of objects, from coarse-grained approximations (cylinders, cubes, intersections of planes) to more fine-grained representations such as \textit{meshes} (small-scale surface tessellations that capture structural variations). In this paper, we seek to extract a fine-grained \textit{point cloud} of a building, which consists of dense \textit{points} on the surface of the building. Each point has an associated 3D position, together with other attributes (depending on the sensor used to generate the point cloud).
The point-cloud-based 3D model can generate all other representations. \parab{LiDAR.} A \textbf{Li}ght \textbf{D}etection \textbf{A}nd \textbf{R}anging (LiDAR) sensor measures distances to surfaces using one or more lasers mounted on a mechanical rotating platform. The lasers and mechanical rotator sit within the enclosure of the LiDAR sensor. With each revolution of the lasers, the sensor returns a point cloud, or a \textit{LiDAR frame}. The point cloud is a set of 3D data points, each corresponding to a distance measurement of a particular position of the surrounding environment from the LiDAR. For instance, an Ouster OS1-64 LiDAR has 64 lasers that scan at 20~Hz, with horizontal and vertical fields of view of 360$\degree$ and 45$\degree$ respectively. \parab{Drones.} A drone is a remotely operated multi-rotor platform that carries a flight controller, an embedded compute platform, a cellular (LTE) radio, and a LiDAR sensor. In general, drones have limited compute, the ability to carry only minimal payload, and a limited flight time. For instance, a DJI M600Pro hexacopter (which we use in this paper) has a maximum payload weight of 5~kg. This drone uses an A3Pro flight controller that contains three IMUs and three GNSS receivers. It has an NVIDIA Jetson TX2 connected to both an LTE radio and the Ouster OS1-64 LiDAR. When fully loaded, the M600Pro has a flight endurance of approximately 25 minutes. \begin{figure} \centering \includegraphics[width=0.5\columnwidth]{figs/fig_tx2_ransac.pdf} \caption{Plane-fitting on an NVIDIA TX2 \label{fig:tx2_plane_fitting}} \end{figure} \parab{The challenges.} 3D model collection from aerial drones is particularly challenging due to two competing constraints (accuracy and battery life). Unlike ground robots such as vehicles, an aerial drone gives LiDAR SLAM algorithms access to only 9\% of the entire point cloud (because the drone is in the air). To compensate, a drone could fly slowly and close to the building for better positioning, but its limited battery life rules this out. \textsc{ares}\xspace tackles this challenge on two fronts: a) it uses an offline parameter study and a real-time recalibration module to reduce positioning error, and b) it collects the 3D model in two steps, \emph{i.e.,}\xspace a fast reconnaissance flight to identify the region of interest and a slower model collection flight to collect the actual 3D model. A strawman solution to our problem of real-time model collection is to execute LiDAR data processing and model construction entirely on the drone. This is not likely to be feasible anytime in the near future. To demonstrate this, we ran a standard plane-fitting algorithm~\cite{ransac} on a single Ouster LiDAR point cloud; plane-fitting is a primitive we use in \secref{sec:design} to detect the rooftop of the building. On our drone's TX2, this operation takes nearly 0.5~s per point cloud, on average (\figref{fig:tx2_plane_fitting}). Our LiDAR generates 20 point clouds every second, and plane-fitting is only one of the many expensive operations we require (\secref{sec:eval}). To put this in perspective, the plane-fitting algorithm contributes 5\% of the execution time of the entire pipeline. This means that if we ran \textsc{ares}\xspace on the drone, it would process a single second's data in 200 seconds. For this reason, \textsc{ares}\xspace \textit{offloads} most computation to the cloud (\figref{fig:overview}).
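For concreteness, the plane-fitting primitive we benchmark is of the following RANSAC-style form (a minimal illustrative sketch, not our implementation; production pipelines use optimized, GPU-accelerated libraries):
\begin{verbatim}
import numpy as np

def ransac_plane(points, iters=200, tol=0.05, seed=0):
    # Fit a plane to an Nx3 cloud: repeatedly sample 3 points,
    # form the plane through them, and keep the plane with the
    # most inliers within tol meters.
    rng = np.random.default_rng(seed)
    best_inliers, best_plane = 0, None
    for _ in range(iters):
        s = points[rng.choice(len(points), 3, replace=False)]
        n = np.cross(s[1] - s[0], s[2] - s[0])
        norm = np.linalg.norm(n)
        if norm < 1e-9:        # degenerate (collinear) sample
            continue
        n /= norm
        d = -n @ s[0]
        inliers = int((np.abs(points @ n + d) < tol).sum())
        if inliers > best_inliers:
            best_inliers, best_plane = inliers, (n, d)
    return best_plane, best_inliers
\end{verbatim}
In this sketch, a rooftop candidate would be a dominant plane whose normal $n$ is close to vertical, in line with the surface-normal-based geometry extraction described earlier.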
The drone obtains point clouds from the LiDAR, compresses them, and streams them to a cloud VM that performs near real-time 3D model construction. \section{Related Work} \label{sec:related} \parab{Networked 3D sensing and drone positioning.} Some recent work has explored, in the context of cooperative perception~\cite{AVR} and real-time 3D map update~\cite{CarMap}, transmitting 3D sensor information over wireless networks. Compared to \textsc{ares}\xspace, these systems use different techniques to overcome wireless capacity constraints. The robotics literature has studied efficient coverage path-planning for single~\cite{sensorplanning} and multiple~\cite{ubanc} drones; \textsc{ares}\xspace's trajectory design is influenced by more intricate constraints like SLAM accuracy and equi-density goals. Accurately inferring drone motion is important for SLAM-based positioning~\cite{observability}. Cartographer~\cite{Cartographer}, which \textsc{ares}\xspace uses for positioning, utilizes motion models and on-board IMUs for estimating motion. In future work, \textsc{ares}\xspace can use drone orchestration systems~\cite{beecluster} for larger, campus-scale reconstruction with multiple drones. \parab{Offline reconstruction using images.} UAV photogrammetry~\cite{federman2017} reconstructs 3D models offline from 2D photographs. Several pieces of work~\cite{7139681, 7989530, 8124461, 8628990} study the use of cameras (either RGB or RGB-D) on UAVs for 3D reconstruction. Prior work~\cite{7139681} has proposed a real-time, interactive interface into the reconstruction process for a human guide. The most relevant of these efforts~\cite{mostegel2016uav, 7422384} predict the completeness of the 3D reconstruction in-flight, using a quality confidence predictor trained offline, in order to improve offline 3D reconstruction. However, unlike \textsc{ares}\xspace, this work requires human intervention, computes the 3D model offline, requires close-up flights, cannot ensure equi-dense reconstructions, cannot dynamically re-calibrate for drift, and is not an end-to-end system. A body of work has explored factors affecting reconstruction accuracy: sensor error~\cite{6899451}, tracking drift, and the degree of image overlap~\cite{7139681, LIENARD2016264}. Other work~\cite{8793729, 8628990, bylow2019combining} has explored techniques to reduce errors by fusing with depth information, or by using image manipulations such as upscaling. Unlike \textsc{ares}\xspace, almost all of this work reconstructs the 3D model offline. \parab{Offline reconstruction using LiDAR.} 3D model reconstruction using LiDAR~\cite{uav_lidar_1,uav_lidar_2} relies on additional positioning infrastructure, such as base stations for real-time kinematic (RTK) positioning, and on long-range specialized LiDAR, to achieve model accuracy of tens of centimeters. \textsc{ares}\xspace explores a different part of the design space: online reconstruction with sub-meter accuracy using commodity drones, GPS, and LiDAR. More recent work has explored drone-mounted-LiDAR-based offline reconstruction of tunnels and mines, but requires specialized LiDARs and a human-in-the-loop~\cite{Prometheus, hovermap} for drone guidance (either manually or by defining a set of waypoints). \parab{Rooftop boundary detection.} Prior work has used infrared sensors, RGB-D cameras~\cite{rgbfasterboundarydetection}, and a fusion of LiDAR~\cite{lidarbuildingdetection} with monocular cameras~\cite{lidarimagebuildingdetection, lidarorthophotoboundarydetection}.
These approaches assume a pre-existing stitched 3D point cloud~\cite{lidarbuildingdetection} or orthophoto~\cite{lidarorthophotoboundarydetection}, and are not designed to operate in real time. \textsc{ares}\xspace's boundary detection accuracy is comparable to that of these approaches, even though it does not rely on their assumptions.
\section{Optimal Learning for Parameters and Alternatives}\label{sec3} Our dual-objective optimization problem requires handling the alternative space and the parameter space simultaneously. Note that while finding the best alternative and learning the best estimate of the parameter vector each have their own metrics, they are not conflicting goals, since the best estimate of the parameters can lead to the best design. Traditionally, the optimal learning literature has focused on optimizing some metric, such as choosing the best parameters to maximize the strength of a material or minimize the number of defects. When we use parametric belief models, we can achieve these goals by finding good estimates of the unknown parameters, and then optimizing a (deterministic) nonlinear model based on the estimates. To capture the value of learning the unknown parameters correctly, we can replace our original metric with the entropy of the belief vector. \subsection{The Resampling Scheme} As in KGDP, we still keep a set of $L$ candidates, but they change over time. Let $\mathbb{L}^n=\{\theta_1^n,...,\theta_L^n\}$ denote the candidate set at time $n$, and $\vec{p}^n=\{p_1^n,...,p_L^n\}$ the corresponding probabilities. Assume that the true function $f(x;\theta^*)$ can be approximated as \small\begin{align*} f(x;\theta^*)\approx\bar{f}^n(x)=\sum_{i=1}^Lf(x;\theta_i^n)p_i^n. \end{align*}\normalsize The candidates are no longer fixed: resampling happens periodically, in which the least probable candidates are replaced by more probable ones according to the whole history of experiments. Hence, we need to search two spaces: in the outer loop, we work in the $x$ space to collect as much information as possible; in the inner loop, we search the $\theta$ space for more promising $\theta$'s. We use the following process for testing our resampling strategy. We start by choosing a large sample of $K$ (say $K=1{,}000{,}000$) realizations of $\theta$, and then choose one of these as the truth. We call the $K$ samples the \textit{large pool}, denoted $\mathbb{K}$. Then, we choose a small sample set $\mathbb{L}$ as our sampled prior. The idea is that with such a large base population, it is very unlikely that one of the $\theta$'s in our sampled prior is the truth (or even close to it). As the candidates' probabilities change over time, resampling is triggered under certain conditions. We use mean square error (MSE) as the criterion to choose $\theta$'s in the resampling steps. \subsection{Mean Square Error (MSE) Formulation} We first introduce the formulation of the MSE problem, which is equivalent to maximum likelihood estimation (MLE) in Gaussian noise settings. Suppose that after $n$ measurements, the history of experimental results is $\mathcal{F}^n=\sigma\{(x^0,\hat{y}^1),...,(x^{n-1},\hat{y}^n)\}$. The likelihood of each $\theta_i$ is, up to a constant factor, given by: \small\begin{align*} \mathcal{L}(\theta_i|\mathcal{F}^n)=\prod_{j=1}^n\exp\left(-\frac{[\hat{y}^{j}-f(x^{j-1};\theta_i)]^2}{2\sigma^2}\right). \end{align*}\normalsize Taking the logarithm of $\mathcal{L}(\theta_i|\mathcal{F}^n)$ and rescaling, we obtain the MSE of $\theta_i$: \small\begin{align}\label{eq_mse} MSE(\theta_i|\mathcal{F}^n)=\frac{1}{n}\sum_{j=1}^n[\hat{y}^j - f(x^{j-1};\theta_i)]^2. \end{align}\normalsize The idea is that each time resampling is triggered, we calculate the MSE of all $\theta$'s in the large pool $\mathbb{K}$.
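As a minimal illustration (Python with numpy; the function and variable names are ours, purely for exposition, and not part of the method), the MSE of every candidate in the pool against the history $\mathcal{F}^n$ can be computed as:
\begin{verbatim}
import numpy as np

def mse_over_pool(f, thetas, history):
    # f(x, theta) -> model prediction; history: list of (x, y).
    # Returns MSE(theta | F^n) for each theta in the pool.
    xs = [x for x, _ in history]
    ys = np.array([y for _, y in history])
    return np.array([
        np.mean((ys - np.array([f(x, th) for x in xs])) ** 2)
        for th in thetas
    ])
\end{verbatim}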
Although $K$ can be arbitrarily large, this calculation is much faster and cheaper than the expensive physical experiments. Resampling is then conducted in a sub-level set of the MSE function $MSE(\theta)$. We define a threshold $\delta$, and all $\theta$'s with MSE smaller than $\delta$ form the sub-level set $\mathbb{S}^n$: \begin{align*} \mathbb{S}^n=\{\theta\in\mathbb{K}\,|\,MSE(\theta|\mathcal{F}^n)\leq \delta\}. \end{align*} Alternatively, we can define a \textit{small pool} of a certain size $R$ that contains the $R$ $\theta$'s with the smallest MSE, namely \begin{align*} \mathbb{S}^n=\{\theta\in\mathbb{K}\,|\,MSE(\theta|\mathcal{F}^n)\leq MSE_R\}, \end{align*} where $MSE_R$ denotes the value of the $R$-th smallest MSE. In other words, the small pool $\mathbb{S}^n$ contains the most probable $\theta$'s found so far. We resample from the small pool as many new candidates as needed. On the one hand, this method rules out the majority of unlikely samples and avoids inefficiently exploring too large a space; on the other hand, it supplies a small pool of sufficiently promising samples to resample from, thus providing plenty of exploration compared with simply selecting the $L$ samples with the smallest MSE. \subsection{Updating the Probabilities of Candidates} For candidate $\theta_i^n$, suppose its prior probability is $p_i^{n}$. After the $(n+1)$-th measurement $\hat{y}^{n+1}$, which under the hypothesis $\theta^*=\theta_i^n$ is distributed as $\mathcal{N}(f(x^n;\theta_i^n), \sigma^2)$, the likelihood of $\hat{y}^{n+1}$ is given by \begin{align*} g_Y(\hat{y}^{n+1}|\theta^*=\theta_i^n) = \frac{1}{\sqrt{2\pi}\sigma}\exp\left(-\frac{[\hat{y}^{n+1}-f(x^n;\theta_i^n)]^2}{2\sigma^2}\right). \end{align*} By Bayes' rule, the posterior probability is proportional to the prior probability times the likelihood: \begin{align} p^{n+1}_i\propto g_Y(\hat{y}^{n+1}|\theta^*=\theta_i^n) p^n_i = \frac{1}{\sqrt{2\pi}\sigma}\exp\left(-\frac{[\hat{y}^{n+1}-f(x^n;\theta_i^n)]^2}{2\sigma^2}\right)p_i^n. \end{align} After normalization, the updating rule for $p_i$ is given by \begin{align}\label{eq_p_update1} p_i^{n+1}=\frac{\exp\left(-\frac{[\hat{y}^{n+1}-f(x^n;\theta_i^n)]^2}{2\sigma^2}\right)p_i^n}{\sum_{l=1}^L\exp\left(-\frac{[\hat{y}^{n+1}-f(x^n;\theta_l^n)]^2}{2\sigma^2}\right)p_l^n}. \end{align} After the $(n+1)$-th measurement, if resampling is not triggered, $p_i^{n+1}$ is updated as in Equation~(\ref{eq_p_update1}) for $i=1,2,...,L$. Otherwise, we calculate the likelihood of $\theta_i$ under all the previous measurements, given by \begin{align}\label{eq_likeli} g_Y(\hat{y}^{1},...,\hat{y}^{n+1}|\theta^*=\theta_i^{n+1})=\prod_{j=0}^{n}\exp\left(-\frac{[\hat{y}^{j+1}-f(x^j;\theta_i^{n+1})]^2}{2\sigma^2}\right). \end{align} Since initially all parameters are associated with equal probability, if resampling happens at time $(n+1)$, $p^{n+1}_i$ is given by \begin{align}\label{eq_p_update2} p^{n+1}_i=\frac{\prod_{j=0}^{n}\exp\left(-\frac{[\hat{y}^{j+1}-f(x^j;\theta_i^{n+1})]^2}{2\sigma^2}\right)}{\sum_{l=1}^L\prod_{j=0}^{n}\exp\left(-\frac{[\hat{y}^{j+1}-f(x^j;\theta_l^{n+1})]^2}{2\sigma^2}\right)}, \end{align} where $i$ and $l$ index the set of candidates after resampling. \subsection{Resampling Procedure} Resampling is triggered under either of two conditions: 1) the same set of candidates has been used for $n^{resamp}$ iterations; or 2) more than $L/2$ candidates have probabilities lower than ${\epsilon}$. The resampling process goes as follows.
1. When resampling is triggered, we first remove the candidates with low probabilities. Denote this set by $\mathbb{L}^n_{rm}$: \begin{align*} \mathbb{L}^n_{rm}=\{\theta\in\mathbb{L}^n\,|\,p^n(\theta)\leq{\epsilon}\}. \end{align*} Note that if $p^n(\theta)>{\epsilon}$ for all $\theta\in\mathbb{L}^n$, we still select a small number (say $1$ or $2$) of the least probable candidates as $\mathbb{L}^n_{rm}$. In this way, we avoid getting stuck with a set of unlikely candidates (for example, in the extreme case where we have $L$ identical but wrong candidates, we would get stuck if we never dropped any). 2. We then calculate the MSE of each $\theta$ in the large pool by Equation~(\ref{eq_mse}), and select the $R$ $\theta$'s with the smallest MSE to form the small pool. 3. Next, we calculate the likelihood of each $\theta$ in the small pool by Equation~(\ref{eq_likeli}). We then use the likelihoods as weights and perform weighted sampling without replacement to select $|\mathbb{L}^n_{rm}|$ new $\theta$'s, which we add to the candidate set. 4. Once the candidates are updated, we update their probabilities using the whole measurement history, according to Equation~(\ref{eq_p_update2}). 5. Finally, we check whether the current set still triggers the resampling conditions, since the new set may still contain over $L/2$ candidates with probabilities lower than ${\epsilon}$. If not, resampling finishes; otherwise, we repeat the resampling process. A detailed flowchart is given in \hyperref[detail_workflow]{Section~\ref{detail_workflow}}. \subsection{Evaluation Metrics} We introduce two Knowledge Gradient-based policies for choosing the next alternative to measure. Corresponding to our dual objectives, they focus on maximizing the performance metric and on learning the parameters, respectively. The first, initially given in \cite{Chen2014}, is function-value oriented and hence denoted KGDP-$f$. The second, denoted KGDP-$H$, focuses on minimizing the entropy of the belief vector $\vec{p}^n=\{p_1^n,...,p_L^n\}$, which leads to a better estimate of $\theta^*$, from which we can then optimize the original function $f(x;\theta)$. \subsubsection{KGDP-$f$}\label{sec_KGDP_eq} We can measure the expected incremental function value as in \cite{Chen2014}, where the formula for KGDP-$f$ was originally given. At time $n$, we define $S^n$ as the probability vector $(p_1^n,...,p_L^n)$ and $V^n(S^n)$ as the current largest estimate, \emph{i.e.,} $\max_{x\in\mathcal{X}}\bar{f}^n(x)$ (recall that $\bar{f}^n(x)$ is our estimate of $f(x;\theta^*)$ at time $n$, given by $\bar{f}^n(x)=\sum_{i=1}^L f(x;\theta_i^n)p_i^n$). Let $p^{n+1}(x)$ represent the posterior probability after measuring $x$. KGDP-$f$ is calculated in \cite{Chen2014} as: \small \begin{align}\label{eq_KGDP1} \nu^{KGDP-f,n}(x) = & \mathbb{E}^n\left[\max_{x'}\sum_{i=1}^Lf_i(x')p_i^{n+1}(x)\,\middle|\,S^n=s,x^n=x\right]-\max_{x'}\sum_{i=1}^Lf_i(x')p_i^n \notag\\ =& \sum_{j=1}^L\left[\int_{\omega}\max_{x'}\frac1{c_j}\sum_{i=1}^L f_i(x')\exp\left(-\frac{[f_j(x)-f_i(x)+\omega]^2}{2\sigma^2}\right)p_i^ng(\omega)d\omega \right] p_j^n \notag \\ &-\max_{x'}\sum_{i=1}^L f_i(x')p_i^n, \end{align}\normalsize where $i$ and $j$ index the candidates in $\mathbb{L}^n$, $c_j=\sum_{i=1}^L\exp\left[-\frac{(f_j(x)-f_i(x)+\omega)^2}{2\sigma^2}\right]p_i^n$, $g(\omega)=\frac{1}{\sqrt{2\pi}\sigma}\exp\left(-\frac{\omega^2}{2\sigma^2}\right)$, and $f_i(x)$ is short for $f(x;\theta_i^n)$.
In each integral, let $f_j(x)+\omega=\hat{y}$; the above equation can then be simplified as \small \begin{align}\label{eq_KGDP2} \nu^{KGDP-f,n}(x) =&\frac{1}{\sqrt{2\pi}\sigma}\int_{-\infty}^{+\infty}\max_{x'}\left[\sum_{i=1}^L f_i(x')p_i^n\exp\left(-\frac{[\hat{y} - f_i(x)]^2}{2\sigma^2}\right)\right]d\hat{y} \notag\\ &-\max_{x'}\sum_{i=1}^L f_i(x')p_i^n, \end{align}\normalsize where $i$ indexes the candidates at time $n$. Compared with Equation~(\ref{eq_KGDP1}) from \cite{Chen2014}, Equation~(\ref{eq_KGDP2}) is simpler for both computation and theoretical analysis. In KGDP-$f$, our decision at time $n$ is \begin{align*} x^n= \mathop{\text{argmax}}_{x\in\mathcal{X}}\nu^{KGDP-f,n}(x). \end{align*} \subsubsection{KGDP-$H$} Entropy measures the uncertainty in unknown data and is widely used in information theory. The entropy of the candidates at time $n$ is given by \small \begin{align*} H(p_1^n,...,p_L^n) = -\sum_{i=1}^L p_i^n\log p_i^n. \end{align*}\normalsize In KGDP-$H$, we measure the alternative that has the largest expected entropy reduction. Define the state variable $S^n$ as the probability vector $(p_1^n,p_2^n,...,p_L^n)$, and let $V^n(S^n)$ be the entropy. The KGDP-$H$ score is given by \small \begin{align}\label{eq_KGDP-H1} \nu^{KGDP-H,n}(x) &= \mathbb{E}^n\left[\sum_{i=1}^L p_i^{n+1}(x)\log p_i^{n+1}(x) \,\middle|\,S^n=s,x^n=x\right]-\sum_{i=1}^L p_i^n\log p_i^n \notag \\ &= \sum_{j=1}^L \left(\int_\omega \sum_{i=1}^L p_{i|j}^{n+1}(x,\omega)\log p_{i|j}^{n+1}(x,\omega) g(\omega) d\omega \right) p_j^n - \sum_{i=1}^L p_i^n\log p_i^n, \end{align}\normalsize where $g(\omega)=\frac{1}{\sqrt{2\pi}\sigma}\exp\left(-\frac{\omega^2}{2\sigma^2}\right)$, and $p_{i|j}^{n+1}(x,\omega)$ is the probability of $\theta_i$ at time $(n+1)$ given that $\theta_j$ is the truth and the noise is $\omega$, given by \begin{align*} p_{i|j}^{n+1}(x,\omega)=\frac{\exp\left[-\frac{(f_j(x)-f_i(x)+\omega)^2}{2\sigma^2}\right]p_i^n}{\sum_{k=1}^L \exp\left[-\frac{(f_j(x)-f_k(x)+\omega)^2}{2\sigma^2}\right]p_k^n}. \end{align*} Alternatively, it can be written as \small \begin{align}\label{eq_KGDP-H2} \nu^{KGDP-H,n}(x) =& \int_{-\infty}^{+\infty} \sum_{i=1}^L p_i^{n+1}(x)\log p_i^{n+1}(x)\cdot \frac{1}{\sqrt{2\pi}\sigma}\left(\sum_{i=1}^L p_i^n \exp\left[-\frac{(\hat{y}-f_i(x))^2}{2\sigma^2}\right]\right)d\hat{y} \notag\\ &\hspace{1.5em}- \sum_{i=1}^L p_i^n\log p_i^n \notag \\ =& \frac{1}{\sqrt{2\pi}\sigma}\int_{-\infty}^{+\infty} \sum_{i=1}^L p_i^n\exp\left[-\frac{(\hat{y}-f_i(x))^2}{2\sigma^2}\right]\log p_i^{n+1}(x)d\hat{y} - \sum_{i=1}^L p_i^n\log p_i^n, \end{align}\normalsize where $p_i^{n+1}(x)$ is given by Equation~(\ref{eq_p_update1}). In KGDP-$H$, our decision at time $n$ is \small\begin{align*} x^n= \mathop{\text{argmax}}_{x\in\mathcal{X}}\nu^{KGDP-H,n}(x). \end{align*}\normalsize \bigskip After we exhaust the budget of $N$ experiments, we give our estimates of the optimal alternative $\hat{x}^*$ and of the parameters $\hat{\theta}^*$ by: \vspace{-1em} \small \begin{align*} \hat{x}^* = \mathop{\text{argmax}}_{x\in\mathcal{X}}\bar{f}^N(x) = \mathop{\text{argmax}}_{x\in\mathcal{X}}\sum_{i=1}^L p_i^N f(x;\theta^N_i), \hspace{1.5em} \hat{\theta}^* = \mathop{\text{argmin}}_{\theta\in\mathbb{K}} MSE(\theta|\mathcal{F}^N). \end{align*}\normalsize \subsection{Detailed Flowchart}\label{detail_workflow} The detailed flow of the whole procedure is shown in Algorithm~\ref{algo_full}, where the function $select\_x(policy,...)$ uses the designated $policy$ to select an alternative to measure.
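Before detailing the algorithms, we note that Equation~(\ref{eq_KGDP2}) reduces the KGDP-$f$ score to a one-dimensional integral over $\hat{y}$, which is easy to evaluate numerically. The sketch below (Python with numpy; the array layout and names are our own convention, for illustration only) uses simple grid quadrature:
\begin{verbatim}
import numpy as np

def kgdp_f_score(F, p, x, sigma, n_grid=2001):
    # F: L x M array with F[i, m] = f(x_m; theta_i); p: length-L
    # numpy array of prior probabilities; x: index of the
    # measured alternative.
    y = np.linspace(F[:, x].min() - 6 * sigma,
                    F[:, x].max() + 6 * sigma, n_grid)
    # w[i, k] = p_i * exp(-(y_k - f_i(x))^2 / (2 sigma^2))
    w = p[:, None] * np.exp(
        -(y[None, :] - F[:, x][:, None]) ** 2 / (2 * sigma ** 2))
    inner = (F.T @ w).max(axis=0)  # max over x' of sum_i f_i(x') w_i
    dy = y[1] - y[0]
    integral = inner.sum() * dy / (np.sqrt(2 * np.pi) * sigma)
    return integral - (F.T @ p).max()
\end{verbatim}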
The flowchart of our experiment decision $select\_x()$ is shown in Algorithm~\ref{algo_policy}. In $select\_x()$, besides the KGDP-$f$ and KGDP-$H$ policies, we also include three competing policies used in our experiments, namely (1) Pure Exploration, which chooses $x^n$ randomly; (2) Pure Exploitation, which always chooses the current best alternative; and (3) Max Variance (Max-Var), which picks the alternative that has the largest variance under the prior belief. \begin{algorithm}[!htp] \caption{\small Flow of the resampling algorithm}\label{algo_full}{\small \begin{algorithmic}[1] \REQUIRE{Budget: $N$; Alternatives: $\mathcal{X}=\{x_1,...,x_M\}$; Noise: $\sigma$; Large pool: $\mathbb{K}$; Small pool size: $R$; Number of candidates: $L$; Resampling stepsize: $n^{resamp}$; Probability threshold: $\epsilon$; and the $policy$ to select $x^n$.} \ENSURE{Estimate of the optimal alternative: $\hat{x}^*$; estimate of the parameters: $\hat{\theta}^*$.} \STATE{Choose $L$ samples out of $\mathbb{K}$ randomly.} \STATE{Set $p_1^0=...=p_L^0=\frac{1}{L}$.} \FOR{$n=0$ to $N-1$} \STATE{$x^n=select\_x(policy, \mathcal{X}, \vec{p}^n,\mathbb{L}^n)$.} \STATE{Take a measurement: $\hat{y}^{n+1}$.} \STATE{Update each $p_i$ using: $$ p_i^{n+1}=\frac{\exp[-\frac{(\hat{y}^{n+1}-f_i(x^n))^2}{2\sigma^2}]p_i^n}{\sum_{l=1}^L\exp[-\frac{(\hat{y}^{n+1}-f_l(x^n))^2}{2\sigma^2}]p_l^n}. $$} \IF{$n+1\equiv 0 \pmod{n^{resamp}}$ \textbf{or} $|\{i: p^{n+1}_i\leq \epsilon\}|> L/2$} \STATE{Calculate the MSE of all $K$ samples.} \STATE{Construct $\mathbb{S}^n$ as the set of $R$ $\theta$'s with the smallest MSE.} \STATE{Calculate the likelihood of each $\theta$ in $\mathbb{S}^n$.} \STATE{Construct $\mathbb{L}^n_{rm}$ as the set of $\theta$'s to be removed, and set $p^{n+1}_i = 0$ for $\theta_i\in\mathbb{L}^n_{rm}$.} \WHILE{$\min(p^{n+1})\leq \epsilon$} \STATE{Find candidates with $p^{n+1}\leq\epsilon$ and remove them.} \STATE{Select $|\mathbb{L}^n_{rm}|$ $\theta$'s from $\mathbb{S}^n$ by weighted sampling without replacement according to their likelihoods.} \STATE{Update posterior probabilities for all $L$ new $\theta$'s: $$ p^{n+1}_i=\frac{\prod_{j=0}^{n}\exp[-\frac{(\hat{y}^{j+1}-f(x^j;\theta_i^{n+1}))^2}{2\sigma^2}]}{\sum_{l=1}^L\prod_{j=0}^{n}\exp[-\frac{(\hat{y}^{j+1}-f(x^j;\theta_l^{n+1}))^2}{2\sigma^2}]}. $$} \ENDWHILE \ENDIF \ENDFOR \RETURN{$\hat{x}^* = \mathop{\text{argmax}}_{x\in\mathcal{X}}\sum_{i=1}^L p_i^N f(x;\theta^N_i), \hspace{0.5em} \hat{\theta}^*=\mathop{\text{argmin}}_{\theta\in\mathbb{K}} MSE(\theta|\mathcal{F}^N)$.}
\end{algorithmic}} \end{algorithm} \begin{algorithm}[!htp] \caption{\small Choose the next alternative $x^n$.}\label{algo_policy}{\small \begin{algorithmic}[1] \REQUIRE{Policy: \textit{Pure Exploration, Pure Exploitation, Max-Var, KGDP-$f$, KGDP-$H$}; Alternatives: $\mathcal{X}=\{x_1,...,x_M\}$; Probability vector: $\vec{p}^n = \{p_1^n,...,p_L^n\}$; $L$ candidates: $\mathbb{L}^n=\{\theta_1^n,...,\theta_L^n\}$.} \ENSURE{The alternative to measure next: $x^n$.} \SWITCH{$policy$} \CASE{KGDP-$f$} \STATE{$x^n = \mathop{\text{argmax}}_{x\in\mathcal{X}}\nu^{KGDP-f,n}(x)$.} \vspace{0.5em} \end{ALC@g} \CASE{KGDP-$H$} \STATE{$x^n = \mathop{\text{argmax}}_{x\in\mathcal{X}}\nu^{KGDP-H,n}(x)$.} \vspace{0.5em} \end{ALC@g} \CASE{Pure Exploration} \STATE{$x^n=rand(\{x_1,...,x_M\})$.} \vspace{0.5em} \end{ALC@g} \CASE{Pure Exploitation} \STATE{$x^n=\mathop{\text{argmax}}_{x\in\mathcal{X}}\bar{f}^n(x)$.} \vspace{0.5em} \end{ALC@g} \CASE{Max-Var} \STATE{$x^n=\mathop{\text{argmax}}_{x\in \mathcal{X}} \sum_{j=1}^L p_j^n[f(x;\theta_j^n)-\bar{f}^n(x)]^2$.} \vspace{0.5em} \end{ALC@g} \STATE \textbf{end switch} \RETURN $x^n$. \end{algorithmic}} \end{algorithm} \section*{Appendix (Proofs)}\label{append} \vspace{1em} \noindent\textbf{Lemma~\ref{p_is_0}. } Let $N^n(x)$ be the number of measurements taken at $x$ when the total number of measurements is $n$. If $N^n(x)\rightarrow\infty$ as $n\rightarrow\infty$, then for every $l\in\{1,2,...,L\}$ such that $f(x;\theta_l)\ne f(x;\theta^*)$, we have $p_l^n\rightarrow 0$ almost surely, \emph{i.e.,} $\mathbb{P}(\lim_{n\rightarrow \infty}p_l^n=0)=1$. \subsubsection*{Proof of Lemma~\ref{p_is_0}} We prove the following claim first. {\bf Claim 1:} For every $m\in \{1,...,M\}$, let $p_1^n, p_2^n, ..., p_L^n$ be the probabilities of the $L$ $\theta$'s after $n$ measurements at $x_m$ (\emph{i.e.,} we take $n$ measurements at $x_m$ and nowhere else). Then for every $l\in\{1,2,...,L\}$ such that $f(x_m;\theta_l)\ne f(x_m;\theta^*)$, $p_l^n\rightarrow 0$ almost surely. {\bf Proof of Claim 1:} Suppose the results of the $n$ measurements at $x_m$ are $\{\hat{y}^1,...,\hat{y}^n\}$. Let $f_l = f(x_m;\theta_l)$. Then $p_1,...,p_L$ are given by: \small \begin{align*} p^n_l=\frac{\prod_{j=1}^{n}\exp[-\frac{(\hat{y}^j-f_l)^2}{2\sigma^2}]}{\sum_{i=1}^L\prod_{j=1}^{n}\exp[-\frac{(\hat{y}^j-f_i)^2}{2\sigma^2}]}. \end{align*}\normalsize Let $f^*=f(x_m;\theta^*)$. Then $\hat{y}^j\sim N(f^*, \sigma^2)$. Let $Z^j=\frac{\hat{y}^j-f^*}{\sigma}$, so that $\hat{y}^j = f^*+\sigma Z^j$ with $Z^j\sim N(0,1)$.
For any $l$ such that $f_l\ne f^*$, \small \begin{align*} p_l^n \leq & \frac{\prod_{j=1}^{n}\exp[-\frac{(\hat{y}^j-f_l)^2}{2\sigma^2}]}{\prod_{j=1}^{n}\exp[-\frac{(\hat{y}^j-f^*)^2}{2\sigma^2}]} = \frac{\exp[-\frac{\sum_{j=1}^n (\hat{y}^j-f_l)^2}{2\sigma^2}]}{\exp[-\frac{\sum_{j=1}^n (\hat{y}^j-f^*)^2}{2\sigma^2}]} = \frac{\exp[-\frac{\sum_{j=1}^n (f^*-f_l+\sigma Z^j)^2}{2\sigma^2}]}{\exp[-\frac{\sum_{j=1}^n (\sigma Z^j)^2}{2\sigma^2}]}\\ = &\exp\left(-\frac{\sum_{j=1}^n [(f^*-f_l+\sigma Z^j)^2 - (\sigma Z^j)^2]}{2\sigma^2}\right)\\ = & \exp\left(-\frac{ n(f^*-f_l)^2 + 2\sigma (f^*-f_l)\sum_{j=1}^n Z^j }{2\sigma^2}\right).\numberthis\label{Eq_bound1} \end{align*}\normalsize For any ${\epsilon}>0$, $\exp\left(-\frac{ n(f^*-f_l)^2 + 2\sigma (f^*-f_l)\sum_{j=1}^n Z^j }{2\sigma^2}\right)<{\epsilon} \Rightarrow p_l^n<{\epsilon}$, and therefore\\ $\{\lim_{n\rightarrow\infty}\exp\left(-\frac{ n(f^*-f_l)^2 + 2\sigma (f^*-f_l)\sum_{j=1}^n Z^j }{2\sigma^2}\right) = 0\} \subseteq \{ \lim_{n\rightarrow \infty}p_l^n = 0 \}$. We show that\small \begin{align*} \mathbb{P}\left[\lim_{n\rightarrow\infty} \exp\left(-\frac{ n(f^*-f_l)^2 + 2\sigma (f^*-f_l)\sum_{j=1}^n Z^j }{2\sigma^2}\right) = 0\right]=1, \end{align*}\normalsize which implies the almost sure convergence of $p_l^n$. By the strong law of large numbers, $\frac{\sum_{j=1}^n Z^j}{n} \xlongrightarrow{a.s.} 0$. That is, there exists $\Omega_{x,l}\subseteq \Omega$ such that for every $\omega\in\Omega_{x,l}$, $\lim_{n\rightarrow\infty}\frac{\sum_{j=1}^n Z^j(\omega)}{n}=0$, and $\mathbb{P}(\Omega_{x,l})=1$. Our goal is to show that for every $\omega\in\Omega_{x,l}$,\small \begin{align*} \lim_{n\rightarrow\infty} \exp\left(-\frac{ n(f^*-f_l)^2 + 2\sigma (f^*-f_l)\sum_{j=1}^n Z^j(\omega) }{2\sigma^2}\right) = 0. \end{align*}\normalsize Without loss of generality, we assume $f^*-f_l> 0$. For every $\omega\in \Omega_{x,l}$, there exists $N_1\in\mathbb{N}$ such that for all $n>N_1$, $\left| \frac{\sum_{j=1}^n Z^j(\omega)}{n} \right| < \frac{f^*-f_l}{4\sigma}$. Therefore, for any $n>N_1$, $n\left[(f^*-f_l)^2 + 2\sigma (f^*-f_l)\frac{\sum_{j=1}^n Z^j(\omega)}{n}\right] > n\cdot \frac{(f^*-f_l)^2}{2}$. Hence, for all $n>N_1$, \small \begin{align} \exp\left(-\frac{ n(f^*-f_l)^2 + 2\sigma (f^*-f_l)\sum_{j=1}^n Z^j(\omega) }{2\sigma^2}\right) < \exp\left(-\frac{n(f^*-f_l)^2}{4\sigma^2}\right).\label{Eq_bound2} \end{align} \normalsize For every ${\epsilon}>0$, there exists $N_2$ such that for all $n>N_2$, $\exp\left(-\frac{n(f^*-f_l)^2}{4\sigma^2}\right)<{\epsilon}$. Taking $N=\max(N_1, N_2)$, we conclude that for every $\omega\in\Omega_{x,l}$ and every ${\epsilon}>0$, there exists $N>0$ such that for all $n>N$, \small \begin{align*} \exp\left(-\frac{ n(f^*-f_l)^2 + 2\sigma (f^*-f_l)\sum_{j=1}^n Z^j(\omega) }{2\sigma^2}\right) < {\epsilon}. \end{align*} \normalsize Hence, \small \begin{align*} \Omega_{x,l}\subseteq \left\{ \lim_{n\rightarrow\infty} \exp\left(-\frac{ n(f^*-f_l)^2 + 2\sigma (f^*-f_l)\sum_{j=1}^n Z^j }{2\sigma^2}\right) = 0 \right\} \subseteq \{\lim_{n\rightarrow \infty}p_l^n=0\}. \end{align*}\normalsize Since $\mathbb{P}(\Omega_{x,l})=1$, we have $\mathbb{P}\{\lim_{n\rightarrow \infty}p_l^n=0\} = 1$. \hfill$\Box$ \bigskip Returning to Lemma~\ref{p_is_0}, define $N^n(x,j)$ as the actual time at which we make the $j$-th measurement at $x$ ($1\leq j\leq N^n(x)$).
We have: \small \begin{align*} p^n_l&= \frac{\prod_{j=1}^{n}\exp[-\frac{(\hat{y}^j-f(x^{j-1};\theta_l))^2}{2\sigma^2}]}{\sum_{i=1}^L\prod_{j=1}^{n}\exp[-\frac{(\hat{y}^j-f(x^{j-1};\theta_i))^2}{2\sigma^2}]} \leq \frac{\prod_{j=1}^{n}\exp[-\frac{(\hat{y}^j-f(x^{j-1};\theta_l))^2}{2\sigma^2}]}{\prod_{j=1}^{n}\exp[-\frac{(\hat{y}^j-f(x^{j-1};\theta^*))^2}{2\sigma^2}]}\\ &= \prod_{x\in\mathcal{X}}\frac{\prod_{j=1}^{N^n(x)}\exp[-\frac{(\hat{y}^{N^n(x,j)}-f(x;\theta_l))^2}{2\sigma^2}]}{\prod_{j=1}^{N^n(x)}\exp[-\frac{(\hat{y}^{N^n(x,j)}-f(x;\theta^*))^2}{2\sigma^2}]}\\ &\xlongequal{def}\prod_{x\in\mathcal{X}}r(x)\\ &= \prod_{x\in \{x:\, f(x;\theta_l)=f(x;\theta^*)\}}r(x) \prod_{\substack{x\in \{x:\,N^n(x)\rightarrow\infty,\\f(x;\theta_l)\ne f(x;\theta^*)\}}}r(x) \prod_{\substack{x\in \{x:\, N^n(x)<\infty,\\f(x;\theta_l)\ne f(x;\theta^*)\} }}r(x). \end{align*} \normalsize The first factor equals $1$, and the third is constant for all sufficiently large $n$ (those alternatives are measured only finitely often). For any $\omega\in \bigcap_{x\in\mathcal{X}} \Omega_{x,l}$, the second factor tends to $0$ by Claim 1. Since $\mathbb{P}(\bigcap_{x\in\mathcal{X}} \Omega_{x,l})=1-\mathbb{P}(\bigcup_{x\in\mathcal{X}}\Omega^c_{x,l})\geq 1-\sum_{x\in\mathcal{X}}\mathbb{P}(\Omega_{x,l}^c)=1$, we have $p_l^n\xlongrightarrow{a.s.}0$. $\Box$ \bigskip\bigskip \noindent\textbf{Lemma~\ref{f_non_neg}. } For all $n\geq 0$ and all $x\in \mathcal{X}$, the KGDP-$f$ score satisfies $\nu^{KGDP-f,n}(x)\geq 0$. Equality holds if and only if (1) there exists $x'$ such that $x'\in \mathop{\text{argmax}}_x f(x;\theta_i)$ for all $i$ with $p_i^n>0$, or (2) $p_i^n=0$ whenever $f(x;\theta_i)\ne f(x;\theta^{*})$. \subsubsection*{Proof of Lemma~\ref{f_non_neg}} Suppose at time $n$ the probabilities of our prior candidates are $(p_1^n, p_2^n, ..., p_L^n)$. Since $\max$ is a convex function, by Jensen's inequality, \small \begin{align*} \nu^{KGDP-f,n}(x) &= \mathbb{E}^n[\max_{x'}\sum_{i=1}^Lf_i(x')p_i^{n+1}(x)|S^n=s,x^n=x]-\max_{x'}\sum_{i=1}^Lf_i(x')p_i^n\\ & \geq \max_{x'}\mathbb{E}[\sum_{i=1}^Lf_i(x')p_i^{n+1}(x)|S^n=s,x^n=x] - \max_{x'}\sum_{i=1}^Lf_i(x')p_i^n. \end{align*} \normalsize We show that for all $x$ and $x'$, $\mathbb{E}[\sum_{i=1}^Lf_i(x')p_i^{n+1}(x)|S^n=s,x^n=x] = \sum_{i=1}^Lf_i(x')p_i^n$. We know \small \begin{align*} \mathbb{E}[\sum_{i=1}^Lf_i(x')p_i^{n+1}(x)|S^n=s,x^n=x] = \sum_{i=1}^Lf_i(x')\mathbb{E}[p_i^{n+1}(x)|S^n=s, x^n=x]. \end{align*} \normalsize Suppose the measurement at $x$ is $\hat{y}$; then \small \begin{align*} p_i^{n+1}(x)=\frac{p_i^n\exp[-\frac{(\hat{y}-f_i(x))^2}{2\sigma^2}]}{\sum_{j=1}^L p_j^n\exp[-\frac{(\hat{y}-f_j(x))^2}{2\sigma^2}]}. \end{align*} \normalsize Based on our prior belief, the distribution of $\hat{y}$ is: \small \begin{align*} \hat{y}\sim \left\{\begin{aligned} & N(f_1(x), \sigma^2), \text{ with probability }p_1^n \\ & N(f_2(x), \sigma^2), \text{ with probability }p_2^n \\ & ...\\ & N(f_L(x), \sigma^2), \text{ with probability }p_L^n \end{aligned} \right.
\end{align*} \normalsize Therefore, \small \begin{align*} \mathbb{E} \left[p_i^{n+1}(x)\right] &= \mathbb{E} \left( \frac{p_i^n\exp[-\frac{(\hat{y}-f_i(x))^2}{2\sigma^2}]}{\sum_{j=1}^L p_j^n\exp[-\frac{(\hat{y}-f_j(x))^2}{2\sigma^2}]} \right)\\ &= \int_{-\infty}^{\infty} \sum_{k=1}^L p_k^n \frac{p_i^n\exp[-\frac{(\hat{y}-f_i(x))^2}{2\sigma^2}]}{\sum_{j=1}^L p_j^n\exp[-\frac{(\hat{y}-f_j(x))^2}{2\sigma^2}]}\frac{1}{\sqrt{2\pi}\sigma}\exp[-\frac{(\hat{y}-f_k(x))^2}{2\sigma^2}]d\hat{y}\\ &= \int_{-\infty}^{\infty} \frac{1}{\sqrt{2\pi}\sigma} p_i^n\exp[-\frac{(\hat{y}-f_i(x))^2}{2\sigma^2}] \frac{\sum_{k=1}^L p_k^n \exp[-\frac{(\hat{y}-f_k(x))^2}{2\sigma^2}]}{\sum_{j=1}^L p_j^n\exp[-\frac{(\hat{y}-f_j(x))^2}{2\sigma^2}]} d\hat{y}\\ &= p_i^n\int_{-\infty}^{\infty}\frac{1}{\sqrt{2\pi}\sigma} \exp[-\frac{(\hat{y}-f_i(x))^2}{2\sigma^2}] d\hat{y}\\ &= p_i^n. \end{align*}\normalsize Hence, for any $x$ and $x'$, $\mathbb{E}[\sum_{i=1}^Lf_i(x')p_i^{n+1}(x)|S^n=s,x^n=x] = \sum_{i=1}^Lf_i(x')p_i^n$, which establishes $\nu^{KGDP-f,n}(x)\geq 0$. \vspace{2em} For any convex function $\varphi$, the general form of Jensen's inequality is $\mathbb{E}\varphi(X)\geq \varphi(\mathbb{E}X)$. Equality requires that (1) $\varphi$ is linear on the support of $X$, or (2) $X$ is constant. These two cases correspond to the two conditions stated in Lemma~\ref{f_non_neg}, respectively. For (1), $\max(\cdot)$ is not a linear function in general, except in the aligned case of Condition 1 of Lemma~\ref{f_non_neg}, where the same $x'$ attains the maximum for every candidate with positive probability. For (2), constancy is equivalent to the following statement: for every $x'$, $\sum_{i=1}^L f(x';\theta_i)p_i^{n+1}(x)$ is constant over all possible measurement outcomes $\hat{y}$ at $x$. This is in turn equivalent to the $p_i^{n+1}(x)$'s being constant; by the updating rule in Equation~(\ref{eq_p_update1}), this is equivalent to Condition 2 of Lemma~\ref{f_non_neg}. $\Box$ \bigskip\bigskip \noindent\textbf{Lemma~\ref{H_non_neg}. } For all $n\geq 0$ and all $x\in \mathcal{X}$, the KGDP-$H$ score satisfies $\nu^{KGDP-H,n}(x)\geq 0$. Equality holds if and only if $p_i^n=0$ whenever $f(x;\theta_i)\ne f(x;\theta^{*})$. \subsubsection*{Proof of Lemma~\ref{H_non_neg}} Suppose at time $n$ the probabilities of our prior candidates are $(p_1^n, p_2^n, ..., p_L^n)$. Since $t\log t$ is a convex function, by Jensen's inequality and the identity $\mathbb{E}^n[p_i^{n+1}(x)]=p_i^n$ established in the proof of Lemma~\ref{f_non_neg}, \small \begin{align*} \nu^{KGDP-H,n}(x) &= \mathbb{E}^n[\sum_{i=1}^L p_i^{n+1}(x)\log p_i^{n+1}(x) |S^n=s,x^n=x]-\sum_{i=1}^L p_i^n\log p_i^n\\ &= \sum_{i=1}^L \mathbb{E}^n[p_i^{n+1}(x)\log p_i^{n+1}(x) |S^n=s,x^n=x]-\sum_{i=1}^L p_i^n\log p_i^n\\ &\geq \sum_{i=1}^L \mathbb{E}^n\left[p_i^{n+1}(x)\right]\log \mathbb{E}^n\left[p_i^{n+1}(x)\right]-\sum_{i=1}^L p_i^n\log p_i^n\\ &= 0. \end{align*} \normalsize Since $t\log t$ is nonlinear, the only case of equality is that $p_i^{n+1}(x)$ is constant, which means all functions with positive probability have the same value at $x$. $\Box$ \bigskip\bigskip \noindent\textbf{Lemma~\ref{v_score0}. } For every $\omega\in\Omega_0$ and every $x\in\mathcal{X}'(\omega)$, we have $\lim_{n\rightarrow\infty}\nu^{KGDP-f,n}(x)(\omega)=0$ and $\lim_{n\rightarrow\infty}\nu^{KGDP-H,n}(x)(\omega)=0$. \subsubsection*{Proof of Lemma~\ref{v_score0}} First, note that for fixed $x$ and $n$, $\nu^{KGDP-f,n}(x)$ and $\nu^{KGDP-H,n}(x)$ are functions of $\vec{p}=(p_1^n,...,p_L^n)$. In order to concentrate on $\vec{p}$, we pick any fixed $\omega\in\Omega_0$ and any fixed $x\in\mathcal{X}'(\omega)$, and denote $\nu^{KGDP-f,n}(x)$ and $\nu^{KGDP-H,n}(x)$ by $\nu_f(\vec{p})$ and $\nu_H(\vec{p})$.
For this fixed $x$, we assume there are $h$ $(1\leq h\leq L-1)$ candidate functions that differ from $f(x;\theta^*)$ at $x$ (if there are none, both scores are identically $0$ and there is nothing to prove). Without loss of generality, we assume they are $\theta_1,...,\theta_h$; that is, $f(x;\theta_i)\ne f(x;\theta^*)$ for $\forall i\in [h]$. Let $\vec{p}_0 = (0,0,...,0, p_{h+1}, ..., p_L)$, where $p_{h+1}+...+p_L=1$. By Lemma~\ref{f_non_neg} and Lemma~\ref{H_non_neg}, $\vec{p}_0$ meets the conditions for $\nu_f(\vec{p})$ and $\nu_H(\vec{p})$ to be $0$; that is, $\nu_f(\vec{p}_0)=0$ and $\nu_H(\vec{p}_0)=0$. Moreover, $\nu_f(\vec{p})$ and $\nu_H(\vec{p})$ are continuous in $\vec{p}$. Taking $\nu_f(\vec{p})$ as an example: $\sum_{i=1}^L f_i(x')p_i^n\exp\left[-\frac{(\hat{y} - f_i(x))^2}{2\sigma^2}\right]$ is continuous in $\vec{p}$; hence so is $\max_{x'}\left(\sum_{i=1}^L f_i(x')p_i^n\exp\left[-\frac{(\hat{y} - f_i(x))^2}{2\sigma^2}\right]\right)$, as a maximum of finitely many continuous functions; hence $\int_{-\infty}^{+\infty}\max_{x'}\left(\sum_{i=1}^L f_i(x')p_i^n\exp\left[-\frac{(\hat{y} - f_i(x))^2}{2\sigma^2}\right]\right)d\hat{y}$ is continuous as well, and by Equation~\ref{eq_KGDP2}, $\nu_f(\vec{p})$ is continuous. Since $\vec{p}$ is defined on a compact set, by the Heine-Cantor theorem both $\nu_f(\vec{p})$ and $\nu_H(\vec{p})$ are uniformly continuous. That is, for $\forall {\epsilon}>0$ there exists $\delta>0$ such that for all $\vec{p}_1,\vec{p}_2$ with $|\vec{p}_1-\vec{p}_2|<\delta$, we have $|\nu_f(\vec{p}_1)-\nu_f(\vec{p}_2)|<{\epsilon}$ and $|\nu_H(\vec{p}_1)-\nu_H(\vec{p}_2)|<{\epsilon}$. For this $\omega\in\Omega_0$ and $x\in\mathcal{X}_\infty(\omega)$, since $\lim_{n\rightarrow\infty}p_i^n(\omega)=0$ for $1\leq i\leq h$ (by Lemma~\ref{p_is_0}), there exists $N$ such that for $\forall n>N$, $p_i^n(\omega)<\delta/(L+1)$. Let $\vec{p}'_n=(0,...,0, p^n_{h+1},...,p^n_{L-1}, p^n_L+\sum_{i=1}^h p_i^n)$. Then $\nu_f(\vec{p}'_n)=\nu_H(\vec{p}'_n)=0$, and $|\vec{p}_n - \vec{p}'_n|<\sqrt{h+h^2}\cdot \frac{\delta}{L+1}<\delta$. Therefore, $\nu_f(\vec{p}_n)<{\epsilon}$ and $\nu_H(\vec{p}_n)<{\epsilon}$ for $\forall n>N$. Hence, for $\forall \omega\in\Omega_0$ and $x\in\mathcal{X}_\infty(\omega)$, $\lim_{n\rightarrow\infty}\nu^{KGDP-f,n}(x)=0$ and $\lim_{n\rightarrow\infty}\nu^{KGDP-H,n}(x)=0$. $\Box$ \bigskip\bigskip \noindent\textbf{Lemma~\ref{measure_suf1}. } For any $\omega\in\Omega_0$, the alternatives measured infinitely often under the KGDP-$f$ or KGDP-$H$ policy constitute a sufficient set. \subsubsection*{Proof of Lemma~\ref{measure_suf1}} Assume the contrary. For any $\omega\in\Omega_0$, if $\mathcal{X}_{\infty}(\omega)$ is not a sufficient set, then besides $\theta^*$ there exist other $\theta$'s that fit the true values on $\mathcal{X}_{\infty}(\omega)$. Assume there are $h$ fitting candidates in total, including $\theta^*$, with $2\leq h\leq L$; WLOG, assume they are $\theta_1$,...,$\theta_h$. So for any $l\in[h]$, \begin{align*} f(x;\theta_l) = f(x;\theta^*), \text{ for }\forall x\in\mathcal{X}_\infty(\omega). \end{align*} By Lemma~\ref{p_is_0}, for any $h+1\leq l \leq L$, $\lim_{n\rightarrow\infty}p_l^n(\omega)=0$. We now show that for any $1\leq l \leq h$, $\lim_{n\rightarrow\infty}p_l^n(\omega)$ exists and is positive. For this $\omega$, let $\mathcal{L}^n_1,\mathcal{L}^n_2,...,\mathcal{L}^n_L$ be the likelihoods of the $L$ $\theta$'s given the first $n$ measurements; that is,\small \begin{align*} \mathcal{L}^n_l = \prod_{i=1}^n \exp(-\frac{(\hat{y}^i - f(x^{i-1};\theta_l))^2}{2\sigma^2}). \end{align*}\normalsize Let $T$ be the last time that we measure any $x$ outside $\mathcal{X}_\infty(\omega)$.
After time $T$, for any $l\in\{1,2,...,h\}$, {\small \begin{align*} &p_l^n(\omega) = \frac{\mathcal{L}^n_l}{\sum_{m=1}^h \mathcal{L}^n_m + \sum_{m=h+1}^L \mathcal{L}^n_m }\\ =& \frac{\mathcal{L}^T_l \prod_{i=T+1}^n \exp(-\frac{(\hat{y}^i - f(x^{i-1};\theta^{*}))^2}{2\sigma^2})}{\sum_{m=1}^h \mathcal{L}^T_m \prod_{i=T+1}^n \exp(-\frac{(\hat{y}^i - f(x^{i-1};\theta^{*}))^2}{2\sigma^2}) + \sum_{m=h+1}^L \mathcal{L}^T_m \prod_{i=T+1}^n \exp(-\frac{(\hat{y}^i - f(x^{i-1};\theta_{m}))^2}{2\sigma^2})}\\ =& \frac{\mathcal{L}^T_l}{\sum_{m=1}^h \mathcal{L}^T_m + \sum_{m=h+1}^L \mathcal{L}^T_m \prod_{i=T+1}^n \frac{\exp(-\frac{(\hat{y}^i - f(x^{i-1};\theta_{m}))^2}{2\sigma^2})}{\exp(-\frac{(\hat{y}^i - f(x^{i-1};\theta^*))^2}{2\sigma^2})}}. \end{align*} } In the proof of Lemma~\ref{p_is_0}, we have shown that $\lim_{n\rightarrow\infty} \prod_{i=T+1}^n \frac{\exp(-\frac{(\hat{y}^i - f(x^{i-1};\theta_{m}))^2}{2\sigma^2})}{\exp(-\frac{(\hat{y}^i - f(x^{i-1};\theta^*))^2}{2\sigma^2})} = 0$. Therefore, \small\begin{align*} \lim_{n\rightarrow\infty}p_l^n(\omega)=\frac{\mathcal{L}^T_l}{\sum_{m=1}^h \mathcal{L}^T_m}>0. \end{align*}\normalsize By Lemma~\ref{v_score0}, for $\forall x\in\mathcal{X}_\infty(\omega)$, $\lim_{n\rightarrow\infty}\nu^{KGDP-f,n}(x)(\omega)=0$ and $\lim_{n\rightarrow\infty}\nu^{KGDP-H,n}(x)(\omega)=0$. On the other hand, since some fitting candidate $\theta_l\ne\theta^*$ must differ from $\theta^*$ somewhere, there exists $x\in\mathcal{X}_\infty(\omega)^c$ with $\lim_{n\rightarrow\infty}\nu^{KGDP-f,n}(x)(\omega)>0$ and $\lim_{n\rightarrow\infty}\nu^{KGDP-H,n}(x)(\omega)>0$. Since the KGDP-$f$ (or KGDP-$H$) policy always chooses the $x$ with the largest KGDP-$f$ (or KGDP-$H$) score, it will measure some $x\in\mathcal{X}_\infty(\omega)^c$ after $T$, which contradicts the definition of $T$. $\Box$ \bigskip\bigskip \noindent\textbf{Theorem~\ref{KGDP-f_con}. } Non-resampling KGDP-$f$ with truth from prior is asymptotically optimal in finding both the optimal alternative and the correct parameter. The same holds for KGDP-$H$. \subsubsection*{Proof of Theorem~\ref{KGDP-f_con}} Under either the KGDP-$f$ or the KGDP-$H$ policy, for any $\theta_l\ne \theta^*$, Lemma~\ref{measure_suf1} implies that for any $\omega\in\Omega_0$, there exists $x\in\mathcal{X}_\infty(\omega)$ such that $f(x;\theta_l)\ne f(x;\theta^*)$. By Lemma~\ref{p_is_0}, we have $\mathbb{P}(\lim_{n\rightarrow\infty}p_l^n=0)=1$. That is, $\mathbb{P}(\lim_{n\rightarrow\infty}p^n(\theta^*)=1)=1$, and $\mathbb{P}(\lim_{n\rightarrow\infty}\bar{f}^n(x)=f(x;\theta^*) \text{ for all } x)=1$. $\Box$ \bigskip\bigskip \noindent\textbf{Lemma~\ref{measure_suf}. } For any $\omega\in\Omega_1$, the alternatives measured infinitely often under the KGDP-$f$ or KGDP-$H$ policy constitute a sufficient set. We denote this set as $\mathcal{X}_\infty(\omega)$. \subsubsection*{Proof of Lemma~\ref{measure_suf}} For any $\omega\in\Omega_1$, let $T$ be the last time that we measure any $x$ outside $\mathcal{X}_\infty(\omega)$. Assume the contrary. Since $\mathcal{X}_\infty(\omega)$ is then not a sufficient set, there exists at least one $\theta\in\mathbb{K}$, $\theta\ne\theta^*$, such that $f(x;\theta)=f(x;\theta^*)$ for $\forall x\in\mathcal{X}_\infty(\omega)$. Denote the set of these $\theta$'s as $\mathbb{K}'$. Let $\mathcal{L}^n_1,\mathcal{L}^n_2,...,\mathcal{L}^n_K$ be the likelihoods of the $K$ $\theta$'s given the first $n$ measurements.
If we regard $\mathbb{K}$ as the ``candidate set'' and let $w^n(\theta)$ denote its ``probability'', then, as the proof of Lemma~\ref{measure_suf1} shows, for $\forall \theta\in\mathbb{K}'\bigcup\{\theta^*\}$, $\lim_{n\rightarrow\infty} w^n(\theta)=\frac{\mathcal{L}^T_{\theta}}{\sum_{\theta'\in\mathbb{K}'\bigcup\{\theta^*\}} \mathcal{L}^T_{\theta'}}>0$. For all other $\theta$'s, by Lemma~\ref{p_is_0}, $\lim_{n\rightarrow\infty}w^n(\theta)=0$. $w^n(\theta)$ is exactly the weight of $\theta$ when we do resampling. Hence, as $n$ gets larger, $\theta$'s in $\mathbb{K}'\bigcup\{\theta^*\}$ always rank higher than the others to enter the small pool (i.e., the sub-level set), from which we resample. \bigskip Let $A_n$ denote the event that at least two $\theta$'s in $\mathbb{K}'\bigcup \{\theta^*\}$ are in the candidate set at time $n$. We first show that $A_n$ happens infinitely often; that is, $\mathbb{P}_\theta(\bigcap_{m\geq 1}\bigcup_{n\geq m}A_n)=1$, where $\mathbb{P}_\theta$ indicates that this probability is calculated in the $(\Omega_\theta, \mathcal{F}_\theta, \mathbb{P}_\theta)$ space. There are two cases at a resampling step: (1) $|\mathbb{L}_{rm}|> 1$ or (2) $|\mathbb{L}_{rm}|=1$ (recall that $\mathbb{L}_{rm}$ is the set of candidates to be removed). Since resampling happens infinitely often, at least one of (1) and (2) happens infinitely often. We first assume (1) happens infinitely often, indexed by a subsequence of times $\{s_n\}$. Let $B_{s_n}$ be the event that at time $s_n$, two $\theta$'s from $\mathbb{K}'\bigcup\{\theta^*\}$ are selected in the first draw. (Recall that we may have multiple draws if resampling keeps being triggered. However, for $n$ large enough, if two such $\theta$'s are selected in the first draw, they will not be dropped in this iteration; hence $B_{s_n}\subset A_{s_n}$.) Since all $w^n(\theta)$'s have limits, we know $\lim_{n\rightarrow\infty} \mathbb{P}_\theta (B_{s_n}) >0$. Therefore, $\sum_{n=1}^\infty \mathbb{P}_\theta(B_{s_n})=\infty$. On the other hand, for a fixed $\omega$, the $w^n(\theta)$'s are all determined, and therefore the $B_{s_n}$ are pairwise independent. By the Borel-Cantelli Lemma, $B_{s_n}$ happens infinitely often with probability $1$, i.e., $\mathbb{P}_\theta(\bigcap_{m\geq 1}\bigcup_{s_n\geq m}B_{s_n})=1$. Since $B_{s_n}$ is a subset of $A_{s_n}$, we get $\mathbb{P}_\theta(\bigcap_{m\geq 1}\bigcup_{s_n\geq m}A_{s_n})=1$, and furthermore $\mathbb{P}_\theta(\bigcap_{m\geq 1}\bigcup_{n\geq m}A_{n})=1$. Otherwise, (1) happens a finite number of times and (2) happens infinitely often, indexed by $\{t_n\}$; in this case we assume the contrary, i.e., that there exists a time $T_2$ such that after $T_2$, among the $L$ candidates there is at most one $\theta\in \mathbb{K}'\bigcup \{\theta^*\}$. At time $t_n$, since $|\mathbb{L}_{rm}|=1$, by assumption the other $L-1$ candidates must be identical, and the newly drawn one must be the same as those $L-1$ candidates. The probability that this keeps happening from time $t_M$ to infinity is $\lim_{N\rightarrow\infty} \prod_{n=M}^N w^{t_n}(\theta)=0$. Therefore, our assumption does not hold and $A_n$ still happens infinitely often. \bigskip Hence, for any fixed $\omega\in\Omega_1$, with probability $1$ there exists a subsequence $\{q_n(\omega)\}$ of times at which the candidate set contains at least two $\theta$'s in $\mathbb{K}'\bigcup \{\theta^*\}$, denoted $\theta_1^n$ and $\theta_2^n$.
For any $x\in \mathcal{X}_\infty(\omega)$, $\lim_{n\rightarrow\infty} \nu^{KGDP-f,n}(x)(\omega)=0$ and $\lim_{n\rightarrow\infty} \nu^{KGDP-H,n}(x)(\omega)=0$; while for any $x$ such that $f(x;\theta_1^n)\ne f(x;\theta_2^n)$ and any $q_n$, $\nu^{KGDP-f,q_n}(x)(\omega)>0$ and $\nu^{KGDP-H,q_n}(x)(\omega)>0$. As there are only finitely many combinations of $\theta_1$ and $\theta_2$, and each $\theta\in\mathbb{K}'\bigcup \{\theta^*\}$ has a positive probability in the limit, there exists ${\epsilon}>0$ such that $\nu^{KGDP-f,q_n}(x)(\omega)>{\epsilon}$ and $\nu^{KGDP-H,q_n}(x)(\omega)>{\epsilon}$ for some such $x$. Since $q_n$ can be larger than $T$, this contradicts the fact that KGDP-$f$ or KGDP-$H$ will not measure any $x$ outside $\mathcal{X}_\infty(\omega)$ after time $T$. $\Box$ \bigskip\bigskip \noindent\textbf{Theorem~\ref{re_KGDP-f_con}. } Resampling KGDP-$f$ is asymptotically optimal in finding the optimal alternative and the correct parameter. The same holds for KGDP-$H$. \subsubsection*{Proof of Theorem~\ref{re_KGDP-f_con}} For any fixed $\omega\in\Omega_1$, let $C_n$ ($C_n\in\mathcal{F}_\theta$) be the event that the $L$ candidates include $\theta^*$ at time $n$. By Lemma~\ref{measure_suf}, $\theta^*$ is the only parameter in $\mathbb{K}$ that fits $\mathcal{X}_\infty(\omega)$. Assume $\{s_n\}$ is the subsequence of times when resampling happens. Then $\mathbb{P}_\theta (C_{s_n})>\frac{w^{s_n}(\theta^*)(\omega)}{\sum_{\theta\in\mathbb{K}} w^{s_n}(\theta)(\omega)}$, where the right-hand side is the probability that $\theta^*$ is chosen in the first draw. We show $\mathbb{P}_\theta(\bigcup_{m\geq 1}\bigcap_{n\geq m}C_n)=1$, which means that $\theta^*$ is in the candidate set for all but a finite number of times. Since resampling only happens on $\{s_n\}$, we know $\mathbb{P}_\theta(\bigcup_{m\geq 1}\bigcap_{n\geq m}C_n)= \mathbb{P}_\theta(\bigcup_{m\geq 1}\bigcap_{n\geq m}C_{s_n})$. Intuitively, this means that if after some time $T$ every resampling step keeps $\theta^*$ in the candidate set, then $\theta^*$ appears in the candidate set at every iteration after $T$. For this fixed $\omega$, we know\small \begin{align*} \mathbb{P}_\theta (C_{s_n}^c) = 1- \mathbb{P}_\theta (C_{s_n})< \frac{\sum_{\theta\in\mathbb{K}, \theta\ne\theta^*} w^{s_n}(\theta)}{\sum_{\theta\in\mathbb{K}} w^{s_n}(\theta)}<\frac{\sum_{\theta\in\mathbb{K}, \theta\ne\theta^*} w^{s_n}(\theta)}{w^{s_n}(\theta^*)}. \end{align*}\normalsize From the proof of Lemma~\ref{p_is_0} (for details, see Equations~(\ref{Eq_bound1}) and (\ref{Eq_bound2})), for this $\omega\in\Omega_1$ we know there exists $N_1$ such that for all $n>N_1$ and any $\theta\ne\theta^*$,\small \begin{align*} \frac{w^{s_n}(\theta)}{w^{s_n}(\theta^*)} < \exp(-n\frac{(f^*-f_l)^2}{4\sigma^2}). \end{align*}\normalsize Let $\rho=\underset{l,x:\,f_l(x)\ne f^*(x)}{\min} \frac{(f^*-f_l)^2}{4\sigma^2}>0$. Hence, for $n>N_1$, $\mathbb{P}_\theta (C_{s_n}^c)<K e^{-n\rho}$. Therefore, \vspace{-0.5em}\small\begin{align*} \sum_{n\geq 1}\mathbb{P}_\theta(C_{s_n}^c) < N_1 + \sum_{n \geq N_1+1}Ke^{-n\rho} = N_1+ K \frac{e^{-(N_1+1)\rho}}{(1-e^{-\rho})}<\infty. \end{align*} \normalsize \vspace{-0.5em} By the Borel-Cantelli Lemma, $\mathbb{P}_\theta(\bigcap_{m\geq 1}\bigcup_{n\geq m}C_{s_n}^c)=0$. Hence,\\ $\mathbb{P}_\theta(\bigcup_{m\geq 1}\bigcap_{n\geq m}C_{n})=\mathbb{P}_\theta(\bigcup_{m\geq 1}\bigcap_{n\geq m}C_{s_n})=1$.
Hence, for any $\omega\in\Omega_1$, with $\mathbb{P}_\theta$-probability $1$ there exists $T(\omega)$ such that for any $n\geq T(\omega)$, $\theta^*$ is in the candidate set, and $\lim_{n\rightarrow\infty}p^n(\theta^*)(\omega)=1$. This holds for every $\omega\in\Omega_1$; hence $\mathbb{P}_f \{ \lim_{n\rightarrow\infty}p^{n}(\theta^*) = 1 \} = 1$, where $\mathbb{P}_f$ is the measure on the full probability space $(\Omega_f, \mathcal{F}_f, \mathbb{P}_f)$, the product of $(\Omega, \mathcal{F}, \mathbb{P})$ and $(\Omega_\theta, \mathcal{F}_\theta, \mathbb{P}_\theta)$. Since we find $\theta^*$ with probability $1$, we also find the optimal $x$ with probability $1$. $\Box$ \section{Conclusion}\label{sec8} Optimal learning for nonlinear belief models is an interesting but difficult problem. With a belief model nonlinear in the unknown parameters, the knowledge gradient becomes computationally intractable over a continuous parameter space. In \cite{Chen2014}, the Knowledge Gradient with Discrete Priors (KGDP) model is proposed, which uses a sampled representation of the parameter space. This method works well when one of the sampled candidates is exactly the truth, but this rarely happens in practice. In this paper, we propose a resampling algorithm that works with KGDP to solve this problem. We no longer require that one of the candidates is correct, but instead use the measurement history to guide resampling and discover more probable parameters in the parameter space. Within a small budget of noisy and expensive experiments, our algorithm is capable of both finding the optimal alternative that maximizes the value function and narrowing down the location of the unknown parameters. Experiments on both a synthetic benchmark function and a real-world materials science problem, the stability of W/O/W nanoemulsions, show strong performance of our algorithm. We recognize the duality of this optimization problem and operate in both the alternative and the parameter spaces. Besides the idea of maximizing the value function (denoted KGDP-$f$), as used in the previous optimal learning literature, we also put forward an entropy-oriented metric that works under the knowledge gradient framework and aims directly at lowering the uncertainty of the parameters (in terms of entropy). Experiments show strong performance of both metrics. We also prove the asymptotic optimality of KGDP-$f$ and KGDP-$H$ both in the non-resampling case with truth from prior and in the resampling framework. That is, given an infinite number of measurements, we are able to find both the optimal alternative and the correct parameter. Hence, KGDP-$f$ and KGDP-$H$ are both myopically and asymptotically optimal in learning the alternative and reducing parameter uncertainty, respectively. \section{Introduction} We consider the following problem: we have a function over a finite number of alternatives, where the expression of the function is known except for some unknown parameters. Our goal is both to learn the correct parameters and to find the alternative that maximizes the function. Information can be collected by measuring the function value at any chosen alternative. However, the measurements are noisy and expensive, and we only have a limited budget to evaluate these alternatives.
After exhausting the budget, we have to provide the best estimate of the unknown parameters and identify the best alternative that maximizes our metric. The optimization of functions with noisy measurements has been broadly studied in the stochastic search community. \cite{Spall2003} provides an extensive review of different methods for stochastic search. Some commonly used algorithms include gradient estimation \cite{Fu2006}, response surface methodology \cite{Barton2006}, and metaheuristics such as tabu search and genetic algorithms (\cite{Olafsson2006}; also see \cite{Bianchi2009} for more on metaheuristics). These methods are usually designed for functions that are relatively easy to evaluate, while our focus is on problems where a function evaluation is expensive (and we do not have access to derivatives). One of the main challenges of problems with expensive experiments is known as ``\textit{exploration vs. exploitation}'' (see Chapter 12 of \cite{Powell2011}), which requires balancing the exploitation of the current optimal solutions against exploration of uncertain outcomes to learn the problem. Specifically, for a finite number of alternatives, some heuristics to strike this balance include epsilon-greedy (\cite{Sutton1998}; also see \cite{Singh2000} for convergence analysis), interval estimation \cite{Kaelbling1993}, and Chernoff interval estimation \cite{Streeter2006}. These methods usually perform well on small problems but have scaling issues in higher dimensions. More sophisticated methods to collect information efficiently include Gittins indices \cite{Gittins2011}, upper confidence bound methods \cite{Auer2002}, and expected improvement (\cite{Jones1998}; also see \cite{Gramacy2011} for recent work). This problem is an extension of the ranking and selection (R\&S) problem (see \cite{Kim2006,Hong2009} and the references cited there), which focuses on finding the optimal alternative, as opposed to learning the values of an underlying, parametric belief model. There have been two major approaches to R\&S problems: the frequentist approach and the Bayesian approach. The frequentist approach assumes that information only comes from observations and uses the distribution of the observed data to estimate the parameters (see, e.g., \cite{Hastie2009,Kim2006,Audibert2010}). The Bayesian approach begins with a probabilistic belief model about the true parameters, which is updated as new observations are collected (see, e.g., \cite{Chen2000,Chick2006}). Within the Bayesian approach, there are two main directions: the Optimal Computing Budget Allocation (OCBA) (see, e.g., \cite{Chen1995,Chen2000,He2007}), which tries to maximize the posterior probability of correct selection, and the Value of Information (VoI) procedures (see, e.g., \cite{Chick2001a,Frazier2008}), which maximize the improvement from a single experiment. Below we give a brief review of the VoI procedures, with special attention given to the knowledge gradient \cite{Frazier2008}. Since every experiment is expensive or time consuming (consider, for example, laboratory experiments in materials science that might take days or weeks), it is important to maximize the value of each measurement. \cite{Gupta1996} proposes the idea of computing the marginal value of information from a single experiment. Based on this, \cite{Frazier2008} extends the idea using the Bayesian approach, and presents the Knowledge Gradient (KG) policy.
It analyzes the case where all alternatives are independent, and also proves that KG is the only stationary policy that is both myopically and asymptotically optimal. \cite{Frazier2009} adapts the knowledge gradient to handle correlations in the beliefs about discrete alternatives. Both \cite{Frazier2008} and \cite{Frazier2009} use lookup table belief models, and become computationally expensive when the number of alternatives becomes large, as typically happens when an experimental choice involves multiple dimensions. \cite{Negoescu2011} is the first paper to study KG using a parametric belief model. It assumes that the belief model is linear in some unknown parameters, and imposes the uncertainty of the function values onto the parameters. This strategy reduces the number of quantities to be estimated from the number of alternatives (required by a lookup table belief model) to a much lower-dimensional parameter vector. While the knowledge gradient is easy to compute for belief models that are linear in the parameters, it becomes computationally intractable for nonlinear belief models. \cite{Chen2014} studies the more general nonlinear parametric model and proposes the Knowledge Gradient with Discrete Priors (KGDP), which overcomes the computational issues but requires that we represent the uncertainty about the unknown parameters using a finite number of samples, one of which has to be the true parameter. KGDP is able to handle any nonlinear belief model, but the assumption that one of the candidates is correct (we refer to this assumption later as the \textit{truth from prior} assumption) is too strong in most real-world settings, especially when the parameters have four or more dimensions. Moreover, \cite{Chen2014} does not give a theoretical proof of convergence of KGDP even under the truth from prior assumption. In this paper, we present a resampling algorithm that works with KGDP to find not only the best alternative but also the correct parameters, without the assumption that one of the candidates is the truth. We start with several potential candidates, and use the information from the measurements to guide resampling and discover more probable candidates. Regarding the objective of finding the correct parameter, we propose a new metric for calculating the knowledge gradient that minimizes the entropy of the candidate parameters. We also prove the convergence of both the non-resampling (with truth from prior) and the resampling KGDP algorithms. In the experimental section, we apply our algorithm to a real problem in materials science: optimizing the kinetic stability of a water-oil-water (W/O/W) nanoemulsion \cite{Chen2014}. Compared with \cite{Chen2014}, our algorithm shows significant improvements. In order to find the correct parameter, we need to deal with the parameter space (denoted the $\theta$ space) in addition to the alternative space (denoted the $x$ space). Here a dual problem arises: in the $x$ space, we solve a maximization problem to figure out which alternative maximizes some performance metric (strength, conductivity, reflectivity); in the $\theta$ space, we solve another optimization problem to locate the most promising candidates.
The second optimization problem is solved by minimizing the mean squared error (MSE). To the best of our knowledge, this is the first optimal learning paper that addresses the dual problems of objective maximization and parameter identification for problems with parametric belief models (other papers with similar dual-objective formulations usually assume that experiments are inexpensive, e.g., \cite{2014arXiv1402}). Previous papers concentrate only on discovering the optimal alternative, but in many real-world situations, scientists also care about getting accurate estimates of the parameters. For example, materials scientists may want to know both the tunable variables of an experiment (such as the temperature, the pressure, and the density) and some unknown and uncontrollable parameters, which can help them understand the intrinsic nature of the physical phenomena. Our work in this paper provides a powerful solution to this issue. This paper makes the following major contributions: \begin{itemize} \item We present a resampling algorithm in the parameter space, which works with the optimal learning methods in the alternative space, to discover both the optimal alternative and the correct parameters. \item We propose a new metric to calculate the knowledge gradient, which focuses on reducing the uncertainty of the parameters by maximizing the expected entropy reduction. \item We prove the asymptotic optimality of both the non-resampling (with truth from prior) and the resampling algorithms, using either the traditional function-oriented metric or our new entropy-oriented metric. \item We show experimentally that our resampling algorithm has impressive performance in both finding the optimal alternative and estimating the parameters. \end{itemize} The paper is organized as follows. Section~\ref{sec2} reviews the principles of optimal learning based on maximizing the value of information, along with the knowledge gradient with discrete priors (KGDP) proposed in \cite{Chen2014}. In Section~\ref{sec3}, we introduce the resampling algorithm as well as the entropy-oriented metric for calculating the knowledge gradient. Section~\ref{sec5} provides the asymptotic convergence proof of KGDP without resampling under the truth from prior assumption, and Section~\ref{sec6} proves asymptotic convergence of KGDP with resampling. We show empirical results on both a simulated and a real-world problem in Section~\ref{sec7}, and finally conclude in Section~\ref{sec8}. \section{Knowledge Gradient for Nonlinear Belief Models}\label{sec2} In this section, we first review the ranking and selection problem along with the knowledge gradient policy, and then introduce the model of Knowledge Gradient with Discrete Priors. \subsection{Ranking and Selection (R\&S) Problem} Suppose we have a finite number of alternatives $\mathcal{X}=\{x_1,x_2,...,x_M\}$. Each alternative $x\in \mathcal{X}$ is associated with a true utility $\mu_{x}$, which is unknown to us. The goal is to determine the $x$ that has the largest utility through a budget of $N$ sequential measurements. At time $n$ (i.e., after $n$ measurements), suppose we decide to query alternative $x^n$ according to some policy, and our measurement is $\hat{y}^{n+1}$, where the superscript indicates that $\hat{y}^{n+1}$ will be unknown until the $(n+1)$-th measurement. We assume the inherent noise $W^{n+1}$ in each measurement follows a normal distribution with zero mean and variance $\sigma^2$, where $\sigma$ is known to us.
That is, for $n=0,1,...,N-1$, \small\begin{align*} \hat{y}^{n+1}=\mu_{x^n} + W^{n+1}, \end{align*}\normalsize where $W^{n+1}\sim \mathcal{N}(0,\sigma^2)$. For each $x\in \mathcal{X}$, we use $\theta^n_x$ as our estimate of the true utility $\mu_x$ after $n$ experiments. Our goal is to select the $x$ with the highest estimate after $N$ measurements, i.e., \small\begin{align*} x^N = \mathop{\text{argmax}}_{x\in\mathcal{X}}\theta_x^N. \end{align*}\normalsize \subsection{Knowledge Gradient} Let $S^n$ denote the state of knowledge at time $n$, and let $V^n(S^n)$ be the value of being in state $S^n$. In the R\&S problem, we have $V^n(S^n)=\max_{x'\in\mathcal{X}}\theta^n_{x'}$. The transition from $S^n$ to $S^{n+1}$ occurs when we take a measurement at $x^{n}=x$ and observe $\hat{y}^{n+1}$, with \small\begin{align*} V^{n+1}(S^{n+1}(x))= \max_{x'\in\mathcal{X}}\theta_{x'}^{n+1}(x). \end{align*}\normalsize At time $n$, $\theta_{x'}^{n+1}(x)$ is a random variable since it depends on the noise $W^{n+1}$. We would like to maximize our expected incremental information from the next measurement, which we call the \textit{knowledge gradient}. At time $n$, the knowledge gradient of $x$ is defined as: \small\begin{align}\label{eq_KG} \nu^{KG,n}(x) &= \mathbb{E}^n\left[V^{n+1}(S^{n+1}(x))-V^n(S^n)\right] \notag\\ &= \mathbb{E}^n \left[\max_{x'\in\mathcal{X}}\theta_{x'}^{n+1}(x)|S^n=s,x^n=x\right] - \max_{x'\in\mathcal{X}}\theta^n_{x'}. \end{align}\normalsize In each iteration, the \textit{Knowledge Gradient} policy measures the alternative with the largest knowledge gradient: \small\begin{align*} x^{KG,n} = \mathop{\text{argmax}}_{x\in\mathcal{X}}\nu^{KG,n}(x). \end{align*}\normalsize This is a myopic policy that maximizes the value of information from a single experiment \cite{Frazier2008}. The knowledge gradient of Equation~(\ref{eq_KG}) is easy to calculate when using a lookup table belief model (\cite{Frazier2008} and \cite{Frazier2009}), or when the belief model is linear in the parameters \cite{Negoescu2011}, with the form ${f}(x;\theta) = \theta_0 + \theta_1 \phi_1(x) + ... +\theta_m \phi_m(x)$. We run into problems, however, if the function is nonlinear in the parameter vector $\theta$, since we have to incorporate the updating mechanism in the representation of $\theta^{n+1}_{x'}(x)$ in Equation~(\ref{eq_KG}). When this is embedded within both the maximum and the expectation operator, the resulting expression becomes computationally intractable. We address this issue in the next section. \subsection{Knowledge Gradient with Discrete Priors (KGDP)} The Knowledge Gradient with Discrete Priors (KGDP) \cite{Chen2014} is designed to handle parametric belief models that are nonlinear in the parameters. Suppose we have a function $f(x;\theta)$, where $x$ is the alternative and $\theta$ represents the unknown parameters. The form of $f(x;\theta)$ is known to us, but the parameter vector $\theta$ is not. Let $\theta^*$ denote the true parameter. Our goal is to find the $x$ that maximizes the true function $f(x;\theta^*)$. KGDP assumes that $f(x;\theta^*)$ can be approximated as a convex combination of $L$ candidates: \small\begin{align*} f(x;\theta^*) \approx \bar{f}^n(x)=\sum_{i=1}^Lf(x;\theta_i)p_i^n, \end{align*}\normalsize where the $\theta_i$'s are known as the candidates, with the $p_i^n$'s as their probabilities at time $n$. KGDP requires that one of the candidates is equal to, or very close to, the truth.
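To make the sampled belief model concrete, the following Python sketch (a toy example of our own; the model $f$, the candidates, and all names are illustrative and not taken from \cite{Chen2014}) maintains the approximation $\bar{f}^n(x)$ over a grid of alternatives and reads off the implied best alternative and the most probable candidate.
\begin{verbatim}
import numpy as np

def f(x, theta):
    # Toy nonlinear belief model f(x; theta) = -theta_0 * (x - theta_1)^2.
    return -theta[0] * (x - theta[1]) ** 2

X = np.linspace(0.0, 10.0, 101)                # alternatives
thetas = [(1.0, 3.0), (0.5, 6.0), (2.0, 5.0)]  # L candidate parameters
p = np.array([1 / 3, 1 / 3, 1 / 3])            # probabilities p_i^n

F = np.array([f(X, th) for th in thetas])      # F[i, m] = f(x_m; theta_i)
f_bar = p @ F                                  # bar{f}^n(x), convex combination
x_best = X[np.argmax(f_bar)]                   # implied best alternative
theta_hat = thetas[int(np.argmax(p))]          # most probable candidate
print(x_best, theta_hat)
\end{verbatim}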
We call the requirement that one candidate matches the truth the \textit{truth from prior} assumption, while its opposite, {\it truth not from prior}, recognizes that the true $\theta$ may not belong to our sampled set. As we take measurements, the $\theta_i$'s are fixed, while the probabilities $p_i^n$ get updated after each iteration, which means that the belief is naturally conjugate. The state variable $S^n$ is defined as the probability vector, $S^n=(p_1^n,p_2^n,...,p_L^n)$. The KGDP score of each alternative $x$ can be calculated according to Equation~(\ref{eq_KG}) (we review this formula in \hyperref[sec_KGDP_eq]{Section \ref{sec_KGDP_eq}}). We calculate the KGDP scores of all alternatives, and select the one with the highest score to evaluate next. After we exhaust the budget, the true parameter is estimated by choosing the most probable candidate, i.e., $\hat{\theta}^* = \theta_i$, where $i=\mathop{\text{argmax}}_{l\in[L]} p_l^N$. KGDP provides a powerful approach to handling nonlinear parametric models if we use a sampled belief model for the unknown parameters. However, the complexity of KGDP grows quickly as the number of candidates increases. In the experiments described in \cite{Chen2014}, it can handle at most tens of candidates. Moreover, the truth from prior assumption may be a reasonable approximation if $\theta$ has one or two dimensions, but in higher dimensions it is unlikely to hold. This is particularly problematic when we are interested in finding the best possible estimate of the parameters themselves. For this reason, we propose a resampling algorithm that can adaptively find more promising parameter candidates. \section{Empirical Results}\label{sec7} We study an asymmetric unimodal problem as a benchmark function, and a materials science problem, known as the nanoemulsion problem in a water-oil-water (W/O/W) model. We mainly demonstrate results on the following aspects: \begin{itemize} \item The visualization of the resampling process, including the evolution of the candidates, the small pool, and the approximated function $\bar{f}^n(x)$; \item The performance of resampling KGDP-$f$ and KGDP-$H$, both in relation to each other and in absolute terms under different metrics; \item The relative performance of the resampling KGDP-$f$ and KGDP-$H$ policies versus competing policies; \item The comparison of resampling and non-resampling KGDP methods; \item The empirical rate of convergence under different noise and dimensionality settings. \end{itemize} We feel that the rate of convergence result is the most important. \subsection{An Asymmetric Unimodal Benchmark Function} We study a multidimensional benchmark function that produces a wide range of asymmetric, unimodal functions with heteroscedastic noise, which we have found to be a challenging class of test problems. The function is given by \small\begin{align*} f(x_1,...,x_k) = \sum_{i=1}^k\eta_{1,i}\mathbb{E}[\min(x_i,(D-\sum_{j=1}^{i-1}x_j)^+)]-\sum_{i=1}^k\eta_{2,i}x_i, \end{align*}\normalsize where $(x_1,...,x_k)$ is the decision vector, $D$ is a uniformly distributed random variable, the $\eta_{1,i}$ are fixed constants, and the $\eta_{2,i}$ are unknown parameters. In the following experiments, we fix the variance of $D$ but treat its mean, as well as the $\eta_{2,i}$'s, as parameters ($\theta$), which results in a $(k+1)$-dimensional problem.
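For reference, the benchmark above can be evaluated directly by Monte Carlo. The following Python sketch is illustrative only: the constants $\eta_{1,i}$, $\eta_{2,i}$ and the bounds of the uniform distribution of $D$ are our own placeholder choices, not the values used in the experiments.
\begin{verbatim}
import numpy as np

def benchmark(x, eta1, eta2, d_mean, d_halfwidth, n_mc=100000, seed=0):
    # Monte Carlo estimate of
    # sum_i eta1_i * E[min(x_i, (D - sum_{j<i} x_j)^+)] - sum_i eta2_i * x_i,
    # with D uniform on [d_mean - d_halfwidth, d_mean + d_halfwidth].
    rng = np.random.default_rng(seed)
    x = np.asarray(x, dtype=float)
    D = rng.uniform(d_mean - d_halfwidth, d_mean + d_halfwidth, size=n_mc)
    total = 0.0
    cum = 0.0                                 # running sum_{j<i} x_j
    for i, xi in enumerate(x):
        remaining = np.maximum(D - cum, 0.0)  # (D - sum_{j<i} x_j)^+
        total += eta1[i] * np.mean(np.minimum(xi, remaining))
        cum += xi
    return total - np.dot(eta2, x)

print(benchmark([2.0, 3.0], eta1=[2.0, 1.5], eta2=[0.5, 0.4],
                d_mean=5.0, d_halfwidth=2.0))
\end{verbatim}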
\subsubsection{Illustration of the Resampling Process} We first show the results of resampling KGDP-$f$ in two-dimensional cases, where we can plot the $\theta$ space. \begin{figure}[!bp] \centering \hspace{3.5em}\includegraphics[width=0.98\textwidth]{img/epigraph_6_new3.pdf} \caption{Illustration of resampling in $\theta$ space (Iterations $1$, $2$, $3$, $5$, $7$, $9$) ($5$,$7$,$9$ are zoomed in).}\label{fig_epigraph1} \end{figure} Figure~\ref{fig_epigraph1} shows the evolution of the small pool and the $L$ candidates in $\theta$ space during the first several iterations of one realization of the problem. In these images, the horizontal and vertical axes represent $\theta_1$ and $\theta_2$ respectively, with color indicating the value of the MSE. The red pentagon shows where the truth $\theta^*$ is located, while the white star indicates the $\theta$ with the smallest MSE in the large pool. The region circled by the red line is the range of the small pool. The green squares indicate the locations of the candidates, with sizes proportional to their probabilities. Note that these images have been rescaled to show more detail. We can see that the small pool shrinks quickly toward $\theta^*$, and both the smallest-MSE $\theta$ and the candidates converge to $\theta^*$ within only a few iterations. \subsubsection{Comparison of Different Policies to Choose $x^n$} We use the opportunity cost (OC) to evaluate the performance of various policies from the alternative perspective. OC is defined as the difference between the true function values at the true optimal $x$ and at our estimated optimal $x$, i.e., \small\begin{align*} OC(n) = \max_{x\in \mathcal{X}}f(x;\theta^{*}) - f(\mathop{\text{argmax}}_{x\in\mathcal{X}} \bar{f}^n(x);\theta^{*}). \end{align*}\normalsize To normalize the opportunity cost, we define the percentage OC as the ratio with respect to the optimal function value, i.e., \small\begin{align*} OC\%(n)=\frac{\max_{x\in \mathcal{X}}f(x;\theta^{*}) - f(\mathop{\text{argmax}}_{x\in\mathcal{X}} \bar{f}^n(x);\theta^{*})}{\max_{x\in \mathcal{X}}f(x;\theta^{*})}. \end{align*}\normalsize We define the noise level as the ratio of the noise standard deviation to the range of the true function, i.e., $\frac{\sigma}{\max_x f(x;\theta^*)-\min_x f(x;\theta^*)}$. When we say the noise is $20\%$, we mean this ratio is $0.2$. Figure~\ref{fig_3D_OC} compares the five policies in three-dimensional settings. The subimages correspond to $5\%$, $20\%$ and $50\%$ noise from left to right. We can see that KGDP-$f$ and KGDP-$H$ are among the best under all noise levels. We have similar results for the two- and five-dimensional cases, too. \begin{figure}[!htp] \centering \includegraphics[width=0.95\textwidth]{img/3D_5_20_50.pdf} \caption{\small OC\% of different policies for three-dimensional $\theta$ ($5\%$, $20\%$ and $50\%$ noise from left to right).}\label{fig_3D_OC} \end{figure} \subsection{Application in W/O/W Nanoemulsion Stability Study} We study the same water-oil-water (W/O/W) nanoemulsion stability problem as in \cite{Chen2014}, and repeat its experiments using the original non-resampling KGDP and our resampling KGDP methods to compare the results. The problem studies controlled payload delivery using W/O/W double nanoemulsions, in which payload molecules are delivered from the internal water droplet, surrounded by a larger droplet of oil, to the external water.
This process is driven by laser excitation and controlled by several experimental variables. We can model the stability of this process as a function $f(x;\theta)$ that is nonlinear in $\theta$ (as well as in $x$). Here $x$ is a five-dimensional control variable that includes, for example, the initial volume of the external water, the initial volume fraction of the oil droplets, and the diameter of the oil droplets. $\theta$ has seven dimensions, representing the unknown parameters that appear in the formulation, such as the energy barrier for flocculation, the rate prefactor for coalescence, and the droplet adsorption/desorption energy barrier difference. Our goal is to conduct a series of experiments to learn the setting $x$ that achieves the best stability, and also to learn the correct unknown parameters $\theta$. A more detailed introduction and formulation of this problem can be found in \cite{Chen2014}. In order to be consistent with \cite{Chen2014}, we fix three dimensions of $x$ and create a finite set of alternatives using the other two dimensions. For $\theta$, we study 1) a three-dimensional case and 2) a seven-dimensional case in our experiments. Recall that the dimensionality of $\theta$ is more important than the dimensionality of $x$, since we always have finitely many samples of $x$. \subsubsection{Three Dimensional $\theta$} Figure~\ref{fig_nano_3D_OC_50} shows the comparison of the various policies in terms of OC\% under $50\%$ noise. The left image is a comparison across the five policies, while the right one compares the resampling method with the non-resampling KGDP of \cite{Chen2014}. \begin{figure}[!htp] \centering \begin{subfigure}{0.48\textwidth} \centering \includegraphics[width=0.8\textwidth]{img/nano_3D_OC_50_1.pdf} \caption{\small Comparison across five policies} \end{subfigure} \begin{subfigure}{0.48\textwidth} \centering \vspace{0.4em} \includegraphics[width=0.76\textwidth]{img/nano_3D_OC_50_3.pdf} \caption{\small Comparison with non-resampling KGDP} \end{subfigure} \caption{\small OC\% of different policies in three-dimensional nanoemulsion ($50\%$ noise).}\label{fig_nano_3D_OC_50} \end{figure} The left image indicates that Max-Var, KGDP-$f$ and KGDP-$H$ are among the best, showing very close performance. The right image compares: (1) the orange curve: non-resampling KGDP-$f$ with truth from prior, which is the ideal case; (2) the pink curve: non-resampling KGDP-$f$ with truth not from prior, which is what happens using \cite{Chen2014} when the truth is not among the candidates; (3) the black curve: resampling KGDP-$f$; and (4) the green curve: resampling KGDP-$H$. We can see that (2) gets stuck after a few iterations, while our resampling methods using either KGDP-$f$ or KGDP-$H$ are able to find more probable parameters and gradually approach the ideal case. We have shown the performance of our resampling algorithm in finding the best alternative in terms of OC\%. To show its performance in learning the parameters, we denote our best estimate of $\theta^*$ as $\hat{\theta}^*$, and calculate: \begin{itemize} \item[(1)] the mean squared error between $f(x;\theta^*)$ and $f(x;\hat{\theta}^*)$, defined as (recall that $M$ is the size of $\mathcal{X}$): \small\begin{align*} f_{MSE} = \frac{1}{|\mathcal{X}|}\sum_{x\in\mathcal{X}}|f(x;\theta^*)-f(x;\hat{\theta}^*)|^2, \end{align*}\normalsize \item[(2)] or the error in a particular dimension of $\theta$, say $|\theta^*_1-\hat{\theta}^*_1|$.
\end{itemize} Note that in method (2), we do not report the aggregate error $||\theta^*-\hat{\theta}^*||$ over all dimensions. This is because different dimensions of $\theta$ may have different units and follow different distributions, so it is not meaningful to sum up the errors of different dimensions. Moreover, in a higher-dimensional problem, some dimensions may converge more quickly than others, so even when the entire vector converges in aggregate, individual dimensions may converge at different rates. We show the MSE and the error of $\theta$ in Figure~\ref{fig_nano_3D_mse_50}. In the first image, we calculate $f_{MSE}$ and plot $\frac{\sqrt{f_{MSE}}}{\max_x f(x;\theta^*)-\min_x f(x;\theta^*)}$; the other three show $|\theta^*_1-\hat{\theta}^*_1|$, $|\theta^*_2-\hat{\theta}^*_2|$ and $|\theta^*_3-\hat{\theta}^*_3|$, respectively. Note that in this example, the second dimension seems less important than the other two. We obtain similar results under $20\%$ noise. \begin{figure}[!htp] \centering \includegraphics[width=\textwidth]{img/nano_3D_mse_50_3.pdf} \caption{\small Performance of resampling KGDP in finding $\theta$ in three-dimensional nanoemulsion ($50\%$ noise).}\label{fig_nano_3D_mse_50} \end{figure} \subsubsection{Seven Dimensional $\theta$} In a seven-dimensional case, Figure~\ref{fig_nano_7D_fbar} shows the process of $\bar{f}^n(x)$ approaching the true function, using resampling KGDP-$f$ under $20\%$ noise. The four images in the second row are our approximations at $n=1,10,20,45$, respectively. \begin{figure}[!htp] \centering \includegraphics[scale=0.4]{img/nano_fbar.pdf} \caption{\small Evolution of $\bar{f}^n(x)$ (seven-dimensional $\theta$, $20\%$ noise).}\label{fig_nano_7D_fbar} \end{figure} The results for OC\% under $50\%$ noise are shown in Figure~\ref{fig_nano_7D_OC_50}. We have similar results regarding learning the parameters as in the three-dimensional case. \begin{figure}[!htp] \centering \begin{subfigure}{0.48\textwidth} \centering \includegraphics[width=0.7\textwidth]{img/nano_7D_OC_50_1.pdf} \caption{\small Comparison across five policies} \end{subfigure} \begin{subfigure}{0.48\textwidth} \centering \includegraphics[width=0.7\textwidth]{img/nano_7D_OC_50_2.pdf} \caption{\small Comparison with non-resampling KGDP} \end{subfigure} \caption{\small OC\% of different policies in seven-dimensional nanoemulsion ($50\%$ noise).}\label{fig_nano_7D_OC_50} \end{figure} \section{Convergence of KGDP without Resampling}\label{sec5} By construction, KGDP-$f$ and KGDP-$H$ are the optimal myopic policies for finding the optimal alternative and the parameter (in terms of entropy), respectively. In this section, we show that KGDP-$f$ and KGDP-$H$ with truth from prior are asymptotically optimal in the non-resampling scenario, a result that was not shown in \cite{Chen2014} where KGDP was first introduced. In other words, as the budget $N\rightarrow \infty$, KGDP-$f$ and KGDP-$H$ will find both the optimal alternative and the correct parameter. We denote the finite set of alternatives as $\mathcal{X}$. Define $(\Omega, \mathcal{F}, \mathbb{P})$ as the probability space, where $\Omega$ is the set of all possible measurement histories $\{(x^0,\hat{y}^1),...,$ $(x^{N-1},\hat{y}^N)\}$, and $\mathcal{F}$ is the $\sigma$-algebra generated by the history. In this section, $N=\infty$. To prove asymptotic optimality, we first show that if an alternative $x$ is measured infinitely often, we learn its true function value almost surely.
As the number of measurements goes to infinity, there will be a subset of alternatives that are measured infinitely often. We prove that under the KGDP-$f$ or KGDP-$H$ policy with truth from prior, $f(x;\theta^*)$ is the only candidate function that fits the true function values on this subset. To show this, we establish the nonnegativity of the KGDP-$f$ and KGDP-$H$ scores, and the fact that any alternative measured infinitely often has its KGDP-$f$ and KGDP-$H$ scores converge to $0$. Then we argue by contradiction. Assume $f(x;\theta^*)$ is not the only function consistent with the infinitely measured alternatives. Then there exists at least one $x$ measured only a finite number of times, and we claim that it has positive KGDP-$f$ and KGDP-$H$ scores in the limit. This is contrary to the fact that either policy always chooses the alternative with the largest score. \bigskip We begin by showing that if we measure a point $x$ infinitely often, the correct function value at $x$ will be found almost surely: \begin{lemma}\label{p_is_0} Let $N^n(x)$ be the number of measurements taken at $x$ when the total number of measurements is $n$. If $N^n(x)\rightarrow\infty$ as $n\rightarrow\infty$, then for $\forall l\in\{1,2,...,L\}$ such that $f(x;\theta_l)\ne f(x;\theta^*)$, $p_l^n\rightarrow 0$ almost surely, i.e., $\mathbb{P}(\lim_{n\rightarrow \infty}p_l^n=0)=1$. \end{lemma} The rigorous proof of Lemma~\ref{p_is_0} follows from the strong law of large numbers (shown in the \hyperref[append]{Appendix}). We denote the corresponding almost sure event by $\Omega_0$. For any $\omega\in\Omega$, we denote by $\mathcal{X}_\infty(\omega)$ the set of alternatives that are measured infinitely often. Hence, for any $\omega\in\Omega_0$, any $x\in\mathcal{X}_\infty(\omega)$, and any $l$ with $f(x;\theta_l)\ne f(x;\theta^*)$, we have $p_l^n(\omega)\rightarrow 0$. We hope that we can learn $\theta^*$ via $\mathcal{X}_\infty$. To achieve this goal, we first study some properties of the KGDP-$f$ and KGDP-$H$ scores. We start by showing that the value of information for both objectives is always nonnegative: \begin{lemma}\label{f_non_neg} For $\forall n\geq 0, \forall x\in \mathcal{X}$, the KGDP-$f$ score $\nu^{KGDP-f,n}(x)\geq 0$. Equality holds if and only if (1) there exists $x'$ such that $x'\in \mathop{\text{argmax}}_x f(x;\theta_i)$ for all $i$ such that $p_i^n>0$, or (2) all functions with $p_i^n>0$ have the same value at $x$. \end{lemma} \begin{lemma}\label{H_non_neg} For $\forall n\geq 0, \forall x\in \mathcal{X}$, the KGDP-$H$ score $\nu^{KGDP-H,n}(x)\geq 0$. Equality holds if and only if all functions with $p_i^n>0$ have the same value at $x$. \end{lemma} \begin{proof}[Sketch of proof for Lemma~\ref{f_non_neg} and \ref{H_non_neg}] (See \hyperref[append]{Appendix} for full proof.) First, we show $\mathbb{E}^n \left[p^{n+1}_i(x)\right] = p^n_i$. Then, applying Jensen's inequality gives the nonnegativity of both KGDP-$f$ and KGDP-$H$. \end{proof} According to Lemma~\ref{f_non_neg} and Lemma~\ref{H_non_neg}, KGDP-$f$ equals $0$ in only two cases: (1) when the functions are aligned, in which case $\nu^{KGDP-f,n}(x)=0$ for all $x$; (2) when the functions with nonzero probabilities have the same value at $x$, in which case $\nu^{KGDP-f,n}(x)=0$ for this particular $x$. KGDP-$H$, however, equals $0$ only in the second case.
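To see Lemma~\ref{f_non_neg} numerically, the following Python sketch (a toy setup of our own; all values are illustrative) estimates $\nu^{KGDP-f,n}(x)$ by Monte Carlo over the prior predictive distribution of $\hat{y}$: the estimate is nonnegative, and it vanishes when all positive-probability candidates agree at the measured $x$ (Condition 2).
\begin{verbatim}
import numpy as np

def kgdp_f_score(x_idx, F, p, sigma, n_mc=20000, seed=1):
    # Monte Carlo estimate of the KGDP-f score at alternative x_idx.
    # F[i, m] = f(x_m; theta_i); p holds the current probabilities p_i^n.
    rng = np.random.default_rng(seed)
    base = np.max(p @ F)                 # max_x' sum_i f_i(x') p_i^n
    total = 0.0
    for _ in range(n_mc):
        i = rng.choice(len(p), p=p)      # prior predictive draw of y_hat
        y = rng.normal(F[i, x_idx], sigma)
        w = p * np.exp(-(y - F[:, x_idx]) ** 2 / (2 * sigma ** 2))
        q = w / w.sum()                  # posterior probabilities p^{n+1}(x)
        total += np.max(q @ F)
    return total / n_mc - base           # nonnegative up to Monte Carlo error

F = np.array([[1.0, 3.0],                # candidate 1: values at x_0, x_1
              [2.0, 0.5]])               # candidate 2: values at x_0, x_1
p = np.array([0.6, 0.4])
print(kgdp_f_score(0, F, p, sigma=0.5))  # > 0: candidates disagree at x_0
print(kgdp_f_score(1, np.array([[1.0, 3.0], [2.0, 3.0]]), p, sigma=0.5))
# exactly 0: both candidates take the same value at x_1 (Condition 2)
\end{verbatim}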
We then show that for any $x$ measured infinitely often, its KGDP-$f$ and KGDP-$H$ scores also go to zero: \begin{lemma}\label{v_score0} For $\forall \omega\in\Omega_0$ and $x\in\mathcal{X}_\infty(\omega)$, we have $\lim_{n\rightarrow\infty}\nu^{KGDP-f,n}(x)(\omega)=0$, and $\lim_{n\rightarrow\infty}\nu^{KGDP-H,n}(x)(\omega)=0$. \end{lemma} Intuitively, as we measure $x$ infinitely often and gradually learn its true value, the condition for both KGDP-$f$ and KGDP-$H$ to be $0$ is satisfied (by Lemmas~\ref{f_non_neg} and \ref{H_non_neg}). The full proof, as shown in the \hyperref[append]{Appendix}, relies on the fact that for a fixed $x$, $\nu^{KGDP-f,n}(x)$ and $\nu^{KGDP-H,n}(x)$ are both uniformly continuous in the probability vector $\vec{p}^n=(p_1^n,...,p_L^n)$. Combined with the convergence of the $p_i^n$ shown by Lemma~\ref{p_is_0}, we conclude that $\nu^{KGDP-f,n}(x)$ and $\nu^{KGDP-H,n}(x)$ also converge to $0$. \bigskip For any subset of alternatives, if fitting the true function values on the subset suffices to identify $\theta^*$, we call it a \textit{sufficient set}: \begin{definition} We define a \textit{sufficient set} $\mathcal{X}_s\subset\mathcal{X}$ as follows: for $\forall \theta\ne\theta^*$, there exists $x\in\mathcal{X}_s$ such that $f(x;\theta)\ne f(x;\theta^*)$. \end{definition} In other words, a sufficient set $\mathcal{X}_s$ is a subset of $\mathcal{X}$ on which we can distinguish $\theta^*$ from all the other candidates. Equivalently, for any subset $\mathcal{X}_s = \{x_1,...,x_m\}\subset \mathcal{X}$, if $f(x;\theta^*)$ is the only candidate function that fits the $m$ points $(x_1,f(x_1;\theta^*)), ...,$ $(x_m,f(x_m;\theta^*))$, then $\mathcal{X}_s$ is a sufficient set. Obviously, the largest sufficient set is $\mathcal{X}$ itself. As an example of an \textit{insufficient} set: if all candidate functions have the same values on $\{x_1,...,x_m\}$, then this subset is not sufficient. We show that under the KGDP-$f$ or KGDP-$H$ policy, we measure a sufficient set almost surely: \begin{lemma}\label{measure_suf1} For any $\omega\in\Omega_0$, the alternatives measured infinitely often under the KGDP-$f$ or KGDP-$H$ policy constitute a sufficient set. \end{lemma} \begin{proof}[Sketch of proof] (See full proof in the \hyperref[append]{Appendix}.) If $\mathcal{X}_\infty(\omega)$ is not a sufficient set, then at least one candidate function other than $f(x;\theta^*)$ fits $\mathcal{X}_\infty(\omega)$. Hence, there exists $x\notin\mathcal{X}_\infty(\omega)$ at which this function differs from $f(x;\theta^*)$. We can prove that for this $x$, $\lim_{n\rightarrow\infty}\nu^{KGDP-f,n}(x)(\omega)>0$, and $\lim_{n\rightarrow\infty}\nu^{KGDP-H,n}(x)(\omega)>0$. This is contrary to the fact that KGDP-$f$ or KGDP-$H$ does not measure any $x$ outside $\mathcal{X}_\infty(\omega)$ after a finite number of steps. \end{proof} Using the previous lemmas, we can conclude with the following asymptotic optimality result: \begin{theorem}\label{KGDP-f_con} Non-resampling KGDP-$f$ with truth from prior is asymptotically optimal in finding both the optimal alternative and the correct parameter. The same holds for KGDP-$H$. \end{theorem} The detailed proofs of Lemmas~\ref{f_non_neg}--\ref{measure_suf1} and Theorem~\ref{KGDP-f_con} are given in the \hyperref[append]{Appendix}. \section{Convergence of KGDP with Resampling}\label{sec6} We now establish the important result that the resampling algorithm converges to the true optimal solution, both in terms of identifying the right parameter $\theta^*$ and in terms of finding the optimal design $x^*$.
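Before the analysis, it may help to recall the mechanics of the resampling step informally. The following Python sketch is a schematic paraphrase of the algorithm of Section~\ref{sec3} under our own simplifications (the toy model $f$, the pool sizes, the single-candidate removal, and the tenth-of-the-pool small-pool rule are illustrative assumptions, not the paper's exact choices): candidates are ranked by the likelihood of the measurement history, the weakest is removed, and replacements are drawn from the top-ranked portion of the large pool $\mathbb{K}$.
\begin{verbatim}
import numpy as np

L = 5                                    # number of candidates maintained
def f(x, theta):                         # toy known model f(x; theta)
    return theta[0] * np.exp(-theta[1] * x)

def resample_candidates(cands, pool, x_hist, y_hist, sigma, rng):
    # Rank theta's by the likelihood of the measurement history, drop the
    # lowest-ranked current candidate, and refill by drawing from the
    # top-ranked part ("small pool") of the large pool.
    def log_lik(th):
        r = y_hist - np.array([f(x, th) for x in x_hist])
        return -np.sum(r ** 2) / (2 * sigma ** 2)

    cands = sorted(cands, key=log_lik, reverse=True)[: L - 1]
    ranked = sorted(pool, key=log_lik, reverse=True)
    small = ranked[: max(1, len(pool) // 10)]         # small pool
    w = np.exp(np.array([log_lik(t) for t in small]) - log_lik(ranked[0]))
    while len(cands) < L:
        cands.append(small[rng.choice(len(small), p=w / w.sum())])
    return cands

rng = np.random.default_rng(3)
pool = [tuple(rng.uniform(0.1, 3.0, size=2)) for _ in range(200)]  # pool K
x_hist = np.array([0.5, 1.0, 2.0])
y_hist = np.array([1.2, 0.8, 0.4])
cands = resample_candidates(pool[:L], pool, x_hist, y_hist, 0.3, rng)
\end{verbatim}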
As we move from the non-resampling case to the resampling one, the difference is that the candidates are no longer fixed, and $\theta^*$ is not guaranteed to be one of the candidates. Note that the nonnegativity of the knowledge gradient established in Lemmas~\ref{f_non_neg} and \ref{H_non_neg} still holds in the resampling case, regardless of whether the candidates include the truth or not. Unlike the non-resampling case, we have two probability spaces here. One is still $(\Omega, \mathcal{F}, \mathbb{P})$ as above, defined on the measurement history, while the other describes the randomness of selecting candidates in the resampling process. We denote the latter as $(\Omega_\theta, \mathcal{F}_\theta, \mathbb{P}_\theta)$, where $\Omega_\theta$ is the set of all possible selection combinations of the $L$ candidates over the $N$ iterations (in this section, $N=\infty$). The full probability space, denoted $(\Omega_f, \mathcal{F}_f, \mathbb{P}_f)$, is the product space of $(\Omega, \mathcal{F}, \mathbb{P})$ and $(\Omega_\theta, \mathcal{F}_\theta, \mathbb{P}_\theta)$. We use $\omega$ to represent an element of $\Omega$ and $\omega_\theta$ an element of $\Omega_\theta$. Let $\mathbb{K}$ denote the large pool, i.e., the set of the $K$ sampled $\theta$'s. We adapt the concept of a sufficient set by considering the entire pool $\mathbb{K}$: for a subset of alternatives $\mathcal{X}_s\subset\mathcal{X}$, if $f(x;\theta^*)$ is the only function among all the $K$ candidate functions that fits the true values of the alternatives in $\mathcal{X}_s$, we call it a sufficient set. Unlike the non-resampling case, since the candidates may keep changing, the limit of a particular $p_i^n$ may not exist. However, if we regard the whole pool $\mathbb{K}$ as the candidate set and consider the probability of each $\theta$, then Lemma~\ref{p_is_0} still holds, demonstrating that infinitely many measurements of $x$ reveal its true value. We define $\Omega_1$ as the almost sure event in $\Omega$ on which Lemma~\ref{p_is_0} holds. \begin{lemma}\label{measure_suf} For any $\omega\in\Omega_1$, the alternatives measured infinitely often under the KGDP-$f$ or KGDP-$H$ policy constitute a sufficient set. We denote this set as $\mathcal{X}_\infty(\omega)$. \end{lemma} \begin{proof}[Sketch of proof] (See \hyperref[append]{Appendix} for full proof.) Assume the contrary. Then there exists at least one $\theta\ne\theta^*$ such that $f(x;\theta)=f(x;\theta^*)$ for any $x\in\mathcal{X}_\infty(\omega)$. Denote the set of such $\theta$'s as $\mathbb{K}'$. We can show that, for this fixed $\omega$, the event that at least two $\theta$'s from $\mathbb{K}'\bigcup \{\theta^*\}$ are included in the candidate set happens infinitely often. At these times, we can further show that there exist an $x\notin \mathcal{X}_\infty(\omega)$ and ${\epsilon}>0$ such that $\nu^{KGDP-f,n}(x)>{\epsilon}$ and $\nu^{KGDP-H,n}(x)>{\epsilon}$, while on the other hand, for any $x\in\mathcal{X}_\infty(\omega)$, $\lim_{n\rightarrow\infty}\nu^{KGDP-f,n}(x)=0$ and $\lim_{n\rightarrow\infty}\nu^{KGDP-H,n}(x)=0$. This contradicts the fact that any $x\notin\mathcal{X}_\infty(\omega)$ is measured only a finite number of times. \end{proof} Lemma \ref{measure_suf} implies the following theorem, which is also our main result in this section: \begin{theorem}\label{re_KGDP-f_con} Resampling KGDP-$f$ is asymptotically optimal in finding the optimal alternative and the correct parameter. The same holds for KGDP-$H$.
\end{theorem} \begin{proof}[Sketch of proof] (See \hyperref[append]{Appendix} for full proof.) The crucial step is to show that for any $\omega\in\Omega_1$ there exists a time $T(\omega)$ such that, with probability $1$, $\theta^*$ appears in the candidate set at every time after $T(\omega)$. Note that this probability is calculated in the $(\Omega_\theta, \mathcal{F}_\theta, \mathbb{P}_\theta)$ space. Extending this result to the full probability space $(\Omega_f, \mathcal{F}_f, \mathbb{P}_f) = (\Omega, \mathcal{F}, \mathbb{P})\times(\Omega_\theta, \mathcal{F}_\theta, \mathbb{P}_\theta)$, we conclude that $\lim_{n\rightarrow\infty}p^n(\theta^*)=1$ happens with probability $1$. \end{proof}
\section{Introduction} \IEEEPARstart{T}{he} relationship between conditional entropy (equivocation) or mutual information and the best possible quality of decoding is an important concept in information theory. The best possible quality of a decoding scheme, when quantified by the minimal probability of error $\epsilon$, does not uniquely determine the value of the equivocation or the mutual information, but various upper and lower bounds have been proved; see Sec. \ref{sec_existing_theory}. Here we discuss a scenario in which not only $\epsilon$, but the complete joint probability distribution $p(x,\hat{x})$ of signals $x$ and maximum a posteriori decodes $\hat{x}$, is available. We refer to $p(x,\hat{x})$ as the confusion matrix. To our knowledge, such a scenario has not been extensively studied in the literature, despite having practical relevance for the estimation of mutual information, as we point out in Sec. \ref{sec_motivation}. In this article, we derive an upper bound on mutual information (and a corresponding lower bound on equivocation) that is based on the confusion matrix and is tighter than the known similar bound by Kovalevsky and others \cite{Kovalevsky1968,Tebbe1968,Feder1994} based on the probability of error alone. The inequality in our bound can be proved quickly using the bound by Kovalevsky, as we show in Sec. \ref{sec_quick_proof}. However, we also include a self-contained derivation in Sec. \ref{sec_our_proof}, where we construct the distribution of channel outputs that minimizes the equivocation $H(X|Y)$ under our constraints. \subsection{Equivocation, mutual information and the minimal probability of error} \label{sec_existing_theory} We consider a signal variable (message) $X$ that is communicated through a channel with output $Y$ and then decoded, obtaining a ``decode'' $\hat{X}$ -- forming a Markov chain $X \leftrightarrow Y \leftrightarrow \hat{X}$. The equivocation $H(X|Y)$ quantifies the uncertainty in $X$ if the value of $Y$ is given. Conversely, the mutual information $I(X;Y)$ measures how much information about $X$ is contained in $Y$. It is not surprising that both $H(X|Y)$ and $I(X;Y)$ can be related to the minimal probability of error in decoding, $\epsilon = \Pr(X \neq \hat{X})$. Accurate decoding, i.e., low $\epsilon$, requires sufficiently low equivocation $H(X|Y)$; this is quantified by Fano's inequality \cite{Cover2006}. The mutual information between the true signal and the channel output, $I(X;Y) = H(X)-H(X|Y)$, needs to be sufficiently high, and this is described by rate-distortion theory \cite{Shannon1959}. Here we focus on the opposite bounds. If the minimal probability of error $\epsilon$ is specified, there is also a minimal possible equivocation. The following lower bound was derived for discrete $X$ with finite support by Kovalevsky \cite{Kovalevsky1968} and later by Tebbe and Dwyer \cite{Tebbe1968} and Feder and Merhav \cite{Feder1994}.
It reads \begin{equation} H(X|Y) \geq \phi^\ast (\epsilon), \label{eq_feder_merhav} \end{equation} where $\phi^\ast (\epsilon)$ is a piecewise linear function that coincides with $-\log{(1-\epsilon)}$ at points $\epsilon=0$, $1/2$, $2/3$, $\dots$, $(|\mathcal{X}|-1)/|\mathcal{X}|$ (we use $\log = \log_2$ throughout the paper, and $\mathcal{X}$ is the support of $X$), and it can be written using the floor and ceiling functions, \begin{align} \phi^\ast (\epsilon) &= \alpha(\epsilon) \log{ \left\lfloor \frac{1}{1- \epsilon} \right\rfloor} + \left(1-\alpha(\epsilon) \right) \log{ \left\lceil \frac{1}{1- \epsilon} \right\rceil}, \label{eq_phi} \\ \alpha(\epsilon) &= \left\lfloor \frac{1}{1- \epsilon} \right\rfloor \left( (1-\epsilon) \left\lceil \frac{1}{1- \epsilon} \right\rceil -1 \right). \label{eq_alpha} \end{align} The function $\phi^\ast (\epsilon)$ is plotted in Fig. \ref{fig_phi}. \begin{figure}[!t] \centering \includegraphics[width=2.7in]{phi-eps-converted-to.pdf} \caption{Plot of the functions $\phi^\ast(\epsilon)$ and $-\log{(1-\epsilon)}$. The two functions intersect at $\epsilon=0$, $1/2$, $2/3$, $\dots$, $(|\mathcal{X}|-1)/|\mathcal{X}|$ (black dots), and in between $\phi^\ast(\epsilon)$ is piecewise linear.} \label{fig_phi} \end{figure} The bound \eqref{eq_feder_merhav} has been generalized to countably infinite support of $X$ by Ho and Verd\'u \cite{Ho2010}. Sason and Verd\'u \cite{Sason2018} proved a generalisation of \eqref{eq_feder_merhav} for the Arimoto-R\'enyi conditional entropy of arbitrary order. The bound \eqref{eq_feder_merhav} is tight when only the overall probability of error $\epsilon$ is available. However, when more constraints on the joint distribution of $X$ and $Y$ are given, tighter bounds can be obtained. Prasad \cite{Prasad2015} introduced two series of lower bounds on $H(X|Y)$ based on partial knowledge of the posterior distribution $p(x|y)$. The first is in terms of the $k$ largest posterior probabilities $p(x|y)$ for each $y$, which we can label $p_1(y), p_2(y), \dots, p_k(y)$ in descending order (where $1 \leq k \leq |\mathcal{X}|$). The second series of bounds by Prasad is in terms of the averages of $p_1(y), p_2(y), \dots, p_k(y)$ across all $y$. Hu and Xing \cite{Hu2016} focused on a binary signal $X$ and derived a bound tighter than \eqref{eq_feder_merhav} by taking into account the prior distribution of signals $p(x)$. Hu and Xing also discuss suboptimal (other than maximum a posteriori) decoding, which is otherwise rare in the related literature. \subsection{Motivation: estimation of mutual information} \label{sec_motivation} Here we extend the bound \eqref{eq_feder_merhav} to account for the situation when the complete confusion matrix -- the joint distribution $p(x,\hat{x})$ -- is known. We are motivated by the following scenario: suppose that the goal is to estimate the mutual information $I(X;Y)$ from a finite set of $(x,y)$ samples. Moreover, assume that the space of possible channel outputs $\mathcal{Y}$ is large (much larger than the space of signals, $|\mathcal{Y}|\gg|\mathcal{X}|$), making a direct calculation of $I(X;Y)$ by means of the joint distribution $p(x,y)$ infeasible due to insufficient sampling. In such a case, one approach (used e.g. in neuroscience \cite{Borst1999}) is to construct a decoder, map each $y$ into a decode $\hat{x}$, and estimate the confusion matrix $p(x,\hat{x})$.
Then the post-decoding mutual information $I(X;\hat{X})$ can be calculated and used as a lower bound on $I(X;Y)$ due to the data processing inequality \cite{Cover2006}. However, the gap between $I(X;\hat{X})$ and $I(X;Y)$ is not known (but see a discussion of this gap in \cite{Samengo2002}), and an upper bound on $I(X;Y)$ based on $p(x,\hat{x})$ is desirable. Our result is such a bound, for the specific case of a maximum a posteriori decoder. While the mutual information $I(X;Y)$ has this practical importance, we formulate our result as an equivalent lower bound on the equivocation $H(X|Y) = H(X)-I(X;Y)$ first. This is simpler to state and prove. \section{Statement of the bound} Given the joint distribution $p(X,\hat{X})$ of signals $X$ (discrete with finite support) and maximum a posteriori decodes $\hat{X}$ based on the channel output $Y$, the equivocation $H(X|Y)$ is bounded from below by \begin{equation} H(X|Y) \geq \sum_{\hat{x}} p(\hat{x}) \, \phi^\ast (\epsilon_{\hat{x}} ), \label{eq_bound_equivocation} \end{equation} where $\epsilon_{\hat{x}} = p(X \neq \hat{X} | \hat{x}) = 1 - p(X = \hat{x} | \hat{X} = \hat{x})$ is the probability of error for the decode $\hat{x}$ and the function $\phi^\ast$ is defined in \eqref{eq_phi}, \eqref{eq_alpha}. Equivalently, we can bound the mutual information $I(X;Y)$ from above: \begin{align} I(X;Y) &= H(X) - H(X|Y) \nonumber \\ &\leq H(X) - \sum_{\hat{x}} p(\hat{x}) \, \phi^\ast (\epsilon_{\hat{x}} ). \end{align} These bounds are tight, and we construct the distributions $p(y|\hat{x})$ and $p(x|y)$ that achieve equality in Sec. \ref{sec_our_proof}. \subsection{Comments on the bound} We note that since the function $\phi^\ast (\epsilon_{\hat{x}})$ is convex, we can apply Jensen's inequality to the right hand side of \eqref{eq_bound_equivocation} and recover the bound \eqref{eq_feder_merhav} by Kovalevsky \cite{Kovalevsky1968}, \begin{equation} H(X|Y) \geq \phi^\ast \left( \sum_{\hat{x}} p(\hat{x}) \, \epsilon_{\hat{x}} \right) = \phi^\ast (\epsilon). \end{equation} Both bounds coincide in the case of a binary signal $|\mathcal{X}| = 2$, or in any other case when the probability of error is less than $1/2$, i.e. $\epsilon_{\hat{x}} < 1/2$ for all $\hat{x}$. On this range, $\phi^\ast (\epsilon_{\hat{x}}) = 2 \epsilon_{\hat{x}}$ and the bound simplifies to \begin{equation} H(X|Y) \geq 2 \sum_{\hat{x}} p(\hat{x}) \, \epsilon_{\hat{x}} = 2 \epsilon, \end{equation} as has been noted in \cite{Feder1994} and before. \subsection{Example calculation} As an illustration, we apply our bound \eqref{eq_bound_equivocation} to an example confusion matrix and compare it to the bound \eqref{eq_feder_merhav} that is in terms of the error probability $\epsilon$ only. The confusion matrix considered is depicted in Fig. \ref{fig_simpleCalc} (A) for the case $|\mathcal{X}|=5$. We vary the size $|\mathcal{X}|$ of the space of signals $\mathcal{X}=\{ 1,2,\dots,|\mathcal{X}|\}$, and the confusion matrix always takes the form \begin{equation} p(x,\hat{x}) = \begin{cases} \frac{1}{2|\mathcal{X}|}; \qquad & x=\hat{x}<|\mathcal{X}|, \\ \frac{1}{2|\mathcal{X}|}; \qquad & x<|\mathcal{X}|,\, \hat{x}=|\mathcal{X}|, \\ \frac{1}{|\mathcal{X}|}; \qquad & x=\hat{x}=|\mathcal{X}|, \\ 0; \qquad & x \neq \hat{x},\, \hat{x}<|\mathcal{X}|.
\end{cases} \label{eq_example_calculation} \end{equation} This distribution has the property that while most of the decodes have zero probability of being incorrect ($\epsilon_{\hat{x}}=0$ for $\hat{x}<|\mathcal{X}|$), the last one has a high probability of being incorrect, $\epsilon_{\hat{x}}=(|\mathcal{X}|-1)/(|\mathcal{X}|+1)$ for $\hat{x}=|\mathcal{X}|$. Our bound \eqref{eq_bound_equivocation} takes this into account -- which makes it substantially tighter than the bound \eqref{eq_feder_merhav} based only on the overall probability of error $\epsilon$. This can be seen in Fig. \ref{fig_simpleCalc} (B), where both lower bounds are plotted. We also plot the post-decoding conditional entropy $H(X|\hat{X})$, which serves as an upper bound on the true value of $H(X|Y)$. \begin{figure}[!t] \centering \begin{tabular}{m{1.3 in} m{1.8 in}} \includegraphics[width=\linewidth]{simple_calc_conf_matrix-eps-converted-to.pdf} & \includegraphics[width=\linewidth]{simple_calc-eps-converted-to.pdf} \end{tabular} \caption{Example application of the bound. (A) The joint distribution of signals and decodes $p(x,\hat{x})$ for which we compute the bound, defined in Eq. \eqref{eq_example_calculation}. Here for the case $|\mathcal{X}|=5$. (B) Bounds on conditional entropy (equivocation) $H(X|Y)$ plotted for different sizes of signal space $|\mathcal{X}|$. $H(X|Y)$ is bounded from above by $H(X|\hat{X})$ (blue points). Our novel lower bound (Eq. \eqref{eq_bound_equivocation}) is in orange and the bound by Kovalevsky (Eq. \eqref{eq_feder_merhav}) in green. Our bound \eqref{eq_bound_equivocation} is the tightest possible given the confusion matrix.} \label{fig_simpleCalc} \end{figure} \section{Proof of the bound} We offer two alternative proofs of the bound here. The first proves it as a simple consequence of the bound \eqref{eq_feder_merhav} by Kovalevsky. It is short, but it leaves open the question of tightness. We therefore focus on the second proof, which is self-contained, implies tightness, and perhaps offers additional insights, since it includes a derivation of the distributions of channel outputs $p(y|\hat{x})$, $p(x|y)$ that minimize $H(X|Y)$. Throughout the proofs, the spaces of possible values of $X$ and $Y$ are written as $\mathcal{X}$ and $\mathcal{Y}$ respectively. The decoding function is denoted $g : \mathcal{Y} \rightarrow \mathcal{X}$ and is based on the maximum a posteriori rule, $g(y) \in \underset{x}{\operatorname{argmax}} \, p(x|y)$. Finally, $\mathcal{Y}_{\hat{x}} = \{ y \in \mathcal{Y} \, | \, g(y) = \hat{x} \}$ is the set of all $y$ that decode into $\hat{x}$. \subsection{A quick proof of inequality following Kovalevsky's bound} \label{sec_quick_proof} The left hand side of \eqref{eq_bound_equivocation}, the equivocation $H(X|Y)$, can be written as \begin{equation} H(X|Y) = \sum_{\hat{x}} p(\hat{x}) \int_{\mathcal{Y}_{\hat{x}}} H(X|Y=y) \,dp(y|\hat{x}), \end{equation} where the term $\int_{\mathcal{Y}_{\hat{x}}} H(X|Y=y) \,dp(y|\hat{x})$ is the entropy of $X$ conditional on $Y$, with the values of $Y$ limited to $\mathcal{Y}_{\hat{x}}$. Since it has the form of a conditional entropy, we can use the Kovalevsky bound \eqref{eq_feder_merhav} and obtain our result \eqref{eq_bound_equivocation}. This establishes the inequality in our bound, but it does not tell us whether equality can be achieved -- and if it can, for what distribution of $Y$ it happens. We address this in the following section.
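Before proceeding to the minimization proof, we note that the quantities above are easy to evaluate numerically. The following Python sketch is our own illustration (the function names \texttt{phi\_star} and \texttt{confusion\_bound} are ours, not from any library); it computes $\phi^\ast$ from Eqs. \eqref{eq_phi}--\eqref{eq_alpha} and evaluates both our bound \eqref{eq_bound_equivocation} and the Kovalevsky bound \eqref{eq_feder_merhav} on the example confusion matrix \eqref{eq_example_calculation}, assuming every decode has positive probability $p(\hat{x})>0$.
\begin{verbatim}
import numpy as np

def phi_star(eps):
    # phi*(eps) from eq_phi/eq_alpha: piecewise-linear envelope
    # of -log2(1-eps); works elementwise on numpy arrays.
    eps = np.asarray(eps, dtype=float)
    f = np.floor(1.0 / (1.0 - eps))
    c = np.ceil(1.0 / (1.0 - eps))
    alpha = f * ((1.0 - eps) * c - 1.0)
    return alpha * np.log2(f) + (1.0 - alpha) * np.log2(c)

def confusion_bound(P):
    # Right-hand side of eq_bound_equivocation for a joint
    # distribution P[x, xhat]; assumes every column has mass.
    p_hat = P.sum(axis=0)                  # p(xhat)
    eps_hat = 1.0 - np.diag(P) / p_hat     # p(X != xhat | xhat)
    return float(np.sum(p_hat * phi_star(eps_hat)))

# Example confusion matrix eq_example_calculation with |X| = 5.
m = 5
P = np.zeros((m, m))
P[np.arange(m - 1), np.arange(m - 1)] = 1.0 / (2 * m)
P[: m - 1, m - 1] = 1.0 / (2 * m)
P[m - 1, m - 1] = 1.0 / m

eps = 1.0 - np.trace(P)            # overall error probability
print(confusion_bound(P))          # our bound, eq_bound_equivocation
print(float(phi_star(eps)))        # Kovalevsky bound, eq_feder_merhav
\end{verbatim}
For this matrix the sketch prints approximately $0.95$ for our bound versus $0.8$ for the bound \eqref{eq_feder_merhav}, consistent with the gap visible in Fig. \ref{fig_simpleCalc} (B).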
\section{Proof by minimization of equivocation} \label{sec_our_proof} For simplicity, we formulate the derivation for discrete $Y$. However, as we comment in Sec. \ref{sec_discussion}, the derivation applies to continuous $Y$ with only minor modifications. For clarity, let us state the minimization problem we are solving. We minimize \begin{align} H(X|Y) = \sum_{\hat{x}} p(\hat{x}) \sum_{y \in \mathcal{Y}_{\hat{x}}} p(y|\hat{x}) H(X|Y=y) \label{eq_objective_function_full} \end{align} with respect to $p(y|\hat{x})$ and $p(x|y)$, with the constraints given by the confusion matrix and maximum a posteriori decoding: \begin{align} \forall x, \hat{x}: \qquad & \sum_y p(x|y) p(y|\hat{x}) = p(x|\hat{x}), \label{eq_constraints_conf} \\ \forall \hat{x}, \forall y \in \mathcal{Y}_{\hat{x}}: \qquad & \hat{x} \in \underset{x}{\operatorname{argmax}} \, p(x|y). \label{eq_constraints_dec} \end{align} Note in \eqref{eq_objective_function_full} that the minimization can be done separately for each $\hat{x}$, since the corresponding $\mathcal{Y}_{\hat{x}}$ are disjoint. Hence we have $|\mathcal{X}|$ independent minimization problems with the objective function \begin{equation} \sum_{y \in \mathcal{Y}_{\hat{x}}} p(y|\hat{x}) H(X|Y=y). \label{eq_objective_function} \end{equation} Note also that we do not have any constraint on $|\mathcal{Y}|$, the number of elements of $\mathcal{Y}$. We actually exploit this flexibility in the proof. However, it turns out (see Propositions 1 and 2) that when the minimum is achieved, there can be only a limited number of $y$ values with different distributions $p(x|y)$. Our approach is based on update rules for $p(y|\hat{x})$ and $p(x|y)$ that decrease the objective function \eqref{eq_objective_function} while respecting the constraints \eqref{eq_constraints_conf}, \eqref{eq_constraints_dec}. In fact, the updates also change $|\mathcal{Y}|$. The minimum of $H(X|Y)$ is achieved when the update rules can no longer be used to decrease it -- and such situations can be characterized and the corresponding $H(X|Y)$ can be calculated. It is instructive to have in mind the following visualization of our minimization problem, which we use to illustrate the update rules in Fig. \ref{fig_updateRules}. The distribution $p(x,y|\hat{x})$ for some $\hat{x}$, with $y$ restricted to $y \in \mathcal{Y}_{\hat{x}}$, can be represented as a matrix, with a row for each $x$ and a column for each $y$. Normalized columns correspond to $p(x|y)$ and the sum of each column is $p(y|\hat{x})$. The constraint \eqref{eq_constraints_conf} means that each row has a fixed sum, $p(x|\hat{x})$, and the constraint \eqref{eq_constraints_dec} means that one row (e.g. the first) contains the dominant elements of all columns. The objective function \eqref{eq_objective_function} is a weighted sum of the entropies of all columns. Our minimization will consist of adding and removing columns, and moving probability mass within rows. In the following, a probability distribution is called \emph{flat} if all non-zero elements are equal, i.e. there are $n$ non-zero elements and each has probability $1/n$. The number $n$ is called its \emph{length}. \subsection*{Proposition 1: equivocation minimized by flat $p(x|y)$} \label{subsec_proposition1} The minimum of the objective function \eqref{eq_objective_function}, given constraints \eqref{eq_constraints_conf}, \eqref{eq_constraints_dec}, can only be achieved when the distributions $p(x|y)$ are flat for all $y$.
\begin{IEEEproof} Suppose that there is a channel output $y'$ with a non-flat distribution $p(x|y')$. Then, the following update rule, illustrated in Fig. \ref{fig_updateRules} (A), will decrease the objective function \eqref{eq_objective_function}. We label the elements of $\mathcal{X}$ as $x_1, x_2, \dots, x_{|\mathcal{X}|}$ such that \begin{equation} p(x_1|y') \geq p(x_2|y') \geq \dots \geq p(x_{|\mathcal{X}|}|y') \geq 0, \end{equation} where at least two of the inequalities are sharp (otherwise $p(x|y')$ would be flat). Note that $x_1$ must be the decode of $y'$, i.e. $g(y') = \hat{x} = x_1$. The proposed update is to replace $y'$ by $y'_1, y'_2, \dots, y'_{|\mathcal{X}|}$ with flat distributions $p(x|y'_i)$, \begin{align} p(x_j|y'_i) &= \begin{cases} 1/i; & j \leq i \\ 0; & j > i, \end{cases} \label{eq_prop1_xy} \\ p(y'_i|\hat{x}) &= \begin{cases} i p(y'|\hat{x}) \left( p(x_i|y') - p(x_{i+1}|y') \right); & i < |\mathcal{X}| \\ i p(y'|\hat{x}) \, p(x_i|y'); & i = |\mathcal{X}|. \end{cases} \label{eq_prop1_yx} \end{align} Intuitively, this replaces $y'$ by multiple elements $y'_i$ with flat distributions $p(x|y'_i)$ covering the first $1, 2, \dots, |\mathcal{X}|$ elements of the ordered $x_1, x_2, \dots, x_{|\mathcal{X}|}$. It can be confirmed that this replacement respects the constraints \eqref{eq_constraints_conf}. All $y'_i$ still decode into $\hat{x} = x_1$, and the probability associated with $y'$ is merely divided among the elements $y'_i$, \begin{align} \sum_i p(y'_i|\hat{x}) &= p(y'|\hat{x}), \label{eq_prop1_argument1} \\ \sum_i p(x_j|y'_i) p(y'_i|\hat{x}) &= p(x_j|y') p(y'|\hat{x}). \label{eq_prop1_argument2} \end{align} See Fig. \ref{fig_updateRules} for an example. Now we look at the change in the objective function \eqref{eq_objective_function} induced by this replacement. Before the replacement, $y'$ contributes the amount \begin{equation} p(y'|\hat{x}) H(X|Y=y'), \label{eq_prop1_oldH} \end{equation} where $H(X|Y=y')$ is the entropy of a single random variable with distribution $p(x|y')$. After the replacement, the total contribution of all $y'_1, y'_2, \dots, y'_{|\mathcal{X}|}$ is \begin{multline} \sum_i p(y'_i|\hat{x}) H(X|Y=y'_i) = \\ = p(y'|\hat{x}) \sum_i \frac{p(y'_i|\hat{x})}{p(y'|\hat{x})} H(X|Y=y'_i), \label{eq_prop1_newH} \end{multline} where the latter sum has the form of a conditional entropy of a variable with marginal distribution $p(x|y')$, conditioned on the value of $y'_i$ distributed according to $p(y'_i|\hat{x})/p(y'|\hat{x})$ (this follows from Eqs. \eqref{eq_prop1_argument1}, \eqref{eq_prop1_argument2}). Since conditioning decreases entropy, our replacement decreases the objective function \eqref{eq_objective_function}. The only case when the proposed replacement cannot be used to decrease the objective function is when $p(x|y)$ is flat for all $y$. Therefore flat $p(x|y)$ must be a characteristic of any solution to our minimization problem. \end{IEEEproof} Note that there are only $2^{|\mathcal{X}|-1}$ different possible flat distributions $p(x|y)$ with nonzero $p(X=\hat{x}|y)$, which means that we need at most $2^{|\mathcal{X}|-1}$ elements in $\mathcal{Y}_{\hat{x}}$ to achieve the minimum equivocation. However, as the following proposition will show, there are further restrictions on $p(x|y)$ at the minimum. Reflecting that only flat $p(x|y)$ are of further interest in the minimization, we say that the channel output $y$ has length $l$ if $p(x|y)$ has length $l$.
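To make the replacement of Proposition 1 concrete, the following Python sketch (our illustration; the helper name \texttt{flatten\_output} is ours) implements Eqs. \eqref{eq_prop1_xy}--\eqref{eq_prop1_yx} for a single column $p(x|y')$ and checks numerically that probability mass is preserved, cf. \eqref{eq_prop1_argument1}, and that the contribution \eqref{eq_prop1_newH} does not exceed \eqref{eq_prop1_oldH}.
\begin{verbatim}
import numpy as np

def flatten_output(p_x_given_y, p_y):
    # Replace output y' (posterior p(x|y'), weight p(y'|xhat)) by
    # y'_1, ..., y'_m with flat posteriors, per eq_prop1_xy/eq_prop1_yx.
    order = np.argsort(-p_x_given_y)   # x_1, x_2, ... by decreasing mass
    p = p_x_given_y[order]
    m = len(p)
    posteriors, weights = [], []
    for i in range(1, m + 1):
        q = np.zeros(m)
        q[order[:i]] = 1.0 / i         # flat over the i largest signals
        gap = p[i - 1] - (p[i] if i < m else 0.0)
        posteriors.append(q)
        weights.append(i * p_y * gap)
    return posteriors, weights

p, p_y = np.array([0.5, 0.3, 0.2]), 1.0
posts, ws = flatten_output(p, p_y)
H_old = -p_y * np.sum(p * np.log2(p))          # eq_prop1_oldH
H_new = sum(wt * np.log2(np.count_nonzero(q))  # flat column entropy
            for q, wt in zip(posts, ws))
assert np.isclose(sum(ws), p_y) and H_new <= H_old
\end{verbatim}
Here the entropy of a flat column of length $i$ is simply $\log_2 i$, which is what the \texttt{H\_new} line exploits.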
\begin{figure}[!t] \centering \includegraphics[width=2.8in]{updateRule_1-eps-converted-to.pdf}\\ \includegraphics[width=2.8in]{updateRule_2v2-eps-converted-to.pdf} \caption{Illustrations of the update rules used to prove (A) Proposition 1 and (B) Proposition 2. Displayed is the joint distribution $p(x,y|\hat{x})$. (A) A channel output $y'$ with a non-flat distribution $p(x|y')$ is replaced by $y'_1, y'_2, \dots, y'_{4}$ with flat distributions $p(x|y'_i)$, such that $y'_1, y'_2, \dots, y'_{4}$ still decode into $x_1$ and the confusion matrix is not affected. This replacement decreases $H(X|Y)$, our objective function. The elements of $\mathcal{X}$ are labeled in decreasing order of $p(x,y|\hat{x})$. (B) Two channel outputs, $y_1$ and $y_2$, have flat distributions $p(x|y_{1,2})$ with $3$ and $1$ nonzero elements respectively. We replace $y_1$ by $\bar{y}_1$ and $\bar{\bar{y}}_1$, and then transfer probability $p(x_2,\bar{y}_1|\hat{x})$ to $p(x_2,y_2|\hat{x})$ (dotted red arrow). The distributions $p(x|\bar{y}_1)$, $p(x|\bar{\bar{y}}_1)$ and $p(x|y_{2})$ remain flat, and the objective function $H(X|Y)$ is decreased. } \label{fig_updateRules} \end{figure} \subsection*{Proposition 2: minimization restricts lengths of $p(x|y)$ } \label{subsec_proposition2} Building on Proposition 1, we further claim that if equivocation is minimized, no two channel outputs $y_1,y_2 \in \mathcal{Y}_{\hat{x}}$ can have lengths differing by more than $1$. \begin{IEEEproof} As before, we introduce an update rule. Recalling the visualization with a column for each $y$, this update rule will move a nonzero element from a longer column to a shorter column, as shown in Fig. \ref{fig_updateRules} (B). Take two elements $y_1,y_2 \in \mathcal{Y}_{\hat{x}}$ that have flat distributions $p(x|y_1)$ and $p(x|y_2)$ with lengths $a$ and $b$ respectively where $a>b$. Assume that $a$ and $b$ differ by more than one, $a - b > 1$. This means that we can choose an element $x' \in \mathcal{X}$ such that $p(x'|y_1) = 1/a$ and $p(x'|y_2) = 0$. Assume momentarily that $p(y_1|\hat{x})/a = p(y_2|\hat{x})/b$ (we will relax this assumption later). Then we can replace $y_1,y_2$ by $y'_1$ and $y'_2$, such that \begin{itemize} \item $p(x|y'_1)$ is flat with length $a-1$. It is nonzero for the same $x$ as $p(x|y_1)$, except for $x'$ where it is zero. \item $p(x|y'_2)$ is flat with length $b+1$. It is nonzero for the same $x$ as $p(x|y_2)$, and also for $x'$. \end{itemize} Given that $p(y_1|\hat{x})/a = p(y_2|\hat{x})/b$, we can also choose the probabilities $p(y'_1|\hat{x})$ and $p(y'_2|\hat{x})$ such that $y'_1$, $y'_2$ contribute the same amount to $p(x|\hat{x}) = \sum_{y} p(x|y) p(y|\hat{x})$ as $y_1$ and $y_2$ did, ensuring that constraints \eqref{eq_constraints_conf} are respected: \begin{align} p(y'_1|\hat{x}) &= \frac{a-1}{a} p(y_1|\hat{x}), \\ p(y'_2|\hat{x}) &= \frac{b+1}{b} p(y_2|\hat{x}). \end{align} Now we show that the proposed replacement reduces the objective function. Before the replacement, the contribution to the objective function \eqref{eq_objective_function} by $y_1$ and $y_2$ was \begin{equation} p(y_1|\hat{x}) \log{a} + p(y_2|\hat{x}) \log{b}. \end{equation} After the replacement, $y'_1$ and $y'_2$ contribute by \begin{equation} \frac{a-1}{a} p(y_1|\hat{x}) \log{(a-1)} + \frac{b+1}{b} p(y_2|\hat{x}) \log{(b+1)}. 
\end{equation} The difference $\Delta$ between these contributions, after minus before the replacement, has the form \begin{equation} \Delta = \frac{p(y_1|\hat{x})} {a} \left( f(b+1) - f(a) \right), \end{equation} where $f(t) = t \log{t} - (t-1) \log{(t-1)}$ is an increasing function for $t\geq 1$. Since $b+1 < a$, we have $\Delta < 0$, meaning that the objective function is reduced. This update rule is applicable to any $y_1,y_2 \in \mathcal{Y}_{\hat{x}}$ with lengths $a$ and $b$, respectively, such that $a-b>1$. We have, however, further required that $p(y_1|\hat{x})/a = p(y_2|\hat{x})/b$. This requirement can be avoided. If $p(y_1|\hat{x})/a > p(y_2|\hat{x})/b$, we first split $y_1$ into $\bar{y}_1$ and $\bar{\bar{y}}_1$ with \begin{align} p(\bar{y}_1|\hat{x}) &= a \, p(y_2|\hat{x})/b, \\ p(\bar{\bar{y}}_1|\hat{x}) &= p(y_1|\hat{x}) - a \, p(y_2|\hat{x})/b, \\ p(x|\bar{y}_1) &= p(x|\bar{\bar{y}}_1) = p(x|y_1), \end{align} such that the above-mentioned update rule can be applied to $\bar{y}_1$ and $y_2$ while $\bar{\bar{y}}_1$ is left unchanged, see Fig. \ref{fig_updateRules} (B). If $p(y_1|\hat{x})/a < p(y_2|\hat{x})/b$, we can proceed analogously by splitting $y_2$. We can decrease the objective function by repeatedly applying this generalized update rule. Therefore, the minimum can only be achieved when the lengths of $p(x|y)$ for $y \in \mathcal{Y}_{\hat{x}}$ vary by no more than 1. \end{IEEEproof} Note that by repeated application of this update rule, in a finite number of steps we reach a state with only up to two lengths (per $\hat{x}$) that differ by at most 1. As shown in the next section, such a state implies a specific value of $H(X|Y)$. Together with the update rule in the proof of Proposition 1, this gives us an algorithm to find the distributions $p(y|\hat{x})$ and $p(x|y)$ that achieve the minimum $H(X|Y)$. The algorithm can start from an arbitrary initialization of $p(y|\hat{x})$ and $p(x|y)$ that follows the constraints \eqref{eq_constraints_conf}, \eqref{eq_constraints_dec} and finishes in a finite number of steps. It remains to determine the (at most two) admissible lengths of $y \in \mathcal{Y}_{\hat{x}}$ and how the elements $y$ with these lengths contribute to the equivocation $H(X|Y)$. \subsection*{Admissible lengths of $p(x|y)$} Let us call the two admissible lengths $l_{\hat{x}}$ and $l_{\hat{x}}+1$. Given $\hat{x}$, the total probability of all $y \in \mathcal{Y}_{\hat{x}}$ with length $l_{\hat{x}}$ is $\alpha_{\hat{x}}$, and those of length $l_{\hat{x}}+1$ have probability $1-\alpha_{\hat{x}}$. Then from the constraint \eqref{eq_constraints_conf}, we can write the probability that $\hat{x}$ is the correct decode, \begin{equation} 1-\epsilon_{\hat{x}} = \frac{\alpha_{\hat{x}}}{ l_{\hat{x}} }+ \frac{1-\alpha_{\hat{x}}} {l_{\hat{x}}+1}, \label{eq_alpha_length} \end{equation} from which we can deduce that $\frac{1} {l_{\hat{x}}+1} \leq 1-\epsilon_{\hat{x}} \leq \frac{1}{ l_{\hat{x}} }$, and that the two admissible lengths must be \begin{equation} l_{\hat{x}} = \left\lfloor \frac{1}{1- \epsilon_{\hat{x}}} \right\rfloor \text{ and } l_{\hat{x}} + 1 = \left\lceil \frac{1}{1- \epsilon_{\hat{x}}} \right\rceil, \label{eq_result_lengths} \end{equation} unless $\frac{1}{1- \epsilon_{\hat{x}}}$ is an integer -- in that case the floor and ceiling coincide into a single admissible length.
Now, from equations \eqref{eq_alpha_length} and \eqref{eq_result_lengths} we can determine that \begin{equation} \alpha_{\hat{x}} = \left\lfloor \frac{1}{1- \epsilon_{\hat{x}}} \right\rfloor \left( (1-\epsilon_{\hat{x}}) \left\lceil \frac{1}{1- \epsilon_{\hat{x}}} \right\rceil -1 \right) = \alpha(\epsilon_{\hat{x}}) \label{eq_result_alpha} \end{equation} is the total probability (given $\hat{x}$) of $y \in \mathcal{Y}_{\hat{x}}$ with length $\lfloor \frac{1}{1- \epsilon_{\hat{x}}} \rfloor$. Finally, the minimal value of equivocation is simply \begin{equation} H(X|Y) \geq \sum_{\hat{x}} p(\hat{x}) \left( \alpha_{\hat{x}} \log{l_{\hat{x}}} + (1-\alpha_{\hat{x}}) \log{(l_{\hat{x}} + 1)} \right), \end{equation} which together with equations \eqref{eq_result_lengths} and \eqref{eq_result_alpha} constitutes our main bound, as stated in \eqref{eq_bound_equivocation}. \section{Discussion} \label{sec_discussion} We have introduced a tight lower bound on equivocation in terms of the maximum a posteriori confusion matrix, and proved it in two ways. The first is a proof of the inequality, starting from a similar bound by Kovalevsky \cite{Kovalevsky1968}, but it does not prove that the bound is tight. Therefore, we developed a second proof, in which we construct the distribution of channel outputs that minimizes the equivocation and achieves equality in our bound. Central to the latter approach are two update rules for the distribution of the channel outputs. These update rules exploit the fact that equivocation can be, under our constraints, minimized by (1) making the posterior distributions $p(x|y)$ flat and (2) making sure that these flat distributions contain similar numbers of nonzero elements. We formulated the proof for discrete random variables $X$ and $Y$, but it can be extended. If $X$ is discrete but $Y$ continuous, application of a modified version of the first update rule would result in $2^{|\mathcal{X}|}$ regions in the $\mathcal{Y}_{\hat{x}}$ space corresponding to each of the $2^{|\mathcal{X}|}$ possible flat distributions $p(x|y')$. For example, the region associated with a flat distribution of length $|\mathcal{X}|$, that is $p(x|y') = 1/|\mathcal{X}|$, would have a total probability $\int_{\mathcal{Y}_{\hat{x}}} |\mathcal{X}| \min_x{p(x|y)} dp(y|\hat{x})$. These subsets of $\mathcal{Y}$ where $p(x|y)$ is constant can then be treated like discrete values, and the rest of our derivation applies. Bounds on equivocation (or mutual information) in terms of the confusion matrix are, to our knowledge, not common -- despite their relevance for estimation of mutual information. We hope that our result can be useful for these purposes, and that it sheds some light on the gap between mutual information before and after decoding. However, its applicability is restricted by the assumption of maximum a posteriori decoding, and relaxing this assumption remains an interesting challenge. \section*{Acknowledgment} The authors would like to thank Sarah A. Cepeda-Humerez for helpful discussions.
\section{Introduction} Markov Chain Monte Carlo (MCMC) methods have been widely applied in many areas for more than 40 years \citep{hastings70monte}. In particular, they are successful when the target distribution $\pi$ is available only up to a normalizing constant. To sample from such a distribution, various MCMC algorithms have been developed. A typical MCMC algorithm is designed using a transition kernel $P$ that has $\pi$ as its stationary distribution. See for example \citet{meyn93markov,robert04monte,roberts04general}, and the references therein. Constructing a transition kernel to sample efficiently from a given distribution, although conceptually easy, is rather difficult in practice. The difficulty is that generic algorithms such as the Metropolis--Hastings algorithm require a careful choice and tuning of the proposal kernel. The development of adaptive MCMC (AMCMC) methods stems partly from these difficulties. Instead of having a fixed Markov kernel $P$, at each round $n$ an AMCMC algorithm selects a kernel $P_{\what\theta_n}$ from a family of Markov kernels $\{P_\theta\}_{\theta\in\Theta}$, where the value (parameter) $\what\theta_n$ is computed based on possibly all the samples generated up to time $n$, so that the transition kernel is automatically self-adapted. This approach is very appealing in practice, as it frees the user from parameter tuning and provides a better exploration-exploitation balance. As a consequence, AMCMC algorithms often yield smaller asymptotic errors in Monte Carlo estimation. The theoretical and methodological study of AMCMC has drawn the attention of many researchers lately. See for example the survey by \citet{atchade11adaptive} and the references therein. In this paper, we investigate the convergence rates of two AMCMC algorithms: the {\it Importance Resampling MCMC (IRMCMC)} algorithm introduced by \citet{atchade09resampling}, and the {\it Equi-Energy (EE) sampler} by \citet{kou06equi}. The IRMCMC algorithm is also referred to as the {\it interacting annealing} algorithm \citep{bercu12fluctuations}. For the EE sampler, we actually focus on a simplified version, which is sometimes referred to as the {\it interacting tempering} algorithm \citep{fort11central}. Throughout the paper we denote by $\{X_n\}_{n\in{\mathbb N}}$ the random process generated by either of these algorithms. A common feature is that in either case, the dynamics of $\{X_n\}_{n\in{\mathbb N}}$ is driven by a sequence of random measures $\what\theta_n$ computed from an auxiliary chain $\{Y_n\}_{n\in{\mathbb N}}$. Most of the theoretical results available so far have focused on the convergence of marginal distributions \[ {\cal L}_{X_n}\Rightarrow\pi, \] and on the law of large numbers: \[ \lim_{n\to\infty}\frac1n \summ i1n f(X_i) = \pi(f) \mbox{ almost surely.} \] See for example \citet{andrieu08note,andrieu11nonlinear}, \citet{atchade09resampling,atchade10cautionary} and \citet{fort11convergence} (there is a mistake in the proof of \citep{atchade10cautionary}, pointed out in \citep{fort11convergence}). Central limit theorems for such AMCMC algorithms have only been considered recently by \citet{fort11central} and \citet{bercu09functional,bercu12fluctuations}. In short, introducing the auxiliary chain makes the stochastic process no longer Markov, which raises considerable technical difficulties.
We point out that there are other classes of AMCMC algorithms, for which the parameters take values in finite-dimensional spaces (e.g.~the adaptive Metropolis algorithm introduced by~\citet{haario99adaptive}). The analysis of such algorithms is outside the scope of this paper. In this paper, we study the {\it convergence rate} (or mixing time) of the IRMCMC and EE algorithms. That is, we provide upper bounds on the distance between ${\cal L}_{X_n}$ (the distribution of $X_n$) and the target distribution. This type of rate differs from, and complements, the rates of convergence obtained in central limit theorems. Mixing time results provide information on the burn-in time of the algorithm, whereas central limit theorems (such as those mentioned above) deal with the rate of convergence and fluctuations of Monte Carlo averages. Apart from \citet{andrieu07efficiency}, who considered AMCMC with a finite-dimensional parameter, the mixing time of AMCMC has, to the best of our knowledge, not been investigated so far. We show that the IRMCMC algorithm has a convergence rate of order $O(n^{-1})$. In particular, we also provide a simple example for which the convergence rate has lower bound $1/n$. That is, one should not expect a rate better than $O(n^{-1})$ in general. We also show that for the $m$-tuple IRMCMC (to be defined in Section~\ref{sec:mIRMCMC}), the convergence rate remains within $O(n^{-1})$. For the EE sampler, under some regularity conditions, we show that the rate of convergence is $O(n^{-1/2})$ in terms of a slightly weaker norm than the total variation distance. Our results are qualitative, in that the constants in the rates are not explicit. But they clarify what is known about these algorithms, and suggest in particular that a longer burn-in should be selected for the EE sampler. The rest of the paper is organized as follows. The remainder of the introduction gives a general description of the algorithms considered in the paper and introduces some notation. Section~\ref{sec:IRMCMC} is devoted to the IRMCMC algorithm. The convergence rate is established in Section~\ref{sec:IRMCMCrate}, and for multiple IRMCMC in Section~\ref{sec:mIRMCMC}. Section~\ref{sec:EE} is devoted to the EE sampler. The convergence rate is established in Section~\ref{sec:EErate}. A remark on the connection to parallel tempering is provided in Section~\ref{sec:mEE}. \subsection{A class of AMCMC algorithms} We describe the general framework of AMCMC considered in this paper. Let ${\cal X}$ denote a general state space. An AMCMC algorithm is a stochastic process $\{(X_n,Y_n)\}_{n\geq 0}$ in ${\cal X}\times{\cal X}$, designed such that the main chain $X_n$ converges to the target distribution $\pi$ in a certain sense to be described precisely later. Let ${\cal P}$ denote the set of all probability measures on ${\cal X}$, and $\{K_\theta,\;\theta\in{\cal P}\}$ a set of transition kernels on ${\cal X}$. Let $P$ be a transition kernel on ${\cal X}$ with invariant probability measure $\pi$. Starting from $P$ and $\{K_\theta,\theta\in{\cal P}\}$, we consider the family of transition kernels $P_\theta$ given by \[ P_\theta(x,\cdot) = (1-\epsilon)P(x,\cdot) + \epsilon K_{\theta}(x,\cdot),\;\;\theta\in{\cal P},\;x\in{\cal X}.
\] The dynamics of the AMCMC algorithms considered in this paper can be unified as follows: given ${\cal F}_n = \sigma(X_0,\dots,X_n,Y_0,\dots,Y_n)$, $X_{n+1}$ and $Y_{n+1}$ are conditionally independent, and for all bounded measurable functions $h:{\cal X}\to{\mathbb R}$, \[{\mathbb E}_x \left(h(X_{n+1})\mid {\cal F}_n,\,\{Y_k\}_{k\geq n+1}\right)={\mathbb E}_x \left(h(X_{n+1})\mid {\cal F}_n\right)= P_{\wt\theta_n}h(X_n), \mbox{ almost surely} \] where $\wt \theta_n(\cdot)\ensuremath{\stackrel{\mathrm{def}}{=}} n^{-1}\sum_{j=1}^n \delta_{Y_j}(\cdot)$ denotes the empirical measure associated with the auxiliary chain $\{Y_n\}_{n\geq 0}$. Each algorithm is determined by the choice of the kernels $K_\theta$. For the IRMCMC, \[ K_\theta(x,A)=\frac{\int_A w(z)\theta({\rm d} z)}{\int_{{\cal X}}w(z){\rm d} z}, \] where $w(z)={{\rm d}\pi}/{{\rm d}\pi_Y}(z)$ (see Section~\ref{sec:notation} for our convention on $\pi$ and ${\rm d}\pi$), while for the EE, the following choice is made: \[ K_\theta(x,A)=\textbf{1}_A(x) + \int_{{\cal X}}\left(1\wedge\frac{\pi(z)\pi_Y(x)}{\pi(x)\pi_Y(z)}\right)\left(\textbf{1}_A(z)-\textbf{1}_A(x)\right)\theta({\rm d} z). \] In both cases, $\pi_Y$ is an auxiliary distribution chosen such that it is relatively close to $\pi$, and easy to sample from (at least easier than $\pi$). We assume that the evolution of the auxiliary chain is independent of the main chain in the sense that for bounded measurable functions $h:{\cal X}\to{\mathbb R}$, \[ {\mathbb E}(h(Y_{n+1})\mid {\cal F}_n) = {\mathbb E}(h(Y_{n+1})\mid Y_0,\dots,Y_n), \mbox{ almost surely.} \] The description of the dynamics of the joint process $\{(X_n,Y_n)\}_{n\geq 0}$ is completed by specifying the dynamics of $Y_{n+1}\mid \F_n$, which is either a Markov chain with target distribution $\pi_Y$, or the main chain of another adaptive MCMC with target distribution $\pi_Y$, not necessarily Markov. The rationale behind these algorithms is as follows. For $\theta=\theta_\star=\pi_Y$, the Markov chain with transition kernel $P_{\theta_\star}$ has nice mixing properties, due to the choice of $\pi_Y$. Unfortunately, however, it is not possible to implement the kernel $P_{\theta_\star}$ directly. The idea here therefore is (i) to run an auxiliary chain $\{Y_n\}_{n\geq 0}$ with limiting distribution $\pi_Y$, so that the empirical measure $\wt \theta_n$ converges to $\theta_\star$, and (ii) to sample $X_{n+1}$ from \[ P_{\wt\theta_n}(X_n,\cdot) = (1-\epsilon)P(X_n,\cdot) + \epsilon K_{\wt\theta_n}(X_n,\cdot)\,, \] which approximates $P_{\theta_\star}(X_n,\cdot)$ as $n\to\infty$. \subsection{Notation}\label{sec:notation} We assume that the state space ${\cal X}$ is a Polish space equipped with a metric $\textsf{d}$, and ${\cal B}$ is the associated Borel $\sigma$-algebra. In addition, $({\cal X},{\cal B})$ is a measure space with a reference $\sigma$-finite measure, which we denote for short by ${\rm d} x$. Let $\pi$ and $\pi_Y$ be probability measures on $({\cal X},{\cal B})$. We assume that $\pi$ and $\pi_Y$ are both absolutely continuous with respect to ${\rm d} x$ and, with a little abuse of notation, we also use $\pi$ and $\pi_Y$ to denote the corresponding densities. That is, we write $\pi({\rm d} x) = \pi(x){\rm d} x$ and similarly for $\pi_Y$. For a transition kernel $Q$, a measure $\nu$ and a function $h$, we shall write $\nu Q(\cdot)\ensuremath{\stackrel{\mathrm{def}}{=}} \int\nu({\rm d} z)Q(z,\cdot)$, and $Qh(\cdot)\ensuremath{\stackrel{\mathrm{def}}{=}} \int Q(\cdot,{\rm d} z)h(z)$.
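To fix ideas before introducing further notation, here is a minimal Python sketch of one round of the main chain under the IRMCMC choice of $K_\theta$. It is our illustration only: \texttt{P\_step}, a user-supplied draw from $P(x,\cdot)$, and \texttt{w}, the weight function $\pi/\pi_Y$ evaluated pointwise, are assumptions of the sketch.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)

def irmcmc_round(x, ys, w, P_step, eps):
    # One transition of the main chain: with probability 1 - eps,
    # move with the base kernel P; with probability eps, resample
    # from the w-weighted empirical measure of the auxiliary
    # samples ys (this is the IRMCMC kernel K_theta).
    if rng.random() < 1.0 - eps:
        return P_step(x)
    weights = np.array([w(y) for y in ys])
    j = rng.choice(len(ys), p=weights / weights.sum())
    return ys[j]
\end{verbatim}
Under the EE choice of $K_\theta$, the second branch would instead propose a point $z$ from the unweighted empirical measure of the auxiliary samples and accept it with probability $1\wedge\{\pi(z)\pi_Y(x)\}/\{\pi(x)\pi_Y(z)\}$, keeping the current state otherwise.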
We denote by $\what\pi_{Y,n}$ the empirical measure associated with the auxiliary chain $\indn Y$, defined by \[ \what \pi_{Y,n}(\cdot) \ensuremath{\stackrel{\mathrm{def}}{=}} \frac1n\summ i1n\delta_{Y_i}(\cdot)\,. \] At times, we also use the notation $\wt \theta_n(\cdot)$ to denote $\what \pi_{Y,n}(\cdot)$. For functions $f:{\cal X}\to{\mathbb R}$, we write \[ \what\pi_{Y,n}(\wb f) \ensuremath{\stackrel{\mathrm{def}}{=}} \what\pi_{Y,n}(f) - \pi_Y(f). \] We let $C$ denote general constants that do not depend on $n$, but may change from line to line. \newpage \section{Importance Resampling MCMC}\label{sec:IRMCMC} We consider the {importance-resampling Markov Chain Monte Carlo} method described in \citet{atchade09resampling}. \begin{Algo}[IRMCMC]\label{algo:IRMCMC} Fix $\epsilon\in(0,1)$. Pick arbitrary $X_0 = x_0$ and $Y_0 = y_0$. Let $P$ be an arbitrary Markov kernel with stationary distribution $\pi$. At each round $n$, $X_n$ and $Y_n$ are conditionally independent given $\F_{n-1}$, and \[ X_{n}\mid \F_{n-1} \sim \left\{ \begin{array}{l@{\mbox{ w.p.~}}l} P(X_{n-1},\cdot) & 1-\epsilon\,,\\ \what\theta_{n-1}(\cdot) & \epsilon\,, \end{array}\right. \] where $\what\theta_n$ is the (randomly) weighted empirical distribution defined by \begin{equation}\label{eq:theta} \what\theta_n(\cdot) = \summ i1{n}\frac{\wt w(Y_i)}{\summ j1{n}\wt w(Y_j)}\delta_{Y_i}(\cdot)=\frac{\int_\cdot \wt w(z)\what\pi_{Y,n}({\rm d} z)}{\int_{{\cal X}} \wt w(z)\what\pi_{Y,n}({\rm d} z)}, \end{equation} with $\wt w(y) \propto \pi(y)/\pi_Y(y)=: w(y)$, and $\what\theta_0 = \delta_{y_0}$. We assume $|w|_\infty\ensuremath{\stackrel{\mathrm{def}}{=}}\sup_{x\in{\cal X}}|w(x)|<\infty$. \end{Algo} \begin{Rem}\label{rem:w} The assumption of the boundedness of $w$ is not too restrictive. Indeed, very often in practice the un-normalized density function $\wt\pi$ of $\pi$ is bounded, and the auxiliary chain is set to have stationary distribution $\pi_Y\propto\wt\pi_Y$, where $\wt\pi_Y = \wt\pi^T$ for some $T\in(0,1)$. In this case, $\wt w = \wt \pi/\wt \pi_Y$ is bounded and thus so is $w$. \end{Rem} Equivalently, for any bounded function $f:{\cal X}\to{\mathbb R}$, \[ {\mathbb E} (f(X_{n+1})\mid {\cal F}_n) = P_{\what\theta_n}f(X_n) \mbox{ almost surely, } \] where for all probability measures $\theta$ on ${\cal X}$, $P_\theta(x,\cdot)$ is defined by \begin{equation}\label{eq:Ptheta} P_\theta(x,\cdot) = (1-\epsilon)P(x,\cdot) + \epsilon \theta(\cdot)\,. \end{equation} For the time being, we make no particular assumption on the dynamics of the auxiliary chain $\{Y_n\}_{n\geq 0}$. \subsection{Convergence rate of IRMCMC}\label{sec:IRMCMCrate} The following equivalent representation of Algorithm~\ref{algo:IRMCMC} is useful. Let the auxiliary chain be as above. Furthermore, let $\{Z_n\}_{n\geq 0}$ be a sequence of independent and identically distributed random variables with $\mathbb P(Z_1 = 1) = 1-\mathbb P(Z_1 = 0) = \epsilon$. Moreover, we assume that $\{Z_n\}_{n\geq 0}$ and $\{Y_n\}_{n\geq 0}$ are independent and, for each $n\geq 1$, $Z_n$ and $\F_{n-1}$ are independent. Then, at round $n$, we can introduce $Z_n$, and write the conditional distribution of $X_n$ given $Z_n,\F_{n-1}$ as \[ X_n\mid \F_{n-1},Z_n\sim\left\{ \begin{array}{l@{\mbox{ if }}l} P(X_{n-1},\cdot) & Z_n = 0\\ \what\theta_{n-1}(\cdot) & Z_n = 1\,. \end{array} \right. \] Define \begin{equation}\label{eq:tau} \tau_0 =0, \tau_{i+1} = \min\{k>\tau_i: Z_k = 1\} \mbox{ and } n^* = \max\{k: \tau_k\leq n\}\,.
\end{equation} Observe that at each time $\tau_k>0$, conditioning on $Y_0, Y_1,\dots,Y_{\tau_k-1}$, $X_{\tau_k}$ is sampled from $\what\theta_{\tau_k-1}$, independently of $X_0,\dots,X_{\tau_k-1}$. Furthermore, $Y_0,\dots,Y_n$ are independent of $\tau_1,\dots,\tau_{n^*}$. Therefore, we first focus on \begin{equation}\label{eq:eta} \eta_n\ensuremath{\stackrel{\mathrm{def}}{=}} \mathbb P(X_{n+1}\in\cdot\mid Z_{n+1}=1) = {\mathbb E}\what\theta_{n}(\cdot)\,,n\in{\mathbb N}\,. \end{equation} We first obtain a bound on the total variation distance $\nnTV{\eta_n-\pi}$. Recall that, given two probability distributions $\mu$ and $\nu$, the total variation distance $\nnTV{\mu-\nu}$ is defined by: \begin{equation}\label{eq:W} \nnTV{\mu-\nu} = \frac12\sup_{|f|\leq 1}|\mu(f)-\nu(f)|\,. \end{equation} For convenience, write \begin{equation}\label{eq:Bn} B_n\ensuremath{\stackrel{\mathrm{def}}{=}} |w|_\infty\sup_{|f|\leq 1}{\mathbb E}\what\pi_{Y,n}(\wb f) + 2|w|_\infty^2\sup_{|f|\leq 1}{\mathbb E}\bbpp{\what\pi_{Y,n}(\wb f)}^2, n\in{\mathbb N}. \end{equation} Recall that throughout we assume $|w|_\infty<\infty$. \begin{Lem}\label{lem:etak} For all $n\in{\mathbb N}$, \begin{equation}\label{eq:etanTV} \nnTV{\eta_n-\pi} \leq B_n. \end{equation} \end{Lem} \begin{Rem} A special case of $\nnTV{\eta_n-\pi}$, when $w\equiv 1$ (equal weights), is the so-called {\it Ces\`aro mixing time} of $\{Y_n\}_{n\geq 0}$. See for example~\citet[Chapter 6.6]{levin09markov}. \end{Rem} The proof of Lemma~\ref{lem:etak} is postponed to the next subsection. We have no explicit requirement on $\pi_Y$, except that $w= \pi/\pi_Y$ is bounded. However, the convergence of $Y_1,Y_2,\dots$ to $\pi_Y$ is implicitly ensured when we require further that $\sup_{|f|\leq 1}{\mathbb E}(\what\pi_{Y,n}(f) - \pi_Y(f)) + \sup_{|f|\leq 1}{\mathbb E}(\what\pi_{Y,n}(f) - \pi_Y(f))^2\to 0$ as $n\to\infty$. Indeed, these rates yield an upper bound on the convergence rate of ${\cal L}_{X_n}\Rightarrow \pi$, as shown in the following theorem. We set $B_0 = B_{-1} = 1$. \begin{Thm}\label{thm:IRMCMC} Consider $\indn X$ generated from Algorithm~\ref{algo:IRMCMC}. Then, \begin{equation}\label{eq:boundXn} \nnTV{{\cal L}_{X_n}-\pi} \leq \sum_{\ell=0}^n (1-\epsilon)^{n-\ell}B_{\ell-1}. \end{equation} Furthermore, for any bounded measurable function $f$, \begin{multline}\label{eq:L2fX} \PE\bb{\frac1{\sqrt n}\summ i1n(f(X_i)-\pi(f))}^2\\ \leq \frac{80\epsilon^{-2}|f|_\infty^2}n + 64\epsilon^{-2}|f|_\infty^2 + |f|_\infty^2\bbpp{\frac1{\sqrt n}\summ k0{n-1}\sqrt{B_k}}^2, n\in{\mathbb N}. \end{multline} \end{Thm} The proof of Theorem~\ref{thm:IRMCMC} is postponed to the next subsection. \begin{Rem} The control of the total variation distance depends on the initial position of the auxiliary chain, as, in general, $B_n$ depends on the initial position $Y_0 = y_0$. We omit this dependence throughout this paper. At the same time, it is obvious that the initial position $X_0 = x_0$ is irrelevant. \end{Rem} \begin{Rem} In Theorem~\ref{thm:IRMCMC}, we do not make any ergodicity assumption on the kernel $P$. In the case where $P$ is, say, geometrically ergodic, one can improve (\ref{eq:boundXn}) quantitatively by bounding the term $\nnTV{\eta_k P^{n-k}-\pi}$ more effectively. For example, if $P$ is uniformly ergodic with rate $\rho$, then (\ref{eq:boundXn}) would become \[\nnTV{{\cal L}_{X_n}-\pi} \leq \sum_{\ell=0}^n \left[\rho(1-\epsilon)\right]^{n-\ell}B_{\ell-1}.\] A similar improvement can be formulated for (\ref{eq:L2fX}).
However, these improvements do not change the rate but only the constant in the corollary below. Besides, such improvements will not be easily available if $P$ is sub-geometrically ergodic. \end{Rem} Now, as a corollary we obtain an upper bound on the convergence rate of the IRMCMC algorithm, under the following assumption. \assumpH \item \label{A:Y} There exists a finite constant $C$ such that for all measurable functions $f:\;{\cal X}\to\mathbb R$ with $|f|_\infty\leq 1$, \begin{equation}\label{eq:AY} {\mathbb E} \what\pi_{Y,n}(\wb f)\leq \frac {C}n \quad\mbox{ and }\quad \PE\bbpp{\what\pi_{Y,n}(\wb f)}^2 \leq \frac{C}n . \end{equation} \assumpE \begin{Rem} For example, if $\indn Y$ is a Markov chain with transition kernel $P_Y$ and stationary distribution $\pi_Y$, then the first part of~\eqref{eq:AY} holds if $P_Y$ is geometrically ergodic \citep{roberts04general}. The second part of~\eqref{eq:AY} essentially assumes that the sample variance of $\{f(Y_n)\}_{n\in{\mathbb N}}$ is bounded, which also holds for geometrically ergodic Markov chains under an appropriate moment condition on $f$ (e.g. $\pi(|f|^{2+\epsilon})<\infty$ with $\epsilon>0$). Occasionally this condition can fail, as the sample variance might become infinite in the limit. See for example~\citet{haggstrom05central} and~\citet{haggstrom07variance}. \end{Rem} \begin{Coro}\label{coro:1} Consider the importance resampling MCMC (Algorithm~\ref{algo:IRMCMC}). If Assumption~\ref{A:Y} holds, then there exists a finite constant $C$ such that \[ \nnTV{{\cal L}_{X_n}-\pi} \leq \frac {C}n\,. \] Furthermore, for any bounded measurable function $f$, \[\PE\bb{\frac1{\sqrt n}\summ i1n(f(X_i)-\pi(f))}^2 \leq C |f|_\infty^2\,, n\in{\mathbb N}\,.\] \end{Coro} \subsection{Proofs of Lemma~\ref{lem:etak} and Theorem~\ref{thm:IRMCMC}} \begin{proof}[Proof of Lemma~\ref{lem:etak}] Fix $n\geq 1$ and write $\what\pi_Y \equiv\what\pi_{Y,n}$. Rewrite $\eta_n(f)$ as \begin{eqnarray*} \eta_n(f) & = & {\mathbb E}\bbpp{\summ j1{n}\frac{w(Y_j)}{\summ l1{n}w(Y_l)}f(Y_j)} \nonumber\\ & = & {\mathbb E}\bb{\frac1n\summ j1{n}w(Y_j)f(Y_j)+ \bbpp{1-\frac1n{\summ j1{n}w(Y_j)}}\summ j1{n}\frac{w(Y_j)f(Y_j)}{\summ l1{n}w(Y_l)}}\nonumber\\ & = & {\mathbb E}\bb{\what\pi_{Y,n}(wf) + (\pi_{Y}(w)-\what\pi_{Y,n}(w))\what\theta_n(f)}\,, \end{eqnarray*} where in the second term above we used the fact that $\pi_Y(w)=1$. Since $\pi(f) = \pi_Y(wf)$, \begin{multline} \nnTV{\eta_n-\pi} = \sup_{|f|\leq 1}\frac12\bbpp{\eta_n(f)-\pi(f)}\\ \leq \frac12\sup_{|f|\leq 1}{{\mathbb E}\what\pi_{Y,n}(\wb{wf})} + \frac12\sup_{|f|\leq 1}{{\mathbb E}\bbpp{\what\pi_{Y,n}(\wb w)\what\theta_n(f)}}\\ \leq |w|_\infty{\sup_{|f|\leq 1}{{\mathbb E}\what\pi_{Y,n}(\wb f)} + \frac12\sup_{|f|\leq 1}{{\mathbb E}\bb{\what\pi_{Y,n}(\wb w)\bbpp{\what\theta_n(f)-\pi_Y(wf)}}}}.\label{eq:etan} \end{multline} By the Cauchy--Schwarz inequality, \begin{multline} \sup_{|f|\leq 1}{\mathbb E}\bb{{\what \pi_{Y,n}(\wb w)}\bbpp{\what\theta_n(f)-\pi_Y(wf)}}\\ \leq \bb{{\mathbb E}\bbpp{\what \pi_{Y,n}(\wb w)}^2}^{1/2}\times \sup_{|f|\leq 1}\bb{{\mathbb E}\bbpp{\what\theta_n(f)-\pi_Y(wf)}^2}^{1/2}\,.\label{eq:etan2} \end{multline} The first term is bounded by $|w|_\infty\sup_{|f|\leq 1}\sbb{{\mathbb E}\spp{\what \pi_{Y,n}(\wb f)}^2}^{1/2}$.
For the second term, observe that \begin{multline}\label{eq:etan3} {\mathbb E}\bbpp{\what\theta_n(f)-\pi_Y(wf)}^2 \\ \leq 2{\mathbb E} \bbpp{\what\theta_n(f)-\what\pi_{Y,n}(wf)}^2 + 2 {\mathbb E} \bbpp{\what\pi_{Y,n}(wf) - \pi_Y(wf)}^2, \end{multline} and \begin{multline} {\mathbb E} \bbpp{\what\theta_n(f)-\what\pi_{Y,n}(wf)}^2 = {\mathbb E} \bbpp{\summ j1{n}\frac{w(Y_j)f(Y_j)}{\summ l1{n}w(Y_l)} - \frac1n\summ j1{n}w(Y_j)f(Y_j)}^2\\ = {\mathbb E}\bb{\bbpp{1-\what\pi_{Y,n}(w)}^2\what\theta_n^2(f)} \leq {\mathbb E}\bbpp{\pi_Y(w)-\what\pi_{Y,n}(w)}^2\\ \leq |w|_\infty^2\sup_{|g|\leq 1}{\mathbb E}\bbpp{\what \pi_{Y,n}(\wb g)}^2,\label{eq:etan4} \end{multline} where the above calculation holds for all $f$ with $|f|\leq 1$. Combining~\eqref{eq:etan},~\eqref{eq:etan2},~\eqref{eq:etan3} and~\eqref{eq:etan4} yields the desired result. \end{proof} \begin{proof}[Proof of Theorem~\ref{thm:IRMCMC}] We recall that $\tau_{n^*}$ is the last time $k\leq n$ at which the main chain is sampled from $\what\theta_{k-1}$. Now, we can write \begin{eqnarray} \nnTV{{\cal L}_{X_n}-\pi} & = & \sup_{|f|\leq 1}\frac12|{\mathbb E} f(X_n) - \pi (f)|\nonumber\\ & = & \sup_{|f|\leq 1}\frac12\abs{\summ k0n{\mathbb E}(f(X_n), \tau_{n^*} = k) - \pi(f)}\nonumber\\ & = & \sup_{|f|\leq 1}\frac12\abs{\summ k0n\mathbb P(\tau_{n^*} = k)\sbb{{\mathbb E}(f(X_n)\mid \tau_{n^*} = k) - \pi(f)}}\nonumber\\ & \leq & \summ k0n\mathbb P(\tau_{n^*} = k)\sup_{|f|\leq 1}\frac12|{\mathbb E}(f(X_n)\mid \tau_{n^*} = k) - \pi(f)|.\label{eq:bound} \end{eqnarray} Observe that the conditional distribution of $X_n$, given that $\tau_{n^*} = k\geq 1$, is $\eta_{k-1}P^{n-k}$ (set $\eta_0=\delta_{Y_0}$). Then, \begin{eqnarray*} \sup_{|f|\leq 1}\frac12|{\mathbb E}(f(X_n)\mid\tau_{n^*} = k) - \pi(f)| & = & \sup_{|f|\leq 1}\frac12|\eta_{k-1}P^{n-k}(f) - \pi(f)| \\ & = & \nnTV{\eta_{k-1}P^{n-k}-\pi}\,. \end{eqnarray*} Since $\pi P = \pi$, we have $\nnTV{\eta_{k-1} P^{n-k}-\pi} \leq \nnTV{\eta_{k-1}-\pi}\leq B_{k-1}$, where the last inequality follows from Lemma \ref{lem:etak}. Also, $\mathbb P(\tau_{n^*} = k)=\epsilon(1-\epsilon)^{n-k}$ for $k=1,\dots,n$ and $\mathbb P(\tau_{n^*} = 0) = (1-\epsilon)^n$. Thus,~\eqref{eq:bound} becomes \eqref{eq:boundXn}. To establish (\ref{eq:L2fX}), we show that the partial sum $\sum_{k=1}^n \left(f(X_k)-\pi(f)\right)$ admits a well-behaved martingale approximation. For a probability measure $\theta$ on ${\cal X}$, define \[\pi_\theta(A)=\epsilon\sum_{j=0}^\infty(1-\epsilon)^j(\theta P^j)(A),\;\;A\in{\cal B}.\] Clearly, $\pi_\theta$ is a probability measure on $({\cal X},{\cal B})$, and in fact we have, for all $A\in{\cal B}$, \begin{multline*} \pi_\theta P_\theta(A)=\epsilon\sum_{j=0}^\infty (1-\epsilon)^j \int(\theta P^j)({\rm d} z)\left((1-\epsilon)P(z,A)+\epsilon\theta(A)\right)\\ =\epsilon\sum_{j=0}^\infty (1-\epsilon)^{j+1}(\theta P^{j+1})(A)+\epsilon\theta(A)=\pi_\theta(A).\end{multline*} This means that $\pi_\theta$ is an invariant distribution of the kernel $P_\theta$. It is also easy to check by induction that for any bounded measurable function $f$, and $n\geq 1$, \begin{equation}\label{eq:ratePtheta1} P_\theta^nf(x)-\pi_\theta(f)=(1-\epsilon)^nP^nf(x)-\epsilon\sum_{j=n}^\infty(1-\epsilon)^j(\theta P^j)f.\end{equation} It then follows that \begin{equation}\label{eq:ratePtheta2} \nnTV{P_\theta^n(x,\cdot)-\pi_\theta}\leq 2(1-\epsilon)^n.
\end{equation} As a result of (\ref{eq:ratePtheta2}), the function \[g_\theta(x)=\sum_{j=0}^\infty \left(P_\theta^jf(x)-\pi_\theta(f)\right)\] is well-defined with $|g_\theta|_\infty\leq 2\epsilon^{-1}|f|_\infty$, and satisfies Poisson's equation \begin{equation}\label{eq:PoissonPtheta} g_\theta(x)-P_\theta g_\theta(x)=f(x)-\pi_\theta(f),\;\;\;x\in{\cal X}.\end{equation} In particular, we have $f(X_k)-\pi_{\what\theta_{k-1}}(f)=g_{\what\theta_{k-1}}(X_k)-P_{\what\theta_{k-1}}g_{\what\theta_{k-1}}(X_k)$, almost surely. Using this, we write: \begin{multline*} \sum_{k=1}^n \left(f(X_k)-\pi(f)\right)=\sum_{k=1}^n \left(\pi_{\what\theta_{k-1}}(f)-\pi(f)\right)+ \sum_{k=1}^n \left(f(X_k)-\pi_{\what\theta_{k-1}}(f)\right)\\ =\sum_{k=1}^n \left(\pi_{\what\theta_{k-1}}(f)-\pi(f)\right)+ \sum_{k=1}^n \left(g_{\what\theta_{k-1}}(X_k)-P_{\what\theta_{k-1}}g_{\what\theta_{k-1}}(X_{k-1})\right)\\ +\sum_{k=1}^n \left(P_{\what\theta_{k-1}}g_{\what\theta_{k-1}}(X_{k-1})-P_{\what\theta_{k}}g_{\what\theta_{k}}(X_{k})\right) + \sum_{k=1}^n \left(P_{\what\theta_{k}}g_{\what\theta_{k}}(X_{k})-P_{\what\theta_{k-1}}g_{\what\theta_{k-1}}(X_{k})\right). \end{multline*} Using (\ref{eq:PoissonPtheta}), the definition of $g_\theta$, and (\ref{eq:ratePtheta1}), it is easy to check that for any probability measures $\theta,\theta'$ and $x\in{\cal X}$, \[P_\theta g_\theta(x)-P_{\theta'}g_{\theta'}(x)=\int(\theta'-\theta)({\rm d} z)\left(\epsilon\sum_{j=0}^\infty j(1-\epsilon)^j P^jf(z)\right).\] This implies that the term $\sum_{k=1}^n \left(P_{\what\theta_{k}}g_{\what\theta_{k}}(X_{k})-P_{\what\theta_{k-1}}g_{\what\theta_{k-1}}(X_{k})\right)$ is a telescoping sum, and we have \begin{multline*} \left|\sum_{k=1}^n \left(P_{\what\theta_{k}}g_{\what\theta_{k}}(X_{k})-P_{\what\theta_{k-1}}g_{\what\theta_{k-1}}(X_{k})\right)\right|\\ \leq \abs{\bbpp{\what\theta_n-\what\theta_0}\bbpp{\epsilon\sif j0j(1-\epsilon)^jP^jf}} \leq \frac{2(1-\epsilon)}\epsilon|f|_\infty. \end{multline*} The term $\sum_{k=1}^n \left(P_{\what\theta_{k-1}}g_{\what\theta_{k-1}}(X_{k-1})-P_{\what\theta_{k}}g_{\what\theta_{k}}(X_{k})\right)$ is also a telescoping sum, and we have \[ \left|\sum_{k=1}^n \left(P_{\what\theta_{k-1}}g_{\what\theta_{k-1}}(X_{k-1})-P_{\what\theta_{k}}g_{\what\theta_{k}}(X_{k})\right)\right|\leq 4\epsilon^{-1}|f|_\infty. \] From the definition of $\pi_\theta$, notice that we can write \[\sum_{k=1}^n \left(\pi_{\what\theta_{k-1}}(f)-\pi(f)\right)=\sum_{k=1}^n \what\theta_{k-1}(f_\epsilon-\pi(f_\epsilon)),\] where $f_\epsilon(x)=\epsilon\sum_{j=0}^\infty(1-\epsilon)^j P^j f(x)$. Thus, \begin{multline*} {\mathbb E}\bb{\summ k1n\bbpp{\pi_{\what\theta_{k-1}}(f)-\pi(f)}}^2 \\ \leq \bbpp{\summ k1n{\mathbb E}^{1/2}\what\theta_{k-1}^2(f_\epsilon - \pi(f_\epsilon))}^2 \leq |f|_\infty^2\bbpp{\summ k0{n-1}\sqrt{B_k}}^2, \end{multline*} where in the last inequality we use the fact that $\sup_{|f|_\infty\leq 1}\PE\what\theta_k^2(f-\pi(f))\leq B_k$, which is (\ref{eq:etan3}) and is proved as part of Lemma~\ref{lem:etak}. Finally, we also notice that $\sum_{k=1}^n \left(g_{\what\theta_{k-1}}(X_k)-P_{\what\theta_{k-1}}g_{\what\theta_{k-1}}(X_{k-1})\right) =:\summ k1n D_k$ is a martingale with respect to $\{\F_n\}$, whence \[ {\mathbb E}\bbpp{\summ k1nD_k}^2 = \summ k1n{\mathbb E} D_k^2\leq 4n\sup_{\theta}|g_\theta|_\infty^2\leq 16\epsilon^{-2}|f|_\infty^2n . \] Using all the above, we obtain~\eqref{eq:L2fX}.
\end{proof} \subsection{An example for the lower bound} We provide an example where $n^{-1}$ is also a lower bound on the rate for both $\nnTV{\eta_n-\pi}$ and $\nnTV{{\cal L}_{X_n}-\pi}$. This shows that the rate in our upper bound in Corollary~\ref{coro:1} is optimal. \begin{Example}\label{rem:2state} Consider the simple case when ${\cal X} = \{\pm1\}$ and $\pi = \pi_Y$. In this case, the weight function is uniform ($w \equiv 1$). Suppose the auxiliary chain $\{Y_n\}_{n\geq 0}$ has transition matrix \[ P_Y = \bbpp{ \begin{array}{cc} 1-a & a\\ b & 1-b \end{array} }, \mbox{ with } a,b\in(0,1)\,. \] The corresponding Markov chain has stationary distribution $\pi_Y = (a+b)^{-1}(b,a)$ and eigenvalues $\lambda_1 = 1,\lambda_2 = 1-a-b$. Suppose $a+b\neq 1$ and the chain starts at $Y_0 = -1$. By a straightforward calculation, $\mathbb P(Y_n = -1) = b/(a+b) + a/(a+b)\lambda_2^n$. Then, \begin{multline*} {\mathbb E}\what\pi_{Y,n}(\{-1\}) - \pi_Y(\{-1\}) \\ \equiv \frac1n\summ i1{n}(\mathbb P(Y_i = -1) - \pi_Y(\{-1\})) = \frac a{a+b}\frac1n\frac{\lambda_2-\lambda_2^{n+1}}{1-\lambda_2}\,. \end{multline*} It then follows from the definition that $\nnTV{\eta_n-\pi}\geq C/n$. Furthermore, in~\eqref{eq:Ptheta} set $P(x,\cdot) = \pi(\cdot)$. That is, $P$ is the {\it best} kernel we can put into the algorithm, in the sense that it takes one step to arrive at the stationary distribution (although this is too ideal to be practical). Now, \begin{eqnarray*} \mathbb P(X_n = -1) - \pi(\{-1\}) & = & (1-\epsilon)\pi(\{-1\}) + \epsilon{\mathbb E}\what\pi_{Y,n}(\{-1\}) - \pi(\{-1\})\\ & = & \epsilon\bbpp{{\mathbb E}\what\pi_{Y,n}(\{-1\}) - \pi_Y(\{-1\})}\,. \end{eqnarray*} It then follows that $\nnTV{{\cal L}_{X_n}-\pi}\geq C/n$. \end{Example} \subsection{Multiple IRMCMC}\label{sec:mIRMCMC} We discuss the importance-resampling MCMC algorithm with multiple chains and establish a convergence rate similar to that of Section~\ref{sec:IRMCMCrate}. \begin{Algo}[Multiple IRMCMC]\label{algo:mIRMCMC} We construct iteratively $m+1$ discrete-time stochastic processes $X\topp \ell \equiv \{X_n\topp \ell\}_{n\geq 0}, \ell=0,\dots, m$ as follows. Fix $\epsilon\in(0,1)$. Let $X\topp 0$ be a Markov chain with target distribution $\pi_0$ starting at $x_0$. Then iteratively, for each $\ell=1,\dots,m$ with $X\topp {\ell-1}$ constructed, design $X\topp{\ell}$ starting from $x_\ell$, so that $X\topp\ell$ and $X\topp{\ell-1}$ interact as the main chain and the auxiliary chain, respectively, in Algorithm~\ref{algo:IRMCMC}. Namely, let $P_\ell$ be a Markov kernel with stationary distribution $\pi_\ell$, and sample $X_{n+1}\topp\ell$ from $P_{\ell,\what\theta\topp{\ell-1}_{n}}(X_n\topp \ell,\cdot)$ with \[ P_{\ell,\theta}(x,\cdot) = (1-\epsilon) P_\ell(x,\cdot) + \epsilon\theta(\cdot) \] and \[ \what\theta\topp{\ell-1}_n(\cdot) = \summ i1{n}\frac{w_\ell(X_i\topp{\ell-1})}{\summ j1{n}w_\ell(X_j\topp{\ell-1})}\delta_{X_i\topp{\ell-1}}(\cdot), \] with $w_\ell(x) = {\pi_\ell(x)}/{\pi_{\ell-1}(x)}, x\in{\cal X}$. Note that the $\ell$-th chain $X\topp \ell$ at time $n$ depends on $\{X_k\topp j\}_{k=0,\dots,n-1,\, j=0,\dots,\ell}$. We assume that $\max_{\ell=1,\dots,m}|w_\ell|_\infty<\infty$.
\end{Algo} In view of Theorem~\ref{thm:IRMCMC}, it suffices to control \begin{equation}\label{eq:Bnl} B_n\topp\ell \ensuremath{\stackrel{\mathrm{def}}{=}} \sup_{|f|\leq 1}{\mathbb E}\what\pi_{X\topp{\ell},n}(\wb f) + \sup_{|f|\leq 1}{\mathbb E}\bbpp{\what\pi_{X\topp{\ell},n}(\wb f)}^2, \quad n\in{\mathbb N}, \end{equation} where this time $\what\pi_{X\topp\ell,n}(\wb f) \ensuremath{\stackrel{\mathrm{def}}{=}} \what\pi_{X\topp\ell,n}(f) - \pi_{\ell}(f)$. In fact, it suffices to control $B_n^{(0)}$, which is the purpose of the following assumption. \assumpH \item \label{A:mIRMCMC} As $n\to\infty$, the initial Markov chain $\{X_n\topp 0\}_{n\geq 0}$ satisfies $B_n^{(0)}\leq C/n$. \assumpE \begin{Thm}\label{thm:mIRMCMC} Consider the multiple IRMCMC (Algorithm~\ref{algo:mIRMCMC}) for which Assumption~\ref{A:mIRMCMC} holds and $\max_{\ell=1,\dots,m}|w_\ell|_\infty<\infty$. Then for $\ell = 1,\dots,m$, there exists a finite constant $C$ such that \begin{equation}\label{eq:lognm} \nnTV{{\cal L}_{X_n\topp \ell} - \pi_\ell} \leq \frac{C}n\,, \end{equation} and for any bounded measurable function $f$, \begin{equation}\label{eq:varm} {\mathbb E}\bb{\frac1{\sqrt n}\summ i1n\bbpp{f(X_i\topp{\ell})-\pi_\ell(f)}}^2 \leq C.\end{equation} \end{Thm} \begin{proof} Observe that (\ref{eq:lognm}) and (\ref{eq:varm}) for the $\ell$-th chain imply that $B_n^{(\ell)}\leq C/n$, as $n\to\infty$. By Theorem \ref{thm:IRMCMC}, $B_n^{(\ell)}\leq C/n$ implies in turn that (\ref{eq:lognm}) and (\ref{eq:varm}) hold with $\ell$ replaced by $\ell+1$. Since Assumption~\ref{A:mIRMCMC} provides the base case, the result follows by induction. \end{proof} \newpage \section{Equi-Energy Sampler}\label{sec:EE} In this section, we consider the simplified EE sampler described as follows. Recall that the auxiliary chain $\{Y_n\}_{n\geq 0}$ evolves independently of the main chain $\{X_n\}_{n\geq 0}$. \begin{Algo}[Equi-Energy sampler]\label{algo:EE} Fix $\epsilon\in(0,1)$. Start with $X_0 = x_0$ and $Y_0 = y_0$. At each round $n$, generate \[ X_n \sim\left\{ \begin{array}{ll} P(X_{n-1},\cdot) & \mbox{ w.p.~} 1-\epsilon\\ K_{\what\theta_{n-1}}(X_{n-1},\cdot) & \mbox{ w.p.~} \epsilon \end{array} \right.\,, \] where $\what\theta_n = \what\pi_{Y,n}$ is the empirical measure associated with $\{Y_n\}_{n\geq 0}$ and $K_\theta$ is defined by \[ K_\theta(x,A)=\textbf{1}_A(x) + \int_{{\cal X}}\left(1\wedge\frac{\pi(z)\pi_Y(x)}{\pi(x)\pi_Y(z)}\right)\left(\textbf{1}_A(z)-\textbf{1}_A(x)\right)\theta({\rm d} z). \] In other words, for all non-negative functions $h:{\cal X}\to{\mathbb R}$ and $n\in{\mathbb N}$, \begin{equation}\label{dynEE} \PE_{x}\left(h(X_{n+1})\mid \F_{n}\right)=P_{\what\theta_{n}}h(X_{n}) \;\;\mbox{ almost surely,} \end{equation} where for any probability measure $\theta$ on ${\cal X}$, $P_\theta$ is defined as \begin{equation}\label{eq:EE} P_\theta(x,A)=(1-\epsilon)P(x,A)+\epsilon K_\theta(x,A). \end{equation} Recall that we write $\pi({\rm d} x) \equiv \pi(x){\rm d} x$ and similarly for $\pi_Y$, with a slight abuse of notation, and $w(x) = \pi(x)/\pi_Y(x)$. We assume $|w|_\infty<\infty$. \end{Algo} The kernel $K_{\theta_\star}$ is the independent Metropolis kernel with target $\pi$ and proposal $\theta_\star = \pi_Y$. It is well known that under the assumption $|w|_\infty<\infty$ (recall Remark~\ref{rem:w}), the kernel $K_{\theta_\star}$ is uniformly ergodic \citep{mengersen96rates}, and this property is inherited by $P_{\theta_\star}$. That is, there exists $\rho\in (0,1)$ such that \begin{equation}\label{eq:rateconv} \nnTV{P^n_{\theta_\star}(x,\cdot) - \pi(\cdot)}\leq C \rho^n,\;\;\;n\geq 0.
\end{equation} \subsection{Convergence rate of the EE sampler}\label{sec:EErate} We make the following assumptions. \assumpH \item \label{A:subGaussian} There exists a finite universal constant $C$ such that for any measurable function $f:\;{\cal X}\to\mathbb R$ with $|f|_\infty\leq 1$, and all $x>0$, \[ \sup_{n}\PP\left(\left|\frac{1}{\sqrt{n}}\summ j1n \bbpp{f(Y_j) - \pi_Y(f)}\right|>x\right) \leq C\exp\left(-\frac{x^2}{C\sigma^2(f)}\right), \] where $\sigma^2(f)\ensuremath{\stackrel{\mathrm{def}}{=}} \int_{\cal X} f^2(x)\pi_Y({\rm d} x)$. \assumpE \assumpH \item \label{A:w} The function $w:\;{\cal X}\to\mathbb R$ is continuous (with respect to the metric on ${\cal X}$), and \begin{equation}\label{eq:w} \sup_{x\in{\cal X}}\frac{\phi(x)}{w^2(x)}<\infty, \end{equation} where $\phi(x)\ensuremath{\stackrel{\mathrm{def}}{=}} \pi_Y\left(\{z:\; w(z)\leq w(x)\}\right)$. \assumpE \assumpH \item \label{A:continuous} The kernel $P$ is such that if $f:{\cal X}\to\mathbb R$ is continuous, then $Pf$ is also continuous. \assumpE \begin{Rem}Deviation bounds as in~\ref{A:subGaussian} are available under various conditions on the transition kernel. See for example \citet[Proposition 1.2]{cattiaux08deviation}. \end{Rem} \begin{Rem} Assumption~\ref{A:w} is not restrictive. For example, consider ${\cal X} = {\mathbb R}$ and $\pi_Y = \pi^T$ with some $T\in(0,1)$. For the sake of simplicity, we focus on $x\in{\mathbb R}_+$ and define $\phi_+(x) \ensuremath{\stackrel{\mathrm{def}}{=}} \pi_Y(\{z>0:w(z)\leq w(x)\})$. Suppose the density $\pi(x)$ decays as $x^{-\alpha}$ with $\alpha>1$ as $x\to\infty$. Then, $\pi_Y(x)\sim x^{-T\alpha}$ and $w(x)\sim x^{(T-1)\alpha}$. Here and below, we write $a(x)\sim b(x)$ if $\lim_{x\to\infty}a(x)/b(x) = 1$. Assume further that $T\alpha>1$. Then, $\phi_+(x)\sim (T\alpha-1)^{-1} x^{1-T\alpha}$ and \[ \frac{\phi_+(x)}{w^2(x)} \sim\frac1{T\alpha-1}x^{1+2\alpha-3T\alpha}. \] Therefore,~\eqref{eq:w} holds if $T>(1+2\alpha)/(3\alpha)$. \end{Rem} \begin{Thm}\label{thm:EE} Consider the Equi-Energy sampler described above and suppose that Assumptions \ref{A:subGaussian}--\ref{A:continuous} hold. Then, there exists a constant $C$ such that for all continuous functions $f:{\cal X}\to{\mathbb R}$ and $n\in{\mathbb N}$, \begin{equation}\label{eq:thm:EE} \left|{\mathbb E}\left(f(X_n)-\pi(f)\right)\right|\leq \frac{C|f|_\infty}{\sqrt{n}}. \end{equation} \end{Thm} \begin{proof} Fix $n\geq 2$ and $1\leq q\leq n$. Fix $f:\;{\cal X}\to\mathbb R$ with $|f|_\infty= 1$. Then write \[ \PE_xf(X_n)-P_{\theta_\star}^nf(x)=\PE_x\left(P_{\theta_\star}^{n-q}f(X_q)-P_{\theta_\star}^nf(x)\right)-\PE_x\left(P_{\theta_\star}^{n-q}f(X_q)-f(X_n)\right). \] For the first term we can use \eqref{eq:rateconv} to get \[\left|\PE_x\left(P_{\theta_\star}^{n-q}f(X_q)-P_{\theta_\star}^nf(x)\right)\right|\leq C \rho^{n-q},\] for some finite constant $C$ that does not depend on $f$.
For the second term, we write: \begin{eqnarray} & & \PE_x\left(P_{\theta_\star}^{n-q}f(X_q)-f(X_n)\right)\nonumber\\ & = & \PE_x\left[\sum_{j=q}^{n-1}\left(P_{\theta_\star}^{n-j}f(X_j)-P_{\theta_\star}^{n-j-1}f(X_{j+1})\right)\right]\nonumber\\ & = & \sum_{j=q}^{n-1}\PE_x\left[P_{\theta_\star}^{n-j}f(X_j)-\PE_x\left(P_{\theta_\star}^{n-j-1}f(X_{j+1})\mid \F_j\right)\right]\nonumber\\ & = & \sum_{j=q}^{n-1}\PE_x\left[P_{\theta_\star}^{n-j}f(X_j)-P_{\what\theta_j}P_{\theta_\star}^{n-j-1}f(X_{j})\right]\nonumber\\ & = &\sum_{j=q}^{n-1}C_0 \rho^{n-j-1}\PE_x\left[\left(P_{\theta_\star}-P_{\what\theta_j}\right)\zeta_{n,j}(X_j)\right],\label{eq:Ex} \end{eqnarray} where in the last line we write \[ \zeta_{n,j}(x)=\frac{P_{\theta_\star}^{n-j-1}f(x)-\pi(f)}{C_0\rho^{n-j-1}},\;\;x\in{\cal X}\,, \] with $C_0$ and $\rho$ chosen as in~\eqref{eq:rateconv}; note that the constant $\pi(f)$ is annihilated by the difference $P_{\theta_\star}-P_{\what\theta_j}$ of two Markov kernels. As a consequence of~\eqref{eq:rateconv}, $|\zeta_{n,j}|_\infty\leq 1$. It is also continuous by the continuity of $f$ and Assumption~\ref{A:continuous}. To simplify the notation, for any function $g:\;{\cal X}\to \mathbb R$, define \begin{equation}\label{eq:Hg} H_g(x,z)\ensuremath{\stackrel{\mathrm{def}}{=}}\alpha(x,z)\left(g(z)-g(x)\right),\quad x,z\in{\cal X},\end{equation} where \begin{equation}\label{eq:alpha} \alpha(x,z)\ensuremath{\stackrel{\mathrm{def}}{=}} 1\wedge \frac{w(z)}{w(x)}. \end{equation} Thus, we can write \[ P_\theta g(x)-P_{\theta_\star}g(x)=\epsilon\int H_g(x,z)(\theta({\rm d} z)-\theta_\star({\rm d} z)). \] Based on $g:{\cal X}\to{\mathbb R}$, we introduce the class of functions \[ {\cal F}_g\ensuremath{\stackrel{\mathrm{def}}{=}}\ccbb{z\mapsto H_g(x,z):\;x\in{\cal X}}, \] and the empirical process \[{\mathbb G}_n(h)\ensuremath{\stackrel{\mathrm{def}}{=}} \frac{1}{\sqrt{n}}\sum_{j=1}^n \left(h(Y_j)-\pi_Y(h)\right),\;\;\; h\in{\cal F}_g. \] Therefore, the expectation term in~\eqref{eq:Ex} becomes \begin{multline*} {\mathbb E}_x\bb{\bbpp{P_{\theta_\star}-P_{\what\theta_j}}\zeta_{n,j}(X_j)} = \epsilon{\mathbb E}_x\bb{\int H_{\zeta_{n,j}}(X_j,z)(\theta_\star({\rm d} z)-\what\theta_j({\rm d} z))}\\ = -\epsilon{\mathbb E}_x\bb{\frac1j\summ \ell1j H_{\zeta_{n,j}}(X_j,Y_\ell) - \int_{\cal X} H_{\zeta_{n,j}}(X_j,z)\theta_\star({\rm d} z)}\\ = -\frac{\epsilon}{\sqrt{j}}{\mathbb E}_x\bb{{\mathbb G}_j\bbpp{H_{\zeta_{n,j}}(X_j,\cdot)}}\,, \end{multline*} whence \begin{eqnarray*} \left|\PE_x\left(P_{\theta_\star}^{n-q}f(X_q)-f(X_n)\right)\right| & = & \abs{\epsilon\sum_{j=q}^{n-1}\frac{C_0\rho^{n-j-1}}{\sqrt{j}}{\mathbb E}_x\bb{{\mathbb G}_j\bbpp{H_{\zeta_{n,j}}(X_j,\cdot)}}}\\ & \leq & C_0\sum_{j=q}^{n-1}\frac{\rho^{n-j-1}}{\sqrt{j}}\PE_x\left(\sup_{h\in \F_{\zeta_{n,j}}}\left|{\mathbb G}_j(h)\right|\right). \end{eqnarray*} We prove in Lemma \ref{lem2} below that for any continuous function $g:{\cal X}\to\mathbb R$ such that $|g|_\infty\leq 1$, \[ \PE_x\left(\sup_{h\in \F_{g}}\left|{\mathbb G}_n(h)\right|\right)\leq C, \] for some constant $C$ that depends neither on $n$ nor on $g$. We conclude that \[ \left|\PE_x\left(P_{\theta_\star}^{n-q}f(X_q)-f(X_n)\right)\right|\leq C\sum_{j=q}^{n-1}\frac{1}{\sqrt{j}}\rho^{n-j-1}. \] Thus for any $1\leq q\leq n$, \[ \left|\PE_x\left(f(X_n)\right)-\pi(f)\right|\leq C \left\{\rho^n + \rho^{n-q} + \epsilon \sum_{j=q}^{n-1}\frac{\rho^{n-j-1}}{\sqrt{j}}\right\}\leq Cn^{-1/2}, \] by choosing $q=n-\lfloor\frac{-\log n}{2\log\rho}\rfloor$. \end{proof}
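Before stating the key technical ingredient used above, we record a minimal simulation sketch of Algorithm~\ref{algo:EE} on a finite state space. It is our own illustration: the bimodal target, the temperature $T$ and the random-walk proposal are assumed purely for concreteness. The equi-energy jump draws $z$ from the empirical measure of the auxiliary chain and accepts it with probability $1\wedge w(z)/w(x)$, exactly as in the definition of $K_\theta$.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(1)
K, eps, T = 20, 0.3, 0.5
x_grid = np.arange(K)
pi = (np.exp(-0.5 * ((x_grid - 5.0) / 1.5) ** 2)
      + np.exp(-0.5 * ((x_grid - 14.0) / 1.0) ** 2))
pi /= pi.sum()
piY = pi ** T; piY /= piY.sum()     # tempered target of the auxiliary chain
w = pi / piY                        # importance weight, |w|_inf < infinity

def mh_step(x, target):
    """One random-walk Metropolis step on {0, ..., K-1}."""
    z = (x + rng.choice([-1, 1])) % K
    return z if rng.random() < min(1.0, target[z] / target[x]) else x

X, Y, ys, samples = 0, 0, [], []
for n in range(200000):
    if ys and rng.random() < eps:    # equi-energy jump, kernel K_{theta_{n-1}}
        z = ys[rng.integers(len(ys))]   # draw from empirical measure of Y
        if rng.random() < min(1.0, w[z] / w[X]):
            X = z
    else:
        X = mh_step(X, pi)           # local move, kernel P targeting pi
    samples.append(X)
    Y = mh_step(Y, piY); ys.append(Y)   # auxiliary chain, target piY

emp = np.bincount(samples[50000:], minlength=K) / len(samples[50000:])
print("max |empirical - pi| =", np.abs(emp - pi).max())
\end{verbatim}
We rely on the following technical result on the auxiliary chain $\{Y_n\}_{n\geq 0}$.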
\begin{Lem}\label{lem2}Suppose that Assumptions \ref{A:subGaussian} and \ref{A:w} hold, and let $g:{\cal X}\to\mathbb R$ be continuous with $|g|_\infty\leq 1$. Then \[ \PE_x\left(\sup_{h\in \F_g}\left|{\mathbb G}_n(h)\right|\right)\leq C,\] for a constant $C$ that does not depend on $n$. \end{Lem} \begin{proof} Throughout the proof $n\geq 1$ is fixed. Assumption \ref{A:subGaussian} suggests the following metric on $\F_g$: \[\textsf{d}(h_1,h_2)=\sigma(h_1-h_2)=\left(\int_{\cal X}\left(h_1(x)-h_2(x)\right)^2\pi_Y({\rm d} x)\right)^{1/2},\] which has the following properties. For $x_1,x_2\in{\cal X}$, it is easy to check that \begin{equation}\label{dist1} \left |H_g(x_1,z)-H_g(x_2,z)\right|\leq 2\left|\alpha(x_1,z)-\alpha(x_2,z)\right| + \left|g(x_1)-g(x_2)\right|.\end{equation} It follows that \begin{multline}\label{dist2} \textsf{d}\left(H_g(x_1,\cdot),H_g(x_2,\cdot)\right)\\ \leq \sqrt 2\left|g(x_1)-g(x_2)\right| + 2\sqrt2\sqrt{\int\left|\alpha(x_1,z)-\alpha(x_2,z)\right|^2\pi_Y({\rm d} z)}.\end{multline} This implies that the diameter of $\F_g$ is bounded by $\delta(\F_g)=4\sqrt{2}$. It also implies that with respect to $\textsf{d}$, the empirical process $\{{\mathbb G}_n(h),\;h\in \F_g\}$ is separable. Indeed, for $x\in{\cal X}$ arbitrary and $h=H_g(x,\cdot)$, using the Polish assumption, we can find a sequence $x_m\in{\cal X}$ ($x_m$ belonging to a countable subset of ${\cal X}$) such that $x_m\to x$, as $m\to\infty$. Setting $h_m=H_g(x_m,\cdot)$, it follows from (\ref{dist2}) and the continuity of $g$ and $w$ that $h_m\to h$ in $(\F_g,\textsf{d})$, and (\ref{dist1}) easily shows that ${\mathbb G}_n(h_m)-{\mathbb G}_n(h)=n^{-1/2}\sum_{\ell=1}^n\left(H_g(x_m,Y_\ell)-H_g(x,Y_\ell)\right)+\sqrt{n}\,\pi_Y\left(H_g(x,\cdot)-H_g(x_m,\cdot)\right)\to 0$ as $m\to\infty$ for all realizations of $\{Y_1,\ldots,Y_n\}$. For any $h_1,h_2\in\F_{g}$, Assumption \ref{A:subGaussian} implies that for any $u>0$, \[ \PP_x\left(\left|{\mathbb G}_n(h_1)-{\mathbb G}_n(h_2)\right|>u\right)\leq C\exp\left(-\frac{u^2}{c\,\textsf{d}^2(h_1,h_2)}\right). \] We then apply \citet[Corollary 2.2.8]{vandervaart96weak} to conclude that for $h_0\in\F_g$, there exists a constant $C$ independent of $g$ such that \[ \PE_x\left(\sup_{h\in \F_{g}}\left|{\mathbb G}_n(h)\right|\right)\leq \PE_x|{\mathbb G}_n(h_0)| + C\int_0^{\delta(\F_g)} \sqrt{1+\log\textsf{D}(\epsilon,\F_g,\textsf{d})}\;{\rm d}\epsilon<\infty,\] where $\textsf{D}(\epsilon,\F_g,\textsf{d})$ is the packing number of $\F_g$ with respect to $\textsf{d}$. Assumption \ref{A:subGaussian} shows that $\PE_x|{\mathbb G}_n(h_0)|<\infty$. To control the entropy number, we further bound the right-hand side of (\ref{dist2}). Without loss of generality, assume $x_1,x_2\in{\cal X}$ and $w(x_1)<w(x_2)$. If $w(x_1)\vee w(x_2)\leq w(z)$, then $\alpha(x_1,z)-\alpha(x_2,z)=0$. If $w(z)\leq w(x_1)$, then \[ \left|\alpha(x_1,z)-\alpha(x_2,z)\right|^2=\left|\frac{w(z)}{w(x_1)} - \frac{w(z)}{w(x_2)}\right|^2\leq \frac{1}{w(x_1)^2}\left(w(x_2)-w(x_1)\right)^2.
\] If $w(x_1)\leq w(z)\leq w(x_2)$, then \[\left|\alpha(x_1,z)-\alpha(x_2,z)\right|^2=\left|1- \frac{w(z)}{w(x_2)}\right|^2 \leq \frac{1}{w(x_2)^2}\left(w(x_2)-w(x_1)\right)^2.\] Thus \begin{multline*} \int\left|\alpha(x_1,z)-\alpha(x_2,z)\right|^2\pi_Y({\rm d} z)\\ \leq \bbpp{\frac{\phi(x_1)}{w(x_1)^2} + \frac{\phi(x_2)}{w(x_2)^2}}\left(w(x_2)-w(x_1)\right)^2\leq C\left(w(x_2)-w(x_1)\right)^2, \end{multline*} where $\phi$ is as in Assumption~\ref{A:w}, from which the last inequality follows. Together with (\ref{dist2}), we conclude from this bound that \begin{equation}\label{lip} \textsf{d}\left(H_g(x_1,\cdot),H_g(x_2,\cdot)\right) \leq C \left(\left|g(x_1)-g(x_2)\right|+ \left|w(x_2)-w(x_1)\right|\right). \end{equation} Since $|g|_\infty\leq 1$ and $w(x)\in [0,|w|_\infty]$, this implies that the $\epsilon$-packing number of $(\F_g,\textsf{d})$ is at most of order $\epsilon^{-2}$, so that $\int_0^{\delta(\F_g)} \sqrt{1+\log\textsf{D}(\epsilon,\F_g,\textsf{d})}\;{\rm d}\epsilon\leq C \int_0^{\delta(\F_g)}\sqrt{1+\log(1/\epsilon)}\,{\rm d}\epsilon<\infty$. This proves the lemma. \end{proof} \subsection{Connection with Parallel Tempering}\label{sec:mEE} Our results suggest that the EE sampler mixes relatively slowly. A plausible reason for this slow mixing is the dependence on the entire sample path $\{Y_k\}_{0\leq k\leq n}$. The EE sampler is closely related to the Parallel Tempering (PT) algorithm of \citet{geyer91markov}, which suggests that it might be possible to exploit this connection by deriving versions of the EE sampler with better mixing properties. Like the EE sampler, a 2-chain PT generates a stochastic process $\{(X_n,Y_n)\}_{n\geq 0}$ where, with probability $1-\epsilon$, $X_n$ is generated from $P(X_{n-1},\cdot)$ and, with probability $\epsilon$, one proposes to swap the two chains. Thus PT is closely related to an EE-type algorithm in which the empirical measure $\what\pi_{Y,n}$ is replaced by the Dirac measure $\delta_{Y_n}$. However, we show that in general this new algorithm does not maintain the correct stationary distribution. We hope that the discussion in this section will be helpful in conceptualizing new adaptive algorithms in the future. The modified EE sampler is as follows. Let $\{Y_n\}_{n\geq 0}$ be the auxiliary chain with transition kernel $P_Y$ and stationary distribution $\pi_Y$. Let $\{X_n\}_{n\geq 0}$ be a chain satisfying the following assumption: for all continuous and bounded functions $f$, \[ {\mathbb E}[f(X_{n+1})\mid X_0,\dots,X_n,Y_0,\dots,Y_n] = P_{\delta_{Y_n}}f(X_n), \quad n\in{\mathbb N}, \] where $P_\theta$ is as in~\eqref{eq:EE}, and denote the stationary distribution of $P$ by $\pi_X$. \begin{Rem} The difference from the EE sampler is that we replace $\what\pi_{Y,n}$ by $\delta_{Y_n}$. If, when $X_{n+1}$ moves to $Y_n$, we also make $Y_{n+1}$ move to $X_n$, then we allow {\it swaps} between the two chains. Such swaps are in the spirit of parallel tempering algorithms (see e.g.~\citet{geyer91markov}). \end{Rem} A nice property of this algorithm is that $\{(X_n,Y_n)\}_{n\geq 0}$ is a Markov chain. Indeed, it has transition kernel \begin{equation}\label{eq:MEE} P_{X,Y}(x,y,{\rm d} z,{\rm d} w) = P_{\delta_y}(x,{\rm d} z)P_Y(y,{\rm d} w)\,. \end{equation} This Markov chain may not have the desired stationary distribution. Let $\pi_{X,Y}$ denote the stationary distribution and let $\pi_{X,Y}\topp i, i=1,2$, denote its two marginal distributions.
Naturally, we wish $\pi_{X,Y}\topp 1 = \pi_X$ and $\pi_{X,Y}\topp 2 = \pi_Y$. By construction, the latter identity is always true. However, the former does not always hold. Since $P_\theta = (1-\epsilon)P + \epsilon K_\theta$ and $P(x,{\rm d} z)P_Y(y,{\rm d} w)$ has stationary distribution $\pi_X({\rm d} z)\pi_Y({\rm d} w)$, instead of~\eqref{eq:MEE} it suffices to focus on the transition kernel \begin{equation}\label{eq:MEE1} P_{X,Y}(x,y,{\rm d} z,{\rm d} w) = K_{\delta_y}(x,{\rm d} z)P_Y(y,{\rm d} w)\,. \end{equation} Consider the simple case when both chains take values in $\{\pm1\}$. Let the auxiliary chain have the following transition matrix and stationary distribution: \[ P_Y = \left( \begin{array}{cc} 1- a & a\\ b & 1-b \end{array} \right)\quad\mbox{ and }\quad \pi_Y = \bbpp{\frac b{a+b}, \frac a{a+b}}\,. \] Recall that in this case, \[ K_{\delta_y}(x,z) = \alpha(x,y)\indd{z=y} + (1-\alpha(x,y))\indd {z=x}\,, \] with \[ \alpha(x,y) = 1\wedge \frac{\pi_X(y)}{\pi_X(x)}\frac{\pi_Y(x)}{\pi_Y(y)}\,. \] Write $c = \alpha(1,-1)$ and $d= \alpha(-1,1)$. Then, one can write the transition matrix of $P_{X,Y}$ in \eqref{eq:MEE1} as follows: \[ \begin{array}{l|llll} & (1,1) & (1,-1) & (-1,1)& (-1,-1)\\ \hline (1,1) & 1-a & a & 0 & 0\\ (1,-1) & (1-c)b & (1-c)(1-b) & cb & c(1-b)\\ (-1,1) & d(1-a) & da & (1-d)(1-a) & (1-d)a\\ (-1,-1) & 0 & 0 & b & 1-b \end{array} \] For example, the table reads as \[ P_{X,Y}(1,-1,1,-1) = K_{\delta_{-1}}(1,1)\,P_Y(-1,-1) = (1-c)(1-b)\,. \] We solve $\pi_{X,Y}P_{X,Y} = \pi_{X,Y}$ and obtain \[ \pi_{X,Y} = \frac1{(a+b)((1-a-b)cd + ac + bd)} \left( \begin{array}{c} bd(b+(1-a-b)c) \\ abd\\ abc\\ ac(a+(1-a-b)d) \end{array} \right)^\top\,. \] To see that $\pi_{X,Y}\topp1$ does not always equal $\pi_X$, take for example $a=b=1/3$ and $\pi_X = (2/3,1/3)$. In this case, $c = 1/2$, $d=1$, and $\pi_{X,Y} = (3/8,1/4,1/8,1/4)$, whence $\pi_{X,Y}\topp1 = (5/8,3/8)\neq \pi_X$.
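The arithmetic above is easily verified numerically. The following minimal Python sketch, which we add purely as an illustration, builds the $4\times 4$ kernel of \eqref{eq:MEE1} for $a=b=1/3$ and $\pi_X=(2/3,1/3)$, computes its stationary law, and confirms that the $X$-marginal differs from $\pi_X$:
\begin{verbatim}
import numpy as np

a = b = 1/3
piX = {1: 2/3, -1: 1/3}
piY = {1: b/(a+b), -1: a/(a+b)}
PY = {(1, 1): 1-a, (1, -1): a, (-1, 1): b, (-1, -1): 1-b}

alpha = lambda x, y: min(1.0, piX[y]*piY[x] / (piX[x]*piY[y]))
def K(x, y, z):                              # K_{delta_y}(x, z)
    return alpha(x, y)*(z == y) + (1 - alpha(x, y))*(z == x)

states = [(1, 1), (1, -1), (-1, 1), (-1, -1)]
P = np.array([[K(x, y, z)*PY[(y, w)] for (z, w) in states]
              for (x, y) in states])

# stationary law: left Perron eigenvector of P
vals, vecs = np.linalg.eig(P.T)
st = np.real(vecs[:, np.argmax(np.real(vals))]); st /= st.sum()
print("pi_{X,Y}   =", np.round(st, 4))       # (3/8, 1/4, 1/8, 1/4)
print("X-marginal =", st[0] + st[1], st[2] + st[3])  # (5/8, 3/8) != pi_X
\end{verbatim}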
\section{Introduction} \label{sec:Intro} Exchange interactions control the magnetic order and properties of a vast number of materials \cite{White2006Dec} and lead to many fascinating phenomena, such as various types of the Kondo effect \cite{Kondo,NozieresBlandin,Pustilnik_Glazman}. Double quantum dots (DQDs), and multi-impurity systems in general, constitute a convenient and controllable playground in which a number of different exchange mechanisms compete with each other to shape the ground state of the system. \emph{Local exchange} between the spin of a quantum dot (QD) and the spins of conduction band electrons gives rise to the Kondo effect \cite{Kondo,Hewson_book}. \emph{Direct exchange} arriving with an additional side-coupled QD may destroy it or lead to two-stage Kondo screening \cite{Pustilnik_Glazman,Cornaglia,Granger,ZitkoBonca,ZitkoPRB2010,Ferreira}. In a geometry where the two QDs contact the same lead, conduction band electrons mediate the \emph{RKKY exchange} \cite{RK,K,Y}. The RKKY interaction competes with the Kondo effect and leads to a quantum phase transition of a still debated nature \cite{Doniach,Jones,Affleck,Bork,Neel,KondoRKKYexp,Hans,Hans2,Fabian}. Moreover, in DQDs coupled in series also \emph{superexchange} can alter the Kondo physics significantly \cite{Zitko_2QDEx,Sela}. Recently, hybrid quantum devices, in which the interplay between various magnetic correlations and superconductivity (SC) plays an important role, have become an important direction of research \cite{hybridQDs,SCspintronics}. In particular, chains of magnetic atoms on a SC surface have been shown to host self-organized Majorana quasi-particles and exotic spin textures \cite{Braunecker,Klinovaja,Vazifeh,Yazdani}, while hybrid DQD structures have been used to split Cooper pairs coherently into two entangled electrons propagating to separate normal leads \cite{CPS1,CPS2,CPS4,CPS5,CPS9}. The latter is possible due to non-local (\emph{crossed}) Andreev reflections (CARs), in which each electron of a Cooper pair tunnels into a different QD, and subsequently into the attached lead. Such processes give rise to an exchange mechanism \cite{Yao}, which we henceforth refer to as \emph{the CAR exchange}, and which can greatly modify the low-temperature transport behavior of correlated hybrid nanostructures. The CAR exchange may be seen as an RKKY-like interaction between two nearby impurities on a SC surface \cite{Yao}. The effect can be understood as a consequence of the spin-dependent hybridization of the Yu-Shiba-Rusinov (YSR) states \cite{Yu,Shiba,Rusinov} in the SC contact, caused both by the overlap of their wave functions and by their coupling to the Cooper-pair condensate. This process is most effective when the YSR states are close to the middle of the SC gap, {\it e.g.} in the YSR-screened phase \cite{YSRscreening}. The mechanism discussed here is essentially the same, yet in the considered regime it can be understood perturbatively, without referring to YSR states, as a consequence of the non-local pairing induced by the SC electrode. In particular, the presence of YSR bound states close to the Fermi level is not necessary for significant consequences for the Kondo physics, as long as some inter-dot pairing is present.
The proximity of a SC induces pairing in QDs \cite{RozhkovArovas,Buitelaar} and tends to suppress the Kondo effect if the superconducting energy gap $2\Delta$ becomes larger than the relevant Kondo temperature $T_K$ \cite{Buitelaar2002Dec,adatomsSC,Kondo_vs_SC1,Kondo_vs_SC2,Zitko_Kondo-Andreev,Zitko_S-QD-N,IW_Sau,YSRscreening}. Moreover, the strength of the SC pairing can greatly affect the Kondo physics in the sub-gap transport regime: for QDs attached to SC and normal contacts it can enhance the Kondo effect \cite{DomanskiIW,KWIW,part1}, while for DQD-based Cooper pair splitters it tends to suppress both the $\mathrm{SU}(2)$ and $\mathrm{SU}(4)$ Kondo effects \cite{IW_Kacper}. Our main result is that the non-local pairing induced by the superconducting proximity effect, which gives rise to the CAR exchange, can be the sole cause of Kondo screening. Moreover, relatively small values of the coupling to the SC, $\GS{}\ll U$, are sufficient for the effect to occur. This is in contrast to the DQD system considered in Ref.~\cite{part1}, where only one of the quantum dots is proximized, such that CAR exchange cannot arise, and the Kondo physics becomes qualitatively affected only for $\GS{}\sim U/2$.% \begin{figure}[bt] \centering \includegraphics[width=1\linewidth]{Fig1.png} \caption{ (a) Schematic of the considered system. The left/right (L/R) lead is coupled to the first quantum dot (QD1), while the superconductor is attached to both QD1 and QD2. (b)-(d) illustrate an example of direct spin exchange: a spin-up electron from the initial state (b) hops to the other QD (c) and a spin-down electron hops back (d). Note that the final state is in fact the same singlet state, only with an opposite sign. (e)-(g) show an example of a process contributing to crossed Andreev reflection (CAR) exchange: a Cooper pair from the SC approaches the DQD (e) and two singlets of the same charge are formed (f), before the Cooper pair is re-emitted (g). (h)-(j) present an example of an RKKY process: an electron scattered off one QD (h) mediates the spin exchange towards the other (i), before it is finally scattered off there, too (j). } \label{fig:system} \end{figure} In this paper we discuss the CAR-induced Kondo screening in a setup comprising a T-shaped DQD with normal and superconducting contacts, see \fig{system}(a). We note that despite the quite generic character of the CAR exchange, and its presence in systems containing at least two localized electrons coupled close to each other to the same SC bath, to the best of our knowledge CAR-induced screening has hardly been identified in previous studies \cite{CPS1,CPS2,CPS4,CPS5,CPS9,IW_Kacper,IW_Sau,Zitko_Josephson,Zitko_S2QD,Martinek2017}. In the system proposed here [\fig{system}(a)], its presence is evident. Moreover, the CAR exchange magnitude can be directly related to the relevant energy scales, such as the Kondo temperature, which provides a fingerprint for a quantitative experimental verification of our predictions. The paper is organized as follows. In \Sec{model} we describe the considered system and present the model we use to study it. In \Sec{scales} the relevant energy scales are estimated, to make the discussion of the main results concerning the CAR-induced Kondo effect in \Sec{main} clearer. Finally, the influence of effects neglected in \Sec{main} is analyzed in the subsequent sections, including the interplay of the CAR exchange with the RKKY interaction (\Sec{RKKY}), particle-hole asymmetry (\Sec{asym}), coupling asymmetry (\Sec{x}) and a reduced efficiency of the CAR coupling (\Sec{coef}).
In summary, the effects discussed in \Sec{main} remain qualitatively valid in all these cases. The paper is concluded in \Sec{conclusions}. \section{Model} \label{sec:model} The schematic of the considered system is depicted in \fig{system}(a). It contains two QDs attached to a common SC lead. Only one of them (QD1) is directly attached to the left (L) and right (R) normal leads, while the other dot (QD2) is coupled only through QD1. The SC is modeled by the BCS Hamiltonian, $H_{\mathrm{S}}=\sum_{\mathbf{k}\sigma}\xi_{\mathbf{k}}a_{\mathbf{k}\sigma}^{\dag}a_{\mathbf{k}\sigma}-\Delta\sum_{\mathbf{k}}(a^\dag_{\mathbf{k}\uparrow}a_{-\mathbf{k}\downarrow}^{\dag}+a_{-\mathbf{k}\downarrow}a_{\mathbf{k}\uparrow})$, with energy dispersion $\xi_{\mathbf{k}}$, energy gap $2\Delta>0$, and $a_{\mathbf{k}\sigma}$ the annihilation operator of an electron possessing spin $\sigma$ and momentum $\mathbf{k}$. The coupling between the SC and the QDs is described by the hopping Hamiltonian $H_{\mathrm{TS}}=\sum_{i\mathbf{k}\sigma}v_{\mathrm{S}i}(d^\dagger_{i\sigma}a^{}_{\mathbf{k}\sigma}+h.c.)$, with $d^\dagger_{i\sigma}$ creating a spin-$\sigma$ electron at QD$i$. The matrix element $v_{\mathrm{S}i}$ and the normalized density of states of the SC in the normal state, $\rho_{\rm S}$, contribute to the coupling of QD$i$ to the SC electrode as $\GS{i} = \pi \rho_{\rm S} |v_{{\rm S}i}|^2$. We focus on the sub-gap regime and therefore integrate out the SC degrees of freedom lying outside the energy gap \cite{RozhkovArovas}. This gives rise to the following effective Hamiltonian, $H_{\mathrm{eff}}=H_{\mathrm{SDQD}}+H_{\rm L}+H_{\rm R}+H_{\rm T}$, where \begin{eqnarray} H_{\rm SDQD} & = & \sum_{i\sigma} \varepsilon_{i} n_{i\sigma} +\sum_{i} U n_{i\uparrow} n_{i\downarrow} +U' (n_1-1)(n_2-1) \nonumber\\ &+&\sum_\sigma t(d^\dagger_{1\sigma}d^{}_{2\sigma} + h.c.) +J \vec{S}_1\!\cdot\!\vec{S}_2 \nonumber\\ &+&\sum_{i} \!\!\left[ \Gamma_{{\rm S}i} (d^\dagger_{i\uparrow} d^\dagger_{i\downarrow} \!+\! h.c.) +\Gamma_{\rm SX} (d^\dagger_{i\uparrow} d^\dagger_{\bar{i}\downarrow} \!+\! h.c.) \right] \label{H_DQD} \end{eqnarray} is the Hamiltonian of the SC-proximized DQD \cite{IW_Kacper,Walldorf2018Feb}, with QD$i$ energy level $\varepsilon_i$, inter-site (intra-site) Coulomb interactions $U'$ ($U$), inter-dot hopping $t$, and CAR coupling $\GS{\rm X}$. Here, $n_{i\sigma}=d^\dagger_{i\sigma}d^{}_{i\sigma}$ denotes the electron number operator at QD$i$, $n_i=n_{i\uparrow}+n_{i\downarrow}$, and $\bar{i}\equiv 3-i$. Our model is strictly valid in the regime where $\Delta$ is the largest energy scale. Nevertheless, all the discussed phenomena are present in a full model for energies smaller than the SC gap. Moreover, by eliminating other consequences of the presence of the SC lead, our model pinpoints the fact that the non-local pairing is sufficient for the occurrence of the CAR exchange. The presence of out-gap states would result mainly in an additional broadening of the DQD energy levels, changing the relevant Kondo temperatures. We note that the procedure of integrating out the out-gap states neglects the RKKY interaction mediated by the SC lead and other possible indirect exchange mechanisms% \footnote{ Note that by RKKY interaction we mean only the effective exchange that arises due to multiple scattering of a single electron or hole, see \fig{system}(h)-(j). Other mechanisms leading to the total indirect exchange are considered separately.
In particular, in the large-gap limit the exchange described in Ref.~\cite{Yao} in fact reduces to the CAR exchange, and an additional antiferromagnetic contribution would arise for a finite gap. }. To compensate for this, we explicitly include the Heisenberg term $J \vec{S}_1\!\cdot\!\vec{S}_2$ in $H_{\rm SDQD}$, with $\vec{S}_i$ denoting the spin operator of QD$i$ and the Heisenberg coupling $J$ substituting for the genuine RKKY exchange. The normal leads are treated as reservoirs of noninteracting electrons, $H_{r}=\sum_{\mathbf{k}\sigma}\varepsilon_{r\mathbf{k}}c^\dagger_{r\mathbf{k}\sigma}c^{}_{r\mathbf{k}\sigma}$, where $c^{}_{r\mathbf{k}\sigma}$ annihilates an electron of spin $\sigma$ and momentum $\mathbf{k}$ in lead $r$ ($r={\rm L,R}$) with the corresponding energy $\varepsilon_{r\mathbf{k}}$. The tunneling Hamiltonian reads $H_{\rm T} = \sum_{r\mathbf{k}\sigma} v_{r} (d^\dagger_{1\sigma}c^{}_{r\mathbf{k}\sigma} + h.c.)$, giving rise to a coupling between lead $r$ and QD1 of strength $\Gamma_r = \pi \rho_r |v_r|^2$, with $\rho_r$ the normalized density of states of lead $r$ and $v_r$ the local hopping matrix element, assumed momentum-independent. We consider the wide-band limit, assuming a constant $\Gamma_r=\Gamma/2$ within the cutoff $\pm D = \pm 2U$ around the Fermi level. For a thorough analysis of the CAR exchange mechanism and its consequences for transport, we determine the linear conductance between the two normal leads from \begin{equation} G = \frac{2e^2}{h} \pi \Gamma \int \left[ -\frac{\partial f_T}{\partial\omega} \right] \mathcal{A}(\omega) {\rm d} \omega , \label{G} \end{equation} where $f_T$ is the Fermi function at temperature $T$, while $\mathcal{A}(\omega)$ denotes the normalized local spectral density of QD1 \cite{fn1}. Henceforth, unless stated otherwise, we assume a maximal CAR coupling, $\GS{\rm X} = \sqrt{\GS{1}\GS{2}}$ \cite{IW_Kacper,Walldorf2018Feb}, with $\GS{1}=\GS{2}=\GS{}$, and consider the DQD tuned to the particle-hole symmetry point, $\varepsilon_1=\varepsilon_2=-U/2$. However, these assumptions are not crucial for the results presented here, as discussed in Secs.~\ref{sec:asym}-\ref{sec:coef}. \section{Estimation of relevant energy scales} \label{sec:scales} Since we analyze a relatively complex system, let us build up an understanding of its behavior starting from the case of a single QD between two normal-metallic leads, which can be obtained in our model by setting $t=\GS{}=J=U'=0$. Then, the conductance as a function of temperature, $G(T)$, grows below the Kondo temperature $T_K$ and reaches its maximum for $T\to 0$, $G(T\!=\!0)=G_{\rm max}$. At the particle-hole symmetry point, unitary transmission is achieved, $G_{\rm max}= G_0 = 2e^2/h$; see the short-dashed line in \fig{G-T}(a). An experimentally relevant definition of $T_K$ is that $G(T\!=\!T_K)=G_{\rm max}/2$. $T_K$ depends exponentially on the local exchange $J_0 = 8\Gamma / (\pi \rho U)$ and is approximated by $T_K \approx D \exp[-1/(\rho J_0)]$ \cite{Hewson_book}. The presence of a second, side-coupled QD, $t,U'>0$, significantly enriches the physics of the system by introducing direct exchange between the QDs, see \fig{system}(b-d). In general, the effective inter-dot exchange can be defined as the energy difference between the triplet and singlet states of the isolated DQD, $J^{\mathrm{eff}} = E_{S=1} - E_{\rm GS}$. Unless $U$ becomes very large, superexchange can be neglected \cite{Zitko_2QDEx} and $J^{\mathrm{eff}}$ is determined by \emph{direct exchange}, $J^{\mathrm{eff}}\approx 4t^2/(U-U')>0$.
When the hopping $t$ is tuned small \cite{CPS1}, one can expect $J^{\mathrm{eff}}\lesssim T_K$, which implies two-stage Kondo screening \cite{Pustilnik_Glazman,Cornaglia}. Then, for $T \ll T_K$, the local spectral density of QD1 serves as a band of width $\sim T_K$ for QD2. The spin of an electron occupying QD2 experiences Kondo screening below the associated Kondo temperature \begin{equation} T^* = a T_K \exp(- b T_K / J_{\rm eff}) \label{Tstar} \end{equation} with $a$ and $b$ constants of order unity \cite{Pustilnik_Glazman,Cornaglia}. This is reflected in the conductance, which drops to $0$ with lowering $T$, maintaining the characteristic Fermi-liquid $G\sim T^2$ dependence \cite{Cornaglia}; see the curves indicated with squares in \fig{G-T}(a). Similarly to $T_K$, the experimentally relevant definition of $T^*$ is that $G(T\!=\!T^*) = G_{\rm max}/2$. Even at the particle-hole symmetry point $G_{\rm max} < G_0$, because the single-QD strong-coupling fixed point is unstable in the presence of QD2 and $G(T)$ does not reach $G_0$ exactly before it starts to decrease. The proximity of the SC gives rise to two further exchange mechanisms that determine the system's behavior. First of all, the (conventional) \emph{RKKY interaction} appears, $J \sim \GS{}^2$ \cite{RK,K,Y}. Moreover, the \emph{CAR exchange} emerges as a consequence of finite $\GS{}$ \cite{Yao}. It can be understood on the basis of perturbation theory as follows. A DQD in the inter-dot singlet state may absorb and re-emit a Cooper pair approaching from the SC; see \fig{system}(e)-(g). As a second-order process, this reduces the energy of the singlet, which is the ground state of the isolated DQD. A similar process is not possible in the triplet state due to spin conservation. Therefore, the singlet-triplet energy splitting $J^{\mathrm{eff}}$ is increased (or generated for $t=J=0$). More precisely, the leading ($2$nd-order in $t$ and $\GS{}$) terms in the total exchange are \begin{equation} J^{\mathrm{eff}} \approx J + \frac{4t^2}{U-U'+\frac{3}{4}J} + \frac{4\GS{}^2}{U+U'+\frac{3}{4}J}. \label{Jeff} \end{equation} Using this estimate, one can predict $T^*$ for finite $\GS{}$, $t$ and $J$ with \eq{Tstar}. Evidently, of the three contributions, corresponding to (i) the RKKY interaction, (ii) the direct exchange and (iii) the CAR exchange, only the first may have a negative (ferromagnetic) sign. The two other contributions are always anti-ferromagnetic in nature. A more accurate expression for $J^{\mathrm{eff}}$ is derived in Appendix~\ref{sec:downfolding} [see \eq{A_J}] by the Hamiltonian down-folding procedure; the relevant terms differ by factors important only for large $\GS{}/U$. Finally, it seems worth stressing that normal leads are not necessary for the CAR exchange to occur. At least one of them is, however, indispensable for the Kondo screening, and two symmetrically coupled normal leads allow for a measurement of the normal conductance. It is also noteworthy that inter-dot Coulomb interactions decrease the energy of the intermediate states contributing to the direct exchange [\fig{system}(c)], while increasing the energy of the intermediate states causing the CAR exchange [\fig{system}(f)]. This results in a different dependence of the corresponding terms in \eq{Jeff} on $U'$. As can be seen in \figs{G-T}(b) and \ref{fig:G-T}(c), this has a significant effect on the actual values of $T^*$.
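Equation~(\ref{Jeff}) is straightforward to check against exact diagonalization of the isolated DQD, i.e.\ Eq.~(\ref{H_DQD}) with the leads decoupled. The following minimal Python sketch is our own illustration (it is not the NRG calculation employed below): it builds the $16$-dimensional Fock space via a Jordan-Wigner construction, assumes the maximal CAR coupling $\GS{\rm X}=\GS{}$ and the particle-hole symmetry point, and compares the exact singlet-triplet splitting with the perturbative estimate. All parameter values are assumed purely for illustration.
\begin{verbatim}
import numpy as np

# Exact diagonalization of H_SDQD (leads decoupled):
# J_eff = E_{S=1} - E_GS vs. the perturbative formula (Jeff).
# Jordan-Wigner mode order: (1,up), (1,dn), (2,up), (2,dn).
I2 = np.eye(2)
Z = np.diag([1.0, -1.0])
CD = np.array([[0.0, 0.0], [1.0, 0.0]])      # single-mode creation operator

def cdag(j, m=4):
    ops = [Z]*j + [CD] + [I2]*(m - j - 1)
    out = np.array([[1.0]])
    for o in ops:
        out = np.kron(out, o)
    return out

C = [cdag(j) for j in range(4)]              # creation operators
A = [c.T for c in C]                         # annihilation operators
N = [C[j] @ A[j] for j in range(4)]          # number operators
E16 = np.eye(16)

def H_dqd(eps, U, Up, t, J, GS, GSX):
    u1, d1, u2, d2 = range(4)
    H = eps*(N[u1] + N[d1] + N[u2] + N[d2])
    H += U*(N[u1] @ N[d1] + N[u2] @ N[d2])
    H += Up*(N[u1] + N[d1] - E16) @ (N[u2] + N[d2] - E16)
    hop = C[u1] @ A[u2] + C[d1] @ A[d2]      # t sum_s d+_{1s} d_{2s} + h.c.
    Sp1, Sp2 = C[u1] @ A[d1], C[u2] @ A[d2]  # spin-raising operators
    Sz1, Sz2 = 0.5*(N[u1] - N[d1]), 0.5*(N[u2] - N[d2])
    H += t*(hop + hop.T)
    H += J*(0.5*(Sp1 @ Sp2.T + Sp1.T @ Sp2) + Sz1 @ Sz2)
    pair = GS*(C[u1] @ C[d1] + C[u2] @ C[d2])     # local pairing
    pair += GSX*(C[u1] @ C[d2] + C[u2] @ C[d1])   # crossed (CAR) pairing
    return H + pair + pair.T

def j_eff(U, Up, t, J, GS):
    H = H_dqd(-U/2, U, Up, t, J, GS, GS)     # maximal CAR, PHS point
    occ = lambda b, j: (b >> (3 - j)) & 1
    sz = np.array([0.5*(occ(b, 0) - occ(b, 1) + occ(b, 2) - occ(b, 3))
                   for b in range(16)])
    E0 = np.linalg.eigvalsh(H).min()         # ground state (singlet)
    blk = np.ix_(sz == 1.0, sz == 1.0)       # S_z = +1 sector: triplet
    return np.linalg.eigvalsh(H[blk]).min() - E0

U, Up, J = 1.0, 0.1, 0.0
for t, GS in [(0.05, 0.0), (0.0, 0.05), (0.05, 0.05)]:
    pert = J + 4*t**2/(U - Up + 0.75*J) + 4*GS**2/(U + Up + 0.75*J)
    print(f"t={t}, GS={GS}: exact={j_eff(U, Up, t, J, GS):.6f},"
          f" perturbative={pert:.6f}")
\end{verbatim}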
\begin{figure} \includegraphics[width=1\linewidth]{Fig2.pdf} \caption{(a) Linear conductance $G$ as a function of $T$ calculated for $\varepsilon_1=\varepsilon_2=-U/2$, $\Gamma=U/5$, $U'=U/10$ and different situations, as indicated. The quantity $\xi\equiv\sqrt{\GS{}^2+t^2}$ is fixed for different curves drawn with the same dashing style. Note the logarithmic scale on both axes. % (b) Points show $T^*/T_K$ calculated by NRG from the curves in panel (a). Lines present the fit to \eq{Tstar} with $J^{\mathrm{eff}}$ obtained from \eq{Jeff}. % (c) The same as (b), only for $U'=0$. % (d) and (e) show the residual conductance $G_{\mathrm{min}} \equiv G(T \!=\! 0)$ as a function of $\GS{}$ for $t=0$ (denoted ``CAR'') and $t=\GS{}$ (denoted ``Both''). The dotted line is a guide for the eyes. $U'=U/10$ in (b) and (d) and $U'=0$ in (c) and (e). } \label{fig:G-T} \end{figure} \section{CAR exchange and Kondo effect} \label{sec:main} To verify \eqs{Tstar}-(\ref{Jeff}) we calculate $G$ using the accurate full density matrix numerical renormalization group (NRG) technique \cite{WilsonNRG,Weichselbaum,FlexibleDMNRG,fn2}. We compare the $U'=0$ case with the experimentally relevant value $U'=U/10$ \cite{Keller2013Dec}. While for two close adatoms on a SC surface RKKY interactions may lead to prominent consequences \cite{Klinovaja}, the conventional ({\it i.e.} non-CAR) contribution should vanish rapidly when the inter-impurity distance $r$ exceeds a few lattice constants \cite{RKKYrange,SC_RKKY}. Meanwhile, the CAR exchange may remain significant for $r$ of the order of the coherence length of the SC contact \cite{Yao}. Therefore, we first neglect the conventional RKKY coupling and analyze its consequences in Sec.~\ref{sec:RKKY}. The main results are presented in \fig{G-T}(a), showing the temperature dependence of $G$ under different circumstances. For reference, results for $\GS{}=0$ are shown, exhibiting the two-stage Kondo effect caused by the \emph{direct} exchange mechanism. As can be seen in \figs{G-T}(b) and \ref{fig:G-T}(c), an excellent agreement between $T^*$ found from the NRG calculations and \eq{Tstar} is obtained with $a=0.42$ and $b=1.51$, the same for both $U'=0$ and $U'=U/10$. Note, however, that $J^{\mathrm{eff}}$ differs in these two cases, cf. \eq{Jeff}, and $U'$ leads to an increase of $T^*$. Furthermore, for $t=0$ and $\GS{}>0$ the two-stage Kondo effect caused solely by the \emph{CAR exchange} is present; see \fig{G-T}(a). Experimentally, this situation corresponds to a distance between the two QDs smaller than the superconducting coherence length, but large enough for the exponentially suppressed direct hopping to be negligible. While intuitively one could expect pairing to compete with any kind of magnetic ordering, the Kondo screening induced by the CAR exchange is a beautiful example of superconductivity in fact leading to magnetic order, namely the formation of the Kondo singlet. This CAR-exchange-mediated Kondo screening is our main finding. For such screening, \eq{Tstar} is still fulfilled with very similar parameters, $a=0.37$ ($a=0.35$) and $b=1.51$ ($b=1.50$) for $U'=0$ ($U'=U/10$), respectively; see \figs{G-T}(b-c). Moreover, as follows from \eq{Jeff}, $U'$ reduces the CAR exchange and therefore diminishes $T^*$. For the same values of $J^{\mathrm{eff}}$, the dependence of $G(T)$ for $t=0$ and $\GS{}>0$ is hardly different from that for $\GS{}=0$ and $t>0$ for $T\geq T^*$ (results not shown).
However, $G(T)$ saturates at a residual value $G_{\mathrm{min}}$ as $T\to 0$ only for finite $\GS{}$, which at particle-hole symmetry makes $G_{\mathrm{min}}$ the hallmark of the SC proximity and the corresponding CAR exchange processes. From the numerical results, one can estimate it as \begin{equation} G_{\mathrm{min}} = \frac{e^2}{h} \cdot c \, \frac{\GS{}^2}{U^2} \qquad {\scriptstyle (\GS{1}=\GS{2}=\GS{})} , \label{Gmin} \end{equation} with $c\approx 2.25$, barely depending on $U'$ and becoming smaller for $t>0$. This is illustrated in \figs{G-T}(d-e), where the dotted line corresponds to \eq{Gmin} with $c=2.25$. Lastly, in \fig{G-T}(a) we also present curves obtained for $t=\GS{}$, chosen such that the quantity $\xi=\sqrt{t^2+\GS{}^2}$ remains the same in all the cases. This illustrates what happens when \emph{both} (direct and CAR) exchange interactions are present. \fig{G-T}(c) clearly shows that $T^*$ remains practically unaltered for $U'=0$. The comparison with \fig{G-T}(b) shows that in this case it practically does not depend on $U'$: the enhancement of the direct exchange is compensated by the decrease of the CAR one. On the contrary, $G_{\mathrm{min}}$ decreases for larger $t$ below the estimate given by Eq.~(\ref{Gmin}), as can be seen in \figs{G-T}(d-e). While analyzing the results concerning $G_{\mathrm{min}}(\GS{})$ plotted in \figs{G-T}(d-e), one needs to keep in mind that $G_{\mathrm{min}}$ is obtained under deeply cryogenic conditions. To illustrate this better, $G(\GS{})$ obtained for $t=0$ and $T=10^{-6}U$ is plotted with a solid line in \fig{3}. Clearly, for weak $\GS{}$ the system exhibits a rather conventional (single-stage) Kondo effect with $G=G_{\mathrm{max}}\approx 2e^2/h$, while QD2 is effectively decoupled ($G_{\mathrm{max}}<2e^2/h$ in the proximity of the SC lead \cite{KWIW}). Only for larger values of $\GS{}$ does the CAR exchange become strong enough that $T^*>T$, and the dependence $G(\GS{})$ continuously approaches the $T=0$ limit estimated by \eq{Gmin} and presented in \figs{G-T}(d-e). \section{CAR-RKKY competition} \label{sec:RKKY} \begin{figure} \includegraphics[width=0.98\linewidth]{Fig3.pdf} \caption{Linear conductance $G$ vs. $\GS{}$ calculated for $t=0$, $\Gamma=U/5$, $U'=U/10$, finite $T=10^{-6}U$ and different values of the RKKY coupling $J$, as indicated. The inset shows the QD1 spectral function $\mathcal{A}(\omega)$ as a function of energy $\omega$ for points on the $J=-0.1U$ curve, indicated with corresponding symbols. } \label{fig:3} \end{figure} Let us now discuss the effects introduced by the conventional RKKY interaction. We choose $t=0$ for the sake of simplicity and analyze a wide range of $\GS{}$, starting from the case of an anti-ferromagnetic RKKY interaction ($J>0$). Large $J>0$ leads to the formation of a molecular singlet in the nanostructure. This suppresses the conductance unless $\GS{}$ becomes of the order of $U/2$, when the excited states of the DQD are all close to the ground state. This is illustrated by the double-dotted line in \fig{3}. A smaller value of $J>0$ has less dramatic consequences: it just increases $J^{\mathrm{eff}}$ according to \eq{Jeff}, leading to an enhancement of $T^*$, cf. \eq{Tstar}. This is presented with the dot-dashed line in \fig{3}. The situation changes qualitatively for a ferromagnetic RKKY coupling, $J<0$. Then, the RKKY exchange and the CAR exchange have opposite signs and compete with each other. Depending on their magnitudes and the temperature, one of the following scenarios may happen.
For $J^{\mathrm{eff}} > 0$, {\it i.e.} for large enough $\GS{}$, and $T<T^*$, the system is in the singlet state due to the two-stage Kondo screening of the DQD spins. $G(T\!=\!0)$ is reduced to $G_{\mathrm{min}}$, which tends to increase for large negative $J$; see the dashed lines in \fig{3}. In the inset of \fig{3}, the spectral density of QD1 representative of this regime is plotted as the curve indicated by a triangle. It corresponds to a point on the $J=-0.1U$ curve in the main plot, also indicated by a triangle. The dip in $\mathcal{A}(\omega)$ has a width of the order of $T^*$. For finite $T$, there is always a range of sufficiently small $|J^{\mathrm{eff}}|$ where QD2 becomes effectively decoupled and, provided $T<T_K$, $G$ reaches $G_{\mathrm{max}}$ due to the conventional Kondo effect at QD1. This is the case for sufficiently small $\GS{}$ for $J=0$ or $J=-0.01U$, and in the narrow range of $\GS{}$ around the point indicated by a circle in \fig{3} for $J=-0.1U$ (for $J=0.01U$, the considered $T$ is close to $T^*$ and $G$ does not reach $G_{\rm max}$). The conventional Kondo effect manifests itself as a characteristic peak in $\mathcal{A}(\omega)$, as illustrated in the inset of \fig{3} with the line denoted by a circle. Finally, a large enough $J^{\mathrm{eff}} < 0$ and low $T$ give rise to an effective ferromagnetic coupling of the DQD spins into a triplet state. Consequently, the underscreened Kondo effect occurs \cite{Mattis,NozieresBlandin} for weak $\GS{}$ and, {\it e.g.}, $J=-0.1U$; see the point indicated by a square in \fig{3}. This leads to $G=G_{\mathrm{max}}$ and a peak in $\mathcal{A}(\omega)$ whose shape is significantly different from that of the Kondo peak, cf. the curve denoted by a square in the inset of \fig{3}. \section{Effects of detuning from the particle-hole symmetry point} \label{sec:asym} \begin{figure} \includegraphics[width=0.98\linewidth]{Fig4.pdf} \caption{ (a) Linear conductance between the normal leads $G$ as a function of temperature $T$ for parameters corresponding to \fig{G-T}(a) with $\xi=U/10$, and additional curves for finite detuning from the particle-hole symmetry point, $\delta_1=-\delta_2$, and two values of $\xi=\sqrt{t^2+\GS{}^2}$, as indicated in the figure. (b) $G_{\mathrm{min}} \equiv G(T \!=\! 0)$ as a function of the QD1 detuning $\delta_1$ for different exchange mechanisms, $\xi=U/10$ and $\delta_2=\pm\delta_1$ (as indicated). } \label{fig:asym} \end{figure} At the particle-hole symmetry (PHS) point, $G_{\mathrm{min}}=G(T \!=\! 0)=0$ in the absence of the superconducting lead, making $G_{\mathrm{min}} > 0$ a hallmark of the SC-induced two-stage Kondo effect. However, outside of the PHS point $G_{\mathrm{min}} > 0$ even in the case of the two-stage Kondo effect caused by the direct exchange. Exact PHS conditions are hardly possible in real systems, and the fine-tuning of the QD energy levels to the PHS point is limited to some finite accuracy. One may thus ask whether the results obtained at the PHS point are of any relevance for realistic setups. As we show below, they are, in a reasonable range of detunings $\delta_i=\varepsilon_i +U/2$. In \fig{asym}(a) we present the $G(T)$ dependence at and away from the PHS point, corresponding to the parameters of \fig{G-T}(a). Clearly, for the considered small values of $\delta_1=\delta_2=\delta$, $G_{\mathrm{min}}<10^{-3}e^2/h$ for direct exchange only, while $G_{\mathrm{min}}$ in the presence of a superconductor is significantly increased and close to the PHS value.
Furthermore, for $|\delta_1| \sim |\delta_2| \sim \delta$, the residual conductance caused by the lack of PHS is $G_{\mathrm{min}} \approx e^2/h \cdot (\delta/U)^2$, which decreases rapidly in the vicinity of the PHS point, as illustrated in \fig{asym}(b) with the lines denoted by a square. Evidently, in the regime $|\delta_i| < 0.01U$ the residual conductance caused by the SC is orders of magnitude larger, leading to the plateau in the $G_{\mathrm{min}}(\delta_1)$ dependence visible in \fig{asym}(b). Taking into account that realistic values of $U$ in semiconductor quantum dots are rather large, this condition seems realizable by fine-tuning of the QD gate voltages. Lastly, let us point out that while in the presence of only one exchange mechanism, \emph{CAR} or \emph{direct}, the $G_{\mathrm{min}}(\delta_1)$ dependences depicted in \fig{asym}(b) are symmetric with respect to a sign change of $\delta_1$, for \emph{both} exchange mechanisms together the dependence is non-symmetric. \section{Effects of asymmetry of couplings to superconductor} \label{sec:x} \begin{figure} \includegraphics[width=0.98\linewidth]{Fig5.pdf} \caption{ (a) Linear conductance between the normal leads, $G$, as a function of temperature, $T$, for parameters corresponding to \fig{G-T}(a) with $\xi=U/10$, for different values of the asymmetry coefficient $x$ [see \eq{xGS}], in the presence of the \emph{CAR} exchange only. % (b) The second-stage Kondo temperature $T^*$ normalized by $T_K$ as a function of $x$, calculated with the aid of NRG (points) and a fit to \eq{Tstar} (lines) with $J^{\mathrm{eff}}$ from \eq{Jeff}. % (c) The zero-temperature conductance $G_{\mathrm{min}}$ as a function of the QD1 coupling to the SC lead, $\GS{1}$, compiled from data obtained under different circumstances (as indicated in the legend) for different $x$. The dotted line corresponds to \eq{Gmin2} with $c=2.25$. } \label{fig:x} \end{figure} Similarly to the PHS, ideal symmetry of the couplings between the respective QDs and the SC lead is hardly possible in experimental reality. As shown below, its absence does not introduce any qualitatively new features. On the other hand, it decreases the second-stage Kondo temperature, which is already small; therefore, a quantitative estimation of this decrease may be important for potential experimental approaches. To analyze the effects of $\GS{1}\neq\GS{2}$, we introduce the asymmetry parameter $x$ and extend the definition of $\GS{}$, \beq x = \frac{\GS{1}-\GS{2}}{\GS{1}+\GS{2}}, \quad \GS{} = \frac{\GS{1}+\GS{2}}{2}. \label{xGS} \end{equation} Note that even for a fixed $\GS{}$, the actual CAR coupling $\GS{\rm X}=\GS{}\sqrt{1-x^2}$ decreases with increasing $|x|$, which is the main mechanism behind the decrease of $T^*$ away from the $x=0$ point visible in \figs{x}(a) and (b). To illustrate this, the curves corresponding to \emph{both} exchange mechanisms were calculated using an $x$-dependent $t=\GS{\rm X}$ instead of $t=\xi/\sqrt{2}$; accordingly, $\xi$ was generalized for $x\neq 0$ by setting $\xi=\sqrt{t^2(1-x^2)^{-1}+\GS{}^2}$. Clearly, in \fig{x}(b) the curves for the different exchange mechanisms are very similar and differ mainly by a constant factor, resulting from the different influence of $U'$; see \Sec{scales}. The magnitude of the change of $T^*$ is quite large, exceeding an order of magnitude for $x=\pm 0.5$ and $\xi=U/20$. Moreover, $T^* \to 0$ for $x\to\pm 1$. Consequently, for strongly asymmetric devices one cannot hope to observe the second stage of Kondo screening.
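To make the magnitude of this suppression concrete, the short Python sketch below evaluates \eq{Tstar} with the CAR term of \eq{Jeff} reduced by the factor $(1-x^2)$, i.e. with $\GS{}^2$ replaced by $\GS{\rm X}^2=\GS{}^2(1-x^2)$. It is our own rough estimate, not an NRG result: the constants $a$, $b$ are taken from the CAR-only fits of \Sec{main}, while the value of $T_K$ is merely assumed to be of the order found for $\Gamma=U/5$.
\begin{verbatim}
import numpy as np

# Rough estimate of T*(x)/T*(0): the CAR term of Eq. (Jeff) is
# reduced by (1 - x^2) since GSX^2 = GS^2 (1 - x^2).
# a, b fitted in the CAR-only case; T_K is an assumed scale (units of U).
a, b = 0.37, 1.51
U, Up, GS, J, TK = 1.0, 0.1, 0.05, 0.0, 0.04

def Tstar(x):
    Jeff = J + 4 * GS**2 * (1 - x**2) / (U + Up + 0.75 * J)
    return a * TK * np.exp(-b * TK / Jeff)

for x in (0.0, 0.25, 0.5, 0.75):
    print(f"x = {x:4.2f}:  T*(x)/T*(0) = {Tstar(x) / Tstar(0.0):.3g}")
\end{verbatim}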
A careful observer will note that the $T^*(x)$ dependence is not symmetric; note, for example, the different $T^*$ for $x=\pm 0.5$ in \fig{x}(a). This is caused by the dependence of the first-stage Kondo temperature $T_K$ on $\GS{1}$ \cite{part1,DomanskiIW}, \beq \widetilde{T}_K(\GS{1}) = T_K \cdot \exp\!\left( \frac{\pi}{2} \frac{\GS{1}^2}{\Gamma U}\right). \end{equation} Here, $T_K$ is, as earlier, defined in the absence of the SC, while $\widetilde{T}_K$ is a function of $\GS{1}$ such that $G(\widetilde{T}_K) = G_{\rm max}(\GS{1})/2$ in the absence of QD2. As $\widetilde{T}_K$ grows with increasing $\GS{1}$ (or $x$), $T^*$ decreases according to \eq{Tstar}. Its $\GS{}$ dependence can be accounted for by small changes in the coefficients $a$ and $b$ in \eq{Tstar}, as long as $x$ is kept constant. To close the discussion of the $T^*(x)$ dependence, let us point out that in \eq{A_J} there appears a correction to \eq{Jeff} for $x\neq 0$. However, it is very small due to an additional factor $\GS{}^2/U^2$ in the leading order, and its influence on the curves plotted in \fig{x}(b) is hardly visible. In turn, let us examine the $x$ dependence of the $T=0$ conductance $G_{\mathrm{min}}$. As can be seen in \fig{x}(a), it monotonically increases with $x$ as it crosses the $x=0$ point. In fact, \eq{Gmin} can be generalized to \beq G_{\mathrm{min}} = \frac{e^2}{h} \cdot c \, \frac{\GS{1}^2}{U^2} , \label{Gmin2} \end{equation} with $c\approx 2.25$ (indicated by the dotted line in \fig{x}(c)). Note that $G_{\mathrm{min}}$ is proportional to $\GS{1}^2=(1+x)^2 \GS{}^2$, rather than simply to $\GS{}^2$, cf. \eq{Gmin}. The values of $G_{\mathrm{min}}$ obtained from all the analyzed $G(T)$ dependences for different $x$ are compiled in \fig{x}(c). It is evident that \eq{Gmin2} is approximately fulfilled in all the considered cases. Finally, it seems noteworthy that the normal-lead coupling asymmetry, $\Gamma_{\rm L}\neq \Gamma_{\rm R}$, is irrelevant for the results, except for a constant factor diminishing the conductance $G$ \cite{KWIWJB-asym}. \section{The role of CAR efficiency} \label{sec:coef} \begin{figure}[tb] \includegraphics[width=0.98\linewidth]{Fig6.pdf} \caption{Linear conductance between the normal leads $G$ as a function of the coupling to the SC lead, $\GS{}$, for the indicated values of the RKKY exchange $J$ and the efficiency of CAR processes reduced by a factor (a) $\mathcal{C}=0.9$ and (b) $\mathcal{C}=0.5$. Other parameters as in \fig{3}. Insets: the QD1 local spectral density $\mathcal{A}(\omega)$ as a function of energy $\omega$ for points on the $J=-0.1U$ curve, indicated with corresponding symbols. } \label{fig:C} \end{figure} Up to this point we have assumed $\GS{\rm X} = \sqrt{\GS{1}\GS{2}}$, which is valid when the two quantum dots are much closer to each other than the coherence length of the superconductor. This does not have to be the case in real setups, yet relaxing this assumption does not introduce qualitative changes. Nevertheless, the model cannot be extended to inter-dot distances much larger than the coherence length, where $\GS{\rm X}\to 0$. To quantitatively analyze the consequences of a less effective Andreev coupling, we define the CAR efficiency as $\mathcal{C} \equiv \GS{\rm X} / \sqrt{\GS{1}\GS{2}}$ and analyze $\mathcal{C} < 1$ in a wide range of $\GS{1}=\GS{2}=\GS{}$, with the other parameters corresponding to \fig{3}. The results are presented in \fig{C}. Clearly, decreasing $\mathcal{C}$ from $\mathcal{C}=1$ diminishes $\GS{\rm X}$, and consequently the CAR exchange.
For a change as small as $\mathcal{C}=0.9$, the consequences reduce to a slight shift of the conventional Kondo regime, compare \fig{C}(a) with \fig{3}. A stronger suppression of CAR may, however, push the SC coupling necessary to observe the CAR-induced second stage of Kondo screening outside the experimentally achievable range, see \fig{C}(b). Moreover, the reduced $T^*$ leads to a narrowing of the related dip in the local spectral density, while the increased critical $\GS{}$ necessary for the observation of the second stage of screening leads to a shallowing of the dip. This is visible especially in the inset of \fig{C}(b). \section{Conclusions} \label{sec:conclusions} The CAR exchange mechanism is present in any system comprising at least two QDs or magnetic impurities coupled to the same superconducting contact in a way allowing for crossed Andreev reflections. In the considered setup, comprising two quantum dots in a T-shaped geometry with respect to the normal leads and proximized by a superconductor, it leads to two-stage Kondo screening even in the absence of other exchange mechanisms. This CAR-induced screening is characterized by a residual low-temperature conductance in the particle-hole symmetric case. We have also shown that the competition between the CAR exchange and the RKKY interaction may result in completely different Kondo screening scenarios. The presented results bring further insight into the low-temperature behavior of hybrid coupled quantum dot systems, which hopefully can be verified with present-day experimental techniques. Moreover, non-local pairing is present also in bulk systems such as non-$s$-wave superconductors. The question of whether an analogue of the discussed CAR exchange may play a role there seems intriguing in the context of the tendency of many strongly correlated materials to possess superconducting and anti-ferromagnetic phases. \begin{acknowledgments} This work was supported by the National Science Centre in Poland through project no. 2015/19/N/ST3/01030. We thank J. Barna\'{s} and T. Maier for valuable discussions. \end{acknowledgments}
\section{Introduction} Setting up health information systems to monitor the evolving burden of noncommunicable diseases (NCDs) and their risk factors is one of the commitments of the Moscow Declaration, which was approved by the First Global Ministerial Conference on Healthy Lifestyles and NCD Control \cite{Mos11}. In that respect, secular trends in the incidence of NCDs are especially important, as they indicate changes in the risk profile of the population under consideration. Common ways to detect secular trends in the incidence are either performing a series of follow-up studies or running a register. Both approaches may be very costly and lead to a variety of practical problems. In contrast, a series of prevalence studies is sometimes much easier to accomplish. Based on the illness-death model (IDM), this article describes a method for detecting trends in the incidence of NCDs without a series of follow-up studies and without a register. \bigskip The next section introduces the IDM and derives the theoretical background for the method. In the third section, the theory is applied to data from the National Diabetes Register in Denmark. The register observed an increasing diabetes incidence in 1995--2004. We show that the trend is detectable using the IDM and a series of prevalence data. The last section contains a summary. \section{Illness-Death Model} In modelling chronic (i.e., irreversible) diseases, the three-state model (compartment model) in Figure \ref{fig:CompModel} is often used. The numbers of persons in the states \emph{Normal} and \emph{Disease} are denoted by $S$ and $C$. The transition intensities (synonymously: rates) between the states are the incidence rate $i$ and the mortality rates $m_0$ and $m_1$ of the healthy and the diseased, respectively. These rates generally depend on the calendar time $t$, the age $a$, and, in the case of the mortality $m_1$, also on the duration of the disease $d$. \begin{figure}[ht] \centering \includegraphics[keepaspectratio,width=0.85\textwidth]{ThreeStates.eps} \caption{Chronic disease model with three states and the corresponding transition rates. People in the state \emph{Normal} are healthy with respect to the disease under consideration. At the onset of the disease, they change to the state \emph{Disease}.} \label{fig:CompModel} \end{figure} Although the inclusion of the disease duration $d$ is also possible, hereinafter it is assumed that $m_1$ does not depend on $d.$ This article analytically describes the relationship between the prevalence of a chronic disease and the incidence and mortality rates. This problem has existed at least since 1934 \cite{Mue34}, but so far it has been solved only in special cases. The article \cite{Hen10} presents a brief review and further references. \bigskip As in \cite{Bri12}, we describe the numbers $S(t, a)$ and $C(t, a)$ of healthy and diseased persons in terms of differential equations, which can be derived from the disease model in Figure \ref{fig:CompModel}. For the healthy persons we get the following initial value problem: \begin{align} (\partial_t + \partial_a) \, S & = - \left [ m_0 + i \right ] \, S \label{e:PDE_S_ta} \\ S(t - a, 0) & = S_0(t - a). \nonumber \end{align} Here $S_0(t - a) = S(t - a, 0)$ is the number of (healthy) newborns\footnote{This paper only considers diseases acquired after birth.} at calendar time $t-a.$ The notation $\partial_x$ denotes the partial derivative with respect to $x, ~x \in \{t, a\}$.
The solution $S(t, a)$ is \begin{equation}\label{e:S} S(t, a)= S_0(t - a) \, \exp \left ( - \int_0^a m_0(t-a+\tau, \tau) + i(t-a+\tau, \tau) \, \mathrm{d}\tau \right ), \end{equation} which may be checked easily. \bigskip The number $C$ of diseased persons is described similarly: \begin{align} (\partial_t + \partial_a) \, C & = -m_1 \, C + i \, S \label{e:PDE_C_ta} \\ C(t, 0) & = 0. \nonumber \end{align} The solution is \begin{equation}\label{e:C} C(t, a) = \int_0^a i( t -\delta, a - \delta) \, S(t - \delta, a - \delta) \exp \left ( - \int_0^\delta m_1(t - \delta + \tau, a - \delta + \tau) \mathrm{d} \tau \right ) \mathrm{d}\delta. \end{equation} Equation \eqref{e:C} allows the following interpretation: Starting from $(t, a)$ at $\delta$ time units before, i.e., at $(t - \delta, a - \delta)$, exactly $i(t - \delta, a - \delta) \, S (t - \delta, a - \delta)$ persons newly enter state \emph{Disease}. Until $(t, a)$ the proportion $$\exp \left (- \int_0 ^ \delta m_1 (t - \delta + \tau, a - \delta + \tau) \mathrm{d} \tau \right)$$ of those has survived. Integration over all possible $0 \le \delta \le a$ yields the number of diseased persons at time $(t, a).$ After applying the quotient rule to the age-specific prevalence $$p(t, a) = \frac{C (t, a)}{S (t, a) + C (t, a)}$$ and using \eqref{e:PDE_S_ta} and \eqref{e:PDE_C_ta} it follows that \begin{align} (\partial_t + \partial_a) \, p & = \left ( 1-p \right ) \, \left ( i - p \, \left (m_1 - m_0 \right ) \right ) \label{e:PDE}\\ p(t, 0) & = 0. \nonumber \end{align} \begin{rem}\label{rem:nachI} For $t, a \ge 0$ with $p(t, a) \neq 1$ it holds \begin{equation}\label{e:inc} i(t, a) = \frac{(\partial_t + \partial_a) \, p (t, a)}{1-p(t,a)} + p(t, a) \, \left (m_1(t, a) - m_0(t, a) \right ). \end{equation} \end{rem} Furthermore, the solution of \eqref{e:PDE} can be calculated directly via \eqref{e:S} and \eqref{e:C}: \begin{equation} p(t, a) = \frac{ \int \limits_0^a i(t - \delta, a - \delta) \, \exp \left ( -\int\limits_0^\delta \Psi(t - \delta + \tau, a - \delta + \tau) \mathrm{d} \tau \right ) \mathrm{d}\delta}{1+\int\limits_0^a i(t-\delta, a - \delta) \, \exp \left ( -\int\limits_0^\delta \Psi(t - \delta + \tau, a - \delta + \tau) \mathrm{d}\tau \right ) \mathrm{d}\delta}, \label{e:p} \end{equation} with $\Psi := m_1 - m_0 - i.$ \bigskip This follows from \begin{align*} C(t, a) &= \int_0^a i( t -\delta, a - \delta) \, S_0(t-a) \\ & \qquad \cdot \exp \left ( - \int_0^\delta m_1(t - \delta + \tau, a - \delta + \tau) \mathrm{d} \tau - \int_0^{a-\delta}(m_0 + i)(t - a + \tau, \tau) \mathrm{d} \tau \right ) \mathrm{d}\delta \displaybreak[0]\\ &= \int_0^a i( t - \delta, a - \delta) \, S_0(t-a) \\ & \qquad \cdot \exp \left ( - \int_0^\delta m_1(t - \delta + \tau, a - \delta + \tau) \mathrm{d} \tau - \int_0^{a}(m_0 + i)(t - a + \tau, \tau) \mathrm{d} \tau \right. \\ & \qquad \qquad \qquad + \left. \int_{a - \delta}^{a} (m_0 + i)(t - a + \tau, \tau) \mathrm{d} \tau \right ) \mathrm{d}\delta \displaybreak[0]\\ &= S_0(t-a) \, \exp \left ( - \int_0^a (m_0 + i)(t-a+\tau, \tau) \mathrm{d}\tau \right ) \\ & \qquad \cdot \int \limits_0^a i(t - \delta, a - \delta) \, \exp \left ( - \int \limits_0^\delta \Psi(t - \delta + \tau, a - \delta + \tau) \mathrm{d} \tau \right ) \mathrm{d}\delta. \end{align*} The first part of the last expression equals $S(t, a)$ and Equation \eqref{e:p} follows.
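\bigskip Equation \eqref{e:PDE} also lends itself to numerical treatment: along a characteristic $t - a = \mathrm{const}$, it reduces to an ordinary differential equation in age, which can be integrated with standard solvers. The following Python sketch (ours, for illustration only; the rate functions are hypothetical placeholders, not data) computes $p(t, a)$ from given rates $i, m_0, m_1$:
\begin{verbatim}
# Sketch (ours): prevalence p(t, a) from given rates, by integrating
# (d/dtau) p = (1 - p) (i - p (m1 - m0)) along the characteristic
# through (t, a).  The rate functions are hypothetical placeholders.
import numpy as np
from scipy.integrate import solve_ivp

def i(t, a):     # incidence rate (per person-year), assumed shape
    return 0.001 * np.exp(0.04 * a)

def m0(t, a):    # mortality of the healthy, Gompertz-type assumption
    return 1e-4 * np.exp(0.09 * a)

def m1(t, a):    # mortality of the diseased; here a constant relative risk
    return 2.0 * m0(t, a)

def prevalence(t, a):
    rhs = lambda tau, p: (1.0 - p) * (
        i(t - a + tau, tau)
        - p * (m1(t - a + tau, tau) - m0(t - a + tau, tau)))
    sol = solve_ivp(rhs, (0.0, a), [0.0], rtol=1e-8)
    return sol.y[0, -1]     # p(t, a), starting from p(t - a, 0) = 0

print(prevalence(2000.0, 60.0))
\end{verbatim}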
\bigskip The usefulness of equation \eqref{e:p} is obvious: Given the incidence $i(t, a)$ and mortalities $m_0(t, a), ~m_1(t, a)$, the prevalence $p(t, a)$ can be calculated for all $t, a \ge 0$. \bigskip So we can state \begin{rem} The prevalence $p(t, a)$ is independent of the number $S_0$ of newborns. \end{rem} \begin{rem} For $t, a \ge 0$ it holds: $0 \le p(t, a) \le 1.$ \end{rem} \begin{rem}\label{rem:increasing} If for some $(t_1, a_1)$ the integral $$\Upsilon(t_1, a_1) := \int\limits_0^{a_1} i(t_1-\delta, a_1 - \delta) \, \exp \left ( -\int\limits_0^\delta \Psi(t_1 - \delta + \tau, a_1 - \delta + \tau) \mathrm{d}\tau \right ) \mathrm{d}\delta$$ is lower than for $(t_2, a_2): ~\Upsilon(t_1, a_1) < \Upsilon(t_2, a_2)$, then it holds $p(t_1, a_1) < p(t_2, a_2).$ This follows from observing that $x \mapsto \nicefrac{x}{1+x}, ~x\ge 0$ is strictly increasing. \end{rem} \bigskip At the end of the section we introduce the \emph{relative mortality} $R(t, a).$ For $(t, a) \ge 0$ with $m_0(t, a) > 0,$ define \begin{equation*} R(t, a) = \frac{m_1(t, a)}{m_0(t, a)}. \end{equation*} Now we have all the definitions and results for the next section. \section{Diabetes in Denmark} In the article \cite{Car08} the age-specific prevalence of diabetes for men (and women) in Denmark in the period 1995--2007 is presented in great detail. The results are based on a complete survey of the Danish population ($n > 5$ million). Classifying a person as diabetic is done by combining different health registers, which yields a sensitivity of more than 85\% \cite[p. 2188]{Car08}. In this paper we confine ourselves to the male population in Denmark. The age-specific incidence rate $i$ for 2004 is given for all age groups, but for the other years in the period 1995--2007 only relative to 2004, averaged across all age groups; likewise for the mortality $m_0$ of the non-diabetic population. Mortality $m_1$ significantly depends on the disease duration \cite[Fig. 4]{Car08}. To apply the model of the previous section, the duration dependence has to be suppressed. This is done by an initialization step: The relative mortality $R^\star (a)$ is calculated such that the observed incidence and the associated increase in prevalence from 1995 to 1996 are in agreement. To this end, Equation \eqref{e:PDE} is solved for $m_1$. Then, $ R^\star$ is calculated by $R^\star = \nicefrac{m_1}{m_0}.$ For the period 1996--2004, this relative mortality is kept fixed and Equation \eqref{e:PDE} is solved for $i$ as in Remark \ref{rem:nachI} with $m_1 = R^\star \, m_0.$ By doing so, the relative mortality $R^\star$ for the period 1996--2004 is assumed to be independent of calendar time. Thus, we have a two-step approach: \begin{enumerate} \item Initialization: Calculate $R^\star$ by fitting the observed incidence rate in 1995 and the increase in age-specific prevalence from 1995 to 1996. \item Application: Derivation of the incidence rates in 1996--2004 via Remark \ref{rem:nachI} with $m_1(t, a) = R^\star(a) \, m_0(t, a).$ \end{enumerate} \begin{rem} After initialization, we just use the mortality $m_0$ of the non-diabetic population and the prevalences $p(t, a),$ $t=1996, \dots, 2004,$ for deriving $i(t, a),$ $t=1996, \dots, 2004.$ \end{rem} Figure \ref{fig:Inc2001} shows the results of applying the model to 2001 (circles). For comparison, the observed incidence is shown (solid line). Obviously, the data are in good agreement.
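The application step can be made explicit in a few lines of code. The sketch below (ours, an illustration under simplifying assumptions, not the actual computation performed for the register data) approximates the directional derivative $(\partial_t + \partial_a)p$ by stepping one year in calendar time and age simultaneously, and then applies Remark \ref{rem:nachI} with $m_1 = R^\star m_0$; all input arrays are hypothetical:
\begin{verbatim}
# Sketch (ours): incidence from two consecutive prevalence
# cross-sections via i = (d_t + d_a) p / (1 - p) + p (m1 - m0),
# with m1 = R* m0.  All inputs below are hypothetical arrays.
import numpy as np

ages = np.arange(40, 85)                  # one-year age grid
p_t  = 0.020 + 0.0010 * (ages - 40)       # prevalence in year t
p_t1 = 0.021 + 0.0011 * (ages - 40)       # prevalence in year t + 1
m0   = 1e-4 * np.exp(0.09 * ages)         # mortality of the non-diseased
R    = 2.0                                # relative mortality R* from step 1

# (d_t + d_a) p, approximated by one step in time *and* age:
dp = p_t1[1:] - p_t[:-1]
p  = p_t[:-1]
i  = dp / (1.0 - p) + p * (R - 1.0) * m0[:-1]
print(np.round(i[:5], 5))
\end{verbatim}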
\begin{figure}[ht] \centering \includegraphics[width=.8\textwidth,keepaspectratio]{Incidence2001.eps}\\ \caption{Age-specific incidence rate in 2001: observed (solid line) and reconstructed by the model (circles).}\label{fig:Inc2001} \end{figure} \bigskip Now, the increase in the incidence can be examined. For some of the age groups, the incidence rate over calendar time is shown in Figure \ref{fig:Trends}. In addition, the regression lines and the corresponding correlation coefficients $r$ are given. In all age groups there is a significant upward trend. The higher the age, the better the fit of the linear regression model. Table \ref{tab} shows the numerical values for all age groups. \begin{figure}[ht] \centering \includegraphics[width=.9\textwidth,keepaspectratio]{Trends.eps}\\ \caption{Incidence rates over calendar time for some age groups. }\label{fig:Trends} \end{figure} \begin{table}[ht] \centering \begin{tabular}{c|c|c} Age- & Annual & Correlation- \\ group & increase (\%) & coefficient\\ \hline 40 -- 44 & 9.2 & 0.76 \\ 45 -- 49 & 9.6 & 0.86 \\ 50 -- 54 & 9.3 & 0.90 \\ 55 -- 59 & 8.8 & 0.93 \\ 60 -- 64 & 8.3 & 0.96 \\ 65 -- 69 & 8.0 & 0.98 \\ 70 -- 74 & 7.8 & 0.99 \\ 75 -- 79 & 7.9 & 0.99 \\ 80 -- 84 & 8.2 & 0.99 \\ \end{tabular} \caption{Parameters of the secular trends in the incidence rates in the age groups.}\label{tab} \end{table} The annual increase rates in the age groups (second column in Table \ref{tab}) are all greater than the corresponding value 5.3\% reported in \cite[p. 2190]{Car08}. However, the reported increase of 5.3\% refers to all persons (both sexes, all age groups). Hence, a direct comparison with the values of Table \ref{tab} is not possible. \section{Summary} In this work, a novel method for deriving trends in incidence from a sequence of prevalence studies is presented. With a view to the tremendous effort required to collect incidence data, the novel method provides a simpler alternative. A typical application is the conversion of a sequence of telephone surveys for the collection of age-specific prevalence of a chronic disease into a sequence of incidence data. As a first application, the method was used with data from the Danish National Diabetes Register. The directly observed secular trend in the incidence is reproduced by the new method as well. \bigskip In the application to the Danish Diabetes Register, the relative mortality $R^\star$ in 1995 has been extrapolated for the period 1996--2004. While this may be acceptable for a period of eight years, a word of caution is in order: \begin{rem} The calendar time trend in mortality $m_0$ of the non-diabetic population is usually much better known than the trend in mortality $m_1$ of the patients. The reason is that $m_0$ is surveyed on a demographic scale, while $m_1$ is investigated sporadically in epidemiological studies only. In epidemiology, one might try to link the time trend in $m_1$ to the time trend in $m_0$. The idea might be to measure a relative mortality $R$ at some time $t' < t$ and extrapolate from $t'$ to $t$ and set $m_1 (t, a) = R (t', a) \, m_0 (t, a).$ Indeed, time dependence of $m_1$ is enforced, but this approach may lead to a possibly unexpected increase in the prevalence.
If the incidence $i$ is independent of $t$ and the family of functions $ t \mapsto m_{0, a} (t): = m_0(t, a) $ is decreasing for all $a \ge 0$, then, by Remark \ref{rem:increasing}, the function $t \mapsto p_{a}(t) : = p (t, a)$ is monotonically increasing for all $ a \ge 0.$ This means: although the incidence $i$ remains unchanged over calendar time, increasing the life expectancy of the healthy (decreasing $m_{0, a}$) increases the prevalence. Extrapolating the relative mortality $R$ from $t'$ to $t$ therefore must be viewed critically. \end{rem} Beside the presentation of the theoretical background, this work is little more than a feasibility study. There are two sources of limitations: \begin{enumerate} \item Data: Due to the incomplete detection of diabetes cases and the shortened report of incidence trends (pooled for all persons), a direct comparison between the observed and derived trends in the incidence is impossible. \item Model: Although it is evident that $m_1$ depends on the duration $d$ of diabetes, this dependency is neglected. In addition, the relative mortality $R^\star$ has been calculated for 1995, but has been extrapolated to 1996--2004. \end{enumerate} Both inaccuracies interact, which makes a rigorous evaluation difficult. Thus, a systematic evaluation of the method based on a comprehensive simulation study is necessary.
\subsection*{\sf I. Introduction} The physical interpretation of \pti\ operators---and hence their relevance for the description of physical systems---continues to be debated \cite{bender04,mostafazadeh04,weigert04}. There is, however, no doubt about the cathartic role of \pty : it has become more evident what it means to let go of hermiticity in exchange for a weaker property such as \pte . The success and ease of describing quantum mechanical systems in terms of hermitean operators is based on two of their generic properties, namely the existence of {\em real} eigenvalues and their {\em diagonalizability}, i.e. the completeness of their orthonormal eigenstates. These properties do not persist if a quantum system were described by a \ptc\ Hamiltonian: its eigenvalues could be complex, and its eigenfunctions would, in general, neither be pairwise orthogonal nor form a complete set. Given a \pti\ operator, it thus appears desirable to decide whether it is diagonalizable or not. The purpose of this contribution is to provide an algorithm answering the question of whether a given \pti\ Hamiltonian operator in a finite-dimensional space {\em does} or {\em does not} possess a complete set of eigenstates. It is convenient to represent such an operator as a \ptc\ matrix ${\sf M}$, say. A procedure will be outlined which, after a finite number of steps, will announce whether the matrix ${\sf M}$ at hand is diagonalizable or not. In principle, the algorithm can be carried out by hand for matrices of any dimension, and no approximations are necessary. Often, the question of diagonalizability will arise in a more general setting where one considers not just a single matrix but a {\em family}\ of \ptc\ matrices ${\sf M} (\varepsilon), \varepsilon \in \mathbb R$. The parameter $\varepsilon \in \mathbb R$ measures the strength of a ``perturbation'' which destroys hermiticity while respecting \pte . As the parameter varies, all of the cases described previously may occur: typically, two real eigenvalues merge into a single real one at a critical value of $\varepsilon$, subsequently splitting into a pair of complex conjugate eigenvalues, or {\em vice versa}. These dramatic modifications are accompanied by changes in the nature of the eigenstates of the \pti\ operator, possibly no longer spanning the space on which ${\sf M}(\varepsilon)$ acts. This behaviour can be understood in terms of so-called {\em exceptional points} \cite{kato84} which are known to occur when a matrix is subjected to a perturbation depending analytically on a parameter such as $\varepsilon$. At such a point, the corresponding matrix is not diagonalizable, and its spectrum may undergo a qualitative change. For a {\em hermitean} operator subjected to a parameter-dependent {\em hermitean} perturbation, exceptional points cannot occur. If one applies the algorithm testing for diagonalizability to a parameter-dependent matrix ${\sf M} (\varepsilon)$, it will output a polynomial in $\varepsilon$ instead of a number. Its zeros correspond to those values of the perturbation parameter where the matrix family ${\sf M} (\varepsilon)$ has exceptional points. The matrices corresponding to these values of the perturbation are not diagonalizable, and the spectra of matrices for nearby values of the parameter differ qualitatively. The following section summarizes the properties of \pti\ systems in terms of ($2 \times 2$) matrices. Then, the link between diagonalizability and the so-called minimal polynomial is reviewed.
In Section 4, the algorithmic test is presented, which consists of constructing the minimal polynomial of the matrix, followed by a search for degenerate roots by means of the Euclidean algorithm; various methods known to effectively calculate the minimal polynomial of a matrix are also outlined there. Simple examples are studied in Section 5, leading to some general conclusions about the structure of \ptc\ Hamiltonian operators in finite-dimensional spaces. Section 6 summarizes the results and discusses the challenge of extending them to state spaces of infinite dimension. % \subsection*{\sf II. \pti\ systems} % A matrix {\sf H} is \pti\ \cite{bender+98}, % \begin{equation}\label{ptinv} % [{\sf H} , {\sf P} {\sf T} ] = 0 \, , % \end{equation} % if it commutes with the product of parity ${\sf P}$ and the anti-unitary operation of time reversal ${\sf T}$, represented here by complex conjugation, ${\sf T} {\sf H} {\sf T}^{-1} = {\sf H}^*$. Eq. (\ref{ptinv}) implies that the characteristic polynomial of any \ptc\ operator {\sf H} has real coefficients only. Consequently, its roots are either real or come in complex-conjugate pairs. One way to show this is to construct a basis in which the Hamiltonian has real matrix elements only \cite{bender+02}. Let us briefly review the properties of \ptc\ systems by considering the most general \pti\ matrix of dimension $2$, % \begin{equation}\label{2by2} % {\sf H} = \left( \begin{array}{cc} a & b \\ b^* & a^* \end{array} \right) \, , \quad a,b \in \mathbb C \, , % \end{equation} % with parity given by the Pauli matrix $\sigma_x$ in the standard representation. For real numbers $a$ and $b$, the matrix ${\sf H}$ is not only \pti\ but also hermitean. Thus, its eigenvalues are {\em real}, and its orthonormal eigenstates span ${\mathbb C}^2$. For $a^* \neq a$ and $b=0$, ${\sf H}$ has a pair of complex conjugate eigenvalues and two orthonormal eigenstates. Matrices of the form % \begin{equation}\label{interesting} % {\sf H} = \left( \begin{array}{cc} i & b \\ b & -i \end{array} \right) \, , \quad b \in [-1,1] \, , % \end{equation} % are particularly interesting. For $|b| < 1$, one finds a pair of complex conjugate eigenvalues, % \begin{equation}\label{ccpair} % E_\pm = \pm \sqrt{b^2-1} \in i \mathbb R \, , % \end{equation} % associated with two non-orthogonal eigenstates, % \begin{equation}\label{nononeigenstates} % \frac{1}{\sqrt{2} b }\left( \begin{array}{c} b \\ i - \sqrt{b^2-1} \end{array} \right) \, , \qquad \frac{1}{\sqrt{2} b }\left( \begin{array}{c} b \\ i + \sqrt{b^2-1} \end{array} \right) \, . % \end{equation} % When $b = \pm 1 $ in (\ref{interesting}), ${\sf H}$ has a two-fold degenerate eigenvalue, $E_0=0$, and there is only {\em one} eigenstate, namely, % \begin{equation}\label{singleeigenstate} % \frac{1}{\sqrt{2}}\left( \begin{array}{c} \mp 1 \\ i \end{array} \right) \, . % \end{equation} % This situation, impossible for a hermitean matrix, is usually described by saying that the {\em algebraic} multiplicity of the eigenvalue $E_0$ is two while its {\em geometric} multiplicity equals one: the characteristic polynomial of ${\sf H}$ has a double root associated with a single eigenvector only. In this case, the matrix ${\sf H}$ is not diagonalizable: a similarity transformation sending it to a {\em diagonal} matrix cannot exist since otherwise its eigenstates {\em would} span the space ${\mathbb C}^2$. \subsection*{\sf III.
Diagonalizability and the minimal polynomial of a matrix} Each square matrix $\sf M$ of dimension $N$ satisfies the identity \begin{equation} p_{\sf M}( {\sf M}) = 0 \, , \label{CayleyHamM} \end{equation} where $p_{\sf M}(\lambda)$ is the {\em characteristic polynomial} of ${\sf M}$, \begin{equation}\label{charpolM} p_{\sf M}( \lambda ) = \det \left( \lambda {\sf E} - {\sf M} \right) \, , \ee with ${\sf E}$ being the unit matrix of dimension $N$. In other words, the characteristic polynomial of ${\sf M}$ {\em annihilates} the matrix ${\sf M}$. The polynomial $p_{\sf M}(\lambda)$ has degree $N$ and it is a {\em monic} polynomial, that is, the coefficient multiplying the highest power of $\lambda$ is equal to $1$. Obviously, many other monic polynomials of {\em higher} degree also annihilate ${\sf M}$: simply take $p^2_{\sf M}(\lambda), p^3_{\sf M}(\lambda), \dots$ It is less obvious, however, whether one can find polynomials of degree {\em less} than $N$ which annihilate ${\sf M}$. This, in fact, depends on the properties of the matrix ${\sf M}$. Define \cite{lancaster+85} the {\em minimal polynomial} of the matrix {\sf M} as the monic polynomial $m_{\sf M}(\lambda)$ of {\em least} degree which annihilates {\sf M}: \begin{equation}\label{minpolM} m_{\sf M}( {\sf M} ) = 0 \, . \ee The minimal polynomial $m_{\mbox{\small $\sf M$}}( \lambda)$ is unique \cite{lancaster+85}, and its degree $N_0$ is less than or equal to the degree of the characteristic polynomial, $N_0 \leq N$. The minimal polynomial divides the characteristic polynomial without a remainder, $m_{\sf M}( \lambda) \left. \right| p_{\sf M}(\lambda)$, or equivalently, \begin{equation}\label{dividecharpol} p_{\sf M}( \lambda) = d_{\sf M}(\lambda) m_{\sf M}( \lambda) \, , \ee where $d_{\sf M}(\lambda)$ is a non-zero polynomial of degree less than $N$. The characteristic and the minimal polynomial of the matrix {\sf M} coincide in the case where $d_{\sf M}(\lambda) \equiv 1$. In general, the minimal polynomial has $\nu_0$ roots $M_\nu$, \begin{equation}\label{genminpoly} m_{\sf M} (\lambda) = \prod_{\nu=1}^{\nu_0} (\lambda - M_\nu)^{\mu_\nu} \, , \quad \nu_0 \leq N \, , \ee with multiplicities $\mu_\nu$ summing to $N_0 = \mu_1 + \mu_2 + \dots + \mu_{\nu_0}$. Here is the important property of the polynomial $m_{\sf M} (\lambda)$: the matrix {\sf M} is diagonalizable if and only if each root $M_\nu$ in (\ref{genminpoly}) has multiplicity one, $\mu_\nu \equiv 1, \nu =1, \dots, \nu_0$, that is, \begin{equation}\label{genminpolydiag} m_{\sf M}( \lambda) = \prod_{\nu=1}^{\nu_0} (\lambda - M_\nu)\, , \quad \mbox{all } \, M_\nu \mbox{ distinct} \, . \ee No polynomial of degree less than that of $m_{\sf M}( \lambda)$ annihilates the matrix ${\sf M}$. Let us illustrate the properties of minimal polynomials using low-dimensional matrices. Consider the matrix {\sf A} with entries $(1,1,2)$ on the diagonal, and zero elsewhere. Its characteristic polynomial is given by \begin{equation}\label{charpolA} p_{\sf A}(\lambda)= (\lambda-1)^2 (\lambda-2) \, , \ee while its minimal polynomial reads \begin{equation}\label{minpolA} m_{\sf A}(\lambda)= (\lambda-1) (\lambda-2) \, , \ee being of the form (\ref{genminpoly}), with $N_0 = \nu_0 = 2$. This is easy to verify since $m_{\sf A}({\sf A}) = {\sf A}^2 -3 {\sf A}+2{\sf E}=0 $ holds, while none of its factors annihilates ${\sf A}$: both $({\sf A} - {\sf E}) $ and $({\sf A} - 2{\sf E})$ are different from zero.
Thus, the minimal polynomial divides the characteristic one, $p_{\sf A}(\lambda) = (\lambda - 1) m_{\sf A}(\lambda)$, leading to $d_{\sf A}(\lambda) = (\lambda -1)$. Due to (\ref{minpolA}), the matrix ${\sf A}$ {\em is} diagonalizable---a correct but hardly surprising result since the matrix {\sf A} has been diagonal from the outset. Here is the instructive part of the example: consider the matrix \begin{equation}\label{defB} {\sf B} = \left( \begin{array}{ccc} 1 & b & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 2 \end{array} \right) \, , \qquad b \in \mathbb C \, , \ee which is different from {\sf A} as long as $b$ is different from zero. The characteristic polynomial of {\sf B} equals that of {\sf A} but the matrix {\sf B} must have a different minimal polynomial since $m_{\sf A}({\sf B}) \neq 0$. It is not difficult to verify that no linear or quadratic polynomial annihilates {\sf B} as long as $b \neq 0$. This implies that its minimal polynomial coincides with its characteristic polynomial, \begin{equation} p_{\sf B}(\lambda) = m_{\sf B}(\lambda) \, , \quad d_{\sf B}(\lambda) \equiv 1 \, . \label{Bpolys} \ee Consequently, the minimal polynomial of {\sf B} does {\em not} have the form specified in (\ref{genminpolydiag}), and the matrix ${\sf B}$ is {\em not} similar to a diagonal matrix. Inspection shows that {\sf B} indeed contains a $(2 \times2)$ Jordan block for any nonzero value of $b$. For \pti\ matrices, both the polynomials $d_{\mathsf{H}}(\lambda)$ and $m_{\mathsf{H}}(\lambda)$ have real coefficients only, just as the characteristic polynomial. This will be shown once the function $d_{\mathsf{M}}(\lambda)$ in (\ref{dividecharpol}) has been defined in general (cf. Sec. {\sf IV.2}). \subsection*{\sf IV. An algorithmic test for diagonalizability} A square matrix ${\sf M}$ of dimension $N$ is diagonalizable if and only if its minimal polynomial is a product of factors $(\lambda - M_\nu)$ with all numbers $M_\nu, \nu = 1,2, \ldots, \nu_0$, distinct, as shown in Eq. (\ref{genminpolydiag}). Consequently, to test for diagonalizability of a given matrix {\sf M}, one needs to \begin{enumerate} \item[($\imath$)] find the minimal polynomial $m_{\sf M}(\lambda)$ of the matrix {\sf M}; \item[($\imath \imath$)] determine whether the polynomial $m_{\sf M}(\lambda)$ has single roots only. \end{enumerate} To calculate numerically the roots of either the characteristic or the minimal polynomial is not a valid approach since, in general, the {\em exact} roots of a polynomial cannot be specified in a finite procedure. Any algorithmic implementation must generate answers to ($\imath$) and ($\imath \imath$) in a {\em finite} number of steps. Note that even if the first step has been implemented, it is unlikely that the minimal polynomial will emerge in factorized form. Interestingly, it is possible to construct the minimal polynomial of a matrix and to check for degenerate roots in a finite number of steps. In both cases one searches for common factors of polynomials, which is achieved algorithmically by the {\em Euclidean division algorithm}. These results seem to have been put together for the first time in \cite{abate97} in order to decide algorithmically whether a given matrix is diagonalizable. As it stands, it could be applied to the non-hermitean matrix governing the motion of two coupled damped classical oscillators studied in \cite{heiss04}. In the following, a slightly simplified approach to the problem of diagonalizability is presented, adapted to matrices with \pty.
Before implementing the steps ($\imath$) and ($\imath\imath$), the Euclidean algorithm for polynomials will be presented briefly to establish notation. \subsubsection*{\sf IV.1 Euclidean division algorithm for polynomials} Given two integers $p_0 > p_1$, say, the Euclidean division algorithm outputs their greatest common divisor, denoted by $\mbox{gcd}(p_0,p_1) \in {\mathbb N}_0$, after a finite number of steps. It works as follows: first, you need to express the larger number as a $q_1$-fold multiple of the smaller number plus a remainder $p_2$, \begin{equation}\label{eucl} p_0 = q_1 p_1 + p_2 \, , \qquad q_1,p_2 \in {\mathbb N}_0\, , \quad p_1 > p_2 \geq 0 \, . \ee This relation implies that any common divisor of $p_0$ and $p_1$ divides $p_2$ as well, hence $\mbox{gcd}(p_0,p_1) = \mbox{gcd}(p_1,p_2)$. Thus, it is sufficient to search for the greatest common divisor of the pair $(p_1,p_2)$. This can be achieved by increasing each index in (\ref{eucl}) and feeding in the pair $(p_1,p_2)$ instead of $(p_0,p_1)$, etc. Since $p_0>p_1$ and $p_1>p_2$, the algorithm will stop after a finite number of iterations and produce a remainder equal to zero, $p_{k+1}=0$, say. The {\em non-zero} remainder $p_k$ generated in the penultimate step is the desired result, $\mbox{gcd}(p_0,p_1) = p_k$. If $\mbox{gcd}(p_0,p_1)= 1$, the numbers $p_0$ and $p_1$ are relatively prime, otherwise a common divisor different from one has been identified. A polynomial in the variable $\lambda$ can be written as a unique product of linear factors $(\lambda - \lambda_n)$ where the numbers $\lambda_n \in \mathbb C$ are its roots. This representation makes polynomials similar to integer numbers in some respects. The equivalent of the Euclidean algorithm, when applied to two polynomials, outputs their greatest common divisor, which is a polynomial itself. This result is based on the fact that any two polynomials $p_0(\lambda)$ and $p_1(\lambda)$, with $\deg p_0(\lambda) > \deg p_1(\lambda)$, are related by \begin{equation}\label{euclpoly} p_0(\lambda) = q_1(\lambda) p_1(\lambda) + p_2(\lambda) \, , \quad \deg p_1(\lambda) > \deg p_2(\lambda) \geq 0 \, , \ee which is the equivalent of (\ref{eucl}). The polynomials $q_1(\lambda)$, with $\deg q_1(\lambda) = (\deg p_0(\lambda) - \deg p_1(\lambda))$, and hence $p_2(\lambda)$, are found from long division. If $p_0(\lambda)$ and $p_1(\lambda)$ have a common factor, then $p_2(\lambda)$ must have this factor as well. Thus, it is sufficient to search for $\mbox{gcd}(p_1(\lambda),p_2(\lambda))$ instead of $\mbox{gcd} (p_0(\lambda),p_1(\lambda))$, and the degrees of the polynomials involved have effectively been reduced. Consequently, this procedure can be repeated all over again and it halts once a {\em vanishing} remainder has been obtained, $p_{k+1}(\lambda) = 0$, say. Then, the greatest common factor of the polynomials $p_0(\lambda)$ and $p_1(\lambda)$ is given by the last non-zero remainder polynomial, $p_k (\lambda)$, calculated in the next-to-last application of the algorithm. If $\deg p_k (\lambda) = 0$, the initial polynomials are ``relatively prime,'' otherwise their greatest common divisor is a polynomial of degree at least one. \subsubsection*{\sf IV.2 Step ($\imath$): Finding the minimal polynomial of a matrix} The function $d_{\sf M} (\lambda)$ relates the minimal polynomial $m_{\sf M}(\lambda)$ of the matrix {\sf M} to its characteristic polynomial $p_{\sf M}(\lambda)$ according to Eq. (\ref{dividecharpol}).
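For concreteness, the division algorithm for polynomials can be realized in a few lines; the following Python sketch (ours, not part of the original presentation) stores coefficients with the highest power first and uses exact rational arithmetic so that vanishing remainders are detected reliably:
\begin{verbatim}
# Sketch (ours): Euclidean division algorithm for polynomials.
# Coefficients are listed highest power first; exact rational
# arithmetic ensures that zero remainders are recognized exactly.
from fractions import Fraction

def divmod_poly(p0, p1):
    # long division: returns (q, r) with p0 = q p1 + r, deg r < deg p1
    q = [Fraction(0)] * max(len(p0) - len(p1) + 1, 1)
    r = [Fraction(c) for c in p0]
    d = [Fraction(c) for c in p1]
    while len(r) >= len(d):
        c = r[0] / d[0]
        q[len(q) - 1 - (len(r) - len(d))] = c
        r = [a - c * b for a, b in zip(r, d)] + r[len(d):]
        r = r[1:]
        while r and r[0] == 0:
            r = r[1:]
    return q, r

def gcd_poly(p0, p1):
    # iterate until the remainder vanishes (empty list = zero polynomial)
    while p1:
        _, r = divmod_poly(p0, p1)
        p0, p1 = p1, r
    return [c / Fraction(p0[0]) for c in p0]    # monic normalization

# gcd of m(x) = (x - 1)^2 (x - 2) and its derivative:
print(gcd_poly([1, -4, 5, -2], [3, -8, 5]))     # coefficients of x - 1
\end{verbatim}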
Hence, the minimal polynomial associated with ${\sf M}$ is known once the characteristic polynomial and the function $d_{\sf M} (\lambda)$ have been determined. Two steps are required to construct the function $d_{\sf M} (\lambda)$ \cite{lancaster+85}. First, you need to calculate the matrix ${\sf D}_{\sf M} = \mbox{adj } (\lambda {\sf E} - {\sf M})$, given by the transposed cofactors---or signed minors---of the matrix $(\lambda {\sf E} - {\sf M})$. The adjoint of a matrix, {\sf C} say, always exists, and it satisfies the relation % \begin{equation}\label{adjoint} % {\sf C} \, \mbox{adj } {\sf C} = (\det {\sf C}) {\sf E} \, . % \end{equation} % For $\det \sf C \neq 0$, Eq. (\ref{adjoint}) leads to the familiar expression of the inverse matrix of ${\sf C}$. According to \cite{lancaster+85} the polynomial $d_{\sf M}(\lambda)$ is given by the {\em greatest} (monic) {\em common divisor} of the $N^2$ elements of $\mbox{adj } (\lambda {\sf E} - {\sf M})$, \begin{equation} % d_{\mathsf{M}} (\lambda) = \mbox{gcd} \left\{ (\mathsf{D}_{\sf M})_{nm} | n,m=1,\ldots,N\right\} \, . % \label{dhldefinition} \end{equation} % Thus, in a second step, you need to apply the Euclidean algorithm to all pairs of entries of the matrix $\mbox{adj } (\lambda {\sf E} - {\sf M})$. Having thus identified the function $d_{\sf M} (\lambda)$, the minimal polynomial of {\sf M} follows from (\ref{dividecharpol}), % \begin{equation}\label{findminpoly} % m_{\sf M} (\lambda) = \frac{p_{\sf M} (\lambda)}{d_{\sf M}(\lambda)} \, . % \end{equation} % Now it is possible to show that, for a \pti\ matrix, the polynomials $d_{\sf M}(\lambda)$ and $m_{\sf M} (\lambda)$ have real coefficients only, just as the characteristic polynomial. Using a basis in which all elements of {\sf H} are real leads to % \begin{equation} % ({\sf D}_{\sf H} (\lambda))^* = \left( \mbox{adj}(\lambda \mathsf{E} - \mathsf{H} ) \right)^{*} = \mbox{adj} (\lambda^{*} \mathsf{E}-\mathsf{H}) = {\sf D}_{\sf H} (\lambda^*) \, . % \label{realadjoint} % \end{equation} % which states that in this basis the adjoint of {\sf H} has only real matrix elements (except for the unknown $\lambda$). Taking (\ref{dhldefinition}) into account, this leads to % \begin{equation} % \left( d_{\mathsf{H}} (\lambda) \right)^{*} =\mbox{gcd} \left\{ (\mathsf{D}_{\mathsf{H}})_{nm}(\lambda^{*})|n,m=1,\ldots,N\right\} = d_{\mathsf{H}} (\lambda^{*}) \, , % \label{dhlreaL} \end{equation} % which, in conjunction with $\left( p_{\mathsf{H}} (\lambda) \right)^{*} = p_{\mathsf{H}} (\lambda^{*})$ and Eq. (\ref{findminpoly}) implies indeed $\left( m_{\mathsf{H}}(\lambda) \right)^{*}=m_{\mathsf{H}}(\lambda^{*})$. Let us verify that this procedure outputs the correct minimal polynomials for the matrices {\sf A} and {\sf B} introduced in Section 3. The adjoint of the matrix $(\lambda {\sf E} - {\sf B})$ reads % \bea \label{cofB} % \mbox{adj } (\lambda {\sf E} - {\sf B}) &=& \mbox{adj } \left( \begin{array}{ccc} \lambda - 1 & - b & 0 \\ 0 & \lambda -1 & 0 \\ 0 & 0 & \lambda -2 \end{array} \right) \nonumber \\ &=& \left( \begin{array}{ccc} (\lambda - 1) (\lambda -2) & b(\lambda -2) & 0 \\ 0 & (\lambda - 1) (\lambda -2) & 0 \\ 0 & 0 & (\lambda - 1)^2 \end{array} \right) \, . % \eea % Due to the simplicity of the matrices involved, the Euclidean algorithm can be run ``by inspection:'' for $b\neq 0$, the only common factor among the entries in (\ref{cofB}) is given by $d_{\sf B}(\lambda) = 1$. Consequently, the minimal and the characteristic polynomial of {\sf B} coincide as stated in Eq. (\ref{Bpolys}).
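Both steps can also be delegated to a computer algebra system when the matrix has exact entries. The following Python/sympy sketch (ours, an illustration of the route via the adjugate described above, not the program of \cite{weisstein+05}) implements step ($\imath$) via the gcd of the entries of $\mbox{adj }(\lambda {\sf E} - {\sf M})$ and step ($\imath\imath$) via the greatest common divisor of $m_{\sf M}$ and $m_{\sf M}^\prime$:
\begin{verbatim}
# Sketch (ours): the two-step test in Python/sympy for matrices with
# exact entries.  Step (i): m = p / d with d = gcd of the entries of
# adj(lambda E - M); step (ii): M is diagonalizable iff gcd(m, m') is
# a constant.
import sympy as sp
from functools import reduce

lam = sp.symbols('lambda')

def is_diagonalizable(M):
    N = M.shape[0]
    p = M.charpoly(lam).as_expr()               # characteristic polynomial
    D = (lam * sp.eye(N) - M).adjugate()        # transposed cofactors
    d = reduce(sp.gcd, list(D))                 # gcd of all N^2 entries
    m = sp.quo(p, d, lam)                       # minimal polynomial
    return sp.degree(sp.gcd(m, sp.diff(m, lam)), lam) == 0

A = sp.diag(1, 1, 2)
B = sp.Matrix([[1, 1, 0], [0, 1, 0], [0, 0, 2]])   # b = 1
print(is_diagonalizable(A), is_diagonalizable(B))  # True False
\end{verbatim}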
If the parameter $b$ takes the value zero, {\sf B} turns into {\sf A}, and a non-constant greatest common divisor emerges, $d_{\sf A}(\lambda) = (\lambda - 1)$. Using Eq. (\ref{dividecharpol}), one obtains the minimal polynomial $m_{\sf A}(\lambda) = (\lambda -1)(\lambda-2)$, agreeing with Eq. (\ref{minpolA}). In \cite{abate97}, a different approach to determine the minimal polynomial of a matrix {\sf M} has been presented which, ultimately, is also based on finding the greatest common divisor of specific polynomials. According to \cite{horn+99}, any method to determine whether the matrices ${\sf M}^0 \equiv {\sf E}, {\sf M}, {\sf M}^2, \ldots , {\sf M}^{N-1}$, are linearly dependent, can be used to construct the minimal polynomial of {\sf M}; two such methods are described in this reference, and a third one can be found in \cite{horn+85}. The latter approaches have in common that they are {\em not} based on the Euclidean algorithm. For actual calculations, it is convenient to resort to a {\tt Mathematica} program \cite{weisstein+05} to find the minimal polynomial of a matrix ${\sf M}$. \subsubsection*{\sf IV.3 Step ($\imath \imath$): Identifying degenerate roots of a polynomial} % Once the minimal polynomial $m_{\sf M} (\lambda)$ has been found, one needs an algorithm to decide whether it has single roots only \cite{abate97}. Suppose a polynomial $m(\lambda)$ has an $s$-fold root $\lambda_0$, $2 \leq s \leq N$. Its factorization reads % \begin{equation}\label{sfold} % m(\lambda) = (\lambda - \lambda_0)^s \ldots \, , % \end{equation} % where the dots indicate a polynomial of degree $(N-s)$. Its derivative takes the form % \begin{equation}\label{sfoldprime} % \frac{dm}{d\lambda} = (\lambda - \lambda_0)^{s-1} \dots \, , % \end{equation} % the dots standing again for some polynomial of degree $(N-s)$. Obviously, the polynomial and its derivative are not relatively prime: $m(\lambda)$ and $m^\prime(\lambda)$ have a factor $(\lambda - \lambda_0)^{s-1}$ of order at least one in common. Thus, applying the division algorithm to the pair $(m_{\sf M}(\lambda), m_{\sf M}^{\prime} (\lambda))$ checks whether the polynomial $m_{\sf M} (\lambda)$ has the form (\ref{genminpolydiag}). If the procedure outputs $\mbox{gcd} (m_{\sf M}(\lambda), m_{\sf M}^{\prime} (\lambda)) \propto 1$, all roots of $m_{\sf M}$ are distinct and the associated matrix {\sf M} is diagonalizable, otherwise it is not. This concludes the description of an algorithm to test for diagonalizability of a given \ptc\ matrix ${\sf M}$. No fundamental changes are necessary if one studies a parameter-dependent family of matrices ${\sf M} (\varepsilon)$. However, the algorithm will output conditions polynomial in the parameter $\varepsilon$, indicating specific parameter values where diagonalizability breaks down. It is convenient to study the resulting modifications by working out some simple examples, illustrating at the same time the proposed algorithm. % \subsection*{\sf V. Examples} % \subsubsection*{\sf V.1 Matrices of dimension ($2\times 2$)} % Let us apply the algorithm described above to the matrix ${\sf H}$ in (\ref{2by2}) assuming the numbers $a$ and $b$ to be different from zero.
Its characteristic polynomial reads % \begin{equation}\label{charpolH} % p_{\sf H} (\lambda) = \det (\lambda {\sf E} - {\sf H}) = \lambda^2 - 2 (\Re a) \lambda + | a |^2 - |b|^2 \, , % \end{equation} % while its minimal polynomial is found via the function $d_{\sf H}(\lambda)$ equal to the highest common factor of the entries of the matrix % \begin{equation} \label{hcfH?} % {\sf D}_{\sf H} (\lambda) = \mbox{adj } (\lambda {\sf E} - {\sf H} ) = \left( \begin{array}{cc} \lambda - a^* & -b \\ -b^* & \lambda- a \end{array} \right) \, . % \end{equation} % By inspection, a non-constant factor only exists among the four entries of ${\sf D}_{\sf H} (\lambda)$ if $b=\Im a =0$. In this case, ${\sf H}$ turns into a real multiple of the identity, hence it is diagonalizable. This observation illustrates a fine point of the construction of the minimal polynomial: even upon identifying a non-constant function $d_{\sf H} (\lambda)$, the minimal polynomial $m_{\sf H}(\lambda)$ may still be of the form (\ref{genminpolydiag}). For $b=\Im a =0$, the characteristic polynomial turns into $(\lambda - \Re a)^2$, implying indeed $m_{\sf H} (\lambda) = (\lambda - \Re a)$. Here, the function $d_{\sf H}(\lambda) = (\lambda - \Re a)$ removes factors of the characteristic polynomial which stem from the {\em degeneracy} of an eigenvalue of ${\sf H}$. From now on, either $b$ or $\Im a$ is assumed to be different from zero, hence $d_{\sf H} (\lambda) =1$, and the minimal polynomial $m_{\sf H}(\lambda)$ is given by Eq. (\ref{charpolH}), % \begin{equation}\label{char=miniH} % m_{\sf H} (\lambda) = p_{\sf H} (\lambda) \, , % \end{equation} % which concludes the first step of the algorithm. In the second step of the algorithm, the search for multiple roots of $m_{\sf H}(\lambda) \equiv p_0(\lambda)$, one needs to determine the highest common factor of the minimal polynomial and its derivative, % \begin{equation}\label{derminpoly} % m^\prime_{\sf H} (\lambda) = 2 (\lambda - \Re a) \equiv p_1 (\lambda) \, . % \end{equation} % Applying the Euclidean algorithm to the pair $(m_{\sf H} (\lambda), m^\prime_{\sf H} (\lambda))$ means solving for a polynomial $q_1 (\lambda)= A \lambda + B$ and for $p_2 (\lambda)$ such that % \begin{equation}\label{mmprimeeuclid} % m_{\sf H} (\lambda) = (A \lambda +B) m_{\sf H}^\prime (\lambda) + p_2(\lambda) \, , \quad A, B \in \mathbb R \, . % \end{equation} % The unknowns are easily obtained as % \begin{equation}\label{euclappl1} % A = \frac{1}{2} \, , \quad B= - \frac{1}{2} \Re a \, , \quad p_2(\lambda) = (\Im a)^2 - | b |^2 \simeq \lambda^0 \, . % \end{equation} % Two possibilities now arise: either $p_2(\lambda)$ equals zero or it does not. The first case occurs if % \begin{equation}\label{condp2} % (\Im a)^2 = | b |^2 \, , % \end{equation} % and the algorithm comes to a halt. As mentioned above, the greatest common divisor of the initial polynomials is then given by the penultimate (monic) remainder polynomial, i.e. $p_1 (\lambda) = ( \lambda - \Re a)$. It follows that $m_{\sf H} (\lambda)$ and its derivative do have a common non-constant divisor, so that ${\sf H}$ cannot be brought to diagonal form. It is easy to verify that $m_{\sf H} (\lambda) = ( \lambda - \Re a)^2$ when (\ref{condp2}) holds, confirming that the minimal polynomial of ${\sf H}$ has a {\em double} root $\Re a$. Furthermore, a simple calculation shows that the matrix $\sf H$ has indeed a {\em single} eigenstate only if the relation $\Im a = \pm |b|$ holds.
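The condition just obtained can also be cross-checked symbolically (ours, not part of the original argument): a monic polynomial has a multiple root exactly where its discriminant vanishes, and for (\ref{charpolH}) with real $b$ (no loss of generality here, since only $|b|$ enters) a two-line sympy computation reproduces (\ref{condp2}):
\begin{verbatim}
# Sketch (ours): reproduce the condition (Im a)^2 = |b|^2 via the
# discriminant; b is taken real, which costs no generality here.
import sympy as sp

lam, ar, ai, b = sp.symbols('lambda a_r a_i b', real=True)
m = lam**2 - 2*ar*lam + (ar**2 + ai**2 - b**2)   # characteristic polynomial
print(sp.factor(sp.discriminant(m, lam)))        # vanishes iff a_i = +- b
\end{verbatim}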
Note that this result covers the example of a non-diagonalizable ${\sf H}$ of Section 2, where $a=i$ and $b=\pm 1$ had been considered. Finally, if $a$ and $b$ do not satisfy (\ref{condp2}), the remainder polynomial $p_2(\lambda)$ does not vanish. Since $p_2(\lambda)$ is a constant, the algorithm is bound to stop after the next iteration. Determine $q_2(\lambda)=(C\lambda + D)$ and $p_3 (\lambda)$ such that % \begin{equation}\label{mmprimeeuclid2} % (\lambda - \Re a) = (C \lambda + D) ((\Im a)^2 - | b |^2) + p_3 (\lambda) \, , \quad C, D \in \mathbb R \, , % \end{equation} % holds, i.e., % \begin{equation}\label{euclappl2} % C = \frac{1}{(\Im a)^2 - | b |^2} \, , \quad D = \frac{- \Re a}{(\Im a)^2 - | b |^2} \, , \quad p_3(\lambda) = 0 \, . % \end{equation} % The algorithm halts indeed due to $p_3(\lambda) = 0$, and the penultimate remainder is $p_2 (\lambda) \propto 1$, indicating that the minimal polynomial does not have any degenerate roots, and ${\sf H}$ is diagonalizable. In summary, the matrix {\sf H} is diagonalizable for all parameter values except when $\Im a = \pm |b|$. In this case the algebraic multiplicity of its eigenvalue is two, while its geometric multiplicity is one; otherwise, each eigenvalue has algebraic and geometric multiplicity one. It is important to note that it was not necessary at any stage to determine the eigenvalues of ${\sf H}$. % \subsubsection*{\sf V.2 Matrices of dimension ($4 \times 4$)} % It is instructive to apply the algorithm to a \pti\ matrix of dimension $4$, % \begin{equation}\label{4by4} % {\sf H} = \left( \begin{array}{cccc} i \varepsilon & s & 0 & 0\\ s & -i \varepsilon & \delta & 0 \\ 0 & \delta & i \varepsilon & s \\ 0 & 0 & s & -i \varepsilon \\ \end{array} \right) \, , \quad s,\delta > 0 \, , % \end{equation} % which depends on a perturbation parameter $\varepsilon$. As before, the action of ${\sf T}$ on a matrix effects complex conjugation of its entries, while ${\sf P}$ is now given by a $(4 \times 4)$ matrix with entries equal to one along its minor diagonal and zero elsewhere. Eq. (\ref{ptinv}) is then readily verified. The characteristic polynomial of ${\sf H}$ is given by % \begin{equation}\label{charpoldim4} % p_{\sf H} (\lambda) = \lambda^4 + \alpha_\varepsilon \lambda^2 + \beta_\varepsilon \, , % \end{equation} % with $\alpha_\varepsilon = 2 \varepsilon^2 - 2 s^2 - \delta^2$ and $ \beta_\varepsilon = \varepsilon^4 - (2s^2 + \delta^2)\varepsilon^2 + s^4$. The minimal polynomial of ${\sf H}$ coincides with the characteristic one, $m_{\sf H} (\lambda) \equiv p_{\sf H} (\lambda)$, since the only common factor of the matrix elements of ${\sf D}_{\sf H}$ is equal to one, $d_{\sf H} (\lambda) = 1$. To see this, it is sufficient to calculate the two matrix elements $[{\sf D}_{\sf H}]_{14} = -s^2 \delta$, and $[{\sf D}_{\sf H}]_{23} = - (\lambda^2 + \varepsilon^2) \delta$, for example. Whatever the value of $\varepsilon$, for nonzero $s$ and $\delta$ the only common divisor is one, so that $d_{\sf H} (\lambda) =1$. Let us now determine $\mbox{gcd} (m_{\sf H} (\lambda), m^\prime_{\sf H} (\lambda ))$ by the Euclidean algorithm, where $m_{\sf H} (\lambda) \equiv p_0(\lambda) $ is given in Eq. (\ref{charpoldim4}) and $m^\prime_{\sf H} (\lambda ) \equiv p_1(\lambda) = 4 \lambda^3 + 2\alpha_\varepsilon\lambda$. Comparing powers of $\lambda$ in Eq. (\ref{mmprimeeuclid}) with the polynomials just defined, one obtains % \begin{equation}\label{edastep2} % A= \frac{1}{4}\, , \quad B= 0\, , \quad p_2(\lambda) = \frac{\alpha_\varepsilon}{2}\lambda^2 + \beta_\varepsilon \, .
% \end{equation} % The algorithm only stops here if $\alpha_\varepsilon = \beta_\varepsilon = 0$, which would require $\delta = 0$, contrary to $\delta$ being different from zero. If $\alpha_\varepsilon=0$ is assumed, ${\sf H}$ is diagonalizable for all $\varepsilon$ since $\beta_\varepsilon$ cannot take the value zero, and the algorithm stops after the next step, outputting $\beta_\varepsilon \propto 1$ as greatest common factor. Assume now $\alpha_\varepsilon \neq 0$ and apply the division algorithm to the pair $(p_1(\lambda),p_2 (\lambda))$. The unknown constants in $q_2(\lambda) = C\lambda + D$ and the remainder polynomial $p_3(\lambda)$ are found to be % \begin{equation}\label{edastep3} % C = \frac{8}{\alpha_\varepsilon}, \quad D = 0 \, , \quad p_3 (\lambda) = \frac{2}{\alpha_\varepsilon} (\alpha_\varepsilon^2 - 4 \beta_\varepsilon)\lambda \, . % \end{equation} % For the algorithm to stop, one must have $p_3(\lambda) = 0$. This, however, does not happen whatever the value of $\varepsilon$ since $\alpha_\varepsilon^2 - 4 \beta_\varepsilon = \delta^2 (4s^2 + \delta^2) >0$. The next iteration of the algorithm leads to % \begin{equation}\label{edastep4} % E = \frac{\alpha^2_\varepsilon}{4(\alpha_\varepsilon^2 - 4 \beta_\varepsilon)}, \quad F = 0 \, , \quad p_4 (\lambda) = \beta_\varepsilon \, , % \end{equation} % where $q_3(\lambda) = E\lambda + F$. Since the remainder polynomial $p_4(\lambda)$ has degree zero in $\lambda$, the condition for the minimal polynomial to have multiple roots is finally given by % \begin{equation}\label{remaindercond} % \beta_\varepsilon = \varepsilon^4 - (2s^2 + \delta^2)\varepsilon^2 + s^4 = 0 \, . % \end{equation} % This fourth-order polynomial in $\varepsilon$ has roots % \begin{equation}\label{roots} % \pm \varepsilon_\pm = \pm \sqrt{ \sigma \pm \Delta \sigma} \, , \quad \sigma = s^2 +\frac{\delta^2}{2} > 0 \, , \quad \Delta \sigma = \sqrt{\sigma^2 -s^4} \in (0,\sigma) \, . % \end{equation} % For each of these four real values of the parameter $\varepsilon$, the matrix ${\sf H}$ is {\em not} diagonalizable. In other words, the algebraic and geometric multiplicity of the eigenvalues of ${\sf H}$ do not coincide, and its eigenstates do not span the space ${\mathbb C}^4$. It is important to note that the eigenvalues of the matrix $\sf H$ are not known at this stage. \subsubsection*{\sf V.3 The global structure of ${\sf H}$} \label{sec:53SfGlobalStructure} Let us now determine the global properties of ${\sf H}$. This is easily done upon combining (\ref{roots}) with the characteristic polynomial (\ref{charpoldim4}) in its factorized form, % \begin{equation}\label{charpoldim4factorized} % p_{\sf H} (\lambda) = (\lambda - \lambda_{+})(\lambda + \lambda_{+}) (\lambda - \lambda_{-})(\lambda + \lambda_{-}) \, % \end{equation} with roots % \begin{equation}\label{fourevs} % \pm \lambda_{\pm} = \pm \sqrt{\sigma \pm \Delta \sigma - \varepsilon^2} \, , % \end{equation} % expressed directly in terms of $\sigma$ and $\Delta \sigma$. To graphically represent the parameter space of {\sf H} and its properties, it is convenient to eliminate the parameter $s$ by the scaling $\varepsilon \to s \varepsilon$ and $\delta \to s \delta$. This effectively amounts to sending $s \to 1$, and Eq. (\ref{remaindercond}) simplifies to $\varepsilon^4 - 2(1 + \delta^2/2)\varepsilon^2 + 1 = 0$, plotted in Fig. 1.
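As an independent check (ours, not part of the original argument), condition (\ref{remaindercond}) also follows from the discriminant of the characteristic polynomial: for a quartic of the form $\lambda^4 + \alpha\lambda^2 + \beta$ the discriminant equals $16\beta(\alpha^2 - 4\beta)^2$, and since $\alpha_\varepsilon^2 - 4\beta_\varepsilon = \delta^2(4s^2 + \delta^2) > 0$, multiple roots occur precisely where $\beta_\varepsilon = 0$. A short sympy verification:
\begin{verbatim}
# Sketch (ours): verify the characteristic polynomial and the condition
# beta_eps = 0 for the 4 x 4 matrix, using the discriminant identity
# disc(lam^4 + alpha lam^2 + beta) = 16 beta (alpha^2 - 4 beta)^2.
import sympy as sp

lam, eps, s, d = sp.symbols('lambda varepsilon s delta', real=True)
H = sp.Matrix([[sp.I*eps, s, 0, 0],
               [s, -sp.I*eps, d, 0],
               [0, d, sp.I*eps, s],
               [0, 0, s, -sp.I*eps]])
p = H.charpoly(lam).as_expr()
alpha = 2*eps**2 - 2*s**2 - d**2
beta  = eps**4 - (2*s**2 + d**2)*eps**2 + s**4
print(sp.expand(p - (lam**4 + alpha*lam**2 + beta)))     # 0
disc = sp.discriminant(p, lam)
print(sp.expand(disc - 16*beta*(alpha**2 - 4*beta)**2))  # 0
\end{verbatim}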
\begin{figure}[t] \begin{center} \includegraphics{paramspace.eps} \end{center} \caption{Parameter space of the matrix {\sf H} defined in (\ref{4by4}); region I: four real eigenvalues; region II: two real and one pair of complex-conjugate eigenvalues; region III: two pairs of complex-conjugate eigenvalues; the matrix {\sf H} is not diagonalizable on the full lines separating regions I, II, and III.} \label{parameterspace} \end{figure} Imagine moving along the dashed vertical line, determined by a fixed value $\delta>0$ and variable $\varepsilon$. For $\varepsilon = 0$, the matrix ${\sf H}$ is hermitean, hence it has four distinct real eigenvalues and four orthonormal eigenstates. In region I, where $0 < |\varepsilon| < \sqrt{\sigma - \Delta \sigma}$, Eq. (\ref{fourevs}) says that the eigenvalues remain real and distinct; a complete, not necessarily orthonormal set of four eigenstates continues to exist since (\ref{remaindercond}) does not hold. When $\varepsilon = \pm\sqrt{\sigma - \Delta \sigma}$, two eigenvalues coincide numerically, and the corresponding two eigenstates merge into a single one, leaving ${\sf H}$ with an incomplete basis. Then, for $\sqrt{\sigma - \Delta \sigma} < |\varepsilon| < \sqrt{\sigma + \Delta \sigma}$, in region II, two real eigenvalues and a pair of complex-conjugate ones exist, with $\sf H$ being diagonalizable throughout since (\ref{remaindercond}) does not hold. At $\varepsilon = \pm\sqrt{\sigma + \Delta \sigma}$, the remaining two real eigenvalues degenerate to a single one, leaving ${\sf H}$ non-diagonalizable again with only three eigenstates. Finally, in region III, defined by $\sqrt{\sigma + \Delta \sigma} < |\varepsilon|$, the matrix $\sf H$ is diagonalizable and it comes with two pairs of complex-conjugate eigenvalues. For $\delta=0$, the matrix ${\sf H}$ in (\ref{4by4}) decouples into a pair of identical two-dimensional matrices. The left boundary of region I sees the real eigenvalues of ${\sf H}$ degenerate pairwise, which is consistent with the observations made earlier. At $\varepsilon =\pm 1$, only two eigenstates exist while all four eigenvalues coincide numerically. Beyond this value of $\varepsilon$, there are two pairs of identical complex-conjugate eigenvalues, and the associated basis is complete. For \ptc\ systems described by matrices of higher dimensions it is, in general, not possible to find the roots of the characteristic polynomial. Nevertheless, a discussion of the parameter space can still be given: to this end one needs to detect the number of real and complex eigenvalues for each set of parameter values; an algorithm capable of doing this will be presented in \cite{weigert05b}. \subsection*{\sf VI. Discussion and Outlook}\label{sec:Discussion-and-Outlook} An algorithm has been presented which allows one to determine whether a given PT-invariant matrix ${\sf M}$ is diagonalizable. To do so, it is not necessary to determine the roots of its characteristic polynomial. In terms of Linear Algebra, the algorithm decides whether the given matrix is {\em similar} \cite{lancaster+85} to a diagonal matrix or to a matrix containing at least one Jordan block of dimension two or more. Somewhat surprisingly, this question seems to have been addressed only recently from an algorithmic point of view. It seems worthwhile to point out that the test for multiple roots of a polynomial can, in fact, be used without any change to determine whether the eigenvalues of a given hermitean matrix are degenerate or not.
The present author is not aware that this observation has been made before. When applied to a family of PT-symmetric matrices, the algorithm outputs polynomial conditions on the perturbation parameter. These conditions are satisfied for sets of matrices, none of which is diagonalizable, and they divide the full parameter space into regions of diagonalizable matrices with qualitatively different spectra. When combined with an algorithm to identify the number of real and complex eigenvalues of {\sf M}, a complete picture of the system's properties in the entire parameter space can be established. Many PT-symmetric systems---including the first one studied from this perspective \cite{bender+98}---have been defined on Hilbert spaces with countably infinite dimension. Various concepts such as eigenvalues and eigenstates, or the difference between algebraic and geometric multiplicities of degenerate eigenvalues continue to exist in the more general case \cite{kato84}. In spite of a close similarity of hermitean operators in finite- and infinite-dimensional spaces, many concepts of the matrix case are not easily carried over to the more general situation. For any \emph{algorithm}, finiteness is a crucial feature: the number of steps required to identify a potential common factor of two polynomials is always finite, no matter what their degree. It will be interesting to see whether algorithmic tests for diagonalizability of operators acting on spaces with countably infinite dimension can be found.
\section{Introduction} Let $\mathbf{K}^*(X) = \mathbf{K}^0(X) \oplus \mathbf{K}^1(X)$ denote the complex $K$-theory of a space $X$. I am not sure who first proposed that when $X$ and $\hat{X}$ are a mirror pair of compact Calabi-Yau $3$-folds one should have isomorphisms \begin{equation} \label{eq:111} \mathbf{K}^0(X) \cong \mathbf{K}^1(\hat{X}) \text{ and } \mathbf{K}^1(X) \cong \mathbf{K}^0(\hat{X}) \end{equation} --- it is an instance of the string-theoretical idea \cite{MM,Moore,Witten} that ``$D$-branes have charges in $K$-theory.'' Rationally, \eqref{eq:111} is a consequence of the usual Hodge-diamond flip, but the question of whether it holds becomes interesting if $\mathbf{K}^*(X)$ or $\mathbf{K}^*(\hat{X})$ has torsion, or if one and not the other group is known to be torsion-free. It might be interesting more generally if one searches for very natural isomorphisms; more on that in \S\ref{sec:three}. \medskip I believe that \eqref{eq:111} is an open problem. Batyrev and Kreuzer in \cite{BK} gave a case-by-case verification for the half-billion mirror pairs associated with 4d reflexive polytopes, actually obtaining isomorphisms in integral cohomology \begin{equation} \label{eq:112} \mathrm{tors}(H^2(X,\mathbf{Z})) \cong \mathrm{tors}(H^3(\hat{X},\mathbf{Z})) \qquad \mathrm{tors}(H^4(X,\mathbf{Z})) \cong \mathrm{tors}(H^5(\hat{X},\mathbf{Z})) \end{equation} and deducing \eqref{eq:111} from the Atiyah-Hirzebruch spectral sequence. But Addington \cite{Addington} has given examples of derived equivalent $3$-folds $\hat{X}$ and $\hat{X}'$ where $H^3(\hat{X},\mathbf{Z})$ and $H^3(\hat{X}',\mathbf{Z})$ have different torsion subgroups, suggesting that \eqref{eq:112} should not hold in general. \medskip In \S\ref{sec:two}, we will give an explicit example, by verifying \eqref{eq:111} in one new case: a $T$-dual pair of flat $3$-folds (for which homological mirror symmetry is essentially known after \cite{Abouzaid}) \[ X:= X_{1,5} \qquad \hat{X} := X_{2,12} \] with $\mathbf{K}^0(X) \cong \mathbf{K}^1(\hat{X})$ but $\mathrm{tors}(H^2(X,\mathbf{Z})) = (\mathbf{Z}/4)^3$ and $\mathrm{tors}(H^3(\hat{X},\mathbf{Z})) = \mathbf{Z}/4$. \medskip In \S\ref{sec:three} we will discuss conjectures --- some of mine and one of Ganatra's --- about the $K$-theory of Fukaya categories. \section{$3$-folds} \label{sec:two} \subsection{The flat $3$-manifold $B$.} Let $B$ denote the quotient of $\mathbf{R}^3/\mathbf{Z}^3$ by the action of $\mathbf{Z}/2 \times \mathbf{Z}/2$ whose three nontrivial operators are \begin{equation} \label{eq:Fed-Sch} \begin{array}{ccc} \alpha(x_1,x_2,x_3)& := & (x_1 + \frac{1}{2},-x_2 + \frac{1}{2},-x_3) \\ \beta(x_1,x_2,x_3)& :=& (-x_1+\frac{1}{2},-x_2,x_3+\frac{1}{2}) \\ \gamma(x_1,x_2,x_3)& :=& (-x_1,x_2+\frac{1}{2},-x_3 + \frac{1}{2}) \end{array} \end{equation} It is the $3$-manifold studied in \cite{HW}. We regard it as having a basepoint at the image of $0 \in \mathbf{R}^3$, and as having a flat metric given by the usual dot product on $\mathbf{R}^3$. The fundamental group of $B$ is one of the Fedorov-Schoenflies crystallographic groups, with presentation \cite[Th.
3.5.5]{Wolf} \begin{equation} \label{eq:Wolf} \begin{array}{ccc} \alpha^2 = t_1 & \alpha t_2 = t_2^{-1} \alpha & \alpha t_3 = t_3^{-1} \alpha \\ \beta t_1 = t_1^{-1} \beta & \beta t_2 = t_2^{-1} \beta & \beta^2 = t_3 \\ \gamma t_1 = t_1^{-1} \gamma & \gamma^2 = t_2 & \gamma t_3 = t_3^{-1} \gamma \end{array} \end{equation} and \[ [t_1,t_2] = [t_2,t_3] = [t_3,t_1] = \gamma \beta \alpha = 1 \] The $t_1,t_2,t_3$ are translation operators on $\mathbf{R}^3$. Since $B$ is flat, its holonomy defines a representation \begin{equation} \label{eq:holonomy} \pi_1(B) \to \mathrm{SO}(3) \end{equation} Its image is isomorphic to $\mathbf{Z}/2 \times \mathbf{Z}/2$ (the group of diagonal matrices in $\mathrm{SO}(3)$). Abelianizing \eqref{eq:Wolf} gives $H_1(B) = \mathbf{Z}/4 \oplus \mathbf{Z}/4$, and since $\alpha,\beta,\gamma$ are orientation-preserving we have by Poincar\'e duality \begin{equation} \label{eq:HiB} H_0(B) = \mathbf{Z} \qquad H_1(B) = \mathbf{Z}/4 \oplus \mathbf{Z}/4 \qquad H_2(B) = 0 \qquad H_3(B) = \mathbf{Z} \end{equation} \subsection{Tri-elliptic $3$-fold $X_{0,4}$} \label{subsec:X04} Let $\tau_1,\tau_2,\tau_3$ be complex numbers with positive imaginary part, and put \begin{equation} E_i := \mathbf{C}/(\mathbf{Z} + \tau_i \mathbf{Z}) \end{equation} Let $X_{0,4}$ be the quotient of $E_1 \times E_2 \times E_3$ by the complexification of the operators \eqref{eq:Fed-Sch}, i.e. \begin{equation} \label{eq:Fed-Sch-complex} \begin{array}{ccc} \alpha(z_1,z_2,z_3)& := & (z_1 + \frac{1}{2},-z_2 + \frac{1}{2},-z_3) \\ \beta(z_1,z_2,z_3)& :=& (-z_1+\frac{1}{2},-z_2,z_3+\frac{1}{2}) \\ \gamma(z_1,z_2,z_3)& :=& (-z_1,z_2+\frac{1}{2},-z_3 + \frac{1}{2}) \end{array} \end{equation} (We follow \cite{DW} for the name). The projections $\mathbf{C} \to \mathbf{R}:x_i + \tau_i y_i \mapsto x_i$ descend to a map \begin{equation} \label{eq:X04B} X_{0,4} \to B \end{equation} which is split by the subset cut out by $y_1 = y_2 = y_3 = 0$. The translation action of \begin{equation} \label{eq:V} V:= \mathbf{R}\tau_1 \times \mathbf{R}\tau_2 \times \mathbf{R}\tau_3 \end{equation} on $\mathbf{C} \times \mathbf{C} \times \mathbf{C}$ descends to an action on $E_1 \times E_2 \times E_3$ and on $X_{0,4}$. The action preserves the fibers of \eqref{eq:X04B}, and determines an identification of the fiber over $b$ with the quotient of $V$ by a lattice $V_{\mathbf{Z},b} \subset V$. We will denote the lattice over the basepoint by $M_{0,4}$, i.e. \begin{equation} \label{eq:M04} M_{0,4} := V_{\mathbf{Z},0} = \mathbf{Z}\tau_1 \times \mathbf{Z}\tau_2 \times \mathbf{Z} \tau_3 \end{equation} The action of $\pi_1(B)$ on $V$ and on $M_{0,4}$ is through the holonomy $\mathbf{Z}/2 \times \mathbf{Z}/2$ \eqref{eq:holonomy}. \subsection{More tri-elliptic $3$-folds} On each $E_i$ we may define a biholomorphic action of $\mathbf{Z}/2 \times \mathbf{Z}/2 \times \mathbf{Z}/2$: the three generators act by \[ z \mapsto z+1/2 \qquad z \mapsto z + \tau_i/2 \qquad z \mapsto -z \] Altogether this defines an action of $(\mathbf{Z}/2)^{\times 9}$ on $E_1 \times E_2 \times E_3$. In \cite{DW}, Donagi and Wendland classified the subgroups that act freely. The quotient $X = (E_1 \times E_2 \times E_3) /G$ must factor as a product of a surface and an elliptic curve, or else be isomorphic to one of the foursome \begin{equation} \label{eq:DW-names} X_{0,4} \qquad X_{1,5} \qquad X_{1,11} \qquad X_{2,12} \end{equation} where $X_{0,4}$ is as in \S\ref{subsec:X04} and the other three are defined below.
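Before proceeding, we record a mechanical check of the computation of $H_1(B)$ in \eqref{eq:HiB} (the encoding of the relations below is ours): abelianizing \eqref{eq:Wolf} turns each relation into an integer vector in the generators $(\alpha, \beta, \gamma, t_1, t_2, t_3)$, and the Smith normal form of the resulting relation matrix has invariant factors $(1,1,1,1,4,4)$, i.e. cokernel $\mathbf{Z}/4 \oplus \mathbf{Z}/4$. A sympy sketch:
\begin{verbatim}
# Sketch (ours): check H_1(B) = Z/4 + Z/4 by abelianizing the
# presentation.  Columns: alpha, beta, gamma, t1, t2, t3; each row is
# one abelianized relation (duplicates such as 2 t2 = 0 listed once).
import sympy as sp
from sympy.matrices.normalforms import smith_normal_form

R = sp.Matrix([
    [2, 0, 0, -1,  0,  0],   # alpha^2 = t1
    [0, 2, 0,  0,  0, -1],   # beta^2  = t3
    [0, 0, 2,  0, -1,  0],   # gamma^2 = t2
    [0, 0, 0,  2,  0,  0],   # beta t1 beta^{-1} = t1^{-1}
    [0, 0, 0,  0,  2,  0],   # alpha t2 alpha^{-1} = t2^{-1}
    [0, 0, 0,  0,  0,  2],   # alpha t3 alpha^{-1} = t3^{-1}
    [1, 1, 1,  0,  0,  0],   # gamma beta alpha = 1
])
# invariant factors 1, 1, 1, 1, 4, 4, i.e. cokernel Z/4 + Z/4:
print(smith_normal_form(R, domain=sp.ZZ))
\end{verbatim}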
These $3$-folds are part of a more general classification problem considered in \cite{DW}, which is reflected in the weird names. They also appear in \cite{Lange}, where they are called ``hyperelliptic $3$-folds of type (2,2).'' Some older appearances are given in \cite{DonagiSharpe}. Each of the $3$-folds \eqref{eq:DW-names} is aspherical, and fits into a fiber sequence \begin{equation} \label{eq:T-here} V/M_{I,J} \to X_{I,J} \to B \end{equation} where $V$ is as in \eqref{eq:V} and $M_{I,J}$ is a lattice in $V$. \subsection{Definition} Let $X_{1,5}$ denote the quotient of $X_{0,4}$ by the involution \begin{equation} \label{eq:X15quot} (z_1,z_2,z_3) \mapsto \left(z_1 + \frac{\tau_1}{2},z_2 + \frac{\tau_2}{2},z_3 + \frac{\tau_3}{2}\right) \end{equation} Then \[ M_{1,5} = M_{0,4} + \mathbf{Z}\left(\tau_1/2, \tau_2/2, \tau_3/2\right) \] \subsection{Definition} Let $X_{1,11}$ denote the quotient of $X_{0,4}$ by the involution \begin{equation} \label{eq:X111quot} (z_1,z_2,z_3) \mapsto \left(z_1 + \frac{\tau_1}{2},z_2 + \frac{\tau_2}{2},z_3\right) \end{equation} Then \[ M_{1,11} = M_{0,4} + \mathbf{Z}\left(\tau_1/2,\tau_2/2,0\right) \] \subsection{Definition} Let $X_{2,12}$ denote the quotient of $X_{0,4}$ by the $\mathbf{Z}/2 \times \mathbf{Z}/2$ group generated by the pair of involutions \begin{equation} \label{eq:X212quot} (z_1,z_2,z_3) \mapsto \left(z_1 + \frac{\tau_1}{2},z_2 + \frac{\tau_2}{2},z_3\right) \text{ and }(z_1,z_2,z_3) \mapsto \left(z_1 ,z_2 + \frac{\tau_2}{2},z_3 + \frac{\tau_3}{2}\right) \end{equation} Then \[ M_{2,12} = M_{0,4} + \mathbf{Z}\left\{(\tau_1/2,\tau_2/2,0),(0,\tau_2/2,\tau_3/2)\right\} \] \subsection{$T$-duality} The $T$-dual fibration, in the sense of Strominger-Yau-Zaslow, to $X_{I,J} \to B$ is the space of pairs $(b,L)$ where $b \in B$ and $L \in H^1(V/V_{\mathbf{Z},b},\mathrm{U}(1))$ is the isomorphism class of a rank one unitary local system on the fiber above $b$. Let us denote it by $\hat{X}_{I,J}$. It is another split torus fibration \begin{equation} \label{eq:T-hat-here} V^*/\hat{M}_{I,J} \to \hat{X}_{I,J} \to B \end{equation} where $V^* := \mathrm{Hom}(V,\mathfrak{u}(1))$ and $\hat{M} \subset V^*$ is the dual lattice to $M$. As such $\hat{X}_{I,J}$ is determined up to homotopy equivalence by the dual $\pi_1(B)$-module (equivalently, the dual $\mathbf{Z}/2 \times \mathbf{Z}/2$-module) to $M_{I,J}$. $M_{0,4}$ and $M_{1,11}$ are self-dual, while $M_{1,5}$ and $M_{2,12}$ are dual to each other (a concrete check is sketched below), and therefore we have homotopy equivalences \begin{equation} \label{eq:T-dual-IJ} \hat{X}_{0,4} \simeq X_{0,4} \qquad \hat{X}_{1,5} \simeq X_{2,12}, \qquad \hat{X}_{1,11} \simeq X_{1,11} \end{equation} The homotopy equivalences \eqref{eq:T-dual-IJ} can be taken to be natural diffeomorphisms, if $X_{I,J}$ has parameters $\tau_1,\tau_2,\tau_3$ and we take the corresponding parameters for $\hat{X}_{I,J}$ to be the purely imaginary numbers $(i|\tau_1|^{-1},i|\tau_2|^{-1}, i|\tau_3|^{-1})$.
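In the coordinates $\tau_i \mapsto e_i$ on $V$, the lattice $M_{0,4}$ becomes the cubic lattice $\mathbf{Z}^3$, while $M_{1,5}$ and $M_{2,12}$ become the body-centered and face-centered cubic lattices (up to rescaling), so the claim that $M_{1,5}$ and $M_{2,12}$ are dual is the classical duality between those two lattices. Here is a small numerical check; it is a sketch, with basis matrices and the harmless overall rescaling by $2$ being my own bookkeeping choices.

\begin{verbatim}
import numpy as np

# Lattice bases (rows), in the coordinates tau_i -> e_i.
M15  = np.array([[1, 0, 0], [0, 1, 0], [0.5, 0.5, 0.5]])   # body-centered
M212 = np.array([[0.5, 0.5, 0], [0, 0.5, 0.5], [0, 0, 1]])  # face-centered

# Rows of inv(B).T span the dual lattice of the lattice with basis B.
dual_of_M15 = np.linalg.inv(M15).T

# Two bases span the same lattice iff the change of basis between them
# is an integer matrix of determinant +-1.
C = dual_of_M15 @ np.linalg.inv(2 * M212)
assert np.allclose(C, np.round(C)) and abs(round(np.linalg.det(C))) == 1
print("dual(M_{1,5}) = 2 * M_{2,12}")
\end{verbatim}

The factor $2$ is an overall rescaling of $V^*$, which affects neither the $\mathbf{Z}/2 \times \mathbf{Z}/2$-module structure nor the homotopy type of the dual fibration; the diagonal sign actions visibly preserve all of the lattices above.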
\subsection{$K$-theory} Let $X = X_{I,J}$ and $\hat{X} = X_{I',J'}$ be a dual pair of the $3$-folds. We wish to prove \eqref{eq:111}, that $\mathbf{K}^0(X) \cong \mathbf{K}^1(\hat{X})$ and that $\mathbf{K}^1(X) \cong \mathbf{K}^0(\hat{X})$ --- we will do so without actually computing $\mathbf{K}^*(X)$ and $\mathbf{K}^*(\hat{X})$; indeed I do not quite know what the $K$-theory of these manifolds is (see \S\ref{subsec:HXIJ}--\ref{subsec:AH-fil}). Let $\mathbf{K}$ denote the complex $K$-theory spectrum. It is an $E_{\infty}$-ring spectrum. We write $\mathrm{Mod}(\mathbf{K})$ for the symmetric monoidal $\infty$-category of module spectra over $\mathbf{K}$, and we will study sheaves of $\mathbf{K}$-module spectra on $X$, $\hat{X}$ and related spaces. These are stable $\infty$-categories --- for an $\infty$-category we will write $\mathrm{Maps}(c,d)$ for the space of maps and $[c,d]$ for the set of homotopy classes of maps between two objects. We write $\Sigma$ for the suspension functor in a stable $\infty$-category. If $U$ is a manifold we write $\mathbf{K}_U$ for the constant sheaf of $\mathbf{K}$-module spectra on $U$, and $\omega_U$ for the orientation sheaf. \subsection{Lemma} \label{lem:spinc-structures} Each of the spaces $B, X, \hat{X}, X \times_B \hat{X}$ is $\mathbf{K}$-orientable --- that is, there are isomorphisms of sheaves \begin{equation} \label{eq:spin-on-these} \Sigma^{-3} \mathbf{K}_B \cong \omega_B \qquad \Sigma^{-6} \mathbf{K}_X \cong \omega_X \qquad \Sigma^{-6} \mathbf{K}_{\hat{X}} \cong \omega_{\hat{X}} \qquad \Sigma^{-9} \mathbf{K}_{X \times_B \hat{X}} \cong \omega_{X \times_B \hat{X}} \end{equation} \begin{proof} Any $\mathrm{Spin}^c$-structure on a manifold induces a $\mathbf{K}$-orientation, and one way to endow an oriented flat manifold with a $\mathrm{Spin}^c$ structure is to lift its holonomy representation \begin{equation} \label{eq:spin-n} \pi_1 \to \mathrm{SO}(n) \end{equation} along the natural homomorphism $\mathrm{Spin}^c(n) \to \mathrm{SO}(n)$. Each of $B$, $X$, $\hat{X}$ and $X \times_B \hat{X}$ fibers over $B$, and the holonomy around any loop in those fibers is trivial, so \eqref{eq:spin-n} factors through $\pi_1(B) \to \mathrm{SO}(3)$ \eqref{eq:holonomy}. The equations \eqref{eq:Wolf} can be solved in $\mathrm{Spin}^c(3)$; for instance we may solve them in $\mathrm{Spin}(3)$ by taking $\alpha,\beta,\gamma$ to be the unit quaternions $i,j,k$ (so that $t_1 = t_2 = t_3 = -1$). Then the lift of \eqref{eq:spin-n} can be taken to be the composite of $\pi_1 \to \pi_1(B) \to \mathrm{Spin}(3)$ with any lift of $\mathrm{Spin}(3) \to \mathrm{SO}(3) \to \mathrm{SO}(n)$ to $\mathrm{Spin}^c(n)$. \end{proof} \subsection{Local-on-$B$ identifications of $K$-theory} Write $\mathbf{K}[U]$ for the $K$-homology spectrum and $\mathbf{K}^U$ for the $K$-cohomology spectrum of a space $U$ --- that is, $\mathbf{K}[U]$ is the smash product of $\mathbf{K}$ with the suspension spectrum of $U$ and $\mathbf{K}^U$ is the internal mapping object from $\mathbf{K}[U]$ to $\mathbf{K}$. They are related to the $K$-homology and $K$-cohomology groups of $U$ by \[ [\Sigma^i \mathbf{K},\mathbf{K}[U]] \cong \mathbf{K}_i(U) \] and \[ \mathbf{K}^i(U) \cong [\mathbf{K}[U],\Sigma^i \mathbf{K}] \cong [\Sigma^{-i} \mathbf{K}, \mathbf{K}^U] \] In terms of sheaf operations, we have \[ \Gamma_c(\omega_U) = \mathbf{K}[U] \qquad \Gamma(\mathbf{K}_U) = \mathbf{K}^U \] We consider the fiber square \begin{equation} \label{eq:this-square} \xymatrix{ X \times_B \hat{X} \ar[r]^-g \ar[d]_-h & X \ar[d]^-{p} \\ \hat{X} \ar[r]_-{q} & B } \end{equation} Factoring the maps $X \to \mathit{pt}$ and $\hat{X} \to \mathit{pt}$ through $B$ gives canonical isomorphisms \begin{equation} \label{eq:from-local-on-B} \mathbf{K}[X] \cong \Gamma_c(B,p_! \omega_X) = \Gamma(B,p_!\omega_X) \qquad \mathbf{K}^{\hat{X}} \cong \Gamma(B,q_* \mathbf{K}_{\hat{X}}) \end{equation} where we replace $\Gamma_c$ with $\Gamma$ using the compactness of $B$. The $\mathbf{K}$-orientability of $X$ gives an identification $\mathbf{K}[X] \cong \Sigma^{-6} \mathbf{K}^{X}$.
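To spell out the bookkeeping (my own interpolation, recording how these identifications combine): an isomorphism $\Sigma^{-3} p_! \omega_X \cong q_* \mathbf{K}_{\hat{X}}$ would, after applying $\Gamma(B,-)$ and \eqref{eq:from-local-on-B}, give $\Sigma^{-3}\mathbf{K}[X] \cong \mathbf{K}^{\hat{X}}$, and hence for every $i$
\[
\mathbf{K}^{1-i}(X) \cong \pi_{i+3}(\mathbf{K}[X]) \cong \pi_i\left(\Sigma^{-3}\mathbf{K}[X]\right) \cong \pi_i\left(\mathbf{K}^{\hat{X}}\right) \cong \mathbf{K}^{-i}(\hat{X})
\]
where the first isomorphism uses $\mathbf{K}[X] \cong \Sigma^{-6}\mathbf{K}^X$ and $2$-periodicity. Taking $i = 0$ and $i = 1$ yields \eqref{eq:111}.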
So to prove \eqref{eq:111} it suffices to produce an isomorphism between $\Sigma^{-3} p_! \omega_X$ and $q_* \mathbf{K}_{\hat{X}}$. To that end, let us study the sheaf of spaces on $B$ whose sections over $U \subset B$ are given by \begin{equation} \label{eq:sheaf-of-maps} \mathrm{Maps}\left(\left(\Sigma^{-3} p_! \omega_X\right)\vert_U, \left(q_* \mathbf{K}_{\hat{X}}\right)\vert_U\right) \end{equation} where $\mathrm{Maps}$ is taken in the $\infty$-category of sheaves of $\mathbf{K}$-modules over $U$. \subsection{Lemma} If $\pi = q\circ h = p \circ g$ denotes the projection $X \times_B \hat{X} \to B$, and one fixes $\mathbf{K}$-orientations of $X$, $\hat{X}$, and $X\times_B \hat{X}$, there are natural isomorphisms \begin{equation} \label{eq:sheafhom2} \mathrm{Maps}\left(\left(\Sigma^{-3} p_! \omega_X\right)\vert_U, \left(q_* \mathbf{K}_{\hat{X}}\right)\vert_U\right) \cong \mathrm{Maps}(\mathbf{K}[\pi^{-1}(U)],\mathbf{K}) \end{equation} where the left-hand side is \eqref{eq:sheaf-of-maps} and on the right-hand side $\mathrm{Maps}$ is taken in the $\infty$-category of $\mathbf{K}$-modules. \begin{proof} \begin{eqnarray} \quad \quad \mathrm{Maps}\left(\left(\Sigma^{-3} p_! \omega_X\right)\vert_U, \left(q_* \mathbf{K}_{\hat{X}}\right)\vert_U\right) & \cong & \mathrm{Maps}\left(\left(\Sigma^{-3} q^* p_! \omega_X\right)\vert_{q^{-1}(U)}, \mathbf{K}_{{q^{-1}(U)}}\right) \label{eq:364} \\ & \cong & \mathrm{Maps}\left(\left(\Sigma^{-3} h_! g^* \omega_X\right)\vert_{q^{-1}(U)}, \mathbf{K}_{{q^{-1}(U)}}\right) \label{eq:365} \\ \label{eq:366} & \cong & \mathrm{Maps}\left(\Sigma^{-3} \left(h_! g^* \Sigma^{-6} \mathbf{K}_{X}\right)\vert_{q^{-1}(U)}, \mathbf{K}_{{q^{-1}(U)}}\right) \\ \label{eq:367} & \cong & \mathrm{Maps}\left(\Sigma^{-9} \left(h_! \mathbf{K}_{X \times_B \hat{X}}\right)\vert_{q^{-1}(U)}, \mathbf{K}_{{q^{-1}(U)}}\right) \\ \label{eq:368} & \cong & \mathrm{Maps}\left(\left(h_! \omega_{X \times_B \hat{X}}\right)\vert_{q^{-1}(U)}, \mathbf{K}_{{q^{-1}(U)}}\right) \\ \label{eq:369} & \cong & \mathrm{Maps}\left(\left(h_! \omega_{X \times_B \hat{X}}\right)\vert_{q^{-1}(U)}, \Sigma^{6}\omega_{{q^{-1}(U)}}\right) \\ \label{eq:3610} & \cong & \mathrm{Maps}\left(\Gamma_c\left(\left(h_! \omega_{X \times_B \hat{X}}\right)\vert_{q^{-1}(U)}\right), \Sigma^{6}\mathbf{K}\right) \\ \label{eq:3611} & \cong & \mathrm{Maps}(\mathbf{K}[(q \circ h)^{-1}(U)],\Sigma^6\mathbf{K}) \end{eqnarray} where \eqref{eq:364} is the $(q^*,q_*)$-adjunction, \eqref{eq:365} is proper base-change, \eqref{eq:366} uses the $\mathbf{K}$-orientation of $X$, \eqref{eq:367} combines the suspensions using $g^* \mathbf{K}_X \cong \mathbf{K}_{X \times_B \hat{X}}$, \eqref{eq:368} uses the $\mathbf{K}$-orientation of $X \times_B \hat{X}$, \eqref{eq:369} uses the $\mathbf{K}$-orientation of $q^{-1}(U) \subset \hat{X}$, \eqref{eq:3610} uses the $\left((q^{-1}(U) \to \mathit{pt})_!,(q^{-1}(U) \to \mathit{pt})^!\right)$ adjunction, and \eqref{eq:3611} identifies the compactly supported sections of $h_! \omega_{X \times_B \hat{X}}$ over $q^{-1}(U)$ with $\mathbf{K}[(q \circ h)^{-1}(U)]$. Finally one applies the Bott isomorphism $\mathbf{K} \cong \Sigma^6 \mathbf{K}$ to obtain the right-hand-side of \eqref{eq:sheafhom2}.
\end{proof} \subsection{Poincar\'e bundle} \label{subsec:Pe-bundle} When $T$ and $\hat{T}$ are dual tori (for instance, if $T = V/M$ \eqref{eq:T-here} and $\hat{T} = V^*/\hat{M}$ \eqref{eq:T-hat-here} are fibers above the basepoint of $X \to B$ and $\hat{X} \to B$), there is a canonical pairing $H_1(T) \otimes H_1(\hat{T}) \to \mathbf{Z}$, which determines a canonical element \begin{equation} \label{eq:coev} \mathrm{coev} \in H^1(T;\mathbf{Z}) \otimes H^1(\hat{T};\mathbf{Z}) \subset H^2(T \times \hat{T};\mathbf{Z}) \end{equation} Let us say that a line bundle on $X \times_B \hat{X}$ is a ``Poincar\'e bundle'' if the Chern class of its restriction to each fiber is this canonical element. The connected components of the right-hand side of \eqref{eq:sheafhom2} are indexed by virtual vector bundles on $\pi^{-1}(U)$. In particular, a line bundle on $X \times_B \hat{X}$ determines a homotopy class of maps \begin{equation} \label{eq:line-bundle-map} P_{L}:\Sigma^{-3} p_! \omega_X \to q_* \mathbf{K}_{\hat{X}} \end{equation} \begin{lemma} \label{lem:poincare} If $L$ is a Poincar\'e bundle, $P_L$ is an isomorphism. \end{lemma} \begin{proof} We prove that $P_L$ is an isomorphism on stalks. More generally we prove that if $T$ and $\hat{T}$ are dual tori, a line bundle whose Chern class is \eqref{eq:coev} exhibits $\mathbf{K}^T$ and $\mathbf{K}^{\hat{T}}$ as dual objects in the monoidal category $\mathrm{Mod}(\mathbf{K})$. Such a line bundle determines a homotopy class of maps \begin{equation} \label{eq:241} \mathbf{K} \to \mathbf{K}^{T\times \hat{T}} \cong \mathbf{K}^T \otimes_{\mathbf{K}} \mathbf{K}^{\hat{T}} \end{equation} in $\mathrm{Mod}(\mathbf{K})$, and we will show that for all $i$ the composite \begin{equation} \label{eq:251} [\Sigma^i \mathbf{K}^T,\mathbf{K}] \xrightarrow{\otimes \mathbf{K}^{\hat{T}}} [\Sigma^i \mathbf{K}^{T} \otimes_{\mathbf{K}} \mathbf{K}^{\hat{T}}, \mathbf{K}^{\hat{T}}] \xrightarrow{\eqref{eq:241}} [\Sigma^i \mathbf{K},\mathbf{K}^{\hat{T}}] \end{equation} is an isomorphism. In the case $T = \hat{T} = \mathrm{U}(1)$, we have canonically $\mathbf{K}^T \cong \mathbf{K} \oplus \Sigma \mathbf{K}$, $\mathbf{K}^{\hat{T}} \cong \mathbf{K} \oplus \Sigma \mathbf{K}$, and \begin{equation} \label{eq:TTU1} \mathbf{K}^{T \times \hat{T}} \cong \mathbf{K} \oplus \Sigma \mathbf{K} \oplus \Sigma \mathbf{K} \oplus \Sigma^2 \mathbf{K}. \end{equation} Then \eqref{eq:241} is the Bott isomorphism $\mathbf{K} \cong \Sigma^2 \mathbf{K}$ onto the last summand of \eqref{eq:TTU1}, and one can check directly that \eqref{eq:251} is an isomorphism. In the general case, the domain of \eqref{eq:251} is $\mathbf{K}_{i}(T)$ and the codomain is $\mathbf{K}^{-i}(\hat{T})$, and the square \[ \xymatrix{ \mathbf{K}_1(\mathrm{U}(1)) \otimes_{\mathbf{Z}} \mathrm{Hom}(\mathrm{U}(1),T) \ar[r] \ar[d] & \mathbf{K}_1(T) \ar[d]^{\eqref{eq:251}}\\ \mathbf{K}^{-1}(\mathrm{U}(1)) \otimes_{\mathbf{Z}} \mathrm{Hom}(\hat{T},\mathrm{U}(1)) \ar[r] & \mathbf{K}^{-1}(\hat{T}) } \] commutes, where the left vertical arrow is \eqref{eq:251} for $T = \hat{T} = \mathrm{U}(1)$, tensored with the identification of cocharacters of $T$ with characters of $\hat{T}$.
The horizontal arrows induce graded ring isomorphisms \begin{equation} \label{eq:ring-iso} \Lambda (\mathrm{Hom}(\mathrm{U}(1),T)) \otimes \mathbf{K}^* \to \mathbf{K}_*(T) \qquad \Lambda (\mathrm{Hom}(\hat{T},\mathrm{U}(1))) \otimes \mathbf{K}^* \to \mathbf{K}^*(\hat{T}) \end{equation} where the multiplication on $\mathbf{K}_*(T)$ is defined using the group structure on $T$ (the Pontrjagin product), and the ring structure on $\mathbf{K}^*(\hat{T})$ is the tensor product of vector bundles. Thus we may complete the proof that \eqref{eq:251} is an isomorphism by noting that it intertwines the Pontrjagin product on $\mathbf{K}_*(T)$ with the tensor product on $\mathbf{K}^*(\hat{T})$. A stronger form of this is true, but to make use of \eqref{eq:ring-iso} we only need to note that (letting $m:T \times T \to T$ denote the multiplication and $\Delta:\hat{T} \to \hat{T} \times \hat{T}$ the diagonal) the following two elements of $\mathbf{K}^0(T \times T \times \hat{T})$ are equal: \begin{itemize} \item The pullback of \eqref{eq:coev} along $m \times 1:T \times T \times \hat{T} \to T \times \hat{T}$ \item The pullback of \eqref{eq:coev} $\boxtimes$ \eqref{eq:coev} along the map $T \times T \times \hat{T} \to T \times \hat{T} \times T \times \hat{T}$ that carries $(t_1,t_2,\hat{t})$ to $(t_1,\hat{t},t_2,\hat{t})$ \end{itemize} In fact these are equal in $H^2(T \times T \times \hat{T};\mathbf{Z})$. It follows that the two maps from the upper left to the lower right corner of the evident square \[ \xymatrix{ \mathbf{K}[T] \otimes_{\mathbf{K}} \mathbf{K}[T] \ar[d] \ar[r] & \mathbf{K}^{\hat{T}} \otimes_{\mathbf{K}} \mathbf{K}^{\hat{T}} \ar[d] \\ \mathbf{K}[T] \ar[r] & \mathbf{K}^{\hat{T}} } \] are homotopic, and therefore that \eqref{eq:251} is a ring homomorphism. \end{proof} \subsection{Theorem} \label{th:111} Let $X$ and $\hat{X}$ be as in \eqref{eq:T-dual-IJ}. Then \eqref{eq:111} holds, i.e. \[ \mathbf{K}^0(X) \cong \mathbf{K}^1(\hat{X}) \text{ and } \mathbf{K}^1(X) \cong \mathbf{K}^0(\hat{X}) \] \begin{proof} After \eqref{eq:from-local-on-B} and Lemma \ref{lem:poincare}, it suffices to construct a Poincar\'e bundle on $X \times_B \hat{X}$. The fundamental group $\pi_1(B)$ acts on $H^2(T \times \hat{T};\mathbf{Z}) = H^2(V/M \times V^*/\hat{M};\mathbf{Z})$, and the canonical class \eqref{eq:coev} is fixed by this action. We will prove the existence of a Poincar\'e bundle by showing that the map \begin{equation} \label{eq:surj} H^2(X \times_B \hat{X};\mathbf{Z}) \to H^2(T \times \hat{T};\mathbf{Z})^{\pi_1(B)} \end{equation} is a surjection. As $X \times_B \hat{X}$ is a $K(\pi,1)$-space, the domain of \eqref{eq:surj} is isomorphic to the cohomology of the fundamental group $\pi_1(X \times_B \hat{X})$. To prove that it is a surjection is equivalent to showing that the differentials \[ d_2:H^2(T \times \hat{T};\mathbf{Z})^{\pi_1(B)} \to H^2(\pi_1(B); H^1(T \times \hat{T};\mathbf{Z})) \] and \[ d_3:\ker(d_2) \to H^3(B;\mathbf{Z}) \] vanish, in the Serre spectral sequence of the fibration $X \times_B \hat{X} \to B$. Let us denote this spectral sequence by ${}^{X\hat{X}}E_r^{st}$. We similarly denote the Serre spectral sequence of $X \to B$ by ${}^X E_r^{st}$ and of $\hat{X} \to B$ by ${}^{\hat{X}} E_r^{st}$. Since the fibration has a section, all of $H^3(B;\mathbf{Z})$ must survive to the $E_{\infty}$-page, so $d_3$ vanishes.
The codomain of $d_2$ is \begin{equation} \label{eq:dirsumdec} H^2(\pi_1(B),H^1(T;\mathbf{Z})) \oplus H^2(\pi_1(B),H^1(\hat{T};\mathbf{Z})) \end{equation} The sections of $X \to B$ and $\hat{X} \to B$ induce maps $X \to X \times_B \hat{X}$ and $\hat{X} \to X \times_B \hat{X}$ that commute with the projections to $B$, which in turn induce (restriction) maps of spectral sequences \begin{equation} \label{eq:mapsss} {}^{X \hat{X}} E_r^{st} \to {}^{X} E_r^{st} \qquad {}^{X \hat{X}} E_r^{st} \to {}^{\hat{X}} E_r^{st} \end{equation} The direct sum decomposition \eqref{eq:dirsumdec} is induced by \eqref{eq:mapsss} on $E_2^{21}$, thus we can complete the proof by showing that ${}^X E_2^{21}$ and ${}^{\hat{X}} E_2^{21}$ survive to the $E_{\infty}$-pages, i.e. that the differentials \begin{equation} d_2:{}^X E_2^{02} \to {}^X E_2^{21} \qquad d_2:{}^{\hat{X}} E_2^{02} \to {}^{\hat{X}} E_2^{21} \end{equation} are both zero. Now ${}^X E_2^{02} = H^0(\pi_1;H^2(T;\mathbf{Z}))$ and ${}^{\hat{X}} E_2^{02} = H^0(\pi_1;H^2(\hat{T};\mathbf{Z}))$ both vanish: $H^2(T;\mathbf{Z}) \cong M \subset V$ and $H^2(\hat{T};\mathbf{Z}) \cong \hat{M} \subset V^*$ as $\pi_1$-modules, and $\pi_1$ acts on $V$ and $V^*$ without invariants ($V$ and $V^*$ split as the sum of the three nontrivial characters $\pi_1 \to \mathrm{GL}_1(\mathbf{R})$). \end{proof} \subsection{Cohomology of $X_{I,J}$} \label{subsec:HXIJ} The two-vertex regular cell complex structure on $S^1$, with vertices at $0$ and $\pi$, is preserved by the action of $\mathbf{Z}/2 \times \mathbf{Z}/2$ generated by \[ \theta \mapsto \theta+ \pi \qquad \theta \mapsto -\theta \] Each of the $3$-folds \eqref{eq:DW-names} can be written as a quotient of a torus $T^6 = S^1 \times S^1 \times S^1 \times S^1 \times S^1 \times S^1$ by the free action of an elementary abelian $2$-group that preserves the product cell structure. The cellular cochain complex of $T^6$ is a complex of free $\mathbf{Z}[G]$-modules \begin{equation} \label{eq:T6complex} \mathbf{Z}^{64}\to\mathbf{Z}^{384} \to \mathbf{Z}^{960} \to \mathbf{Z}^{1280} \to \mathbf{Z}^{960} \to \mathbf{Z}^{384} \to \mathbf{Z}^{64} \end{equation} of $\mathbf{Z}[G]$-rank $2^6 \binom{6}{i}/|G|$ in degree $i$. Passing to invariants gives a cochain complex for the cohomology of $T^6/G$, small enough to handle by computer --- I used sage (the key computational step, cohomology of a finite complex of free abelian groups, is sketched below). Besides $H^0 = H^6 = \mathbf{Z}$ and $H^1 = 0$, we have \[ \begin{array}{|c|c|c|c|c|} \hline & H^2 & H^3 & H^4 & H^5 \\ \hline X_{0,4} & \mathbf{Z}^3 \oplus (\mathbf{Z}/4)^2 \oplus (\mathbf{Z}/2)^3 & \mathbf{Z}^8 \oplus (\mathbf{Z}/2)^3 & \mathbf{Z}^3 \oplus (\mathbf{Z}/2)^3 & (\mathbf{Z}/4)^2 \oplus (\mathbf{Z}/2)^3 \\ \hline X_{1,5} & \mathbf{Z}^3 \oplus (\mathbf{Z}/4)^3 & \mathbf{Z}^8 \oplus (\mathbf{Z}/2)^2 & \mathbf{Z}^3 \oplus (\mathbf{Z}/2)^2 & (\mathbf{Z}/4)^3 \\ \hline X_{1,11} & \mathbf{Z}^3 \oplus (\mathbf{Z}/4)^2 \oplus (\mathbf{Z}/2)^2 & \mathbf{Z}^8 \oplus (\mathbf{Z}/2)^2 & \mathbf{Z}^3 \oplus (\mathbf{Z}/2)^2 & (\mathbf{Z}/4)^2 \oplus (\mathbf{Z}/2)^2 \\ \hline X_{2,12} & \mathbf{Z}^3 \oplus (\mathbf{Z}/4)^2 \oplus (\mathbf{Z}/2)^2 & \mathbf{Z}^8 \oplus \mathbf{Z}/4 & \mathbf{Z}^3 \oplus \mathbf{Z}/4 & (\mathbf{Z}/4)^2 \oplus (\mathbf{Z}/2)^2 \\ \hline \end{array} \] The top row was previously computed in \cite{BCDP}, and the $H^5$ (equivalently, $H^2$) columns in \cite{DW}.
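The promised sketch: over $\mathbf{Z}$, the cohomology of a finite complex of finitely generated free abelian groups can be read off from the Smith normal forms of the differentials alone, since $\ker(d^i)/\mathrm{im}(d^{i-1})$ has free rank $n_i - \mathrm{rank}(d^i) - \mathrm{rank}(d^{i-1})$ and torsion given by the invariant factors of $d^{i-1}$ that exceed $1$. The helper below is a minimal Python/sympy version of that step --- my own code, not the sage script actually used; assembling the invariant subcomplex of \eqref{eq:T6complex} from the product cell structure and the $G$-action is routine bookkeeping that I omit.

\begin{verbatim}
from sympy import Matrix, ZZ
from sympy.matrices.normalforms import smith_normal_form

def cohomology(ranks, diffs):
    """Cohomology of 0 -> Z^ranks[0] -> ... -> Z^ranks[-1] -> 0, where
    diffs[i] is the matrix of d^i : Z^ranks[i] -> Z^ranks[i+1].
    Returns, in each degree, (Betti number, invariant factors > 1)."""
    snf = [smith_normal_form(d, domain=ZZ) for d in diffs]
    inv = [[s[j, j] for j in range(min(s.shape)) if s[j, j] != 0]
           for s in snf]
    result = []
    for i, n in enumerate(ranks):
        r_in = len(inv[i - 1]) if i > 0 else 0        # rank of d^{i-1}
        r_out = len(inv[i]) if i < len(diffs) else 0  # rank of d^i
        torsion = [f for f in (inv[i - 1] if i > 0 else []) if abs(f) > 1]
        result.append((n - r_in - r_out, torsion))
    return result

# Sanity check: the 2-vertex cell structure on S^1.  Rows of the
# coboundary are edges, columns are vertices; H^0 = H^1 = Z.
d0 = Matrix([[-1, 1], [1, -1]])
print(cohomology([2, 2], [d0]))   # [(1, []), (1, [])]
\end{verbatim}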
\subsection{Atiyah-Hirzebruch filtration} \label{subsec:AH-fil} Let $X$ be a connected closed manifold of real dimension $6$. $\mathbf{K}^*(X)$ carries the Atiyah-Hirzebruch filtration \begin{equation} \label{eq:AH} \begin{array}{ccccccccc} \mathbf{K}^0(X) & = & F^0 \mathbf{K}^0(X) & \supset & F^2 \mathbf{K}^0(X) & \supset & F^4 \mathbf{K}^0(X) & \supset & F^6 \mathbf{K}^0(X) \\ \mathbf{K}^1(X) & = & F^1 \mathbf{K}^1(X) & \supset & F^3\mathbf{K}^1(X) & \supset & F^5 \mathbf{K}^1(X) \end{array} \end{equation} where $F^k(\mathbf{K}^*(X))$ consists of those classes that vanish when restricted to any $(k-1)$-dimensional submanifold. The associated graded pieces of this filtration are the groups on the last page of the Atiyah-Hirzebruch spectral sequence: \[ E_2^{st} = H^s(X,\mathbf{K}^t(\mathit{pt})) \implies E_{\infty}^{st} = F^{s} \mathbf{K}^{s+t}(X)/F^{s+1} \mathbf{K}^{s+t}(X) \] If $X$ is oriented, then the spectral sequence degenerates at the second page: $E_2^{st} = E_{\infty}^{st}$. The argument is given in \cite{BrDi} --- let us briefly repeat it here. Since $\mathbf{K}^t(\mathit{pt}) = 0$ for $t$ odd, all the even differentials $d_{2p}$ vanish. In general, $d_{2p-1}$ vanishes on $H^i(X,\mathbf{Z})$ for $i \leq 2p-2$ \cite[\S 7]{Atiy}, so on a 6-dimensional complex the only possible nonvanishing differential is $d_3:H^3(X,\mathbf{Z}) \to H^6(X,\mathbf{Z})$. Even this differential must vanish if $H^6(X,\mathbf{Z})$ has no torsion \cite[\S 2.4]{AtHi}, i.e. if $X$ is orientable. Plausibly, whenever $X$ is a Calabi-Yau $3$-fold, or even just admits a Spin structure, the Atiyah-Hirzebruch filtration might split: that is, there might be a $\mathbf{Z}/2$-graded isomorphism between \[ \mathbf{K}^0(X)\oplus \mathbf{K}^1(X) \text{ and } \bigoplus H^i(X;\mathbf{Z}) \] This is claimed in \cite{Doran}, but I believe the proof there has a gap (discussed in \S\ref{subsec:chern-classes}). I do not know whether the filtration on $\mathbf{K}^*(X_{I,J})$ splits: if it does, one could conclude Theorem \ref{th:111} directly from the computations in \S\ref{subsec:HXIJ}. On an oriented 6-manifold one necessary and sufficient condition for the filtration of $\mathbf{K}^0(X)$ to split is the existence of a function $\varphi:H^2(X;\mathbf{Z}) \to H^4(X;\mathbf{Z})$ that obeys \[ \varphi(c+c') - \varphi(c) - \varphi(c') = c \cup c' \] (For instance, we could take $\varphi(c) = c^2/2$ if we could divide by $2$). The problem of computing the cup product on $H^*(X_{I,J};\mathbf{Z})$ also arose in \cite{BCDP}. Determining this by computer is more difficult --- the problem is that, although the cup product on $H^*(T^6)$ is induced by a (noncommutative) ring structure on the cochains \eqref{eq:T6complex}, the groups $G$ do not act by ring automorphisms. One can solve this by passing to the barycentric subdivision of $S^1$ (which induces a subdivision of $(S^1)^{\times 6}$), but the resulting chain complexes are too big to treat in a simple-minded way. \subsection{Chern classes} \label{subsec:chern-classes} A virtual vector bundle has well-defined Chern classes, giving us maps \begin{equation} \label{eq:chern-classes} c_i:\mathbf{K}^0(X) \to H^{2i}(X,\mathbf{Z}) \qquad c_i^\Sigma:\mathbf{K}^1(X) \to H^{2i -1}(X,\mathbf{Z}) \end{equation} The second map $c_i^{\Sigma}$ is the composite of \[ \mathbf{K}^1(X) \cong \mathbf{K}^2(\Sigma X) \cong \mathbf{K}^0(\Sigma X) \xrightarrow{c_i} H^{2i}(\Sigma X,\mathbf{Z}) = H^{2i-1}(X,\mathbf{Z}) \] Except for $c_0$, the functions $c_i$ of \eqref{eq:chern-classes} are not group homomorphisms; they instead obey the Cartan formula $c_n(V+W) = c_n(V) c_0(W) + c_{n-1}(V) c_1(W) + \cdots + c_0(V) c_n(W)$.
The $i$th Chern class becomes a group homomorphism on $F^{2i} \mathbf{K}^0(X)$, since $c_j(E) = 0$ for any $0 < j < i$ and $E \in F^{2i} \mathbf{K}^0(X)$. As all nontrivial cup products in $H^*(\Sigma X;\mathbf{Z})$ vanish, the Cartan formula shows that the $c_i^{\Sigma}:\mathbf{K}^1(X) \to H^{2i - 1}(X,\mathbf{Z})$ are group homomorphisms. Lemma 4.1 of \cite{Doran} asserts that, when $X$ is a closed oriented $6$-manifold, the map \begin{equation} \label{eq:DM41} (c_2,c_3):F^4 \mathbf{K}^0(X) \to H^4(X,\mathbf{Z}) \oplus H^6(X,\mathbf{Z}) \end{equation} is an isomorphism onto \begin{equation} \label{eq:DM41-im} \{(c_2,c_3) \mid \mathrm{Sq}^2(c_2) = c_3\} \end{equation} where $\mathrm{Sq}^2:H^4(X,\mathbf{Z}/2) \to H^6(X,\mathbf{Z}/2)$ is a Steenrod operation and the equation is imposed on mod-$2$ reductions. Lemma 4.2 of \cite{Doran} asserts that the map \begin{equation} \label{eq:DM42} (c_1,c_2,c_3):F^2\mathbf{K}^0(X) \to H^2(X,\mathbf{Z}) \oplus H^4(X,\mathbf{Z}) \oplus H^6(X,\mathbf{Z}) \end{equation} is an isomorphism onto \begin{equation} \label{eq:DM42-im} \{(c_1,c_2,c_3) \mid \mathrm{Sq}^2(c_2) = c_3 + c_1 c_2 + c_1^3 \} \end{equation} I believe that \eqref{eq:DM41-im} is correct, but \eqref{eq:DM42-im} is not. For example, if $X$ is the quintic $3$-fold, the virtual vector bundle $\mathcal{O}(1) - \mathcal{O}$ belongs to $F^2 \mathbf{K}^0(X)$ and has $(c_1,c_2,c_3) = (h,0,0)$, where $h$ is the hyperplane class of $X \subset \mathbf{P}^4$. But $h^3 = 5 \in H^6(X,\mathbf{Z})$, which is nonzero in $H^6(X,\mathbf{Z}/2)$, so $(h,0,0)$ violates the constraint in \eqref{eq:DM42-im}. \section{Conjectures} \label{sec:three} It should be possible to choose the isomorphisms \eqref{eq:111} to intertwine additional structures on $X$ and $\hat{X}$. \subsection{$K$-homology} In fact \eqref{eq:111} is expected for any mirror pair of Calabi-Yau manifolds of odd complex dimension. If $X$ and $\hat{X}$ have even complex dimension, then we expect $\mathbf{K}^i(X) \cong \mathbf{K}^i(\hat{X})$ for $i = 0,1$. I think the right way to organize these expectations is as an equivalence of $\mathbf{K}$-module spectra: \begin{equation} \label{eq:spectrum} \Sigma^{-n} \mathbf{K}[X] \cong \mathbf{K}^{\hat{X}} \end{equation} where $n$ is the complex dimension of $X$, $\Sigma$ denotes suspension, $\mathbf{K}[?]$ denotes the $K$-homology spectrum and $\mathbf{K}^?$ denotes the $K$-cohomology spectrum. The $K$-homology and $K$-cohomology of a compact almost complex manifold are naturally identified, and $\mathbf{K}$-theory is $2$-periodic, so \eqref{eq:spectrum} implies \eqref{eq:111} by taking homotopy groups. Two $\mathbf{K}$-module spectra are isomorphic if and only if their homotopy groups are isomorphic, so the converse is true as well. But using $K$-homology in place of $K$-cohomology seems to go with the grain of homological mirror symmetry, in a way that we will explain. \subsection{The large volume and large complex structure limits} \label{subsec:lvllcsl} For the rest of the paper we will be treating the symplectic geometry of $X$ and the complex geometry of $\hat{X}$. And we will assume that the symplectic form on $X$ has integral cohomology class $[\omega] \in H^2(X;\mathbf{Z})$. The isomorphism class of line bundles whose Chern class is $[\omega]$ gives a unit in $\mathbf{K}^0(X) := \pi_0(\mathbf{K}^X)$, and (using the $\mathbf{K}^X$-module structure on $\mathbf{K}[X]$) a homotopy class of automorphisms of $\mathbf{K}[X]$. The corresponding homotopy class of automorphisms of $\mathbf{K}^{\hat{X}}$ is a monodromy operator one obtains by putting $\hat{X}$ in a family $\hat{X}_t$, where $t$ runs through a punctured disk.
The Seidel strategy \cite{Seidel} for proving homological mirror symmetry (HMS) is to prove it first in a limit --- one takes the divisor $D$ of a section of the line bundle on $X$, and the special fiber $\hat{X}_0$ at the center of the family $\hat{X}_t$, so that there is a mirror relationship between $X -D$ and $\hat{X}_0$. $X-D$ is called the ``large volume limit'' and $\hat{X}_0$ is called the ``large complex structure limit'' of the mirror pair. In such a case I conjecture (I am not sure how original this is) that \begin{equation} \label{eq:at-limit} \Sigma^{-n} \mathbf{K}[X-D] \cong \mathbf{K}^{\hat{X}_0} \end{equation} as $\mathbf{K}$-modules. For the noncompact $X-D$ or the singular $\hat{X}_0$, it is now necessary to pay attention to the difference between $\mathbf{K}$-homology and $\mathbf{K}$-cohomology. \medskip \noindent {\bf Example.} The case when $\hat{X} \subset \mathbf{C} P^{n+1}$ is a degree $n+2$ hypersurface furnishes a standard example. A mirror $X$ to $\hat{X}$ is obtained by resolving the singularities of an anticanonical hypersurface in a weighted projective $(n+1)$-space. The limits $X -D$ and $\hat{X}_0$ can be described directly: $X -D \subset (\mathbf{C}^*)^{n+1}$ is any sufficiently generic hypersurface whose Newton polytope is the standard reflexive lattice simplex, e.g. \begin{equation} \label{eq:dual-ntic} X-D := W^{-1}(0), \quad W:(x_0,\ldots,x_n) \mapsto x_0 + \cdots + x_n + \frac{1}{x_0\cdots x_n} - 1 \end{equation} and $\hat{X}_0$ is the union of the coordinate hyperplanes \begin{equation} \label{eq:ntic} \hat{X}_0 := \{[x_0,\ldots,x_n] \in \mathbf{C}P^{n+1} \mid x_0 \cdots x_n = 0\} \end{equation} For these examples, \eqref{eq:at-limit} can be deduced from a similar equivalence \begin{equation} \label{eq:LG} \Sigma^{-n-1} \mathbf{K}[(\mathbf{C}^*)^{n+1},W^{-1}(0)] \cong \mathbf{K}^{\mathbf{C}P^{n+1}} \end{equation} and from the long exact sequence of a pair. The left-hand side of \eqref{eq:LG} denotes the $K$-homology of the pair $((\mathbf{C}^*)^{n+1},W^{-1}(0))$, which has the same homotopy type as a bouquet of spheres --- one $(n+1)$-sphere for each critical point of $W$. Note that \eqref{eq:LG} can be seen as a third variant of \eqref{eq:spectrum}, as $((\mathbf{C}^*)^{n+1},W)$ is the Landau-Ginzburg mirror to projective space. \subsection{$T$-duality} \label{subsec:T-duality} Homotopy classes of maps $\Sigma^{-n} \mathbf{K}[X] \to \mathbf{K}^{\hat{X}}$ are naturally identified with classes in the $n$th $K$-cohomology group $\mathbf{K}^n(X \times \hat{X})$. So if one wants to prove that $\Sigma^{-n} \mathbf{K}[X]$ and $\mathbf{K}^{\hat{X}}$ are isomorphic, one should investigate classes in $\mathbf{K}^n(X \times \hat{X})$. \S\ref{subsec:Pe-bundle} gives the example at the heart of SYZ --- a distinguished isomorphism class of line bundles on $T \times \hat{T}$ that (regarded as an element of $\mathbf{K}^0(T \times \hat{T})$) induces an isomorphism \begin{equation} \label{eq:KUT-duality} \mathbf{K}[T] \cong \mathbf{K}^{\hat{T}} \end{equation} when $T$ and $\hat{T}$ are dual tori. When $X$ and $\hat{X}$ are mirror Calabi-Yaus of real dimension $2n$, fibering over the same base $B$ with dual torus fibers, this suggests that $\mathbf{K}[X]$ and $\mathbf{K}^{\hat{X}}$ could be identified by a virtual vector bundle on $X \times_B \hat{X}$ whose restriction to each fiber gives \eqref{eq:KUT-duality} --- a ``Poincar\'e bundle.'' The primary obstacle to doing this is that it is not clear what this virtual ``bundle'' should look like on singular fibers.
Indeed it should not be a bundle at all, but a class in $K$-homology $\mathbf{K}_{3n}(X \times_B \hat{X})$ --- this group has a pushforward map to $\mathbf{K}_{3n}(X \times \hat{X})$, which is isomorphic to $\mathbf{K}^n(X \times \hat{X})$ using the $\mathbf{K}$-orientations of $X$ and $\hat{X}$. Even after discarding the singular fibers, or when they are just absent, there may be a Leray obstruction to finding the Poincar\'e bundle. In the flat cases of \S\ref{sec:two}, this was simple but not exactly tautological. At the large volume/large complex structure limit, the singular fibers can disappear, so that every fiber is a smooth torus (though the dimensions of these tori can jump); more precisely one can in some cases \cite{RSTZ} write $X - D$ as the homotopy colimit of a diagram of commutative Lie groups and homomorphisms, and $\hat{X}_0$ as the homotopy colimit of the diagram of dual groups (perhaps orbifolds); in this generality the Leray obstruction might be interesting. As to singular fibers, it's been known for a long time what the necessary class in $\mathbf{K}_{3n}$ looks like when $n = 2$, by hyperkahler rotating until $X \times_B \hat{X} \subset X \times \hat{X}$ is algebraic \cite{k3,BrMa}. For higher even $n$, finding these Poincar\'e bundles is a more difficult algebraic geometry problem, even when the same hyperkahler techniques are available \cite{Arinkin, ADM}. In general, especially for $n$ odd, the class in $\mathbf{K}_{3n}(X \times_B \hat{X})$ cannot be algebraic; it would be interesting to describe it when $X \to B$ and $\hat{X} \to B$ are a dual pair of Gross's ``well-behaved'' singular $T^3$-fibrations \cite{Gross}. \subsection{Blanc's invariant} In \cite{Blanc}, Blanc showed how to compute the topological $K$-theory $\mathbf{K}^Y$ of a complex algebraic variety $Y$ in a noncommutative fashion --- that is, Blanc introduced an invariant $\mathbf{K}_{\mathrm{Blanc}}(\mathcal{C}) \in \mathrm{Mod}(\mathbf{K})$ for a $\mathbf{C}$-linear dg category $\mathcal{C}$, and showed \begin{equation} \label{eq:blanc} \mathbf{K}_{\mathrm{Blanc}}(\mathrm{Perf}(Y)) \cong \mathbf{K}^Y \end{equation} It is desirable to understand Blanc's invariant for categories arising from symplectic manifolds --- Fukaya categories and microlocal sheaf categories. When $X$ is compact, K\"ahler with integral K\"ahler class, and Calabi-Yau, Ganatra has conjectured that $\mathbf{K}_{\mathrm{Blanc}}(\mathrm{Fuk}(X))$ recovers the complex $K$-theory of $X$ whenever $\mathrm{Fuk}(X)$ is smooth and proper. The last condition is motivated by results of \cite{Toen} (which state that when $Y$ is a compact complex manifold, $\mathrm{Perf}(Y)$ is smooth and proper if and only if $Y$ is algebraic) and the failure of \eqref{eq:blanc} for complex analytic manifolds that are not algebraic. There is a basic problem with formulating Ganatra's conjecture precisely, or formulating any question about $\mathbf{K}_{\mathrm{Blanc}}(\mathrm{Fuk}(X))$ at all: the Fukaya category of a symplectic manifold is not automatically defined over the complex numbers, but over a large Novikov field (we will call it $\mathfrak{N}$). \subsection{Achinger-Talpo and Blanc's invariant for $\mathbf{C}((t))$-linear categories} \label{subsec:achtal} The $\mathbf{C}$-linear structure on a dg category $\mathcal{C}$ enters in Blanc's construction in an essential way, but for a compact symplectic manifold it is not usually possible to reduce the linear structure of $\mathrm{Fuk}(X)$ from $\mathfrak{N}$ to $\mathbf{C}$.
Recent work of Achinger-Talpo, and also of Robalo and Antieau-Heller, allows for a definition of $\mathbf{K}_{\mathrm{Blanc}}(\mathcal{C})$ when $\mathcal{C}$ is defined over $\mathbf{C}((t))$ --- this version is adapted to Seidel's relative Fukaya category and to Ganatra's conjecture. If $\mathcal{O} \subset \mathbf{C}((t))$ is the coordinate ring of an affine curve, and $Y \to \mathrm{Spec}(\mathcal{O})$ is a dominant map of algebraic varieties, then $\mathbf{K}^{Y_a}$ has a local monodromy automorphism (call it $m$) at $t = 0$ whenever $Y_a$ is the fiber above a point $a$ close to $t = 0$. We seek a computation of the pair $(\mathbf{K}^{Y_a},m)$ that is both noncommutative and formal, in the sense that it depends only on the $\mathbf{C}((t))$-linear category $\mathrm{Perf}(Y \times_{\mathcal{O}} \mathbf{C}((t)))$. To define such a pair $(\mathbf{K}^{Y_a},m)$ is equivalent to defining a $\mathbf{K}$-module object of the $\infty$-category $\mathcal{S}_{/S^1}$. For any field $F$, let $\mathrm{MV}_F$ denote the $\infty$-category underlying the Morel-Voevodsky model structure for $\mathbf{A}^1$-homotopy theory \cite[Def. 2.1]{MV}. Let $\mathrm{MV}_F[(\mathbf{P}^1)^{-1}]$ denote the stable $\infty$-category underlying the Morel-Voevodsky model category of motivic spectra over $F$ (\cite[Def. 5.7]{Voevodsky} or \cite[Def. 2.38]{Robalo}). If $D$ is an $F$-linear triangulated dg category, let $\mathbf{k}_{\mathrm{mot}}(D) \in \mathrm{MV}_F[(\mathbf{P}^1)^{-1}]$ denote the motivic refinement of the algebraic $K$-theory spectrum (as in \cite[Prop. 3.2]{AnHe}). An embedding $F \to \mathbf{C}$ induces a functor (preserving direct products and all small colimits) \[ b^*:\mathrm{MV}_F \to \mathcal{S} \] where $\mathcal{S}$ denotes the $\infty$-category of spaces, and a similar functor on spectra that we will also denote by $b^*$ ($b$ for ``Betti''). When $F = \mathbf{C}$, the Blanc $K$-theory of $D$ is $\mathbf{K}_{\mathrm{Blanc}}(D) := \mathbf{K} \otimes_{\mathbf{ku}} b^*\mathbf{k}_{\mathrm{mot}}(D)$, where $\mathbf{ku}$ denotes the connective complex $K$-theory spectrum. \begin{thm*}[Achinger-Talpo \cite{AchingerTalpo}] There is a functor $\mathrm{MV}_{\mathbf{C}((t))} \to \mathcal{S}_{/S^1}$ making the following diagram commute: \begin{equation} \xymatrix{ \mathrm{MV}_{\mathbf{C}} \ar[r]^-{b^*} \ar[d]_-{\times_{\mathbf{C}} \mathbf{C}((t))} & \mathcal{S} \ar[d]^-{\times S^1} \\ \mathrm{MV}_{\mathbf{C}((t))} \ar[r]_-{b_t^*} & \mathcal{S}_{/S^1} } \end{equation} \end{thm*} The functor $b_t^*$ carries the Morel-Voevodsky space $\mathbf{Z} \times B\mathrm{GL} \in \mathrm{MV}_{\mathbf{C}((t))}$ \cite[p. 138]{MV} representing algebraic $K$-theory to $\mathbf{Z} \times \mathrm{BU} \times S^1$. It also carries $\mathbf{P}^1$ to $S^2 \times S^1$, and so extends to a functor from motivic spectra to spectrum objects in $\mathcal{S}_{/S^1}$. Thus one can define the Blanc $K$-theory of a $\mathbf{C}((t))$-linear category $\mathcal{C}$ to be \begin{equation} \mathbf{K} \otimes_{\mathbf{ku}} b_t^*\mathbf{k}_{\mathrm{mot}}(\mathcal{C}) \end{equation} \subsection{Doing without Blanc's invariant} \label{subsec:dowithout} Like any spectrum, $\mathbf{K}^Y$ fits into Sullivan's arithmetic square (\cite[Prop. 3.20]{Sullivan} or \cite[Prop. 2.9]{Bousfield}) \begin{equation} \label{eq:sullivan} \xymatrix{ \mathbf{K}^Y \ar[r] \ar[d] & \ar[d] \prod_p L_{\hat{p}} \mathbf{K}^Y \\ L_{\mathbf{Q}} \mathbf{K}^Y \ar[r] & L_{\mathbf{Q}} \prod_p L_{\hat{p}} \mathbf{K}^Y } \end{equation} which is homotopy Cartesian.
Here $L_{\mathbf{Q}}$ denotes the rationalization and $L_{\hat{p}}$ the $p$-completion of a spectrum. Thomason's descent theorem shows that, when $Y$ is a complex algebraic variety, $L_{\hat{p}}\mathbf{K}^Y$ can be recovered from the algebraic $K$-theory spectrum of $\mathrm{Perf}(Y)$: \begin{equation} \label{eq:thomason} L_{\hat{p}} \mathbf{K}^Y \cong L_{K(1),p} \mathbf{K}_{\mathrm{alg}}(\mathrm{Perf}(Y)) \end{equation} From this point of view, Blanc's theorem is equivalent to a ``noncommutative'' construction of $L_{\mathbf{Q}} \mathbf{K}^Y$ and of the map $L_{\mathbf{Q}} \mathbf{K}^Y \to L_{\mathbf{Q}} \prod_p L_{K(1),p} \mathbf{K}_{\mathrm{alg}}(\mathrm{Perf}(Y))$. If one is merely interested in the isomorphism type of $\mathbf{K}^Y$, then Thomason allows it to be recovered from $\mathbf{K}_{\mathrm{alg}}(\mathrm{Perf}(Y))$ alone. If $\mathcal{C}$ is linear over an algebraically closed extension of $\mathbf{C}$, and $p$ is any prime, then $L_{K(1),p} \mathbf{K}_{\mathrm{alg}}(\mathcal{C})$ is an $L_{\hat{p}} \mathbf{K}$-module in a natural way. So a weaker form of Ganatra's conjecture can be formulated without invoking any form of Blanc's construction, as follows: if $X$ is a compact symplectic manifold of dimension $2n$, with a smooth and proper $\mathfrak{N}$-linear Fukaya category, then for every prime $p$ the pair of $L_{\hat{p}} \mathbf{K}$-module spectra \[ L_{K(1),p} \mathbf{K}_{\mathrm{alg}}(\mathrm{Fuk}(X)) \text{ and } \Sigma^{-n} L_{\hat{p}} \mathbf{K}[X] \] are isomorphic. Maybe it's appropriate to call the desired equivalence of spectra a homological mirror analog of Thomason's \eqref{eq:thomason}. \subsection{The Euler pairings} Let $\psi^{-1}:\mathbf{K} \to \mathbf{K}$ denote the natural $E_{\infty}$-ring map that carries a virtual vector space to its complex conjugate. It induces an autoequivalence on $\mathrm{Mod}(\mathbf{K})$, the $\infty$-category of $\mathbf{K}$-modules. The $2n$-manifolds $X$ and $\hat{X}$ have distinguished $\mathbf{K}$-orientations --- that is, there is a distinguished class in $\mathbf{K}_{2n}(X)$ and in $\mathbf{K}_{2n}(\hat{X})$ that maps to a generator of $\mathbf{K}_{2n}(X,X - x_0)$ and of $\mathbf{K}_{2n}(\hat{X},\hat{X} - x_0)$. Denote these classes by $[X]$ and $[\hat{X}]$ --- one is determined by the complex structure on $\hat{X}$ and the other by any choice of compatible almost complex structure on $X$. The action of the line bundle fixes $[X]$ and the action of the monodromy operator fixes $[\hat{X}]$. They induce a further structure on $\Sigma^{-n}\mathbf{K}[X]$ and $\mathbf{K}^{\hat{X}}$, namely the ``Euler pairings'' \begin{equation} (\psi^{-1} \Sigma^{-n} \mathbf{K}[X]) \otimes_{\mathbf{K}} \Sigma^{-n} \mathbf{K}[X] \to \mathbf{K} \qquad (\psi^{-1} \mathbf{K}^{\hat{X}}) \otimes_{\mathbf{K}} \mathbf{K}^{\hat{X}} \to \Sigma^{-2n} \mathbf{K} \end{equation} Under \eqref{eq:blanc} and the desired equivalence between $\Sigma^{-n} \mathbf{K}[X]$ and the Blanc $K$-theory of $\mathrm{Fuk}(X)$, these maps should be induced by the Hom structures on these categories, suggesting the purely topological problem of choosing \eqref{eq:spectrum} so that the pairings match. On $\pi_0$ this problem is closely related to Iritani's $\Gamma$-conjectures, or to the rationality question of \cite[\S 2.2.7]{KKP}. If $M_1$ and $M_2$ are $\mathbf{K}$-module spectra, write $B_n(M_1,M_2)$ for the spectrum of maps from $(\psi^{-1} M_1) \otimes M_2$ to $\Sigma^{-n} \mathbf{K}$.
This is a nondegenerate symmetric bilinear spectrum-valued functor on $\mathrm{Mod}(\mathbf{K})$; it would be interesting to know the $L$-theory of $B_n$. \subsection{Exact manifolds} \label{subsec:exact-manifolds} If $X$ is a Weinstein manifold, a version of the Fukaya category generated by exact Lagrangian submanifolds is naturally defined over any coefficient ring (not just over $\mathfrak{N}$-algebras). The same is true for the category of sheaves with a microsupport condition (my comfort zone). In either case the coefficient ring can be taken to be $\mathbf{C}$ and one may apply Blanc's construction without worrying about the Novikov parameter. I propose the following analogue of Ganatra's conjecture: \begin{conj*}[Assembly] Let $Q$ be a $d$-dimensional $\mathrm{Spin}^c$-manifold, let $\Lambda \subset T^* Q$ be a conic Lagrangian, and let $U$ be an open subset of $Q$. Let $\mathrm{Sh}_{\Lambda}^w(U,\mathbf{C}) \subset \mathrm{Sh}(U,\mathbf{C})$ be Nadler's wrapped variant \cite{N-wrapped} of the category of sheaves with microsupport in $\Lambda$. \begin{enumerate} \item There is a natural map \begin{equation} \label{eq:conj} \Sigma^{-d} \mathbf{K}[T^* U,T^* U - \Lambda] \to \mathbf{K}_{\mathrm{Blanc}}(\mathrm{Sh}_{\Lambda}^w(U,\mathbf{C})), \end{equation} that is covariantly functorial for open embeddings. \item Whenever $\mathrm{Sh}_{\Lambda}^w(U,\mathbf{C})$ is homologically smooth and proper, \eqref{eq:conj} is an isomorphism. \end{enumerate} \end{conj*} I expect that one can formulate a similar conjecture for the wrapped and partially wrapped Fukaya categories of a Weinstein manifold $X$ --- a natural map \[ \Sigma^{-d} \mathbf{K}^{\eta}[X,X -\Lambda] \to \mathbf{K}(\mathrm{Fuk}_{\Lambda}^w(X)) \] where $\Lambda$ is the skeleton and $\eta$ is a twisting parameter, presumably trivialized on the cotangent bundle of a $\mathrm{Spin}^c$-manifold. \subsection{String topology} Known results on homological mirror symmetry for toric varieties \cite{Kuwagaki}, combined with computations like \eqref{eq:LG}, give an indirect route to equivalences \begin{equation} \label{eq:cannot-hold} \mathbf{K}_{\mathrm{Blanc}}(\mathrm{Sh}_\Lambda^w(Q;\mathbf{C})) \cong \Sigma^{-d} \mathbf{K}[T^*Q,T^* Q - \Lambda] \end{equation} in some examples where $Q$ is a compact torus. But the case where $Q$ is arbitrary and $\Lambda = Q$ is the zero section (we may call this the ``string topology case'' after \cite{Abouzaid2}) shows that \eqref{eq:cannot-hold} cannot hold in general. Let us discuss this class of examples in more detail. If $\Lambda \subset T^* Q$ is the zero section, then $\mathrm{Sh}^w_{\Lambda}(Q;\mathbf{C})$ is naturally equivalent to the category of left dg-modules over \begin{equation} \label{eq:COQ} \mathbf{C}[\Omega Q] := C_*(\Omega Q;\mathbf{C}) \end{equation} the $\mathbf{C}$-valued chains on the based loop space of $Q$. The quasi-isomorphism type of this algebra knows the rational homotopy type of $Q$, but nothing more, so one cannot expect to recover from it the $K$-theory of $Q$. Nevertheless, the \emph{algebraic} $K$-theory of $\mathbf{C}[\Omega Q]$ is a variant of Waldhausen's $A$-theory of $Q$, and is the target of an assembly map \cite[\S 3.2]{Waldhausen}.
More generally, for any ring or ring spectrum $R$ there is a natural map \begin{equation} \label{eq:waldhausen} \mathbf{K}_{\mathrm{alg}}(R)[Q] \to \mathbf{K}_{\mathrm{alg}}(R[\Omega Q]) \end{equation} Letting $R$ run through $\mathbf{C}$-algebras and taking realizations should produce a map $\mathbf{K}[Q] \to \mathbf{K}_{\mathrm{Blanc}}(\mathbf{C}[\Omega Q])$. A $\mathrm{Spin}^c$-structure on $Q$ gives an identification of $\mathbf{K}[Q]$ with $\mathbf{K}[T^* Q, T^*Q - Q]$, the domain of \eqref{eq:conj}. In Waldhausen's setting, the failure of the assembly map to be an isomorphism is very interesting. When $R$ is the sphere spectrum, the cone on \eqref{eq:waldhausen} (whose codomain is called the $A$-theory of $Q$) is Hatcher's ``Whitehead spectrum'' \cite{Hatcher} that encodes the higher simple homotopy theory of $Q$; see \cite{Waldhausen} and Lurie's notes available at \url{math.harvard.edu/~lurie/281.html}. When $R$ is a $\mathbf{C}$-algebra, or anything else, I don't know if there is a similar interpretation. \subsection{Speculation about the length filtration} \label{subsec:metric} I wonder whether one could recover the complex $K$-theory of an exact manifold from a suitable absolute version of the Fukaya category, even if this category is not homologically smooth. (``Absolute'' means ``not relative,'' i.e. not defined over $\mathbf{C}$ or $\mathbf{C}((t))$ but only over the full Novikov field.) It would require a version of Blanc's construction that treats the Novikov parameter in a more interesting way than \S\ref{subsec:achtal}--\S\ref{subsec:dowithout}, and one could hope that in this more interesting treatment the assembly map would become an isomorphism. I will explain what I mean by making an explicit string-topology-style conjecture along these lines. I have no evidence for it, but I will make some remarks after stating the conjecture. \medskip Let $Q$ be a Riemannian manifold, and let $\Omega_{q_0} Q$ be the space of rectifiable loops in $Q$ that start and end at a basepoint $q_0$. We will treat the basepoint a little more carefully than at the end of \S\ref{subsec:exact-manifolds}, in order to make a point about it later. The metric endows the chain algebra $\mathbf{C}[\Omega_{q_0} Q]$ \eqref{eq:COQ} with an $\mathbf{R}$-indexed filtration: for each $t \in \mathbf{R}$ we let $F_{<t} \Omega_{q_0} Q \subset \Omega_{q_0} Q$ denote the space of loops of length less than $t$, and put \[ F_{<t} \mathbf{C}[\Omega_{q_0} Q] := \mathbf{C}[F_{<t} \Omega_{q_0} Q] \] \begin{conj*}[Length and $K$-theory] Let $(Q,q_0)$ and $(Q',q'_0)$ be compact, pointed Riemannian manifolds and suppose that there is a quasi-isomorphism of dg algebras \begin{equation} \label{eq:iso-lakt} C_*(\Omega_{q_0} Q,\mathbf{C}) \cong C_*(\Omega_{q'_0} Q',\mathbf{C}) \end{equation} that for all $t$ carries $F_{<t} C_*(\Omega_{q_0} Q;\mathbf{C})$ quasi-isomorphically to $F_{<t} C_*(\Omega_{q'_0} Q';\mathbf{C})$: \begin{equation} \label{eq:iso-lakt-filt} F_{<t} C_*(\Omega_{q_0} Q;\mathbf{C}) \xrightarrow{\sim} F_{<t} C_*(\Omega_{q'_0} Q';\mathbf{C}) \end{equation} Then $\mathbf{K}_*(Q) \cong \mathbf{K}_*(Q')$. \end{conj*} A suitable Rees construction on the filtered dg algebra $F_{<\bullet}\mathbf{C}[\Omega_{q_0} Q]$ might give an $\mathfrak{N}$-algebra that generates the absolute wrapped Fukaya category of the unit disk bundle in $T^* Q$.
The real conjecture, which I do not know how to formulate precisely, is that there is a procedure similar to Blanc's for extracting a $\mathbf{K}$-module from such a category, and that on the Fukaya category of the disk bundle of a Riemannian (or merely Finsler?) $Q$, it outputs the $K$-homology of $Q$. (In particular, the notion of equivalence used in \eqref{eq:iso-lakt} is stronger than necessary: a Morita-style notion would be more appropriate. For instance if $q_0$ and $q_1$ are different points of $Q$, there is not likely to be any quasi-isomorphism between $\mathbf{C}[\Omega_{q_0} Q]$ and $\mathbf{C}[\Omega_{q_1} Q]$ that preserves lengths, but the length-filtered space of paths from $q_0$ to $q_1$ could provide the Morita equivalence.) \medskip Let us give a reason to doubt the conjecture, followed by something more optimistic. If $Q$ is simply-connected, one recovers $\mathbf{C}[\Omega_q Q]$, up to quasi-isomorphism, as the cobar construction of the coalgebra of chains on $Q$ \cite[\S 2]{JMoore}. The cobar construction has a natural filtration which seems to ``coarsely'' recover the length filtration on $\mathbf{C}[\Omega_q Q]$, regardless of the metric. Under the identification with the cobar complex of $\mathbf{C}[Q]$, the loops of metric length $m$ are sandwiched between the cobars of word length $b_1 m$ and $b_2 m$, where $b_1$ and $b_2$ are constants independent of $m$. So any way of recovering the $K$-theory of $Q$ would require knowledge of the exact numerical values of the breaks in the $\mathbf{R}$-indexed filtration. These breaks in the length filtration are a kind of homological, based version of the length spectrum of the metric. The genuine length spectrum is known to recover the Laplace eigenvalues of $Q$, if the metric is generic \cite{DuistermaatGuillemin}. Bergeron and Venkatesh have observed that similar spectral data can see a little bit of the homotopy type of $Q$ beyond the rational homotopy type \cite{BeVe}. Specifically, the Cheeger-M\"uller theorem gives a formula for the alternating product \begin{equation} \label{eq:tors-prod} \prod_i \#\mathrm{tors}H^i(Q;\mathbf{Z})^{(-1)^i} \end{equation} in terms of the Laplace-de Rham eigenvalues and the volumes of the images of $H^i(Q;\mathbf{Z})$ in the spaces of harmonic $i$-forms. In another ``coarse'' sense (perhaps a related one?), these eigenvalues are given by the Weyl law --- it is their exact numerical values that are needed to recover \eqref{eq:tors-prod}. \medskip \noindent {\bf Example.} Let $Q$ be the total space of a nontrivial $\mathrm{SU}(2)$-bundle over $S^4$. Any degree one map $Q \to S^7$ induces a quasi-isomorphism \begin{equation} \label{eq:S7example} \mathbf{C}[\Omega_{q_0} Q] \cong \mathbf{C}[\Omega_{x_0} S^7]. \end{equation} But if the Chern class of the bundle is $m \geq 2$, there is a little bit of torsion in the $K$-theory of $Q$: $\mathbf{K}_1(Q) = \mathbf{Z} \oplus \mathbf{Z}/m$ (while $\mathbf{K}_0(Q) = \mathbf{Z}$, and $\mathbf{K}_0(S^7) = \mathbf{K}_1(S^7) = \mathbf{Z}$). The conjecture predicts that there is no metric on $Q$ for which \eqref{eq:S7example} preserves the length filtration. The possibly spurious comparison made in the remarks above is that, since \eqref{eq:tors-prod} equals $m$, the Laplace-de Rham spectra of $Q$ and of $S^7$ are never exactly the same for any choice of metrics.
\subsection*{Acknowledgements} I thank Mohammed Abouzaid and Sheel Ganatra for sharing their ideas about the $K$-theory of Fukaya categories, Piotr Achinger and Mattias Talpo for their paper about $\mathbf{C}((t))$-schemes, and Nick Addington and Ben Antieau for some corrections and other advice. I have also benefited from discussions with Ron Donagi, Mauricio Romo, Paul Seidel, Jake Solomon, Semon Rezchikov, and Arnav Tripathy. I was supported by NSF-DMS-1811971.
\section{Introduction} Materials with coupled magnetic and dielectric properties have become the topic of intensive research in recent times. Double perovskite materials have shown potential as promising candidates in this research field and have exhibited novel phenomena such as ferromagnetism, magnetodielectric coupling, multiferroicity, magnetoelectric coupling and ferromagnetic semiconductivity.\cite{ndi, vilar, yang, dass, ndia, de, ghara, ilyas4} However, the magnetic and electric coupling is rather weak and appears at low temperature, thus making it difficult to use for application purposes. The ferromagnetic ordering along with the strong insulating behavior of some 3$d$-based double perovskites is eye-catching and can host exotic phenomena.\cite{fuh, azu} Recently, several double perovskites have been extensively studied and have exhibited exotic phenomena. For instance, the partially disordered ferromagnetic semiconductor La$_2$NiMnO$_6$ reportedly exhibits colossal magnetodielectricity and multiglass properties.\cite{choudhary} This material also exhibits magnetocapacitance close to room temperature\cite{rogado} and shows novel behavior with doping.\cite{hoss2} Lu$_2$CoMnO$_6$ with up-up-down-down ($\uparrow\uparrow\downarrow\downarrow$) magnetic structure exhibits ferroelectricity which originates from the exchange-striction mechanism.\cite{zhang} La$_2$Mn(Ni/Fe)O$_6$ shows ferromagnetic ordering and high refrigerant capacity.\cite{gau} In these compounds the Ni/Mn ions are ferromagnetically ordered via superexchange interactions mediated by oxygen. Materials with Co/Mn at the B$^\prime$/B$^\prime\prime$ sites are another interesting class of double perovskites with rich physical properties. Double perovskite materials like La$_2$CoMnO$_6$ show ferromagnetic behavior around $\sim$220 K along with magnetocapacitance, and these properties have drawn the interest of researchers to Co/Mn based perovskites. Tb$_2$CoMnO$_6$ is an anisotropic magnetic material and shows a giant rotating magnetocaloric effect.\cite{moon} Further, pyroelectric/magnetoelectric properties along with a ferromagnetic transition have been observed for (Y/Er)$_2$CoMnO$_6$.\cite{bisco1, bisco2} Structural and magnetic properties have been studied in single-crystal (Ho,Tm,Yb,Lu)$_2$CoMnO$_6$ materials using neutron diffraction and DC magnetization.\cite{bisco3} Mean-field-type ferromagnetism and the occurrence of a Griffiths phase have also been observed in nano-crystalline Pr$_2$CoMnO$_6$.\cite{ilyas1, ilyas2} Raman studies on bulk (Pr, Ho)$_2$CoMnO$_6$ reveal strong spin-phonon coupling in these materials.\cite{wliu, ilyas3, silva} Further, the dielectric properties of bulk double perovskites have been reported and show a thermally activated relaxation mechanism and non-Debye behavior.\cite{nath,jwchen, ilyas3, jgjwang} Despite so many interesting properties of Co/Mn based double perovskites, these materials have not been studied in much detail, especially in thin-film and nano-crystalline form. Here we have chosen Sm$_2$CoMnO$_6$ to investigate its physical properties in nano-crystalline form. It is worth noting that the charge states Co$^{2+}$ and Mn$^{4+}$ give an ordered monoclinic structure, whereas Co$^{3+}$ and Mn$^{3+}$ would result in a B-site disordered phase. Further, in the ordered phase of Sm$_2$CoMnO$_6$ the Co$^{2+}$-O-Mn$^{4+}$ pairs couple via the superexchange interaction and give rise to ferromagnetic ordering,\cite{good} whereas for the disordered phase the moments favour an antiferromagnetic spin arrangement.
Furthermore, it is expected that with decreasing ionic radius of the rare-earth ion at the A-site, structural modifications take place and the magnetic ordering temperature decreases. Since the Sm$^{3+}$ ion is smaller than the La$^{3+}$ and Pr$^{3+}$ ions, we expect a reduced magnetic phase transition temperature.\cite{vasi} Further, since we investigate a nano-crystalline Sm$_2$CoMnO$_6$ sample, the grain size will also affect the physical properties, especially the dielectric behavior of the sample. In this paper, we have studied nano-crystalline, sol-gel-prepared Sm$_2$CoMnO$_6$. Structural, magnetic, dielectric and transport properties have been studied in detail. Our aim is to investigate the physical properties of nano-crystalline Sm$_2$CoMnO$_6$ and to compare the obtained results with those reported for the bulk material in the earlier literature. Magnetic ordering, spin-phonon coupling and the dielectric response are the main focus of this study. \begin{figure}[th] \centering \includegraphics[width=8cm, height=12cm]{Fig1.eps} \caption{(Color online) Comparative x-ray diffraction analysis by Rietveld refinement for nano-crystalline Sm$_2$CoMnO$_6$: (a) orthorhombic and (b) monoclinic phase. (c) SEM micrograph of nano-crystalline Sm$_2$CoMnO$_6$.} \label{fig:Fig1} \end{figure} \section{Experimental details} The nano-crystalline sample of Sm$_2$CoMnO$_6$ was prepared by the sol-gel method as adopted in earlier work.\cite{ilyas1} Sm$_2$O$_3$, Co(NO$_3$)$_2$.6H$_2$O and C$_4$H$_6$MnO$_4$.4H$_2$O were taken in stoichiometric ratio and dissolved in water. Sm$_2$O$_3$ is insoluble in water, so we added HNO$_3$ drop by drop with continuous stirring on a magnetic stirrer while heating at 70 $^o$C until a clear solution was obtained. These solutions were then poured into a 400 ml beaker half-filled with citric acid solution in water. The beaker with the obtained solution was then placed on the magnetic stirrer at 90 $^o$C for 24 hours until the gel formed. The temperature was then increased to 200 $^o$C to dry the gel. The sample was then collected from the beaker and crushed into fine powder by grinding in a mortar and pestle. To ensure phase purity, the fine powder was then heated in a furnace at 800 $^o$C and 900 $^o$C for 12 hours. X-ray diffraction was used for structural characterization and to confirm the phase purity of the sample, using a Rigaku MiniFlex600 diffractometer. XRD data were collected at room temperature in the 2$\theta$ range 10$^o$--90$^o$ with a step size of 0.02$^o$ and a scan rate of 2$^o$/min. Crystal structure analysis was done by Rietveld refinement of the XRD data using the Fullprof program. The XPS measurements were performed with a base pressure in the range of $10^{-10}$ mbar using a commercial electron energy analyzer (Omnicron nanotechnology) and a non-monochromatic Al$_{K\alpha}$ X-ray source (h$\nu$ = 1486.6 eV). The XPSpeakfit software was used to analyze the XPS data. The samples used for the XPS study were in pellet form, and ion beam sputtering was performed on the samples to expose a clean surface before the measurements. Magnetic data were collected with a Physical Properties Measurement System from Cryogenic Inc. A Labram-HR800 micro-Raman spectrometer with a diode laser of wavelength ($\lambda$) = 473 nm was used to record the temperature dependent Raman spectra. This spectrometer uses a grating with 1800 grooves/mm and a CCD detector with a high resolution of $\sim$1 cm$^{-1}$.
A THMS600 stage from Linkam UK was used for temperature variation, with a stability of $\pm$0.1 K, for the low-temperature Raman measurements. Dielectric measurements in the frequency range from 1 Hz to 5.6 MHz were performed using a computer-controlled dielectric spectrometer. \section{Results and Discussion} \subsection{Structural study} X-ray diffraction data were recorded for the structural characterization of Sm$_2$CoMnO$_6$. A detailed structural study of the sample was carried out by Rietveld refinement of the XRD data using the FullProf program. The XRD patterns along with the Rietveld refinements for the orthorhombic and monoclinic phases are shown in Figs. 1a and 1b. We performed the Rietveld refinement for both orthorhombic and monoclinic crystal structures; however, the fit for the orthorhombic phase is not as good as that for the monoclinic one. The R-factors R$_{exp}$ and R$_{wp}$ and the goodness of fit $\chi^{2}$ for the orthorhombic structure come out to 15.0, 19.7 and 1.78, respectively, whereas the refinement with the monoclinic phase gives a reasonably good fit with R$_{wp}$, R$_{exp}$ and $\chi^{2}$ values of 18.6, 14 and 1.5, respectively. These values are acceptable and show that the sample is chemically pure and single phase.\cite{bhatti1, bhatti2} The results confirm that Sm$_2$CoMnO$_6$ adopts the monoclinic structure with the \textit{P2$_1$/n} space group. The unit cell parameters are $a$ = 5.330624 $\AA$, $b$ = 5.498045 $\AA$, $c$ = 7.566482 $\AA$ and $\beta$ = 89.98(5)$^o$, with a unit cell volume of 221.75 $\AA^3$. Further, to obtain the particle size of the nano-crystalline Sm$_2$CoMnO$_6$ sample we performed scanning electron microscopy. Fig. 1c shows the SEM image obtained for the nano-crystalline sample. The SEM image was analyzed using the ImageJ software; the average crystallite size for the present compound is $\sim$84.17 nm. \begin{figure}[t] \centering \includegraphics[width=8cm, height=12cm]{Fig2.eps} \caption{(Color online) (a) The XPS core level spectra of Co 2$p$. (b) XPS core level spectra of Mn 2$p$. (c) XPS core level spectra of Sm 3$d$. In the figure the red solid line is the overall envelope of the XPS spectrum and the other colored solid lines are the respective fitted peaks.} \label{fig:Fig2} \end{figure} \subsection{X-ray photo-electron spectroscopy (XPS)} The physical properties of a compound are largely determined by the oxidation states of the cations present in the material, and XPS is a vital tool for determining these charge states. We have employed XPS to study the cationic charge states of Co, Mn and Sm in Sm$_2$CoMnO$_6$. The XPS spectrum of Co 2$p$ is shown in Fig. 2a. Two peaks located at 780.7 eV and 796.01 eV correspond to Co 2$p_{3/2}$ and Co 2$p_{1/2}$, respectively, resulting from the spin-orbit splitting of the 2$p$ orbital, with a splitting of 15.3 eV. Besides the Co 2$p$ peaks, two satellite peaks are observed close to the Co 2$p$ peaks, in agreement with the literature.\cite{wang, qiu, xia} The peak positions of the Co 2$p$ core level indicate that the Co cations are present in the +2 oxidation state. The measured XPS spectrum of the Mn 2$p$ core level along with the peak fitting is shown in Fig. 2b. The Mn 2$p$ spectrum shows two distinct peaks located at 642 eV and 654 eV corresponding to Mn 2$p_{3/2}$ and 2$p_{1/2}$, resulting from the spin-orbit splitting of the 2$p$ orbitals with a splitting energy of 12 eV.
Two small peaks observed on the higher-energy side of the fitted data are attributed to satellite features. These results corroborate the reported literature.\cite{ida, sachoo, cao} The peak positions reveal that the Mn cations are present in the +4 oxidation state. The XPS spectrum of the Sm 3$d$ core level along with the fitted peaks is shown in Fig. 2c. It is evident from the figure that the two spin-orbit-split peaks Sm 3$d_{5/2}$ and Sm 3$d_{3/2}$ are located at 1082.16 eV and 1109.33 eV, respectively, with a spin-orbit splitting energy of 27.17 eV. Detailed analysis of the XPS spectrum reveals that the Sm cations are present in the +3 oxidation state, as reported in the literature.\cite{qliu, duan} Thus the XPS study establishes the oxidation states and the spin-orbit splitting energies of the cations present in the material. The observed oxidation states Co$^{2+}$, Mn$^{4+}$ and Sm$^{3+}$ suggest the crystallization of the ordered phase of Sm$_2$CoMnO$_6$. Further, in the XPS data we do not see any contribution from other cationic valencies at the peak positions expected for Mn$^{3+}$ or Co$^{3+}$, unlike some reports in the literature.\cite{qiu, xia, sachoo, cao, haung} \begin{figure}[th] \centering \includegraphics[width=8cm]{Fig3.eps} \caption{(Color online) (a) Temperature dependent magnetization data $M(T)$ for Sm$_2$CoMnO$_6$ measured at different fields. (b) $M(T)$ data plotted as inverse susceptibility $\chi^{-1}$; the solid line is the fit to the modified Curie-Weiss law and the dotted line shows the conventional Curie-Weiss fit. Inset: dM/dT vs $T$ plot showing $T_c$ for Sm$_2$CoMnO$_6$.} \label{fig:Fig3} \end{figure} \begin{figure}[t] \centering \includegraphics[width=8cm]{Fig4.eps} \caption{(Color online) Isothermal magnetization $M(H)$ data collected at 2 K in an applied field of $\pm$50 kOe for Sm$_2$CoMnO$_6$.} \label{fig:Fig4} \end{figure} \subsection{Magnetization study} Fig. 3a shows the temperature dependent magnetization ($M(T)$) data measured in both zero-field-cooled ($ZFC$) and field-cooled ($FC$) modes. The $ZFC$ data were measured in the 300 K to 2 K range in an applied field of 100 Oe, whereas the $FC$ data were measured at three different applied magnetic fields (see Fig. 3a). It is quite evident that with decreasing temperature the magnetic moment ($M$) in $M(T)$ is steady down to 160 K; with further decrease in temperature the magnetic moment begins to rise sharply. It is well known that ordered Sm$_2$CoMnO$_6$ double perovskite is expected to exhibit ferromagnetic ordering. Since the XPS study shows that the Mn and Co cations are in the +4 and +2 oxidation states, respectively, with outer electronic configurations Mn$^{4+}$ (t$_{2g}^{3}$e$_{g}^{0}$) and Co$^{2+}$ (t$_{2g}^{5}$e$_{g}^{2}$), these cations take part in the superexchange interaction and order ferromagnetically. The sharp rise in moment below 160 K thus marks the paramagnetic-to-ferromagnetic phase transition in Sm$_2$CoMnO$_6$. With further decrease in temperature, a large bifurcation between $M_{ZFC}$ and $M_{FC}$ appears. The $M_{ZFC}$ curve shows a peak-like feature around $T_c$, as shown in Fig. 3a; with further decrease in temperature the moment remains steady. In contrast, $M_{FC}$ shows a typical ferromagnetic-like behavior. The nano-crystalline Sm$_2$CoMnO$_6$ thus shows a magnetic phase transition from PM to FM with T$_c$ $\sim$148 K, obtained from the point of inflection in the $dM/dT$ vs $T$ plot shown in the inset of Fig. 3b.
We have measured $M_{FC}$ at different applied fields and observe that with increasing field $T_c$ shifts to higher temperature, which is due to forced alignment of the spins along the field direction. To understand the magnetic behavior in more detail we have plotted the magnetization data as the temperature dependent inverse magnetic susceptibility, i.e. $\chi^{-1}$ vs $T$, as shown in Fig. 3b. In the paramagnetic region just above $T_c$ we observe that the inverse magnetic susceptibility ($\chi^{-1}$) deviates from the linearity expected for a paramagnet obeying the Curie-Weiss law (see dotted line). Such a deviation in $\chi^{-1}$ close to $T_c$ is, however, expected for double perovskite compounds and can be understood using the modified Curie-Weiss law:\cite{booth} \begin{eqnarray} \chi = \frac{C_{TM}}{T - \theta_{TM}} + \frac{C_{RE}}{T - \theta_{RE}} \end{eqnarray} where $C_{TM}$, $C_{RE}$ and $\theta_{TM}$, $\theta_{RE}$ are the Curie constants and paramagnetic Curie temperatures, respectively; the subscripts TM and RE denote the transition-metal and rare-earth ions. We have fitted the susceptibility data of Sm$_2$CoMnO$_6$ at high temperatures above $T_c$. The parameters obtained from fitting $\chi^{-1}$ with Eq. 1 in Fig. 3b were used to calculate the Curie constants and paramagnetic Curie temperatures. The values obtained for $C_{TM}$ and $\theta_{TM}$ for Sm$_2$CoMnO$_6$ are 3.081 emu K mol$^{-1}$ Oe$^{-1}$ and 146.96 K, respectively. The large positive value of $\theta_{TM}$ signifies that ferromagnetic ordering is present in the sample. Further, the effective magnetic moment is calculated using the formula $\mu_{eff}$ = $\sqrt{3 C_{TM} k_{B}/N}$, where $C_{TM}$ is obtained from the modified Curie-Weiss fit in Fig. 3b; the value of $\mu_{eff}$ is 4.91474 $\mu_B$/f.u. From the same fit we obtain $C_{RE}$ = 15.953 emu K mol$^{-1}$ Oe$^{-1}$ for the rare-earth ions, which is close to the free-ion value for Sm$^{3+}$. Isothermal magnetization $M(H)$ data were collected at 2 K up to an applied magnetic field of $\pm$50 kOe, as shown in Fig. 4. The $M(H)$ curve shows hysteresis, which is a signature of ferromagnetic ordering; however, the curve is not symmetric. Further, the magnetic moment shows no sign of saturation even at the highest applied magnetic field of 50 kOe. The magnetic moment at 50 kOe is about 3.1 $\mu_B$/f.u., whereas the remanent magnetization and coercive field are 1.7 $\mu_B$/f.u. and 10 kOe, respectively. Reducing the particle size to the nano-scale affects the magnetic properties of compounds a great deal. Many materials with nano-scale particle dimensions show drastic changes in magnetic properties compared to their bulk form; in particular, manganites show superparamagnetism, spin-glass behavior, core-shell spin structures, etc.\cite{tzhang} It is well established that reducing the particle size affects the unit cell dimensions, which changes the metal-oxygen bond lengths and bond angles and hence the magnetic properties.\cite{rer} Finally, to understand the effect of the nano-size on the magnetism we compared the magnetic parameters with those reported for the bulk.\cite{nath} We find that in nano form Sm$_2$CoMnO$_6$ has a large paramagnetic Curie temperature of 146.96 K, whereas for the bulk it is around 122 K.\cite{nath}
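As an aside, the modified Curie-Weiss analysis above can be condensed into a short fitting sketch. The following is a minimal illustration only, assuming synthetic susceptibility data and hypothetical initial guesses; it is not the analysis code used for Fig. 3b.
\begin{verbatim}
# Sketch: fit chi(T) above T_c with the modified Curie-Weiss law
# (Eq. 1) and extract C_TM, theta_TM and the effective moment.
# All data values and starting guesses below are synthetic.
import numpy as np
from scipy.optimize import curve_fit

def modified_cw(T, C_TM, th_TM, C_RE, th_RE):
    return C_TM / (T - th_TM) + C_RE / (T - th_RE)

rng = np.random.default_rng(0)
T = np.linspace(170.0, 300.0, 60)                 # K, paramagnetic region
chi = modified_cw(T, 3.08, 147.0, 15.95, -50.0)   # synthetic "data"
chi *= 1.0 + 0.01 * rng.standard_normal(T.size)   # 1% noise

popt, _ = curve_fit(modified_cw, T, chi, p0=[3.0, 145.0, 16.0, -40.0])
C_TM, th_TM = popt[0], popt[1]
# CGS shortcut equivalent to mu_eff = sqrt(3 k_B C / N):
mu_eff = np.sqrt(8.0 * C_TM)                      # in mu_B per f.u.
print(f"theta_TM = {th_TM:.1f} K, mu_eff = {mu_eff:.2f} mu_B/f.u.")
\end{verbatim}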
The experimentally obtained value of the effective magnetic moment is found to be lower than that reported for the bulk.\cite{nath} This effect on the magnetic properties can be attributed to surface ferromagnetism of small nanoparticles, as found in doped perovskites.\cite{hoss1} \begin{figure*}[t] \centering \includegraphics[width=14cm]{Fig5.eps} \caption{(Color online) (a) Raman spectra of Sm$_2$CoMnO$_6$ measured at different temperatures. (b) Line shapes and Lorentzian fits of the A$_{1g}$ and B$_{2g}$ Raman modes at 496 and 636 cm$^{-1}$, respectively.} \label{fig:Fig5} \end{figure*} \begin{figure}[t] \centering \includegraphics[width=8cm]{Fig6.eps} \caption{(Color online) Temperature variation of (a) the Raman shift and (b) the FWHM of the Raman mode at 636 cm$^{-1}$, corresponding to the stretching of the (Co/Mn)O$_6$ octahedra in Sm$_2$CoMnO$_6$. The solid line is the fit to Eq. 2.} \label{fig:Fig6} \end{figure} \subsection{Temperature dependent Raman study} Fig. 5a shows the temperature dependent Raman spectra taken at selected temperatures across the magnetic transition. The Raman spectra were recorded at 10 K, at room temperature, and at close temperature intervals across the magnetic transition. As is evident from Fig. 5a, the important features in the Raman spectra are the prominent Raman modes at 636 and 496 cm$^{-1}$, corresponding to the B$_{2g}$ stretching mode and the A$_{1g}$ breathing mode, respectively. These Raman modes are due to stretching, bending and rotation of the (Co/Mn)O$_6$ octahedra. It is known from theoretical lattice dynamics that the strong sharp peak at 636 cm$^{-1}$ originates from a symmetric stretching of the (Co/Mn)O$_6$ octahedra, whereas the band at around 496 cm$^{-1}$ describes a mixed vibration of antisymmetric stretching and bending.\cite{ilive} Additionally, the mode at $\sim$1278 cm$^{-1}$ represents the second-order overtone of the breathing mode.\cite{meyer} The temperature dependent Raman spectra show changes in the intensities of the peaks as well as in their positions for each mode. Further, it is well known that the ordered and disordered phases of double perovskites can be distinguished using Raman spectra; such a detailed study has been carried out on La$_2$CoMnO$_6$, where the prominent mode at 636 cm$^{-1}$ is assigned to the monoclinic P2$_1$/n structure.\cite{ilive, meyer} The present study shows similar mode frequencies. To probe the presence of spin-phonon coupling in the present compound, we have analyzed the Raman data with the following anharmonic decay model:\cite{harish} \begin{eqnarray} \omega(T) = \omega{_0} - A\left[1 + \frac{2}{exp\left(\frac{\hbar\omega_0}{2k_BT}\right) - 1}\right] \end{eqnarray} \begin{eqnarray} \Gamma(T) = \Gamma{_0} - B\left[1 + \frac{2}{exp\left(\frac{\hbar\omega_0}{2k_BT}\right) - 1}\right] \end{eqnarray} where $\omega_0$ and $\Gamma_0$ are the intrinsic frequency and line width of the optical mode, and A and B are the anharmonic coefficients. $\omega(T)$ and $\Gamma(T)$ describe the expected temperature dependence of the phonon mode frequency and line width due to anharmonic phonon-phonon scattering. The temperature dependent Raman spectra shown in Fig. 5a were analyzed using Lorentzian functions; from the fits, the peak positions and line widths of the Raman modes were obtained. Fig. 6a shows the temperature dependent peak position of the stretching mode.
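A minimal sketch of the anharmonic fit of Eq. 2 is given below, assuming synthetic data points in place of the measured mode positions; the spin-phonon coupling discussed next appears as a deviation of the low-temperature data from such a fit.
\begin{verbatim}
# Sketch: fit the high-temperature mode frequency with the anharmonic
# decay model of Eq. 2. The constant hc/k_B = 1.4388 cm K converts
# wavenumbers to temperature units; the data points are synthetic.
import numpy as np
from scipy.optimize import curve_fit

HC_OVER_KB = 1.4388  # cm K

def anharmonic(T, omega0, A):
    x = HC_OVER_KB * omega0 / (2.0 * T)   # hbar*omega0 / (2 k_B T)
    return omega0 - A * (1.0 + 2.0 / (np.exp(x) - 1.0))

rng = np.random.default_rng(1)
T = np.array([300.0, 270.0, 240.0, 210.0, 180.0, 160.0])  # above T_c
w = anharmonic(T, 638.0, 1.5) + 0.03 * rng.standard_normal(T.size)

popt, _ = curve_fit(anharmonic, T, w, p0=[637.0, 1.0])
print("omega0 = %.2f cm^-1, A = %.2f cm^-1" % tuple(popt))
# A low-temperature point falling off this curve would signal
# spin-phonon coupling, as discussed in the text.
\end{verbatim}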
The mode frequency increases with decreasing temperature, shows a deviation around 150 K, i.e. at the magnetic phase transition, and then increases further down to the lowest measured temperature. To analyze this we fitted the phonon mode frequency with the anharmonic decay model of Eq. 2 (solid red line in Fig. 6a). We observe that around $T_c$ the phonon frequency deviates from the anharmonic behavior; such a deviation is attributed to spin-phonon coupling in the material. Similar results have been reported for many other materials, where a strong deviation from anharmonic behavior appears at the magnetic transition.\cite{sandi, grana, lave} Similarly, the line width as a function of temperature is plotted in Fig. 6b and fitted with the anharmonic model of Eq. 3. The line width is found to deviate across $T_c$ in the same fashion as the mode frequency. The deviation of the mode frequency and line width from anharmonic behavior around the magnetic ordering temperature is due to the additional scattering resulting from the spin-phonon interaction. \begin{figure*}[th] \centering \includegraphics[width=14cm]{Fig7.eps} \caption{(Color online) Temperature dependent (a) real part of the complex dielectric permittivity ($\epsilon^\prime$) and (b) loss tangent (tan $\delta$) measured for Sm$_2$CoMnO$_6$ in the temperature range of 20 K to 300 K at various frequencies.} \label{fig:Fig7} \end{figure*} \begin{figure}[th] \centering \includegraphics[width=8cm]{Fig8.eps} \caption{(Color online) Variation of the relaxation time with inverse temperature, i.e. ln $\tau$ vs 1000/T, obtained from the tangent loss in Fig. 7(b).} \label{fig:Fig8} \end{figure} \subsection{Dielectric study} Figs. 7a and 7b show the temperature dependent real part of the complex dielectric permittivity, $\epsilon^{\prime}$, and the loss tangent, tan $\delta$, respectively, measured in the temperature range 20 K to 300 K at different frequencies for Sm$_2$CoMnO$_6$. For relaxor systems with relaxation mechanisms, each relaxation component corresponds to a plateau in $\epsilon^{\prime}$(T) and a peak in tan $\delta$. For this material we observe that $\epsilon^{\prime}$ increases with temperature: slowly at low temperature and then sharply at higher temperatures. The large dielectric response is due to grain-boundary effects.\cite{jwchen, mansuri} Further, with increasing frequency $\epsilon^{\prime}$ decreases sharply; the higher value of $\epsilon^{\prime}$ at low frequency is attributed to the accumulation of charges at the grain boundaries. On careful observation of the loss tangent curve tan $\delta$, a broad hump is clearly seen at low temperatures, which is a signature of relaxor behavior. The observed relaxation is frequency dependent and shifts to higher temperature with increasing frequency. This clearly indicates that the relaxation peaks in the dielectric loss arise from thermally activated mechanisms. The resonance condition is $\omega_p\tau_p$ = 1, where $\omega$ = 2$\pi f$ is the resonance (angular) frequency.
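As a toy illustration of how the loss-peak positions are read off for the analysis that follows, consider the sketch below; the tan $\delta$ curves here are synthetic Debye peaks obeying an assumed Arrhenius relaxation, not the measured data.
\begin{verbatim}
# Sketch: locate the tan(delta) peak temperature at each measurement
# frequency; the (f, T_peak) pairs feed the Arrhenius analysis below.
import numpy as np

K_B = 8.617e-5                                    # eV/K
T = np.linspace(20.0, 300.0, 561)                 # temperature grid (K)
freqs = np.array([1e2, 1e3, 1e4, 1e5])            # frequencies (Hz)

def toy_tan_delta(T, f, E_a=0.14, tau0=1e-13):
    tau = tau0 * np.exp(E_a / (K_B * T))          # assumed Arrhenius tau
    x = 2.0 * np.pi * f * tau
    return x / (1.0 + x**2)                       # peaks where omega*tau=1

T_peak = np.array([T[np.argmax(toy_tan_delta(T, f))] for f in freqs])
print(dict(zip(freqs, T_peak)))
\end{verbatim}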
The relaxation mechanism and its origin can be analyzed by fitting the peaks in tan $\delta$ with the Arrhenius law: \begin{eqnarray} \tau_{tan \delta} = \tau_0 exp\left(\frac{E_{\alpha}}{k_B T}\right) \end{eqnarray} \begin{eqnarray} \tau_{tan \delta} = \frac{1}{2 \pi f_{tan \delta}} \end{eqnarray} where $T$ is the temperature at which the peak occurs in the loss tangent curve at a particular frequency $f_{tan \delta}$, $\tau_0$ and $E_{\alpha}$ are the characteristic relaxation time and activation energy, respectively, and $k_B$ is the Boltzmann constant. Fig. 8 shows the relaxation times extracted from the dielectric loss peaks plotted against inverse temperature. From the parameters of the fit with Eq. 4 we calculate an activation energy $E_{\alpha}$ = 0.14 eV. \begin{figure*}[th] \centering \includegraphics[width=14cm]{Fig9.eps} \caption{(Color online) Frequency dependent (a) real part of the complex dielectric permittivity ($\epsilon^\prime$) and (b) loss tangent (tan $\delta$) measured for Sm$_2$CoMnO$_6$ at various temperatures between 25 K and 300 K in the frequency range 1 Hz to 5.6 MHz.} \label{fig:Fig9} \end{figure*} To further understand the dielectric response we have measured the frequency dependent $\epsilon^\prime$ and tan $\delta$ over the frequency range 1 Hz to 5.6 MHz for Sm$_2$CoMnO$_6$ at different temperatures. In Fig. 9a we observe that Sm$_2$CoMnO$_6$ exhibits a high dielectric constant at low frequency and high temperature. The dielectric spectrum shown in Fig. 9a clearly shows two plateaus well separated by a dispersion; the two plateaus are attributed to the static and optical dielectric constants. Further, the dispersion in the frequency dependent dielectric constant moves to higher frequency with increasing temperature. In Fig. 9b we show tan $\delta$ as a function of frequency at different temperatures in the range of 20 K to 300 K. It is observed that the loss is high at low frequency and decreases with increasing frequency and decreasing temperature. \subsection{Impedance spectroscopy} Impedance spectroscopy is a vital and informative technique to understand and distinguish the contributions to the electric and dielectric properties from grains, grain boundaries and the electrode-sample contact interface. The various relaxation mechanisms involved can be identified by plotting the impedance in the complex plane at various temperatures. The complex impedance is described by the equations:\cite{fang} \begin{eqnarray} Z^* = Z^\prime + jZ^{\prime\prime} \end{eqnarray} \begin{eqnarray} Z^\prime = \frac{R}{1 + (\omega\tau)^2} \end{eqnarray} \begin{eqnarray} Z^{\prime\prime} = \frac{\omega R \tau}{1 + (\omega\tau)^2} \end{eqnarray} where Z$^*$ is the complex impedance, and Z$^\prime$ and Z$^{\prime\prime}$ are its real and imaginary parts, respectively. R is the resistance, $\omega$ the angular frequency and $\tau$ the relaxation time. Fig. 10a shows the real part of the complex impedance ($Z^{\prime}$) plotted as a function of frequency in the range 1 Hz to 5.6 MHz at various temperatures between 50 K and 300 K. For clarity, both axes are on logarithmic scales. It is quite evident from the figure that $Z^{\prime}$ decreases with increasing temperature. At low temperature $Z^{\prime}$ decreases gradually with increasing frequency; at temperatures above 100 K, however, $Z^{\prime}$ initially remains independent of frequency and then begins to decrease at higher frequency. Further, the frequency independent region moves to higher frequency with increasing temperature.
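Before continuing with the impedance analysis, the activation-energy extraction of Eqs. 4 and 5 can be sketched as follows; the (frequency, peak-temperature) pairs below are hypothetical, and the same recipe reappears for the impedance and modulus peaks in the following subsections.
\begin{verbatim}
# Sketch: Arrhenius analysis of loss peaks (Eqs. 4 and 5).
# ln(tau) = ln(tau_0) + E_alpha/(k_B T) is a straight line in 1/T.
import numpy as np

K_B = 8.617e-5                                    # eV/K
f_meas = np.array([1e2, 1e3, 1e4, 1e5])           # Hz
T_peak = np.array([95.0, 105.0, 118.0, 133.0])    # K (hypothetical)

tau = 1.0 / (2.0 * np.pi * f_meas)                # Eq. 5
slope, intercept = np.polyfit(1.0 / T_peak, np.log(tau), 1)
print(f"E_alpha = {slope * K_B:.2f} eV, tau_0 = {np.exp(intercept):.1e} s")
\end{verbatim}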
Further, it is observed that at high frequency and high temperature the $Z^{\prime}$ values nearly coincide. This feature is possibly due to the release of accumulated space charges at high temperature, which contributes to the enhancement of conduction in this material. The imaginary part of the impedance ($Z^{\prime\prime}$) is shown in Fig. 10b over a wide frequency range. $Z^{\prime\prime}$ shows a peak, reaching $Z^{\prime\prime}_{max}$, in all the curves measured at different temperatures, and the peak moves towards higher frequency with increasing temperature. This shift of the peak to higher frequency with increasing temperature suggests that the relaxation time constant decreases with increasing temperature. \begin{figure*}[th] \centering \includegraphics[width=14cm]{Fig10.eps} \caption{(Color online) (a) Frequency dependent real part of the impedance, $Z^{\prime}$, measured at different temperatures. (b) Frequency dependent imaginary part of the impedance, $Z^{\prime\prime}$, measured at various temperatures. (c), (d) Real and imaginary parts plotted as Nyquist plots, $Z^{\prime}$ vs $Z^{\prime\prime}$.} \label{fig:Fig10} \end{figure*} \begin{figure}[th] \centering \includegraphics[width=8cm]{Fig11.eps} \caption{(Color online) Variation of the relaxation time with inverse temperature, i.e. ln $\tau$ vs 1000/T, obtained from the Z$^{\prime\prime}$ plot.} \label{fig:Fig11} \end{figure} The most probable relaxation time ($\tau$) of a relaxing system can be determined from the position of the loss peak in the $Z^{\prime\prime}$ vs log($f$) plots using the relation: \begin{eqnarray} \tau = \frac{1}{\omega} = \frac{1}{2 \pi f} \end{eqnarray} where $\tau$ is the relaxation time and $f$ is the relaxation frequency. To further understand the relaxation behavior we have plotted the relaxation time $\tau$ against the inverse temperature 10$^3$/$T$ (K$^{-1}$). Fig. 11 shows the temperature variation of $\tau$; the relaxation time is observed to follow the Arrhenius behavior: \begin{eqnarray} \tau_b = \tau_0 exp \left( \frac{E_\alpha}{k_BT} \right) \end{eqnarray} where $\tau_0$ is the pre-exponential factor, k$_B$ the Boltzmann constant and $T$ the absolute temperature. The peak frequency $f_p$ obtained from the maxima of Z$^{\prime\prime}$ in Fig. 10b is converted to a relaxation time and plotted against inverse temperature in Fig. 11. The data are very well fitted by the Arrhenius law of Eq. 10 for a thermally activated relaxation mechanism. From the fitting parameters the activation energy E$_{\alpha}$ is found to be 0.17 eV. Figs. 10c and 10d show Z$^{\prime}$ vs Z$^{\prime\prime}$ in the form of Nyquist plots at selected temperatures, measured in the wide frequency range 1 Hz to 5.6 MHz. It is quite evident from the figures that the plots give semicircles over the whole temperature range, but the semicircles are compressed and their centers do not lie on the Z$^\prime$ axis. This shows that the system deviates from the ideal Debye model, for which the semicircles would have their centers on the Z$^\prime$ axis. Nyquist plots give a great deal of information about grains, grain boundaries, electrode effects, etc. Further, we observe that the radius of the semicircles in the Nyquist plots decreases with increasing temperature, which reflects that the resistivity decreases with increasing temperature. The depression of the semicircles is due to polarization effects.
The non-Debye behavior is accounted for by the grain boundaries, grains, and stress and strain present in the material. In this case we observe only one depressed semicircle, which indicates that the grain effect is what causes the deviation from ideal Debye behavior. \begin{figure*}[th] \centering \includegraphics[width=14cm]{Fig12.eps} \caption{(Color online) (a) Variation of the real part of the electric modulus ($M^\prime$) with temperature. (b) Imaginary part of the electric modulus ($M^{\prime\prime}$) as a function of temperature.} \label{fig:Fig12} \end{figure*} \begin{figure}[th] \centering \includegraphics[width=8cm]{Fig13.eps} \caption{(Color online) Variation of the relaxation time with inverse temperature, i.e. ln $\tau$ vs 1000/T, obtained from the M$^{\prime\prime}$ plot.} \label{fig:Fig13} \end{figure} \subsection{Electric modulus} Information on interface polarization, relaxation time, electrical conductivity, grain-boundary conduction effects, etc. can be deduced from the electric modulus of a material. Figs. 12a and 12b show the temperature dependent real (M$^{\prime}$) and imaginary (M$^{\prime\prime}$) parts of the electric modulus obtained at selected frequencies for Sm$_2$CoMnO$_6$ in the temperature range of 20 K to 300 K. Fig. 12a shows the variation of the real part of the electric modulus with temperature at different frequencies. It is observed that M$^\prime$ increases with decreasing temperature; at low temperatures all the M$^\prime$ curves measured at different frequencies merge into one curve, and a similar feature is observed at high temperatures. There is a large dispersion in M$^\prime$ at intermediate temperatures around 150 K, which may be activated by the magnetic ordering. M$^\prime$ increases with increasing frequency, but its overall temperature dependence remains the same. Fig. 12b shows the variation of $M^{\prime\prime}$ with temperature at selected frequencies. Once again, the $M^{\prime\prime}$ curves reveal relaxation phenomena in the material. The peak value $M^{\prime\prime}_{max}$ shifts to higher temperature with increasing frequency, which suggests that the hopping of charge carriers is predominantly thermally activated. The asymmetric broadening of the peak indicates a spread of relaxation times, which once again suggests that the material is of non-Debye type. In Fig. 13 we plot $\tau$ against inverse temperature, where $\tau$ = 1/(2$\pi f$) is calculated from the measurement frequency at which the peak value M$^{\prime\prime}_{max}$ appears in Fig. 12b. We observe that the relaxation time as a function of inverse temperature (1000/T) gives a straight line, well described by Arrhenius behavior. From the fitting parameters we obtain an activation energy E$_\alpha$ = 0.11 eV. \subsection{Electric AC conductivity} To further understand the charge hopping and electrical properties we have investigated the AC conductivity of Sm$_2$CoMnO$_6$. The AC conductivity is calculated using the relation $\sigma_{ac} = \epsilon_0 \omega \epsilon^{\prime\prime}$.\cite{sing} Fig. 14a shows the variation of the AC conductivity with frequency, i.e. $\sigma_{ac}$ vs $f$, at selected temperatures in the range 50 K to 300 K. It is evident from the figure that at low frequencies the conductivity is independent of frequency, giving a plateau region at all temperatures. In this frequency region the conduction is dominated by the DC conductivity ($\sigma_{dc}$). However, at higher frequencies the conductivity increases with increasing frequency.
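A toy sketch of this conversion from the measured loss to $\sigma_{ac}$, and of estimating $\sigma_{dc}$ from the low-frequency plateau, is given below; all values are illustrative, not measured data.
\begin{verbatim}
# Sketch: compute sigma_ac = eps_0 * omega * eps'' from a synthetic
# loss spectrum and estimate sigma_dc from the low-frequency plateau.
import numpy as np

EPS0 = 8.854e-12                                  # F/m
f = np.logspace(0.0, 6.75, 100)                   # 1 Hz .. ~5.6 MHz
omega = 2.0 * np.pi * f

sigma_dc_true, A, n = 1.0e-3, 2.0e-7, 0.6         # illustrative values
eps_im = (sigma_dc_true + A * omega**n) / (EPS0 * omega)  # synthetic eps''

sigma_ac = EPS0 * omega * eps_im                  # S/m
sigma_dc = sigma_ac[:5].mean()                    # plateau estimate
print(f"sigma_dc ~ {sigma_dc:.2e} S/m")
\end{verbatim}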
It is further notable that the plateau region marked by the dc conductivity in Fig. 14a extends to higher frequencies with increasing temperature. The frequency independent region also suggests that hopping of charge carriers is absent at low frequencies. The ac conductivity at high frequency in this case obeys Jonscher's universal power law:\cite{thakur} \begin{eqnarray} \sigma_{ac} = \sigma_{dc} + A\omega^n \end{eqnarray} where A is a temperature dependent constant, $\omega$ = 2$\pi f$, and $n$ is the power law exponent, which generally varies between 0 and 1 depending upon temperature. The value of the power law exponent $n$ represents the extent of interaction between the mobile ions and the surrounding lattice: for a non-interacting Debye system $n$ = 1, and with decreasing $n$ the interaction between the lattice and the mobile ions is expected to increase. Further, the constant A gives the degree of polarizability. \begin{figure}[th] \centering \includegraphics[width=8cm]{Fig14.eps} \caption{(Color online) (a) Frequency dependence of the ac conductivity ($\sigma_{ac}$) for temperatures ranging from 50 K to 300 K for Sm$_2$CoMnO$_6$. Solid red lines are fits to Eq. 11. (b) Temperature variation of the power law exponent $n$.} \label{fig:Fig14} \end{figure} \begin{figure}[t] \centering \includegraphics[width=8cm]{Fig15.eps} \caption{(Color online) The variation of $\sigma_{ac}$ with inverse temperature (10$^3$/T) for Sm$_2$CoMnO$_6$. The solid lines are fits to Eq. 12.} \label{fig:Fig15} \end{figure} The frequency dependent $\sigma_{ac}$ is fitted with the power law as shown in Fig. 14a; the conductivity data are fitted well over the full frequency range 1 Hz to 5.6 MHz. From the fitting parameters the exponent $n$ is obtained at all temperatures and is plotted in Fig. 14b. It is evident from the figure that the value of $n$ at room temperature is around 0.5 and increases with decreasing temperature. However, we find that below 150 K the conductivity data no longer obey the power law behavior. To further understand the conduction mechanism we have examined the temperature dependence of the AC conductivity. The variation of the AC conductivity with inverse temperature, i.e. $\sigma_{ac}$ vs 10$^3$/T, at selected frequencies is shown in Fig. 15. We observe that the conductivity increases with increasing frequency. The temperature dependence is described by \begin{eqnarray} \sigma_{ac} = \sigma_{0}exp\left(\frac{-E_\alpha}{k_BT}\right) \end{eqnarray} where $\sigma_{0}$ is the pre-exponential factor, $k_B$ is the Boltzmann constant and $E_\alpha$ is the activation energy. We have fitted the conductivity data with Eq. 12 and find that the data are well fitted at high temperatures for all frequencies. At low temperature the AC conductivity is independent of temperature, forming a plateau, although it still depends on frequency. From the fitting parameters we calculate activation energies that decrease with increasing frequency: E$_{\alpha}$(1 MHz) = 2.15 eV, E$_{\alpha}$(100 kHz) = 2.67 eV, E$_{\alpha}$(10 kHz) = 2.86 eV, E$_{\alpha}$(1 kHz) = 2.87 eV and E$_{\alpha}$(100 Hz) = 2.88 eV. \section{Conclusion} A nano-crystalline Sm$_2$CoMnO$_6$ sample was successfully prepared by the sol-gel method, and its structural, magnetic and dielectric properties were studied in detail. The structural study by x-ray diffraction confirms that the sample is single phase and chemically pure.
Rietveld refinement shows that Sm$_2$CoMnO$_6$ crystallizes in the monoclinic crystal structure with the \textit{P2$_1$/n} space group. The magnetization study reveals that Sm$_2$CoMnO$_6$ is a ferromagnetic material that undergoes a paramagnetic-to-ferromagnetic phase transition around 148 K, marked as the transition temperature ($T_c$). The effective magnetic moment obtained from the experimental data is close to the theoretical value for a spin-only system with the Co/Mn sublattice in the high-spin state. It is further observed that the Sm$^{3+}$ spins order in the direction opposite to the Co/Mn sublattice, giving rise to antiferromagnetic ordering at low temperature. The temperature dependent Raman study shows a deviation of the mode frequency and line width from anharmonic behavior around $T_c$ for the 636 cm$^{-1}$ stretching Raman mode, which confirms the presence of spin-phonon coupling in Sm$_2$CoMnO$_6$. The dielectric response was studied in detail; the material shows a high dielectric constant, and the dielectric loss shows a relaxation process due to grains, which is thermally activated in nature with an activation energy of 0.14 eV. The impedance spectroscopy and electric modulus studies further confirm that the relaxation process is thermally activated, with the relaxation time following Arrhenius behavior. The AC conductivity has been studied as a function of both frequency and temperature; we find that the activation energy decreases with increasing frequency. Further, the exponent obtained from the conductivity analysis shows that the system deviates from the Debye model. \section{Acknowledgment} We acknowledge MNIT Jaipur, India for the XPS data and AIRF (JNU) for the magnetic measurement facilities. We acknowledge the UGC-DAE Consortium Indore and Dr. V. G. Sathe for the Raman data. We also acknowledge Dr. A. K. Pramanik for the dielectric measurements and UPEA-II funding for the LCR meter. We thank Saroj Jha and Dr. Ruchita Pal for help in recording the data. Ilyas Noor Bhatti acknowledges the University Grants Commission, India for financial support.
\usepackage{parskip} \input{math} \usepackage{xspace} \usepackage{ifthen} \usepackage{todonotes} \usepackage{hyperref} \hypersetup{ linkbordercolor=cyan, urlbordercolor=magenta, citebordercolor=green, } \newcommand{\topic}[1]{\vspace{1mm}\noindent\textbf{#1}} \newcommand{\meanstd}[3][]{ \ifthenelse{ \equal{#1}{} } {#2 {\scriptstyle \pm #3}} {#2 {\scriptstyle \pm #3}\times10^{#1}} } \newcommand{\normpdf}[3]{\mathcal{N}\left({#1}; {#2}, {#3 } \right)} \begin{document} \twocolumn[ \aistatstitle{Parallel MCMC Without Embarrassing Failures} \aistatsauthor{Daniel Augusto de Souza$^1$, Diego Mesquita$^{2,3}$, Samuel Kaski$^{2,4}$, Luigi Acerbi$^5$} \aistatsaddress{ $^1$University College London $^2$Aalto University $^3$Getulio Vargas Foundation\\ $^4$University of Manchester $^5$University of Helsinki\\ \texttt{\small daniel.souza.21@ucl.ac.uk, diego.mesquita@fgv.br, samuel.kaski@aalto.fi, luigi.acerbi@helsinki.fi} }] \runningauthor{Daniel Augusto de Souza, Diego Mesquita, Samuel Kaski, Luigi Acerbi} \begin{abstract} \emph{Embarrassingly parallel} Markov Chain Monte Carlo (MCMC) exploits parallel computing to scale Bayesian inference to large datasets by using a two-step approach. First, MCMC is run in parallel on (sub)posteriors defined on data partitions. Then, a server combines local results. While efficient, this framework is very sensitive to the quality of subposterior sampling. Common sampling problems such as missing modes or misrepresentation of low-density regions are amplified -- instead of being corrected -- in the combination phase, leading to catastrophic failures. In this work, we propose a novel combination strategy to mitigate this issue. Our strategy, Parallel Active Inference (PAI), leverages Gaussian Process (GP) surrogate modeling and active learning. After fitting GPs to subposteriors, PAI (i) shares information between GP surrogates to cover missing modes; and (ii) uses active sampling to individually refine subposterior approximations. We validate PAI in challenging benchmarks, including heavy-tailed and multi-modal posteriors and a real-world application to computational neuroscience. Empirical results show that PAI succeeds where previous methods catastrophically fail, with a small communication overhead. \end{abstract} \section{INTRODUCTION} Markov Chain Monte Carlo (MCMC) methods have become a gold standard in Bayesian statistics \citep{gelman2013bayesian,Carpenter2017}. However, scaling MCMC methods to large datasets is challenging due to their sequential nature and the fact that they typically require many likelihood evaluations, implying repeated sweeps through the data. Various approaches that leverage distributed computing have been proposed to mitigate these limitations \citep{Angelino+others:2016, Robert2018}. In general, we can split these approaches between those that incur constant communication costs and those requiring frequent interaction between server and computing nodes \citep{vehtari2020expectation}. \emph{Embarrassingly parallel} MCMC~\citep{Neiswanger2014} is a popular class of methods which employs a divide-and-conquer strategy to sample from a target posterior, requiring only a single communication step.
For dataset $\mathcal{D}$ and model parameters $\theta \in \mathbb{R}^D$, suppose we are interested in the Bayesian posterior $\post \propto p(\theta) p(\mathcal{D} | \theta)$, where $p(\theta)$ is the prior and $p(\mathcal{D} | \theta)$ the likelihood. Embarrassingly parallel methods begin by splitting the data $\mathcal{D}$ into $K$ smaller partitions $\mathcal{D}_1, \ldots, \mathcal{D}_K$ so that we can rewrite the posterior as \vspace{-0.25em} \begin{equation}\label{eq:pde} \post \propto \prod_{k=1}^{K} p(\theta)^{1/K} p(\data_k | \theta) \equiv \prod_{k=1}^{K} p_k(\theta). \vspace{-0.2em} \end{equation} Next, an MCMC sampler is used to draw samples $\samples_k$ from each \emph{subposterior} $p_k(\theta)$, for $k=1\ldots K$, in parallel. Then, the computing nodes send the local results to a central server, for a final aggregation step. These local results are either the samples themselves or approximations $q_1, \ldots, q_K$ built using them. Works in embarrassingly parallel MCMC mostly focus on combination strategies. \citet{Scott2016} employ a weighted average of subposterior samples. \citet{Neiswanger2014} propose using multivariate-normal surrogates as well as non-parametric and semi-parametric forms. \citet{Wang2015} combine subposterior samples into a hyper-histogram with random partition trees. \citet{Nemeth2018} leverage density values computed during MCMC to fit Gaussian process (GP) surrogates to log-subposteriors. \citet{Mesquita2019} use subposterior samples to fit normalizing flows and apply importance sampling to draw from their product. Despite these advances, parallel MCMC suffers from an unaddressed limitation: its dependence on high-quality subposterior sampling. This requirement is especially difficult to meet when subposteriors are multi-modal or heavy-tailed, in which cases MCMC chains often visit only a subset of modes and may underrepresent low-density regions. Furthermore, the surrogates $(q_k)_{k=1}^{K}$ built only on local MCMC samples might match poorly the true subposteriors if not carefully tuned. \paragraph{Outline and contributions.} We first discuss the failure modes of parallel MCMC (Section \ref{sec:embarrassing}). Drawing insight from this discussion, Section \ref{sec:method} proposes a novel GP-based solution, Parallel Active Inference (PAI). After fitting the subposterior surrogates, PAI shares a subset of samples between computing nodes to prevent mode collapse. PAI also uses active learning to refine low-density regions and avoid catastrophic model mismatch. Section \ref{sec:experiments} validates our method on challenging benchmarks and a real-world neuroscience example. Finally, Section \ref{sec:related_works} reviews related work and Section \ref{sec:discussion} discusses strengths and limitations of PAI. \section{EMBARRASSINGLY PARALLEL MCMC: HOW CAN IT FAIL?} \label{sec:embarrassing} \begin{figure*}[tb!] \centering \includegraphics[width=0.9\linewidth]{figures/illustrations/f1_fixed2.pdf} \caption{{\bf Failure modes of embarrassingly parallel MCMC.} \textbf{A--C}. Each column illustrates a distinct failure type described in Section \ref{sec:failures}. For each column, the top rows show two subposteriors $p_k(\theta)$ (black dashed line: ground truth; blue circles: MCMC samples), and the bottom row shows the full posterior $p(\theta|\mathcal{D})$ with the approximate posterior combined using the method specified in the legend (orange line). 
These failure modes are general and not unique to the displayed algorithms (see Appendix \ref{supp:failure_modes} for details and further explanations). } \label{fig:toy} \end{figure*} We recall the basic structure of a generic embarrassingly parallel MCMC algorithm in Algorithm \ref{alg:basic}. This schema has major failure modes that we list below, before discussing potential solutions. We also illustrate these pathologies in Fig \ref{fig:toy}. In this paper, for a function $f$ with scalar output and a set of points $\samples = \{s_1, \ldots, s_N \}$, we denote with $f(\samples) \equiv \{f(s_1), \ldots, f(s_N)\}$. \begin{algorithm}[h!] \caption{Generic embarrassingly parallel MCMC}\label{alg:basic} \begin{algorithmic}[1] \small \INPUT Data partitions $\data_1, \ldots, \data_K$; prior $p(\theta)$; likelihood function $p(\data|\theta)$. \ParFor{$1 \ldots K$} \Comment{Parallel steps} \State $\samples_k \leftarrow $ MCMC samples from $p_k(\theta) \propto p(\theta)^{1/K} p(\data_k | \theta)$ \State build subposterior model $q_k(\theta)$ from $\samples_k$ \EndParFor \State Combine: $q(\theta) \propto \prod_{k=1}^K q_k(\theta) $ \Comment{Centralized step} \end{algorithmic} \end{algorithm} \subsection{Failure modes} \label{sec:failures} \paragraph{I: Mode collapse.} It is sufficient that \emph{one} subposterior $q_k$ misses a mode for the combined posterior $q$ to lack that mode (see Fig \ref{fig:toy}A). While dealing with multiple modes is an open problem for MCMC, here the issue is exacerbated by the fact that a single failure propagates to the final solution. A back-of-the-envelope calculation shows that even if the chance of missing a mode is small, $\varepsilon > 0$, the probability of mode collapse in the combined posterior is $\approx 1 - (1 - \varepsilon)^K$, making it a likely occurrence for sufficiently large $K$ (e.g., $\varepsilon = 0.05$ and $K = 20$ already give a $\approx 64\%$ chance of losing a mode). \begin{tcolorbox}[colback=red!5!white,colframe=red!75!black] \textbf{Insight 1:} For multimodal posteriors, mode collapse is almost inevitable unless the computing nodes can exchange information about the location of important posterior regions. \end{tcolorbox} \paragraph{II: Catastrophic model mismatch.} Since the $q_k$ are approximations of the true subposteriors $p_k$, small deviations between them are expected -- this is not what we refer to here. Instead, an example of \emph{catastrophic} model mismatch is when a simple parametric model such as a multivariate normal is used to model a multimodal posterior with separate modes (see Section \ref{sec:experiments}). Even nonparametric methods can be victims of this failure. For example, GP surrogates are often used to model nonparametric deviations of the log posterior from a parametric `mean function'. While these models can well represent multimodal posteriors, care is needed to avoid grossly mismatched solutions in which a $q_k$ `hallucinates' posterior mass due to an improper placement of the GP mean function (Fig \ref{fig:toy}B). \begin{tcolorbox}[colback=red!5!white,colframe=red!75!black] \textbf{Insight 2:} We cannot take subposterior models at face value. Reliable algorithms should check and refine the $q_k$'s to avoid catastrophic failures. \end{tcolorbox} \paragraph{III: Underrepresented tails.} This effect is more subtle than the failure modes listed above, but it contributes to accumulating errors in the estimate of the combined posterior.
The main issue here is that, by construction, MCMC samples and subposterior models based on these samples focus on providing information about the high-posterior-mass region of the subposterior. However, different subposteriors may overlap only in their tail regions (Fig \ref{fig:toy}C), implying that the tails and the nearby `deep' regions of each subposterior might actually be the most important in determining the exact shape of the combined posterior. \begin{tcolorbox}[colback=red!5!white,colframe=red!75!black] \textbf{Insight 3:} Subposterior models built only from MCMC samples (and their log density) can miss important information about the tails and nearby regions of the subposterior which would also contribute to the combined posterior. \end{tcolorbox} \subsection{Past solutions} \label{sec:past} Since there is no guarantee that $q$ approximates well the posterior $p$, \citet{Nemeth2018} refine $q$ with an additional parallel step, called Distributed Importance Sampler (DIS). DIS uses $q$ as a proposal distribution and draws samples $\samples \sim q$, which are then sent back for evaluation of the log density $\log p_k(\samples)$ at each parallel node. The true log density $\log p(\samples) = \sum_k \log p_k(\samples)$ is then used as a target for \emph{importance sampling/resampling} \citep{robert2013monte}. Technically, this step makes the algorithm not `embarrassingly parallel' anymore, but the prospect of fixing catastrophic failures outweighs the additional communication cost. However, DIS does not fully solve the issues raised in Section \ref{sec:failures}. Notably, importance sampling will not help recover the missing regions of the posterior if $q$ does not cover them in the first place. DIS can help in some model mismatch cases, in that `hallucinated' regions of the posterior will receive near-zero weights after the true density is retrieved. \subsection{Proposed solution} \label{sec:solutions} Drawing from the insights in Section \ref{sec:failures}, we propose two key ideas to address the blind spots of embarrassingly parallel MCMC. Here we provide an overview of our solution, which is described in detail in Section \ref{sec:method}. The starting point is modeling subposteriors via Gaussian process surrogates (Fig \ref{fig:sols}A). \paragraph{Sample sharing.} We introduce an additional step in which each node shares a selected subset of MCMC samples with the others (Fig \ref{fig:sols}B). This step provides sufficient information for local nodes to address mode collapse and underrepresented tails. While this communication step makes our method not strictly `embarrassingly' parallel, we argue it is necessary to avoid posterior density collapse. Moreover, existing methods already consider an extra communication step \citep{Nemeth2018}, as mentioned in Section \ref{sec:past}. \paragraph{Active learning.} We use \emph{active learning} as a general principle whenever applicable. The general idea is to select points that are informative about the shape of the subposterior, minimizing the additional communication required. Active learning is used here in multiple steps: when selecting samples from MCMC to build the surrogate model (as opposed to thinning or random subsampling); as a way to choose which samples from other nodes to add to the current surrogate model of each subposterior $q_k$ (only informative samples are added); to actively sample \emph{new} points to reduce uncertainty in the local surrogate $q_k$ (Fig \ref{fig:sols}C).
Active learning contributes to addressing both catastrophic model mismatch and underrepresented tails. Combined, these ideas solve the failure modes discussed previously (Fig \ref{fig:sols}D). \begin{figure*}[t!] \centering \includegraphics[width=0.95\linewidth]{figures/illustrations/f2_2.pdf} \vspace{-1em} \caption{{\bf Parallel active inference (PAI).} \textbf{A.} Each log subposterior $\log p_k(\theta)$ (black dashed line) is modeled via Gaussian process surrogates (orange dashed line: mean GP; shaded area: 95\% confidence interval) trained on MCMC samples $\mathcal{S}_k$ (blue circles) and their log-density $\log p_k(\mathcal{S}_k)$. Here, MCMC sampling on the second subposterior has missed a mode. \textbf{B.} Selected subsets of MCMC samples are shared across nodes, evaluated locally and added to the GP surrogates. Here, sample sharing helps finding the missing mode in the second subposterior, but the GP surrogate is now highly uncertain outside the samples. \textbf{C.} Subposteriors are refined by actively selecting new samples (stars) that resolve uncertainty in the surrogates. \textbf{D.} Subposteriors are combined into the full approximate log posterior (orange line); here a perfect match to the true log posterior (black dashed line). \label{fig:sols} } \end{figure*} \section{PARALLEL ACTIVE INFERENCE} \label{sec:method} In this section, we present our framework, which we call Parallel Active Inference (PAI), designed to address the issues discussed in Section \ref{sec:embarrassing}. The steps of our method are schematically explained in Fig \ref{fig:sols} and the detailed algorithm is provided in Appendix \ref{supp:algorithm}. \subsection{Subposterior modeling via GP regression} \label{sec:gpsubposteriors} As per standard embarrassingly parallel algorithms, we assume each node computes a set of samples $\samples_k$ and their log density, $\log p_k(\samples_k)$, by running MCMC on the subposterior $p_k$. We model each log-subposterior $\lq_k(\theta) \equiv \log q_k(\theta)$ using GP regression (Fig \ref{fig:sols}A; see \cite{rasmussen2006gaussian,Nemeth2018,gortler2019visual} and Appendix \ref{supp:gps} for more information). We say that a GP surrogate model is trained on $\samples_k$ as a shorthand for $\left(\samples_k, \log p_k(\samples_k) \right)$. When building the GP model of the subposterior, it is not advisable to use all samples $\samples_k$ because: (1) exact inference in GPs scales cubically in the number of training points (although see \cite{wang2019exact}); (2) we want to limit communication costs when sharing samples between nodes; (3) there is likely high redundancy in $\samples_k$ about the shape of the subposterior. \citet{Nemeth2018} simply choose a subset of samples by `thinning' a longer MCMC chain at regular intervals. Instead, we employ \emph{active subsampling} as follows. First, we pick an initial subset of $\ninit$ samples $\samples_k^{(0)} \subset \samples_k$, that we use to train an initial GP (details in Appendix \ref{sec:step1}). 
Then, we iteratively select points $\theta^\star$ from $\samples_k$ by maximizing the \emph{maximum interquantile range} (MAXIQR) acquisition function \citep{jarvenpaa2021parallel}: \begin{equation} \label{eq:maxiqr} \begin{split} \theta^\star &= \arg\max_\theta \left\{ e^{m(\theta; \samples_k^{(t)})} \text{sinh} \left(u \cdot s(\theta; \samples_k^{(t)})\right)\right\}, \end{split} \end{equation} where $m(\theta; \samples_k^{(t)})$ and $s(\theta; \samples_k^{(t)})$ are, respectively, the posterior latent mean and posterior latent standard deviation of the GP at the end of iteration $t \geq 0$; and $\sinh(z) = (\exp(z) -\exp(-z))/2$ for $z \in \mathbb{R}$ is the hyperbolic sine. Eq. \ref{eq:maxiqr} promotes selection of points with high posterior density for which the GP surrogate is also highly uncertain, with the trade-off controlled by $u > 0$, where larger values of $u$ favor further exploration. In each iteration $t+1$, we greedily select a batch of $n_\text{batch}$ points at a time from $\samples_k \setminus \samples_k^{(t)}$ using a batch version of MAXIQR \citep{jarvenpaa2021parallel}. We add the selected points to the current training set, $\samples_k^{(t+1)}$, and retrain the GP after each iteration (see Appendix \ref{sec:step1}). After $T$ iterations, our procedure yields a subset of points $\subsamples_k \equiv \samples_k^{(T)} \subseteq \samples_k$ that are highly informative about the shape of the subposterior. \subsection{Sample sharing} \label{sec:samplesharing} In this step, each node $k$ shares the selected samples $\subsamples_k$ with all other nodes (Fig \ref{fig:sols}B). Thus, node $k$ gains access to the samples $\subsamples_{\setminus k} \equiv \bigcup_{j \neq k} \subsamples_j$. Importantly, $\subsamples_{\setminus k}$ might contain samples from relevant subposterior regions that node $k$ has not explored. As discussed in Section \ref{sec:gpsubposteriors}, for efficiency we avoid adding \emph{all} points $\subsamples_{\setminus k}$ to the current GP surrogate for subposterior $k$. Instead, we add a sample $\theta^\star \in \subsamples_{\setminus k}$ to the GP training set only if the prediction of the current GP surrogate deviates from the true subposterior $\log p_k(\theta^\star)$ in a significant way (see Appendix \ref{sec:step2} for details). After this step, we obtain an expanded set of points $\subsamplestwo_k$ that includes information from all the nodes, minimizing the risk of mode collapse (see Section \ref{sec:failures}). \subsection{Active subposterior refinement} \label{sec:activerefinement} So far, the GP models have been trained using selected subsets of samples from the original MCMC runs. In this step, we refine the GP model of each subposterior by sampling \emph{new} points (Fig \ref{fig:sols}C). Specifically, each node $k$ actively selects new points by optimizing the MAXIQR acquisition function (Eq. \ref{eq:maxiqr}) over $\mathcal{X} \subseteq \mathbb{R}^D$ (see Appendix \ref{sec:step3}). New points are selected greedily in batches of size $n_\text{batch}$, retraining the GP after each iteration. This procedure yields a refined set of points $\subsamplesthree_k$ which includes new points that better pinpoint the shape of the subposterior, reducing the risk of catastrophic model mismatch and underrepresented tails. The final log-subposterior surrogate model $\lq_k$ is the GP trained on $\subsamplesthree_k$.
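As an illustration, the following is a minimal sketch (assumed, not the actual implementation) of evaluating the MAXIQR acquisition of Eq. \ref{eq:maxiqr} on a discrete candidate set; in practice the acquisition is optimized over continuous $\theta$ and in batches, as described above.
\begin{verbatim}
# Sketch: pick the next point by maximizing exp(m) * sinh(u * s),
# computed in log space for numerical stability. m and s are the GP
# latent posterior mean and std at candidate points (here made up).
import numpy as np

def maxiqr_argmax(m, s, u=1.5):
    score = m + np.log(np.sinh(u * s) + 1e-300)   # log of Eq. 3
    return int(np.argmax(score))

m = np.array([-1.0, 0.5, 0.3, -4.0, 0.4])         # latent mean
s = np.array([0.10, 0.05, 0.80, 2.00, 0.30])      # latent std
print("next candidate index:", maxiqr_argmax(m, s))
# Selects index 2: decent density *and* high surrogate uncertainty.
\end{verbatim}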
\subsection{Combining the subposteriors} \label{sec:combining} Finally, we approximate the full posterior $\log \post = \sum_{k=1}^K \log p_k(\theta)$ by combining all subposteriors together (Fig \ref{fig:sols}D). Since each log-subposterior is approximated by a GP, the approximate full log-posterior is a sum of GPs and itself a GP, $\lq(\theta) = \sum_{k=1}^K \lq_k(\theta)$. Note that $\lq(\theta)$, being a GP, is still a distribution over functions. We then want to obtain a point estimate for the (unnormalized) posterior density corresponding to $\exp \lq(\theta)$. One choice is to take the posterior mean, which leads to the expectation of a log-normal density \citep{Nemeth2018}. We prefer a robust estimate and use the posterior median instead \citep{jarvenpaa2021parallel}. Thus, our estimate is \begin{equation} \label{eq:sumgp} q(\theta) \propto \exp\left\{\sum_{k=1}^K m_k(\theta; \subsamplesthree_k) \right\}. \end{equation} In low dimension ($D = 1,2$), Eq. \ref{eq:sumgp} can be evaluated on a grid. In higher dimension, one could sample from $q(\theta)$ using MCMC methods such as NUTS \citep{hoffman2014no}, as done by \cite{Nemeth2018}. However, $q(\theta)$ is potentially multimodal which does not lend itself to easy MCMC sampling. Alternatively, \citet{acerbi2018variational} runs variational inference on $q(\theta)$ using as variational distribution a mixture of Gaussians with a large number of components. Finally, for moderate $D$, importance sampling/resampling with an appropriate (adaptive) proposal would also be feasible. As a final optional step, after combining the subposteriors into the full approximate posterior $q(\theta)$, we can refine the solution using \emph{distributed importance sampling} (DIS) as proposed by \citet{Nemeth2018} and discussed in Section \ref{sec:past}. \subsection{Complexity analysis} Similarly to conventional embarrassingly parallel MCMC, we can split the cost of running PAI into two main components. The first consists of local costs, which involve computations happening at individual computing nodes (i.e., model fitting and active refinement). The second consists of global (or aggregation) costs, which comprise communication and sampling from the combined approximate posterior. \subsubsection{Model fitting} After sampling their subposterior, each computing node $k$ has to fit the surrogate model on the subset of their samples, $\subsamples_k$. These subsets are designed such that their size is $\mathcal{O}(D)$ (see Appendix \ref{supp:algorithm}). Thus, the cost of fitting the surrogate GP models in each of the $K$ computing nodes is $\mathcal{O}(D^3)$. The same applies for $\subsamplestwo_k$ and $\subsamplesthree_k$. \subsubsection{Communication costs} Traditional embarrassingly parallel MCMC methods only require two global communication steps: (1) the central server splits the $N$ observations among $K$ computing nodes; (2) each node sends $S$ subposterior samples of dimension $D$ back to the server, assuming nodes draw the same number of samples. Together, these steps amount to $\mathcal{O}(N + K S D)$ communication cost. PAI imposes another communication step, in which nodes share their subposterior samples, incurring $\mathcal{O}(K S D)$ cost. Furthermore, supposing PAI acquires $A$ active samples to refine each subposterior, the cost of sending local results to servers is increased by $\mathcal{O}(KAD)$. PAI also incurs a small additional cost for sending back the values of the GP hyperparameters, $\mathcal{O}(KD)$.
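Returning briefly to the combination step, Eq. \ref{eq:sumgp} followed by the importance sampling/resampling mentioned above can be sketched as follows; this is a toy 1-D illustration in which quadratic functions stand in for trained GP posterior means.
\begin{verbatim}
# Sketch: sum the K GP posterior means (Eq. 4) and draw approximate
# posterior samples by importance resampling from a broad proposal.
import numpy as np

def log_q(theta, gp_means):
    return sum(m(theta) for m in gp_means)        # sum of K GP means

# stand-ins for the trained surrogates of K = 3 nodes
gp_means = [lambda th, c=c: -0.5 * (th - c)**2 for c in (0.9, 1.1, 1.0)]

rng = np.random.default_rng(0)
proposal = rng.normal(1.0, 3.0, size=20000)       # broad Gaussian proposal
# log importance weights: log q - log proposal density (up to consts)
log_w = log_q(proposal, gp_means) + 0.5 * ((proposal - 1.0) / 3.0)**2
w = np.exp(log_w - log_w.max())
idx = rng.choice(proposal.size, size=2000, p=w / w.sum())
print("posterior mean ~", proposal[idx].mean())   # ~1.0 in this toy
\end{verbatim}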
In sum, since usually $A \ll S$, the asymptotic communication cost of PAI is equivalent to that of traditional methods. \subsubsection{Active refinement costs} Active learning involves GP training and optimization of the acquisition function, but only a small number of likelihood evaluations. Thus, under the embarrassingly parallel MCMC assumption that likelihood evaluations are costly (e.g., due to large datasets), active learning is relatively inexpensive \citep{acerbi2018variational}. More importantly, as shown in our ablation study in Appendix \ref{sec:ablation}, this step is crucial to avoid the pathologies of embarrassingly parallel MCMC. \subsubsection{Sampling complexity} Sampling from the aggregate approximate posterior $q(\theta)$ only requires evaluating the GP predictive mean for each subposterior and does not require access to the data or all samples. The sampling cost is linear in the number of subposteriors $K$ and the size of each GP, $\mathcal{O}(D)$. Even if $K$ is chosen to scale with the size of the actual data, each GP only requires a small training set, making them comparatively inexpensive. \begin{figure*}[t!] \centering \includegraphics% [height=8cm]% {figures/experiments/4modes_33.pdf} \vspace{-1em} \caption{\textbf{Multi-modal posterior.} Each panel shows samples from the combined approximate posterior (red) against the ground truth (blue). With the exception of PAI, all methods completely miss at least one high-density region. Moreover, PAI is the only method that does not assign mass to regions without modes.} \label{fig:exp_4modes} \end{figure*} \section{EXPERIMENTS} \label{sec:experiments} We evaluate PAI on a series of target posteriors with different challenging features. Subsection \ref{subsec:multi_modal} shows results for a posterior with four distinct modes, which is prone to mode collapse (Fig \ref{fig:toy}A). Subsection \ref{subsec:heavy_tails} targets a posterior with heavy tails, which can lead to underrepresented tails (Fig \ref{fig:toy}C). Subsection \ref{subsec:rare_events} uses a rare event model to gauge how well our method performs when the true subposteriors are drastically different. Finally, Subsection \ref{subsec:neuro} concludes with a real-world application to a model and data from computational neuroscience~\citep{acerbi2018bayesian}. We provide implementation details in Appendix \ref{appendix:implementation} and source code is available at \url{https://github.com/spectraldani/pai}. \paragraph{Algorithms.} We compare basic PAI and PAI with the optional distributed importance sampling step (PAI-DIS) against six popular and state-of-the-art (SOTA) embarrassingly parallel MCMC methods: the parametric, non-parametric and semi-parametric methods by \citet{Neiswanger2014}; PART \citep{Wang2015}; and two other GP-surrogate methods \citep{Nemeth2018}, one using a simple combination of GPs (GP) and the other using the distributed importance sampler (GP-DIS; see Section \ref{sec:past}). \paragraph{Procedure.} For each problem, we randomly split the data into equal-sized partitions and divide the target posterior into $K$ subposteriors (Eq. \ref{eq:pde}). We run MCMC separately on each subposterior using Stan with multiple chains \citep{Carpenter2017}. The same MCMC output is then processed by the different algorithms described above, yielding a combined approximate posterior for each method.
To assess the quality of each posterior approximation, we compute the mean marginal total variation distance (MMTV), the 2-Wasserstein (W2) distance, and the Gaussianized symmetrized Kullback-Leibler (GsKL\xspace) divergence between the approximate and the true posterior, with each metric focusing on different features. For each problem, we computed ground-truth posteriors using numerical integration (for $D \le 2$) or via extensive MCMC sampling in Stan \citep{Carpenter2017}. For all GP-based methods (PAI, PAI-DIS, GP, GP-DIS), we sampled from the potentially multimodal combined GP (Eq. \ref{eq:sumgp}) using importance sampling/resampling with an appropriate proposal. We report results as mean $\pm$ standard deviation across ten runs in which the entire procedure was repeated with different random seeds. For all metrics, lower is better, and the best (statistically significant) results for each metric are reported in bold. See Appendix \ref{supp:evaluation} for more details. \subsection{Multi-modal posterior} \label{subsec:multi_modal} \paragraph{Setting.} In this synthetic example, the data consist of $N=10^{3}$ samples $y_1, \ldots, y_N$ drawn from the following hierarchical model: \begin{equation*} \label{eq:multimodal} \begin{split} \theta \sim p(\theta) & = \mathcal{N}(0, \sigma_p^2 \mathbb{I}_2)\\ y_1, \ldots, y_N \sim p(y_n | \theta) & = \sum_{i=1}^{2}\frac{1}{2}\mathcal{N}\left(P_i(\theta_i), \sigma^2_l\right) \end{split} \end{equation*} where $\theta \in \mathbb{R}^2$, $\sigma_p=\sigma_l=1/4$, and the $P_i$'s are second-degree polynomial functions. By construction, the target posterior $p(\theta | y_1, \dots, y_N) \propto p(\theta) \prod_{n=1}^N p(y_n | \theta)$ is multimodal with four modes. We run parallel inference on $K=10$ equal-sized partitions of the data. We provide more details regarding $P_1, P_2$ in Appendix \ref{supp:models}. \paragraph{Results.} Fig \ref{fig:exp_4modes} shows the output of each parallel MCMC method for a typical run, displayed as samples from the approximate combined posterior overlaid on top of the ground-truth posterior. Due to MCMC occasionally missing modes in subposterior sampling, the combined posteriors from all methods except PAI lack at least one mode of the posterior (\emph{mode collapse}, as seen in Fig \ref{fig:toy}A). The other methods also often inappropriately distribute mass in low-density regions (as seen in Fig \ref{fig:toy}B). In contrast, PAI accurately recovers all the high-density regions of the posterior, achieving a near-perfect match. Table \ref{tab:multi-modal} shows that PAI consistently outperforms the other methods across all metrics.
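As a concrete reference for the metrics above, below is a minimal sketch of computing the GsKL\xspace divergence from two sample sets. The Gaussianization step (matching the first two moments to a multivariate normal) follows the definition; the particular symmetrization convention (averaging the two KL terms) is an assumption of this sketch.

\begin{verbatim}
# Sketch: Gaussianized symmetrized KL (GsKL) between two sample sets.
import numpy as np

def gauss_kl(mu0, cov0, mu1, cov1):
    # Closed-form KL( N(mu0,cov0) || N(mu1,cov1) ).
    d = mu0.shape[0]
    cov1_inv = np.linalg.inv(cov1)
    diff = mu1 - mu0
    return 0.5 * (np.trace(cov1_inv @ cov0) + diff @ cov1_inv @ diff
                  - d + np.log(np.linalg.det(cov1) / np.linalg.det(cov0)))

def gskl(samples_a, samples_b):
    # Gaussianize both sample sets, then symmetrize the KL divergence.
    mu_a, cov_a = samples_a.mean(0), np.atleast_2d(np.cov(samples_a.T))
    mu_b, cov_b = samples_b.mean(0), np.atleast_2d(np.cov(samples_b.T))
    return 0.5 * (gauss_kl(mu_a, cov_a, mu_b, cov_b)
                  + gauss_kl(mu_b, cov_b, mu_a, cov_a))
\end{verbatim}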
\begin{table}[h!] \caption{\textbf{Multi-modal posterior.}} \centering \resizebox{\columnwidth}{!}{ \begin{tabular}{l c c c } \toprule \bf{Model} & \bf{MMTV} & \bf{W2} & \bf{GsKL\xspace}\\ \toprule Parametric& $\meanstd{0.89}{0.12}$& $\meanstd{1.08}{0.33}$& $\meanstd[2]{8.9}{11}$\\ Semi-param.& $\meanstd{0.81}{0.09}$& $\meanstd{1.08}{0.12}$& $\meanstd[1]{5.6}{1.3}$\\ Non-param.& $\meanstd{0.81}{0.09}$& $\meanstd{1.12}{0.09}$& $\meanstd[1]{5.0}{1.8}$\\ PART& $\meanstd{0.55}{0.09}$& $\meanstd{1.06}{0.33}$& $\meanstd[2]{7.3}{14}$\\ GP& $\meanstd{0.93}{0.16}$& $\meanstd{1.01}{0.36}$& $\meanstd[4]{1.2}{1.3}$\\ GP-DIS& $\meanstd{0.87}{0.18}$& $\meanstd{1.04}{0.34}$& $\meanstd[16]{4.8}{14}$\\ \midrule \textbf{PAI}& $\mathbf{\meanstd{0.037}{0.011}}$& $\mathbf{\meanstd{0.028}{0.011}}$& $\mathbf{\meanstd[-4]{1.6}{1.7}}$\\ \textbf{PAI-DIS}& $\mathbf{\meanstd{0.034}{0.019}}$& $\mathbf{\meanstd{0.026}{0.008}}$& $\mathbf{\meanstd[-5]{3.9}{2.4}}$\\ \bottomrule \end{tabular} } \label{tab:multi-modal} \end{table} \paragraph{Large datasets.} To illustrate the computational benefits of using PAI for larger datasets, we repeated the same experiment in this section but with $10^5$ data points in each of the $K=10$ partitions. Remarkably, even for this moderate-sized dataset, we observe a $6\times$ speed-up, decreasing the total running time from 6 hours to 57 minutes (50 minutes for subposterior sampling plus 7 minutes for PAI; see Appendix \ref{appendix:large_dataset}). Overall, PAI's running time is of the same order of magnitude as the previous SOTA \citep[e.g.][]{Wang2015, Nemeth2018}. However, only PAI returns correct results while the other methods fail. \begin{figure}[t!] \centering \includegraphics% [width=0.9\linewidth]% {figures/experiments/warped_33.pdf} \vspace{-1em} \caption{\textbf{Warped Student's t.} Top: Log marginal posterior for $\theta_1$. Bottom: Log posterior density. Thanks to active sampling, PAI better captures details in the depths of the tails. } \label{fig:warped} \end{figure} \subsection{Warped Student's t} \label{subsec:heavy_tails} \paragraph{Setting.} We now turn to a synthetic example with heavy tails. Consider the following hierarchical model: \begin{equation*} \begin{split} \theta \sim p(\theta) & = \mathcal{N}(0, \sigma^2_p \mathbb{I}_2)\\ y_1, \ldots, y_N \sim p(y_n | \theta) & = \mathrm{StudentT}\left(\nu, \theta_1 + \theta_2^2, \sigma_l^2\right) \end{split} \end{equation*} where $\theta \in \mathbb{R}^2$, $\nu=5$ is the degrees of freedom of the Student's $t$-distribution, $\sigma_p=1$, and $\sigma_l=\sqrt{2}$. This model is a heavy-tailed variant of the Warped Gaussian model studied in earlier work \citep{Nemeth2018, Mesquita2019}. As before, we generate $N=10^{3}$ samples and split the data into $K=10$ partitions for parallel inference. \paragraph{Results.} Fig \ref{fig:warped} shows the full posterior and the marginal posterior for $\theta_1$ obtained using the two best-performing methods without DIS refinement, PAI and GP (see Table \ref{tab:warped-gaussian}). While PAI(-DIS) is very similar to GP(-DIS) in terms of metrics, Fig \ref{fig:warped} shows that, unlike GP(-DIS), PAI accurately captures the far tails of the posterior, which could be useful for downstream tasks, avoiding failure mode III (Fig \ref{fig:toy}C).
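For reference, the setting above can be simulated in a few lines. The snippet below is a stand-in illustration rather than the code used for the experiments, and it takes $\sigma_l$ as the scale parameter of the Student's $t$ likelihood.

\begin{verbatim}
# Sketch: simulate the warped Student's t data and split it into K parts.
import numpy as np

rng = np.random.default_rng(1)
sigma_p, sigma_l, nu, N, K = 1.0, np.sqrt(2.0), 5, 10**3, 10

theta = rng.normal(0.0, sigma_p, size=2)              # theta ~ N(0, I_2)
y = theta[0] + theta[1]**2 \
    + sigma_l * rng.standard_t(nu, size=N)            # heavy-tailed data
partitions = np.array_split(rng.permutation(y), K)    # equal-sized parts
\end{verbatim}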
\begin{table}[h] \caption{\textbf{Warped Student's t.}} \centering \resizebox{\columnwidth}{!}{ \begin{tabular}{l c c c } \toprule \bf{Model} & \bf{MMTV} & \bf{W2} & \bf{GsKL\xspace}\\ \toprule Parametric& $\meanstd{0.51}{0.01}$& $\meanstd{0.71}{0.07}$& $\meanstd[0]{1.9}{0.1}$\\ Semi-param.& $\meanstd{0.50}{0.03}$& $\meanstd{0.57}{0.05}$& $\meanstd[1]{1.1}{0.2}$\\ Non-param.& $\meanstd{0.51}{0.02}$& $\meanstd{0.59}{0.03}$& $\meanstd[1]{1.2}{0.2}$\\ PART& $\meanstd{0.66}{0.07}$& $\meanstd{0.78}{0.09}$& $\meanstd[2]{1.2}{0.7}$\\ GP& $\mathbf{\meanstd{0.015}{0.003}}$& $\mathbf{\meanstd{0.003}{0.002}}$& $\meanstd[-4]{4.5}{10.5}$\\ GP-DIS& $\meanstd{0.018}{0.004}$& $\mathbf{\meanstd{0.002}{0.001}}$& $\meanstd[-5]{6.6}{5.8}$\\ \midrule \textbf{PAI}& $\mathbf{\meanstd{0.015}{0.003}}$& $\mathbf{\meanstd{0.002}{0.001}}$& $\mathbf{\meanstd[-5]{1.2}{0.8}}$\\ {PAI-DIS}& ${\meanstd{0.018}{0.003}}$& $\mathbf{\meanstd{0.002}{0.001}}$& $\meanstd[-5]{3.8}{3.4}$\\ \bottomrule \end{tabular} } \label{tab:warped-gaussian} \end{table} \subsection{Rare categorical events} \label{subsec:rare_events} \paragraph{Setting.} To evaluate how our method copes with heterogeneous subposteriors, we run parallel inference for a synthetic example with a categorical likelihood and $N=10^3$ discrete observations split among three classes. To enforce heterogeneity, we make the first two classes rare (true probability $\theta_1 = \theta_2 = 1/N$) and the remaining class much more likely (true probability $\theta_3 = (N-2)/N$). Since we split the data into $K=10$ equal-sized parts, some of them will not contain even a single rare event. We perform inference over $\theta \in \Delta^2$ (probability 2-simplex) with a symmetric Dirichlet prior with concentration parameter $\alpha = 1/3$. \paragraph{Results.} Fig \ref{fig:rarecat} shows the samples from the combined approximate posterior for each method. In this example, PAI-DIS matches the shape of the target posterior extremely well, followed closely by GP-DIS (see also Table \ref{tab:rare}). Notably, even standard PAI (without the DIS correction) produces a very good approximation of the posterior -- a further display of the ability of PAI to capture fine details of each subposterior, particularly important here in the combination step due to the heterogeneous subposteriors. By contrast, the other methods end up placing excessive mass in very-low-density regions (PART, Parametric, GP) or over-concentrating (Non-parametric, Semi-parametric). \begin{table}[h] \caption{\textbf{Rare categorical events.}} \centering \resizebox{\columnwidth}{!}{ \begin{tabular}{l c c c } \toprule \bf{Model} & \bf{MMTV} & \bf{W2} & \bf{GsKL\xspace}\\ \toprule Parametric& $\meanstd{0.26}{0.14}$& $\meanstd{0.15}{0.19}$& $\meanstd[0]{1.1}{1.4}$\\ Semi-param.& $\meanstd{0.49}{0.21}$& $\meanstd{0.27}{0.23}$& $\meanstd[0]{3.5}{3.4}$\\ Non-param.& $\meanstd{0.43}{0.17}$& $\meanstd{0.19}{0.25}$& $\meanstd[0]{2.8}{3.9}$\\ PART& $\meanstd{0.31}{0.14}$& $\meanstd{0.08}{0.13}$& $\meanstd[-1]{8.6}{10}$\\ GP& $\meanstd{0.16}{0.09}$& $\meanstd{0.04}{0.07}$& $\meanstd[-1]{3.5}{4.8}$\\ GP-DIS& $\meanstd{0.011}{0.002}$& $\meanstd[-4]{6.3}{0.9}$& $\meanstd[-4]{1.1}{1.5}$\\ \midrule PAI& ${\meanstd{0.028}{0.027}}$& ${\meanstd{0.001}{0.002}}$& ${\meanstd[-3]{8.0}{16}}$\\ \textbf{PAI-DIS}& $\mathbf{\meanstd{0.009}{0.002}}$& $\mathbf{\meanstd[-4]{5.4}{0.8}}$& $\mathbf{\meanstd[-5]{4.3}{2.1}}$\\ \bottomrule \end{tabular} } \label{tab:rare} \end{table} \begin{figure*}[t!]
\centering \includegraphics% [height=7cm]% {figures/experiments/rarecat_33_wider.pdf} \vspace{-1em} \caption{\textbf{Rare categorical events.} Each ternary plot shows samples from the combined approximate posterior (red) on top of the true posterior (blue). Note that the panels are zoomed in on the relevant corner of the probability simplex. Of all methods, PAI is the one that best captures the shape of the posterior. } \label{fig:rarecat} \end{figure*} \subsection{Multisensory causal inference} \label{subsec:neuro} \paragraph{Setting.} \emph{Causal inference} (CI) in multisensory perception denotes the process whereby the brain decides whether distinct sensory cues come from the same source, a commonly studied problem in computational and cognitive neuroscience \citep{kording2007causal}. Here we compute the posterior for a 6-parameter CI model given the data of subject S1 from \citet{acerbi2018bayesian} (see Appendix \ref{supp:models} for model details). The fitted model is a proxy for a large class of similar models that would strongly benefit from parallelization due to likelihoods that do not admit analytical solutions, thus requiring costly numerical integration. For this experiment, we run parallel inference over $K = 5$ partitions of the $N = 1069$ observations in the dataset. \paragraph{Results.} Table \ref{tab:neuroscience} shows the outcome metrics of parallel inference. Similarly to the rare-events example, we find that PAI-DIS obtains an excellent approximation of the true posterior, with GP-DIS performing about equally well (slightly worse on the GsKL\xspace metric). Despite lacking the DIS refinement step, standard PAI performs competitively, achieving a reasonably good approximation of the true posterior (see Appendix \ref{supp:models}). All the other methods perform considerably worse; in particular, the GP method without the DIS step is among the worst-performing methods on this example. \begin{table}[h] \caption{\textbf{Multisensory causal inference.}} \centering \resizebox{\columnwidth}{!}{ \begin{tabular}{l c c c } \toprule \bf{Model} & \bf{MMTV} & \bf{W2} & \bf{GsKL\xspace}\\ \toprule Parametric& $\meanstd{0.40}{0.05}$& $\meanstd{4.8}{0.6}$& $\meanstd[1]{1.2}{0.4}$\\ Semi-param.& $\meanstd{0.68}{0.07}$& $\meanstd{9.7}{9.6}$& $\meanstd[1]{5.6}{3.2}$\\ Non-param.& $\meanstd{0.26}{0.02}$& $\meanstd{0.52}{0.14}$& $\meanstd[0]{5.3}{0.3}$\\ PART& $\meanstd{0.24}{0.04}$& $\meanstd{1.5}{0.5}$& $\meanstd[0]{8.0}{5.4}$\\ GP& $\meanstd{0.49}{0.25}$& $\meanstd{17}{23}$& $\meanstd[1]{6.3}{8.9}$\\ GP-DIS& $\mathbf{\meanstd{0.07}{0.03}}$& $\mathbf{\meanstd{0.16}{0.07}}$& $\meanstd[-1]{8.7}{14}$\\ \midrule {PAI}& ${\meanstd{0.16}{0.04}}$& ${\meanstd{0.56}{0.21}}$& ${\meanstd[0]{2.0}{1.7}}$\\ \textbf{PAI-DIS}& $\mathbf{\meanstd{0.05}{0.04}}$& $\mathbf{\meanstd{0.14}{0.13}}$& $\mathbf{\meanstd[-1]{2.9}{3.6}}$\\ \bottomrule \end{tabular} } \vspace{-0.5em} \label{tab:neuroscience} \end{table} \section{RELATED WORKS} \label{sec:related_works} While the defining feature of embarrassingly parallel MCMC is its divide-and-conquer structure, other methods scale up MCMC using more communication-intensive protocols. For instance, \citet{Ahn2014} propose a distributed version of stochastic gradient Langevin dynamics \citep[SGLD,][]{Welling+Teh:2011} that constantly passes the chain state around to computing nodes, making updates only based on local data.
However, distributed SGLD tends to diverge from the posterior when communication is limited, an issue highlighted by recent work \citep{ElMekkaoui2021, Vono2022}. Outside the realm of MCMC, there are also works proposing expectation propagation as a framework for inference on partitioned data \citep{vehtari2020expectation, bui2018partitioned}. Our method, PAI, builds on top of related work on GP-based surrogate modeling and active learning for log-likelihoods and log-densities. Prior work used GP models and active sampling to learn the intractable marginal likelihood \citep{osborne2012active,gunter2014sampling} or the posterior \citep{kandasamy2015bayesian,wang2017adaptive,jarvenpaa2021parallel}. Recently, the framework of Variational Bayesian Monte Carlo (VBMC) was introduced to simultaneously compute both the posterior and the marginal likelihood \citep{acerbi2018variational,acerbi2019exploration,acerbi2020variational}. PAI extends the above works by dealing with partitioned data in the embarrassingly parallel setting, similarly to \citet{Nemeth2018}, but with the key addition of active learning and other algorithmic improvements. \section{DISCUSSION} \label{sec:discussion} In this paper, we first exposed several potential major failure modes of existing embarrassingly parallel MCMC methods. We then proposed a solution with our new method, \emph{parallel active inference} (PAI), which incorporates two key strategies: sample sharing and active learning. On a series of challenging benchmarks, we demonstrated that `vanilla' PAI is competitive with current state-of-the-art parallel MCMC methods and deals successfully with scenarios (e.g., multi-modal posteriors) in which all other methods catastrophically fail. When paired with an optional refinement step (PAI-DIS), the proposed method consistently performs on par with or better than the state-of-the-art. Our results show the promise of the proposed strategies to deal with the challenges arising in parallel MCMC. Still, the solution is no silver bullet, and several aspects remain open for future research. \subsection{Limitations and future work} The major limitation of our method, a problem common to surrogate-based approaches, is scalability to higher dimensions. Most GP-based approaches for Bayesian posterior inference are limited to up to $\sim 10$ dimensions (see, e.g., \citealp{acerbi2018variational,acerbi2020variational,jarvenpaa2021parallel}). Future work could investigate methods to scale GP surrogate modeling to higher dimensions, for example taking inspiration from high-dimensional approaches in Bayesian optimization (e.g., \citealp{kandasamy2015high}). More generally, the validity of any surrogate modeling approach hinges on the ability of the surrogate model to faithfully represent the subposteriors. Active learning helps, but model mismatch in our method remains a potential issue that hints at future work combining PAI with more expressive surrogates, such as GPs with richer kernels \citep{wilson2013gaussian} or deep neural networks \citep{Mesquita2019}. For the latter, obtaining the uncertainty estimates necessary for active learning would involve Bayesian deep learning techniques (e.g., \citealp{maddox2019simple}). As discussed before, our approach is not `embarrassingly' parallel in that it requires a mandatory global communication step in the sample sharing part (see Section \ref{sec:samplesharing}).
Additional communication steps seem inevitable if the catastrophic failures of parallel MCMC are to be avoided, and they have been used before in the literature (e.g., the DIS step of \citealp{Nemeth2018}). Our method affords an optional final refinement step (PAI-DIS) which also requires a further global communication step. At the moment, there is no automated diagnostic to determine whether the optional DIS step is needed. Our results show that PAI already performs well without DIS in many cases. Still, future work could include an analysis of the GP surrogate uncertainty to recommend the DIS step when useful. \section*{Acknowledgments} This work was supported by the Academy of Finland (Flagship programme: Finnish Center for Artificial Intelligence FCAI and grants 328400, 325572) and UKRI (Turing AI World-Leading Researcher Fellowship, EP/W002973/1). We also acknowledge the computational resources provided by the Aalto Science-IT Project from Computer Science IT. { \small \bibliographystyle{plainnat}
\section{Introduction} The image description generation (IDG) problem concerns generating a natural language description of an input image. Over the years, tremendous effort has been dedicated to developing models that are descriptive. However, little effort has been dedicated to generating descriptions that are \emph{stylish} (e.g.\ romantic, lyric, etc.). Even the handful of stylish IDG models that do exist offer only loose control over the style. Ideally, a stylish IDG model should allow users to flexibly control the generated descriptions, as shown in Fig \ref{fig:Introduction}. Such a model would be useful for increasing user engagement in applications requiring human interaction, such as chatbots and social media sharing. A naive approach to tackle the stylish IDG problem is to collect new corpora of paired images and descriptions for training. However, this is expensive: for each style that we wish to generate, we would have to ask human annotators to write descriptions in that style (e.g., romantic) for each image in the training dataset. In this paper, we propose a controllable stylish IDG model. Our model is jointly trained with a paired unstylish image description corpus (source domain) and a monolingual corpus of the specific style (target domain). In this setting, our model can learn to generate various styles without collecting new paired data in the target domain. Our main contribution is to show that layer normalization can be used to disentangle language styles from the content of the source and target domains via a small tweak. This design enables us to use the shared content to generate descriptions that are more relevant to the image, as well as to control the style by plugging in a set of style-specific parameters. We refer to this mechanism as Domain Layer Normalization (DLN), since we treat each style as the target domain in the domain transfer setting. We conduct an extensive experimental evaluation to validate the proposed approach using both subjective and objective performance metrics. We evaluate our model on four different styles: fairy tale, romance, humor, and country song lyrics (lyrics). Experiment results show that our model generates stylish descriptions that are preferred by human subjects. It also outperforms prior works on the objective performance metrics. \begin{figure}[t!] \centering \includegraphics[scale=0.26]{a3i_fig1.pdf} \caption{An ideal IDG can generate stylish descriptions for the given image. The generated descriptions should relate to the image content with different language styles.} \label{fig:Introduction} \end{figure} \section{Related Works} \noindent \textbf{Visual style transfer.} Image style transfer has been widely studied in computer vision. Gatys et al.~\cite{gatys2015neural} synthesize a new stylish image by recombining image content with style features extracted from different images. Dumoulin et al.~\cite{45832} propose to learn embeddings of visual artistic styles by conditioning on the parameters of batch normalization~\cite{43442}. Huang et al.~\cite{huang2017arbitrary} use adaptive instance normalization. More recent approaches use the generative adversarial network (GAN)~\cite{NIPS2014_5423} to align and transfer images from different domains. Liu et al.~\cite{NIPS2016_6544} employ a weight-sharing assumption to learn the shared latent code between two domains and further propose a translation stream in~\cite{NIPS2017_6672} to encourage the same image in two domains to be mapped to a common latent code.
While our method is similar to these works at a high level, the discrete nature of language requires a new model design. \noindent \textbf{Language style transfer.} Supervised learning can be used to generate various linguistic attributes (e.g., different sentiments and different degrees of descriptiveness), but it requires a significant amount of labeled data. Many recent works assume there exist a shared content space and a latent style vector between two non-parallel corpora for unsupervised language style transfer. Shen et al.~\cite{shen2017style} propose an encoder-decoder structure with adversarial training to learn this space. Following the same line, Melnyk et al.~\cite{melnyk2017improved} introduce a content preservation loss and a classification loss to improve the transfer performance. Fu et al.~\cite{fu2018style} propose to use a multi-decoder for different styles and a discriminator to learn a shared content code. Zhang et al.~\cite{zhang2018shaped} also use a similar structure with shared and private encoder-decoders. In a recent work, Prabhumoye et al.~\cite{prab2018style} propose to ground the sentence in a translation model and then apply adversarial training to obtain the desired style. What distinguishes our work from prior works is that we require the generated stylish descriptions to match the visual content. Moreover, the styles transferred in our work are more abstract than explicit styles such as sentiment, gender, or authorship in previous works. \noindent\textbf{Image description generation.} Several works have been proposed to generate image descriptions by using paired image description data~\cite{vinyals2015show,krause2016paragraphs,liang2017recurrent}. To increase the naturalness and diversity of generated descriptions, Dai et al.~\cite{dai2017towards} apply an adversarial training approach to train an evaluator that scores the quality of generated descriptions. Chen et al.~\cite{chen2017show} propose an adversarial training procedure to adapt image captioning style using unpaired images and captions. A new objective is proposed in~\cite{dai2017contrastive} to enhance the distinctiveness of generated captions. On the other hand, a few works have been proposed to enhance the attractiveness and style of the generated descriptions. Zhu et al.~\cite{zhu2015aligning} align books and the corresponding movie releases to produce story-like descriptions of the visual content. However, this method does not preserve the visual content. Matthews et al.~\cite{mathews2016senticap} propose the switch RNN to generate captions with positive and negative sentiments, which requires word-level supervision and might not scale. Recently, Gan et al.~\cite{gan2017semantic} investigate generating tag-dependent captions by extending the weight matrix of the LSTM to consider tag information. The follow-up work StyleNet~\cite{gan2017stylenet} explores decomposing the LSTM weight matrix to incorporate the style information. One key difference is that we leverage an arbitrary stylish monolingual corpus that is not paired with any image dataset as the target corpus, instead of using paired images with stylish ground truth. The work most similar to ours is~\cite{mathews2018semstyle}; the major differences are that we do not exploit language features such as the POS tags of the corpus, and we do not pre-process the target corpus to make it similar to the source one. Our approach is end to end with minimal pre-processing of the target corpus.
\section{Unsupervised Stylish Image Description Generation} The goal of stylish Image Description Generation (IDG) is to generate a natural language description $d_T$ in space $\mathcal{D}_T$ given an image $I$ in the image space $\mathcal{I}$. The style of the description is implicitly captured in the description space $\mathcal{D}_T$, where we use the subscript $T$ to emphasize the target style. There exist two settings for learning a stylish IDG model. \paragraph{\bf Supervised stylish IDG.} In supervised stylish IDG, we are given a training dataset $\mathbb{D}=\{(I^{(n)},d_T^{(n)}), n=1,...,N\}$, where each sample $(I^{(n)},d_T^{(n)})$ is a pair of an image and its target stylish description sampled from the joint distribution $p(\mathcal{I},\mathcal{D}_T)$. The goal is to learn the conditional distribution $p(\mathcal{D}_T|\mathcal{I})$ using $\mathbb{D}$ so that we can generate \mbox{stylish image descriptions for an input image.} \paragraph{\bf Unsupervised stylish IDG.} In unsupervised stylish IDG, we are given two training datasets $\mathbb{D}_S$ and $\mathbb{D}_T$. $\mathbb{D}_S=\{(I^{(n)},d_S^{(n)}), n=1,...,N_S\}$ consists of pairs of images and their descriptions $(I^{(n)},d_S^{(n)})$ sampled from $p(\mathcal{I},\mathcal{D}_S)$, where $S$ is referred to as the source domain, which is typically unstylish. $\mathbb{D}_T=\{(d_T^{(n)}), n=1,...,N_T\}$ is a dataset of target stylish descriptions $d_T^{(n)}$ sampled from $p(\mathcal{D}_T)$, where the corresponding images are not available. Hence, the learning task is considered unsupervised. The goal of unsupervised stylish IDG is to learn the conditional distribution $p(\mathcal{D}_T|\mathcal{I})$ using $\mathbb{D}_S$ and $\mathbb{D}_T$. Unsupervised stylish IDG is an ill-posed problem since it requires learning the conditional distribution $p(\mathcal{D}_T|\mathcal{I})$ without using samples from the joint distribution $p(\mathcal{I},\mathcal{D}_T)$. Therefore, learning an unsupervised stylish IDG function is difficult without leveraging some useful assumptions. However, under the unsupervised setting, training data collection is greatly simplified: one could pair a general image description dataset (e.g., the MS-COCO dataset~\cite{lin2014microsoft}) with an existing corpus of the target style (e.g., some romantic novels) for learning. A solution to the unsupervised problem could thus enable many stylish image description generation applications. \subsection{Unsupervised Stylish IDG via Domain Layer Norm} \label{sec:framework} \begin{figure*}[t] \centering \includegraphics[width=0.75\textwidth]{assumption_v4.pdf} \caption{We make several assumptions to deal with the challenging unsupervised stylish image description generation problem. We first assume there exists a shared latent space $\mathcal{Z}$ so that a latent code $\mathbf{z}\in\mathcal{Z}$ can be mapped to the source description space $\mathcal{D}_S$ and the target stylish description space $\mathcal{D}_T$ via $G_S$ and $G_T$. We also assume there exists a stylish image description embedding function $E_T$ that can map a stylish description to a latent code. Finally, we assume there exists an image embedding function $E_I$ that can map an image to a latent code. Once these functions are learned from data, we can generate a stylish image description for an image by applying $E_I$ and $G_T$ sequentially.
} \label{fig::asssumption} \end{figure*} \paragraph{\bf Assumptions.} To deal with the ill-posed unsupervised stylish IDG problem, we make several assumptions illustrated in Figure~\ref{fig::asssumption}. We first assume that there exists a latent space $\mathcal{Z}$ providing a common ground to effectively map to and from the image space $\mathcal{I}$, the source description space $\mathcal{D}_S$, and the target stylish description space $\mathcal{D}_T$. From the latent space to the description spaces, we assume that there exist a source description generation function $G_S(\mathbf{z})\in\mathcal{D}_S$ and a target stylish description generation function $G_T(\mathbf{z})\in\mathcal{D}_T$. From the non-latent spaces to the latent space, we assume that there exist an image encoder $E_I(I)\in\mathcal{Z}$ and a target description encoder $E_T(d_T)\in\mathcal{Z}$. Our goal is to learn the generation functions ($G_T$ and $G_S$) and the encoding functions ($E_I$ and $E_T$) from the unsupervised stylish IDG training data $\mathbb{D}_S$ and $\mathbb{D}_T$. Note that this is a challenging learning task if $G_T$ and $G_S$ are completely independent of each other. Hence, we assume that $G_T$ and $G_S$ share the ability to describe the same factual content, but with different styles. Once these functions are learned, we can simply first encode the image $I$ to a latent code using $E_I$ and then use $G_T$ to generate a stylish image description. In other words, the stylish image description is given by $G_T(E_I(I))$. We model the conditional distribution as $p(\mathcal{D}_T|\mathcal{I})=\delta(G_T(E_I(I)))$, where $\delta$ is the delta function. \begin{figure*}[t] \centering \includegraphics[width=0.78\textwidth]{model_archicompo_0518.pdf} \caption{The $E_{I}$ and $E_{T}$ map the image and the target stylish description to a shared latent space. Both $G_{S}$ and $G_{T}$ share all weights except the layer norm parameters, to capture the shared content of the two domains. To disentangle the style factor, we employ different sets of layer norm parameters, denoted as $\{\boldsymbol{g}_{S},\boldsymbol{b}_{S}\}$ and $\{\boldsymbol{g}_{T},\boldsymbol{b}_{T}\}$, for the source and target domains during training.} \label{fig::model_archi} \end{figure*} Inspired by the success of deep learning, we model both the generation and encoding functions using deep networks. Specifically, we model $E_I$ using a deep convolutional neural network (CNN)~\cite{NIPS2012_4824} and model $E_T$, $G_T$, and $G_S$ using recurrent neural networks, as illustrated in Figure~\ref{fig::model_archi}. We use Skip-Thought Vectors (STV)~\cite{kiros2015skip} to model $E_T$. For $G_T$ and $G_S$, we use the Layer Normalized Long Short Term Memory unit (LN-LSTM) as their recurrent module~\cite{ba2016layer,hochreiter1997long}. \begin{figure}[t] \centering \includegraphics[scale=0.18]{a3i_model_cell_tmp.pdf} \caption{Inside the LN-LSTM cell (left) and the operation of layer normalization (right).} \label{fig:DLNdetail} \end{figure} \paragraph{\bf Training sketch.} With the source domain dataset $\mathbb{D}_S$, we can train $\boldsymbol{z}_S=E_I(I)$ and $d_S=G_S(\boldsymbol{z}_S)$ jointly by solving the supervised IDG learning task, where $\boldsymbol{z}_S$ is the learned latent representation in the source domain.
On the other hand, with the target domain dataset $\mathbb{D}_T$, we can train $\boldsymbol{z}_T=E_T(d_T)$ and $d_T=G_T(\boldsymbol{z}_T)$ jointly by solving an unsupervised description reconstruction learning task, where $\boldsymbol{z}_T$ is the learned latent representation in the target domain. To ensure that the latent space is shared (i.e., $\boldsymbol{z}_T\in \mathcal{Z}$ and $\boldsymbol{z}_S\in \mathcal{Z}$), we further assume that the generation functions $G_S$ and $G_T$ share most of their parameters. \paragraph{\bf Domain Layer Norm.} Specifically, we assume $G_S$ and $G_T$ share all their parameters except the layer norm parameters~\cite{ba2016layer}. In other words, the domain description generators ($G_S$ and $G_T$) only differ in their layer norm parameters. We refer to this weight-sharing scheme as the Domain Layer Norm (DLN) scheme. The intuition behind DLN is to encourage the shared weights to capture the factual content common to the two domains, while the differences (i.e., styles) are captured in the layer norm parameters. This design helps $G_T$ generate descriptions that are related to the image content even without the supervision of corresponding images during training. \noindent\textbf{Training $E_I$ and $G_S$ via Supervised IDG.} The goal of supervised image description generation is to learn $p(\mathcal{D}_S|\mathcal{I})$ by using $\mathbb{D}_S$. $G_S$ consists of an embedding matrix $\boldsymbol{\theta_W}$ that maps an input token $x_k$ to a vector $\boldsymbol{e}_k$, an LN-LSTM module, and an output matrix $\boldsymbol{\theta_V}$ that maps the hidden state to a predicted token $\boldsymbol{\hat{y}}$. Formally, \begin{align}\label{eq:1} (\boldsymbol{\hat{y}}_{k+1}, \boldsymbol{h}_{k+1}) = G_S(\boldsymbol{e}_k,\boldsymbol{h}_{k})~,\\ \boldsymbol{\hat{y}}_{k+1} = \boldsymbol{\theta_V}^T\boldsymbol{h}_{k}~,\\ \boldsymbol{e}_k = \boldsymbol{\theta_W}^T\boldsymbol{1}\{x_k\}~,\\ \boldsymbol{e}_{-1}= E_I(I), \boldsymbol{h}_{-1} = \boldsymbol{0}~, \end{align} where $\boldsymbol{h}_{k}$ is the hidden feature in the LN-LSTM, $k\in\{-1\ldots m-1\}$ indexes the time step of a description of length $m$, and $\boldsymbol{1}\{\}$ denotes the one-hot encoding operator. To train the network, we minimize the sum of the cross-entropies of the correct words as follows, \begin{align} \mathcal{L}_{S} &= -\sum_{k=1}^{m}\textrm{log}(\boldsymbol{1}\{x_k\}^T\boldsymbol{\hat{y}}_k)~, \end{align} where $x_k$ is the $k^{th}$ word in the ground truth sentence. \noindent\textbf{Training $E_T$ and $G_T$ via Stylish Image Description Reconstruction.} $G_T$ contains the LN-LSTM module and the same output and embedding matrices used in $G_S$. Formally, \begin{align}\label{eq:3} (\boldsymbol{\hat{y}}_{k+1}, \boldsymbol{h}_{k+1}) = G_T(\boldsymbol{e}_k,\boldsymbol{h}_{k})~,\\ \boldsymbol{\hat{y}}_{k+1} = \boldsymbol{\theta_V}^T\boldsymbol{h}_{k}~,\\ \boldsymbol{e}_k = \boldsymbol{\theta_W}^T\boldsymbol{1}\{d^k_T\}~,\\ \boldsymbol{e}_{-1} = E_T(d_T)~,\\ \boldsymbol{h}_{-1} = \boldsymbol{0}~, \end{align} where $d_T$ is the target style image description. To train the network, we minimize the reconstruction error as follows, \begin{align} \mathcal{L}_{T} &= -\sum_{k=1}^{m}\textrm{log}(\boldsymbol{1}\{d^k_T\}^T\boldsymbol{\hat{y}}_k)~, \end{align} where $d^k_T$ is the $k^{th}$ word in the target style image description. \noindent\textbf{Relating $G_S$ and $G_T$ via Domain Layer Norm.} We relate $G_{S}$ and $G_{T}$ by sharing all weights except the layer norm parameters in the LN-LSTM.
Details inside the LN-LSTM are shown in Fig \ref{fig:DLNdetail}, where the layer norm operation (LN) is applied to each gate of the LSTM. Take the input gate as an example: \begin{align}\label{eq:4} \boldsymbol{\hat{i}}_k &= \textrm{LN}(\boldsymbol{i}_k), \boldsymbol{i}_k = \boldsymbol{\theta}_{ie}\boldsymbol{e}_k+ \boldsymbol{\theta}_{ih}\boldsymbol{h}_{k-1}~, \end{align} where $\boldsymbol{\hat{i}}_k$ and $\boldsymbol{i}_k$ are the normalized and unnormalized input gates, and $\boldsymbol{\theta}_{ie}$, $\boldsymbol{\theta}_{ih}$ are two projection matrices that map the embedding vector and the previous hidden state into the same dimension. The LN operation converts any input $\boldsymbol{a}$ to a normalized output $\boldsymbol{\hat{a}}$ as follows, \begin{align} \label{eq:5} \boldsymbol{\hat{a}} &= \frac{\boldsymbol{g}}{\sigma} \odot (\boldsymbol{a} - \mu) + \boldsymbol{b}~,\\ \mu &= \frac{1}{p_{h}}\sum_{i=1}^{p_{h}}a_{i}~,\\ \sigma &= \sqrt{\frac{1}{p_{h}}\sum_{i=1}^{p_{h}}(a_{i} - \mu)^2}~, \end{align} where $a_i$ denotes the $i^{th}$ entry in the vector $\boldsymbol{a}$, $p_h$ is the dimension of the input $\boldsymbol{a}$, $\mu$ and $\sigma$ are the mean and standard deviation of the input $\boldsymbol{a}$, and $\boldsymbol{g}$ and $\boldsymbol{b}$ are scaling and shifting vectors (i.e., layer norm parameters) learned from the data. We train the whole network by jointly minimizing the supervised IDG loss $\mathcal{L}_{S}$ and the unsupervised image description reconstruction loss $\mathcal{L}_{T}$, subject to the architectural constraints on $G_S$ and $G_T$ described above: \begin{multline}\label{eq:6} \mathcal{L}(\boldsymbol{\theta}_{E_I}, \boldsymbol{\theta}_{G_S},\boldsymbol{\theta}_{E_T},\boldsymbol{\theta}_{G_T}) = \lambda \mathcal{L}_{S}(\boldsymbol{\theta}_{E_I},\boldsymbol{\theta}_{G_S})\\ + (1 - \lambda)\mathcal{L}_{T}(\boldsymbol{\theta}_{E_T},\boldsymbol{\theta}_{G_T})~, \end{multline} where $\lambda$ is a hyperparameter. \paragraph{\bf Extension to New Target Styles.} Given a model with parameters $\boldsymbol{\theta_{V}}$, $\boldsymbol{\theta_{W}}$, $\boldsymbol{\theta}_{E_I}$, and $\boldsymbol{\theta}_{G_S}$, pre-trained on a pair of the source and one target domain, we aim to adapt it to a new target domain (i.e., style) by enlarging $\boldsymbol{\theta_{V}}$ and $\boldsymbol{\theta_{W}}$ to $\boldsymbol{\theta}^{\prime}_{\boldsymbol{V}}$ and $\boldsymbol{\theta}^{\prime}_{\boldsymbol{W}}$ to accommodate the new vocabulary, and by finetuning the remaining parameters to $\boldsymbol{\theta}^{\prime}_{E_I}$, $\boldsymbol{\theta}^{\prime}_{E_T}$, $\boldsymbol{\theta}^{\prime}_{G_S}$ and $\boldsymbol{\theta}^{\prime}_{G_T}$. Hence, we define a new loss function as: \begin{multline}\label{eq:7} \mathcal{L}(\boldsymbol{\theta}^{\prime}_{E_{I}}, \boldsymbol{\theta}^{\prime}_{G_S},\boldsymbol{\theta}^{\prime}_{E_T},\boldsymbol{\theta}^{\prime}_{G_T}) = \lambda_1\mathcal{L}_{S}(\boldsymbol{\theta}^{\prime}_{E_I},\boldsymbol{\theta}^{\prime}_{G_S})\\ + (1 -\lambda_1)\mathcal{L}_{T}(\boldsymbol{\theta}^{\prime}_{E_T},\boldsymbol{\theta}^{\prime}_{G_T}) + \lambda_2R(\boldsymbol{\theta}^{\prime}_{E_{I}},\boldsymbol{\theta}^{\prime}_{\boldsymbol{W}}, \boldsymbol{\theta}^{\prime}_{\boldsymbol{V}})~, \end{multline} where $\lambda_1$ and $\lambda_2$ are hyperparameters. The regularization term $R(\boldsymbol{\theta}^{\prime}_{E_{I}},\boldsymbol{\theta}^{\prime}_{\boldsymbol{W}}, \boldsymbol{\theta}^{\prime}_{\boldsymbol{V}})=\lVert \boldsymbol{\theta}^{\prime}_{E_{I}} - \boldsymbol{\theta}_{E_{I}}\rVert_{2} + \lVert \boldsymbol{\theta}^{\prime}_{\boldsymbol{W}} - \boldsymbol{\theta_{W}} \rVert_{2} + \lVert \boldsymbol{\theta}^{\prime}_{\boldsymbol{V}} - \boldsymbol{\theta_{V}} \rVert_{2}$ is used to prevent the new weights from deviating from the pretrained model. This encourages the adapted model to keep the information learned during the pretraining phase. We use the pretrained $\boldsymbol{\theta}_{E_I}$ and $\boldsymbol{\theta}_{G_S}$ as initializations of $\boldsymbol{\theta}^{\prime}_{E_I}$ and $\boldsymbol{\theta}^{\prime}_{G_S}$. For $\boldsymbol{\theta}^{\prime}_{G_T}$, we share all parameters with $\boldsymbol{\theta}^{\prime}_{G_S}$ except the layer norm parameters. $\boldsymbol{\theta}^{\prime}_{E_T}$ is trained from scratch. Note that we do not update the source domain layer norm parameters, since we do not need to learn the source style.
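To illustrate the mechanism, below is a minimal numpy sketch of domain layer norm: a single shared projection (standing in for the shared LSTM weights) combined with the layer norm of Eq.~\ref{eq:5} under two style-specific parameter sets. All names, shapes, and the small \texttt{eps} safeguard are assumptions made for this illustration, not our actual implementation.

\begin{verbatim}
# Sketch: one shared weight matrix, two sets of layer norm parameters {g, b}.
import numpy as np

def layer_norm(a, g, b, eps=1e-5):
    # LN(a) = g / sigma * (a - mu) + b over the feature dimension;
    # eps is a numerical safeguard, not part of the equation in the text.
    mu = a.mean()
    sigma = np.sqrt(((a - mu) ** 2).mean() + eps)
    return g / sigma * (a - mu) + b

p_h = 8
rng = np.random.default_rng(0)
W_shared = rng.normal(size=(p_h, p_h))        # shared projection (content)
params = {                                    # style-specific LN parameters
    "source": (np.ones(p_h), np.zeros(p_h)),
    "target": (rng.normal(1.0, 0.1, p_h), rng.normal(0.0, 0.1, p_h)),
}

def gate_preactivation(e, domain):
    # Same content pathway; only {g, b} switch with the domain/style.
    g, b = params[domain]
    return layer_norm(W_shared @ e, g, b)

e = rng.normal(size=p_h)
out_source = gate_preactivation(e, "source")
out_target = gate_preactivation(e, "target")
\end{verbatim}

Swapping only $\{\boldsymbol{g},\boldsymbol{b}\}$ while keeping the shared matrix fixed mirrors how $G_S$ and $G_T$ share all weights except the layer norm parameters.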
\section{Experiment} We conduct two experiments to evaluate our proposed method. First, we demonstrate that our method can generate stylish descriptions based on paired images and unstylish descriptions in the source domain and a stylish monolingual corpus, not paired with any image dataset, in the target domain. Then, we demonstrate the flexibility of our DLN by progressively including new styles one by one in the second experiment. The implementation details are in the supplementary. \subsection{Evaluation Setting} \noindent\textbf{Datasets.} We use the paragraphs released in~\cite{krause2016paragraphs} (VG-Para) as our source domain dataset. We do not use a caption dataset such as MS-COCO because we found the resulting captions to be less stylish when transferred to the target style domain. We use the pre-split data, which contain 14575, 2489, and 2487 paragraphs for training, validation, and testing, respectively. For the target datasets, we use the humor and romance novel collections in BookCorpus~\cite{zhu2015aligning}. We also collect country song lyrics and fairy tales to show that our method is effective on corpora with different syntactic structures and word usage. More details can be found in the supplementary materials.\\ \noindent\textbf{Baselines.} We compare our method with four baselines: StyleNet~\cite{gan2017stylenet}, Neural Story Teller (NST)~\cite{kiros2015skip}, DLN-RNN, and Random. StyleNet generates stylish descriptions in an end-to-end way but with paired images and stylish ground-truth descriptions. NST breaks the task down into two steps: it first generates unstylish captions and then applies style shift techniques to generate stylish descriptions. DLN-RNN uses the same framework as DLN, with the only difference being the use of a simple recurrent neural network. Random samples the same number of nouns as in the unstylish ground truth from the corresponding vocabulary of the target domain. Although a concurrent work~\cite{mathews2018semstyle} attempts to solve a similar task to ours, the major differences are that we do not exploit linguistic features or pre-process the target corpus to facilitate training.
Moreover, it is unclear whether the concurrent work can be applied to other styles or even multiple styles, as it only makes a step toward generating sentences with a romantic style.\\ \noindent\textbf{Metrics of semantic relevance.} As there are no ground-truth sentences for stylish image descriptions in the unpaired setting, the conventional n-gram based metrics such as BLEU~\cite{papineni2002bleu}, METEOR~\cite{denkowski2014meteor}, and CIDEr~\cite{vedantam2015cider} cannot be applied. It is also not suitable to calculate these metrics between stylish sentences and the unstylish ground truth, because the goal of stylish description generation is to change the word usage while preserving a certain semantic relevance between the stylish description and the image. We therefore propose content similarity to evaluate the semantic relevance between generated stylish sentences and the unstylish ground truth. To calculate content similarity, we define $C_{S}$ as the set of nouns in the ground truth (source domain), and $C^{\prime}_{S}$ as the union of $C_{S}$ and the synonyms of each noun in $C_{S}$, since the model may describe the same object with different words (e.g., cup and mug). The same logic is applied to $C_{T}$ and $C^{\prime}_{T}$ for the generated description (target domain). We calculate: \begin{align} \label{eq:8} p &= \frac{|C_T \cap C^{\prime}_{S}|}{|C_T|} & r = \frac{|C_S \cap C^{\prime}_T|}{|C_S|},~ \end{align} and take the f-score of $p$ and $r$ as the content similarity score. The overall content similarity score is averaged over the testing data. We focus on nouns because we assume stylish descriptions should at least mention objects that appear in the image. We also report the SPICE~\cite{anderson2016spice} score, which calculates the f-score of semantic tuples between the unstylish ground truth and the generated stylish descriptions. The final score is averaged over all testing data.\\ \noindent\textbf{Metrics of stylishness.} We use transfer accuracy to evaluate the stylishness of our generated descriptions. Transfer accuracy is widely used in language style transfer tasks~\cite{shen2017style,melnyk2017improved,fu2018style}. It measures how often descriptions are labeled with the target style on the test dataset, based on a pre-trained style classifier. We follow the definition of transfer accuracy in~\cite{fu2018style}, which is \begin{equation} \mathcal{T} = \begin{cases} 1 & \text{if $s > 0.5$} \\ 0 & \text{if $s \leq 0.5$} \end{cases} \end{equation} where $s$ is the output probability score of the classifier. We define $R_{T} = \frac{N_{vt}}{N_{vs}}$ as our transfer accuracy, i.e., the fraction of the $N_{vs}$ test descriptions from the source domain that are correctly transferred to the target style ($N_{vt}$ of them). The final score is averaged over all testing data.\\
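Both metrics reduce to simple set operations and counting. The sketch below is illustrative; the synonym lookup (e.g., built from WordNet) is a hypothetical stand-in, not the resource we actually use.

\begin{verbatim}
# Sketch: content similarity (Eq. 8, f-score of p and r) and transfer accuracy.
def expand(nouns, synonyms):
    # C' = C plus the synonyms of every noun in C.
    return set(nouns) | {s for n in nouns for s in synonyms.get(n, [])}

def content_similarity(src_nouns, tgt_nouns, synonyms):
    C_s, C_t = set(src_nouns), set(tgt_nouns)
    Cp_s, Cp_t = expand(C_s, synonyms), expand(C_t, synonyms)
    p = len(C_t & Cp_s) / len(C_t) if C_t else 0.0
    r = len(C_s & Cp_t) / len(C_s) if C_s else 0.0
    return 2 * p * r / (p + r) if p + r > 0 else 0.0

def transfer_accuracy(style_scores):
    # Fraction of descriptions the pre-trained classifier scores above 0.5.
    return sum(s > 0.5 for s in style_scores) / len(style_scores)

synonyms = {"cup": ["mug"], "dog": ["puppy"]}   # hypothetical lookup
print(content_similarity(["cup", "dog"], ["mug", "dog", "tree"], synonyms))
\end{verbatim}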
\noindent\textbf{Human evaluation.} The difficulty in generating stylish sentences in the unpaired setting is maintaining semantic relevance. Therefore, we conduct a human study on Amazon Mechanical Turk (AMT) independently for each method to judge the semantic relevance between image and description. For each model, we randomly sample 100 images and generate stylish descriptions for each style. Two workers are asked to vote on the semantic relevance with the following prompt: given an image and a paragraph from the book (our stylish corpus), how well does the paragraph content relate to objects in the image? Workers are forced to vote on a scale from unrelated to related. The criteria for eligible workers are having at least 100 successful HITs with a 70\% acceptance rate. The total number of HITs is 2400. For each HIT, the order of options is randomized. All responses are counted without aggregation. \begin{table*}[t!] \centering \begin{tabular}{lc|ccccccccc} \toprule Model & Data & CS & S & T & $p$ & $r$ & $n_{p}$ & $n_{r}$\\ \midrule NST~\cite{kiros2015skip} & Lyrics& 0.037 & 0.016 & 100\% & 0.041 & 0.044 & 0.68 & 0.75 \\ StyleNet~\cite{gan2017stylenet}&Lyrics& 0.033 & 0.014 & 100\% &0.038 & 0.038 &0.57 &0.67 \\ Random & Lyrics & 0.008 & 0.002 & 55.2\% & 0.007 & 0.012 &0.13 &0.09 \\ DLN-RNN & Lyrics &0.072 & 0.030 &100\% & \textbf{0.101} & 0.069 &\textbf{1.65} &1.17 \\ DLN &Lyrics &\textbf{0.083}& \textbf{0.033} &99.2\% & 0.080& \textbf{0.115} &1.25 & \textbf{1.92} \\ \midrule NST~\cite{kiros2015skip} & Romance& 0.088 & 0.039 & 100\% & 0.087 & 0.113 &\textbf{1.57} &1.90\\ StyleNet~\cite{gan2017stylenet} &Romance& 0.012 & 0.005 & 100\% & 0.032 &0.001 & 0.11&0.14 \\ Random & Romance& 0.005& 0.002& 100\% &0.004 &0.001 &0.07 &0.05 \\ DLN-RNN & Romance& 0.083& 0.034 & 94.3\% & 0.078 & 0.125 &1.27 &0.71 \\ DLN &Romance &\textbf{0.151} & \textbf{0.058} & 95.4\%& \textbf{0.193}& \textbf{0.148} &1.56 &\textbf{2.43}\\ \midrule NST~\cite{kiros2015skip} & Humor& 0.103 & 0.041 & 99.7\%& 0.097&0.143 &2.22 &2.44 \\ StyleNet~\cite{gan2017stylenet}&Humor& 0.010 &0.005 & 99.8\% & 0.024& 0.001&0.12 &0.15 \\ Random & Humor& 0.007 & 0.002 & 100\% & 0.006 &0.014 &0.11 &0.07 \\ DLN-RNN & Humor &0.093 & 0.038 & 89.5\% & 0.095 &0.12 &1.58 &0.92 \\ DLN &Humor &\textbf{0.173} &\textbf{0.065} & 70.0\%&\textbf{0.205} &\textbf{0.182} &\textbf{2.32}&\textbf{2.99} \\ \midrule NST~\cite{kiros2015skip} & Fairy tale& 0.116 &0.044 & 99.8\%&0.116 &\textbf{0.145} &\textbf{2.47} &\textbf{2.44} \\ StyleNet~\cite{gan2017stylenet} &Fairy tale& 0.028 &0.013 & 99.8\%& 0.045& 0.026& 0.34&0.46\\ Random & Fairy tale & 0.004 & 0.001 & 100\% &0.003 &0.010 &0.06 &0.04\\ DLN-RNN & Fairy tale & 0.084 & 0.033 & 79.5\% & 0.076 &0.140 &1.22 &0.72 \\ DLN &Fairy tale &\textbf{0.135} & \textbf{0.050} & 93.7\%&\textbf{0.194} &0.125 &1.29&2.06\\ \bottomrule \end{tabular} \caption{Performance comparison between DLN and several baselines. CS, S, and T stand for content similarity, SPICE, and transfer accuracy. $p$ and $r$ are as defined in Eq.~\ref{eq:8}; $n_{p}$ and $n_{r}$ are their respective numerators. DLN generally scores higher on content-related metrics. Higher is better for all metrics except transfer accuracy.} \label{tab:exp1} \end{table*} \subsection{Results} The results of the first experiment are summarized in Table~\ref{tab:exp1}. We also report $p$, $r$, and their numerators for further comparison. It is worth noting that a perfect transfer accuracy may not be desirable, since the model could greedily generate the vocabulary used in the target domain and digress from the image content. Therefore, an ideal stylish description is one with a high content similarity score and an acceptable transfer accuracy. Our DLN consistently outperforms the other baselines in terms of all semantic-related metrics, with only a marginal drop in transfer accuracy on most datasets. All baselines are better than Random, which suggests all baselines can generate semantically related descriptions to a certain degree. We observe that NST has large $n_{p}$ and $n_{r}$ for fairy tale. We think this is because NST tends to generate long sentences.
For each style (Fairy, Humor, Romance, and Lyrics), the average sentence length of NST is $(119,109,103,84)$ while that of DLN is $(38,54,41,97)$. Therefore, it is possible that NST simply generates more of the nouns that appear in the unstylish ground truth. We also report the performance of DLN and DLN-RNN on the unstylish description generation task in Table~\ref{tab:exp1-1}. We calculate the BLEU-3, BLEU-4, METEOR, and CIDEr scores between the generated sentences and the unstylish ground truth. Combined with the results of stylish description generation in Table~\ref{tab:exp1}, we can conclude that the proposed domain layer norm benefits unpaired image-to-stylish-description generation, as it also yields a better model for conventional image-to-text generation. The results of the human study are shown in Fig~\ref{fig:exp1_human_study}; we report the best of our models in Table~\ref{tab:exp1} (DLN) and the other baselines for comparison. DLN has the highest number of related and the lowest number of unrelated votes, while over half of the descriptions are voted as unrelated for the other baselines. Qualitative results in Fig~\ref{fig:exp1_example} show that the descriptions generated by DLN are related to the images. Note that the goal of stylish description generation is not to match every factual aspect of the image; it is better judged by whether the description would relate to the image if the image appeared in the target corpus. \begin{figure} \includegraphics[width=0.40\textwidth]{a3i_l.pdf} \caption{Human study of semantic relevance for all methods. DLN has the highest number of related and the lowest number of unrelated votes compared to the other baselines.} \label{fig:exp1_human_study} \end{figure} \begin{table}[t!] \begin{tabular}{lcccc} \toprule Model & BLEU-3 & BLEU-4 & METEOR & CIDEr\\ \midrule DLN-RNN& 0.106 & 0.062 & 0.130 & 0.069 \\ DLN & \textbf{0.132} & \textbf{0.080} & \textbf{0.150} & \textbf{0.127} \\ \bottomrule \end{tabular} \caption{Performance on unstylish description generation. DLN is better than DLN-RNN on all metrics.} \label{tab:exp1-1} \end{table} \begin{figure*}[t!] \includegraphics[width=0.97\textwidth]{AAAI_typical.pdf} \caption{Examples of stylish descriptions by DLN. Note that the goal of stylish description generation is not to match every factual aspect of the image; it is better judged by whether the descriptions would relate to the image if the image appeared in the context of the target corpus. The semicolon (;) in lyrics serves as a new-line symbol.} \label{fig:exp1_example} \end{figure*} \noindent\textbf{Multi-style.} We progressively expand DLN to include three target domains (fairy tale, romance, lyrics) to demonstrate the flexibility of our model. In other words, we follow Eq~\ref{eq:7} to train on the source and fairy tale styles and then include the romance and lyrics styles; this model is denoted as DLN-Multi. To generate a description, we use the same target decoder with a different style-specific embedding matrix, layer norm parameters, and output matrix. We conduct another human study by asking five workers to determine the best description given the following priorities: content, style, and naturalness. This prompt forces workers to choose the better option when two options are equally related to the image. We sample 100 images for each comparison and use the same criteria to select workers. The results are presented in Table~\ref{tab:exp2}, which shows that the performance of DLN-Multi is competitive with DLN.
DLN-Multi thus gives users the capability to include new styles in an existing model, which is a novel feature not reported for the other baselines.\\ \noindent\textbf{Discussion: transfer accuracy and domain shift.} We observe a drop in transfer accuracy on the source-to-humor transfer in DLN, and we believe this is related to the scale of the domain shift. To quantify this, we analyze the percentage of shared nouns between the source ($V_{src} = 6.2\textrm{k}$) and target domains, which is $(50\%, 68\%, 74\%, 60\%)$ for lyrics, romance, humor, and fairy tale. For the transfer from the source to the humor domain, the shared nouns account for over 70\% of the nouns in the source domain, which means the domain shift between the source and humor is smaller than for the others. This makes it more difficult for the classifier to distinguish the two domains; therefore, the transfer accuracy of the source-to-humor transfer is lower. We note that Random gets the lowest transfer accuracy for the lyrics style, and we believe this is because sampling words from the vocabulary of lyrics alone cannot produce sentences with the new-line symbol (i.e., ;), which is an important feature for being classified as stylish. \begin{table}[t!] \centering \begin{tabular}{lc|cccc} \toprule Model & Style & CS& S &T & P\\ \midrule DLN-Multi & Romance&0.116& 0.047 & 97.1\% & 36.7\%\\ DLN &Romance &\textbf{0.151}&0.058 & 95.4\% & \textbf{63.3}\%\\ \midrule DLN-Multi & Lyrics& \textbf{0.118}& 0.047 & 99.7\% & \textbf{54.3}\%\\ DLN &Lyrics &0.083& 0.033 &99.2\% & 45.8\%\\ \midrule DLN-Multi & Fairy tale&0.120& 0.048 & 99.0\% & 47.4\%\\ DLN &Fairy tale &\textbf{0.135}&0.050 & 93.7\% & \textbf{52.6}\%\\ \bottomrule \end{tabular} \caption{Results of DLN and DLN-Multi. CS, S, T, and P are content similarity, SPICE, transfer accuracy, and human preference score. Overall, the performance of DLN-Multi is competitive with DLN on all metrics.} \label{tab:exp2} \end{table} \section{Conclusion and future work} We propose a novel unsupervised stylish IDG model via domain layer norm with the capability to progressively include new styles. Experiment results show that our stylish IDG results are preferred by human subjects. We plan to investigate the intermediate styles generated by interpolation of domain layer norm parameters and to address the fluency of generated sentences in the future. \small{
\section{INTRODUCTION} \label{sec:intro} A Luneburg lens is a spherically symmetric optic with a variable index of refraction $n$ that is a function of the radius $r$ such that $n = n(r)$. The equations that govern the relationship between $n$ and $r$ give the lens the ability to form perfect geometrical images of two concentric spheres onto one another. A description of the Luneburg lens was first given by Rudolf Luneburg~\cite{Luneburg44}, who derived equations for the refractive index $n(r)$ of a spherical lens with two external foci, which give a maximum refractive index at the lens center that decreases radially outward to the lens surface. If one of the concentric spheres is at infinity, light rays originating from the outer sphere will produce parallel rays incident on the inner sphere that are then focused to a point on the opposite surface of the inner sphere, given the correct description for the gradient of the refractive index $n(r)$. Some applications for a lens with these properties include communication antennas\cite{Walter60,Scheel70, Higgs84} and optical waveguides\cite{Zernike74, Southwell77, Sochacki82}. The behavior of a Luneburg lens also has benefits for small-scale astronomical observations. Under good observing conditions the Luneburg lens can create an image of the night sky over 2$\pi$ steradians with no active mechanical repointing required. We aim to test the feasibility of the Luneburg lens for small-scale observations, such as those done at a high school or university, or for outreach and educational purposes. As a passive instrument with a wide field of view, it would be a useful tool for low-budget astronomy operations if the lens can be manufactured at a reasonable cost. In this paper we consider limitations regarding the production and performance of a Luneburg lens and optimize the lens characteristics for the best image quality. A continuously varying index of refraction is not practical to construct, but the Luneburg lens can be approximated with a series of discrete layers, each with its own refractive index. We explore the number of discrete lens layers needed for adequate imaging of naked-eye stars and determine the quality of an image at the focal surface using the enclosed intensity as our metric. \section{DESCRIPTION OF STEPPED LUNEBURG LENS} To construct a model for all-sky imaging, we are concerned only with the two external foci case for the Luneburg lens\cite{Morgan58} with an index of refraction $n_{\rm L}(r)$ described by: \begin{align} &n_{\rm L} = \exp[\omega(\rho,r_0) + \omega(\rho,r_1)] \notag \ {\rm and}\\ &\rho = r n_{\rm L} \ {\rm for} \ 0 \leq \rho \leq 1, \label{Lun_gen} \end{align} \noindent where $r_0$ and $r_1$ indicate the location of the focal point of each sphere. We set $r_1 = \infty$ for imaging stars in the sky. Equation~\ref{Lun_gen} was derived under the condition that the radius of the inner sphere is set to unity. Since incoming rays originate at infinity, they should focus to a point on the surface of the lens at $r=1$, thus we set $r_0 = 1$. The function $\omega$ at $r_0 = 1$ and $r_1 = \infty$ is described by: \begin{align} &\omega(\rho,1) = \frac{1}{2} \log[1 + (1-\rho^{2})^{1/2}], \notag \\ &\omega(\rho,\infty) = 0. \label{omega_spec} \end{align} Combining Equation~\ref{Lun_gen} with Equation~\ref{omega_spec}, we obtain the simplified function $n(r)$ for the refractive index of a Luneburg lens with radius set to unity: \begin{equation} n = (2-r^{2})^{1/2}. \label{Lun_simp} \end{equation}
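As a quick illustration, the layer indices of a stepped lens can be tabulated directly from Equation~\ref{Lun_simp}. The snippet below is a sketch under one simple convention, assumed here for illustration, of evaluating the index at each layer's outer radius.

\begin{verbatim}
# Sketch: index of refraction for N discrete layers, n = sqrt(2 - r^2).
import numpy as np

def stepped_luneburg_indices(n_layers):
    # Outer radius of each layer (outermost first) and its index.
    radii = np.linspace(1.0, 1.0 / n_layers, n_layers)
    return radii, np.sqrt(2.0 - radii**2)

radii, indices = stepped_luneburg_indices(10)
# n runs from 1.0 at the surface toward sqrt(2) near the center.
\end{verbatim}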
\label{Lun_simp} \end{equation} Since we are modeling a Luneburg lens in discrete steps rather than a continuously variable refractive index, Equation~\ref{Lun_simp} gives us the refractive index for a shell of a given radius within the lens. At the surface of each lens layer a ray of light will encounter a refractive index transition and its behavior will be described by the vector form of Snell's law, which is based on the light ray direction vector $\vec{d}$ and the normal vector of the lens layer $\vec{n}$, both normalized to unity. \begin{align} &\cos{\theta_1} = \vec{n} \cdot (-\vec{d}) \notag \\ &\cos{\theta_2} = \sqrt{1 - \left(\frac{n_1}{n_2}\right)^{2} (1-(\cos{\theta_1})^{2})} \notag \\ &\vec{\mathrm{v}}_{\mathrm{reflected}} = \vec{d} + (2 \cos{\theta_1}) \vec{n} \notag \\ &\vec{\mathrm{v}}_{\mathrm{refracted}} = \left(\frac{n_1}{n_2}\right) \vec{d} + \left(\frac{n_1}{n_2} \cos{\theta_1} - \cos{\theta_2}\right) \vec{n} \notag \\ &\mathrm{where} \, \, \cos{\theta_1} \geq 0. \label{Snell} \end{align} With Equation~\ref{Snell} we are able to calculate the reflected and refracted rays produced for each ray-layer intersection until all rays have exited the lens. The result is an image at the focal point surrounded by an accumulation of extraneous rays due to the discrete nature of the lens. Analysis of the image produced at the lens focal point, together with the surrounding halos, allows us to determine the resolution and overall image quality for a stepped Luneburg lens. \section{STACK RAY TRACE CODE} In order to determine the scattered light properties of a stepped Luneburg lens, we have written a stack-based ray tracing code that can propagate the large number of reflected and refracted rays generated from one incident ray. With this code we are able to accurately propagate thousands of rays and track the position, direction, and amplitude of each ray individually. The code is written for three-dimensional ray propagation so that manufacturing errors, such as the decentering of layers or deviations from the ideal spherical shape, may be introduced. Construction of a Luneburg lens first requires the creation of a multi-layer sphere in three-dimensional space. The origin of our coordinate system is designated as $(0,0,0)$ in a Cartesian $(x,y,z)$ system and the axes are chosen such that the $y$-axis runs through the zenith and nadir of the lens, while the top and bottom hemispheres of the lens are divided by the $x-z$ plane (see Figure~\ref{fig_orientation}). \begin{figure}[!b] \centering \includegraphics[width=10cm,height=10cm,keepaspectratio=true]{lens_axes} \caption{\small{Orientation of the Luneburg lens model in three-dimensional $(x,y,z)$ coordinates.} } \label{fig_orientation} \end{figure} The number of discrete lens layers can be adjusted, but the outermost radius is normalized to unity for the purpose of calculating the Luneburg refractive indices in Equation~\ref{Lun_simp}. The code initializes input arrays for a wavefront of light rays located above the lens and stores the position, direction, and amplitude of all incoming light rays. Each initial input ray is given an amplitude equal to 1.
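To fix ideas, the following \textsc{python} fragment sketches how the shell indices of Equation~\ref{Lun_simp} and such an initial wavefront might be set up; it is an illustration with our own function names rather than an excerpt of the published code, and evaluating the index at each shell's outer radius (as well as the uniform grid variant, one of the four constructions described next) is our assumption.

\begin{verbatim}
import numpy as np

def layer_indices(n_layers, exponent=0.5):
    # Outer radius of each shell, with the outermost radius
    # normalized to unity; n = (2 - r^2)^i evaluated at the outer
    # radius of each shell (an illustrative convention).
    radii = np.linspace(1.0, 1.0 / n_layers, n_layers)
    return radii, (2.0 - radii**2) ** exponent

def init_wavefront(n_side):
    # Uniform square grid of rays in the x-z plane from (-1,-1)
    # to (1,1) at y = 2, each travelling straight down with
    # amplitude 1.
    xs, zs = np.meshgrid(np.linspace(-1, 1, n_side),
                         np.linspace(-1, 1, n_side))
    positions = np.column_stack([xs.ravel(),
                                 np.full(xs.size, 2.0),
                                 zs.ravel()])
    directions = np.tile([0.0, -1.0, 0.0], (xs.size, 1))
    amplitudes = np.ones(xs.size)
    return positions, directions, amplitudes

# e.g. a 7 x 7 grid gives the 49 rays of the toy model below
pos, dirs, amps = init_wavefront(7)
\end{verbatim}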
The input arrays can be constructed in four ways: a line of rays along the $x$-axis at $y=2$, a uniform square grid of rays in the $x-z$ plane from $(-1,-1)$ to $(1,1)$ at $y=2$, a random grid of rays in the $x-z$ plane from $(-1,-1)$ to $(1,1)$ at $y=2$, and a user-provided star pattern where a uniform square grid of rays at $y=2$ is rotated to the three-dimensional position of each star such that the focal points of the wavefronts recover the star pattern on the lens bottom surface. The tracing of each ray is done as part of a stacked, iterative loop process, where ray properties are extracted for one ray at a time and the direction of the ray determines whether or not that ray intersects with a layer of the lens. We define the ray properties in vector form, thus we can test for the intersection between a ray and a lens interface using a ray described by the equation: \begin{equation} \vec{p} = t\vec{d} + \vec{p}_0, \label{line} \end{equation} \noindent where $\vec{p}$ is a point $(x,y,z)$, $t$ is a parametric variable, $\vec{d}$ is the direction vector, and $\vec{p}_0$ is the original $(x,y,z)$ point of the ray. We can insert the ray vector into the equation for a sphere and get the following quadratic equation: \begin{align} &x^{2} + y^{2} + z^{2} - r^{2} = \vec{p} \cdot \vec{p} - r^{2} = 0, \notag \\ &(t\vec{d} + \vec{p}_0 - \vec{p}_c) \cdot (t\vec{d} + \vec{p}_0 - \vec{p}_c) - r^{2} = 0, \label{eq_sphere} \end{align} where $\vec{p}_c$ is the center of the lens. The quadratic formula can be used to solve for $t$, and the sign of the discriminant can be used to test for the intersection between the ray and the lens interface: \begin{align} &b^{2} - 4 a c, \notag \\ &a = \vec{d} \cdot \vec{d} \notag \\ &b = 2 \vec{d} \cdot (\vec{p}_0 - \vec{p}_c) \notag \\ &c = (\vec{p}_0 - \vec{p}_c) \cdot (\vec{p}_0 - \vec{p}_c) - r^{2}. \label{discr} \end{align} If the discriminant in Equation~\ref{discr} is negative, or if both solutions for $t$ are negative, no intersection occurs. If there is at least one positive value, the ray will intersect the lens interface. We take the smallest positive value of the solutions for $t$ because this corresponds to an intersection between the ray and the closest lens interface. \begin{table}[!ht] \begin{tabular}{l} \textbf{Stack-based Ray Trace Routine} \\ \hline \\ 1. Input array of position, direction, amplitude for incoming light rays \\ 2. Calculate intersections of rays and lens layer interface \\ 3. Implement Snell's law for all ray intersections \\ 4. Store position, direction, amplitude of reflected and refracted rays \\ 5. Discard rays that fall below the amplitude cutoff or exit the top of the lens \\ 6. Use reflected and refracted rays as input for next iteration of stack \\ 7. Go to Step 1 until all rays have exited the lens \\ \end{tabular} \caption{The structure of the Ray Trace Algorithm} \label{tab_stacksteps} \end{table} In Table~\ref{tab_stacksteps} the algorithm loop is detailed. The first loop of the stacked, iterative ray tracing routine is a test for intersection with the outermost surface of the lens (i.e. a test to see if the ray will enter the lens). All incoming rays that do not intersect the lens surface are dropped from the routine. If a ray enters the bottom hemisphere of the lens or if it exits through the top hemisphere of the lens, it is also dropped. If the lens is to be manufactured, only the top hemisphere would be available for incident light while the bottom hemisphere is for detection and imaging.
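A minimal \textsc{python} sketch of this intersection test (Equations~\ref{eq_sphere} and \ref{discr}); the function name and the numerical tolerance are our own, not part of the published code.

\begin{verbatim}
import numpy as np

def intersect_sphere(p0, d, p_c, r):
    # Smallest positive t at which p = t*d + p0 meets the sphere
    # of radius r centered at p_c; None if the ray misses it.
    a = np.dot(d, d)
    b = 2.0 * np.dot(d, p0 - p_c)
    c = np.dot(p0 - p_c, p0 - p_c) - r**2
    disc = b**2 - 4.0 * a * c
    if disc < 0.0:          # complex roots: no intersection
        return None
    sqrt_disc = np.sqrt(disc)
    ts = np.array([-b - sqrt_disc, -b + sqrt_disc]) / (2.0 * a)
    ts = ts[ts > 1e-12]     # keep forward (positive t) hits only
    return ts.min() if ts.size else None
\end{verbatim}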
For rays that do enter the lens, the resulting direction vector is calculated using Equation~\ref{Snell} at the lens surface. Each implementation of Snell's law calculates a new value for the amplitude of the reflected and refracted rays. If at any point the amplitude of a ray falls below a preset amplitude threshold, it is dropped from the routine. We set the amplitude threshold at 0.01. This avoids the accumulation of large numbers of rays within the lens that contribute little to the overall output intensity. With a sufficient number of iterations, each ray will either be dropped due to low amplitude or exit the lens. For all rays that exit the bottom hemisphere of the lens, the position, direction, and amplitude are recorded for analysis. To test for exiting rays, we consider the dot product of the $(x,y,z)$ point of a ray intersecting the outermost layer of the lens and its direction vector. \begin{equation} \vec{p} \cdot \vec{d} \, \, \, \mathrm{for} \, \, \, r = 1. \label{dot_pr} \end{equation} If Equation~\ref{dot_pr} is negative for the first loop iteration, then the ray enters the lens. For positive dot products at the outermost layer, the ray exits the lens. The position, direction, and amplitude of bottom-exiting rays are stored in preparation for two-dimensional mapping and graphical analysis of the resulting images that form on the bottom hemisphere of the Luneburg lens. \section{RESULTS FROM SIMULATIONS} \subsection{Mapping Luneburg lens ray tracing} \begin{figure}[!b] \centering \includegraphics[width=13cm,height=13cm,keepaspectratio=true]{luneburg_raytrace_10lyr_0d50exp_49rays_grid_plotc} \caption{\small{Tracking rays through the Luneburg lens with the stack-based ray tracing routine. The plot shows the trajectory of incident rays and trajectories of reflected and refracted rays (black), intersections between a ray and a lens interface (red dots), rays that intersect the lens surface (orange dots), and rays that exit the lens (yellow dots). Rays marked with a black X failed to enter/exit the lens properly.}} \label{fig_raytracebig} \end{figure} A toy model run to showcase the ray tracing code is shown in Figure~\ref{fig_raytracebig} for a lens with 10 layers and 49 rays initialized in a uniform square grid pattern. The routine tracks the progress of each ray in three-dimensions as it travels through the lens and plots the rays from a side view of the lens in the $x-y$ plane. Each black line indicates the trajectory of a ray between consecutive intersections with a surface of the lens. Red dots indicate a ray intersecting a lens interface. When a ray intersects the outermost surface of the lens, the intersection point is marked with an orange circle. When a ray exits the lens the intersection point is additionally marked with a smaller yellow circle. Any point marked with an X indicates that the ray did not properly enter or exit the lens. Either the ray was incident on the bottom hemisphere, or it exited from the top hemisphere, and therefore must be discarded before further analysis. This graphical representation of ray trajectories allows us to track each ray individually by visual inspection to confirm that surface interactions are properly implementing Snell's law. Additionally, we are able to see any potential extraneous rays that might be created due to large angles of incidence at a lens interface.
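To make the per-intersection update concrete, here is a hedged \textsc{python} sketch of Equation~\ref{Snell} together with the exit test of Equation~\ref{dot_pr}. The names are ours, and the rule for splitting amplitude between the reflected and refracted rays is left abstract, since the text does not give an explicit formula (Fresnel coefficients would be one natural choice).

\begin{verbatim}
import numpy as np

AMPLITUDE_CUTOFF = 0.01   # rays below this amplitude are dropped

def snell(d, n_hat, n1, n2):
    # d: incoming unit direction; n_hat: unit normal oriented so
    # that cos(theta_1) = n_hat . (-d) >= 0; n1, n2: refractive
    # indices on the incoming and outgoing sides.
    cos1 = np.dot(n_hat, -d)
    reflected = d + 2.0 * cos1 * n_hat
    radicand = 1.0 - (n1 / n2)**2 * (1.0 - cos1**2)
    if radicand < 0.0:                 # total internal reflection
        return reflected, None
    cos2 = np.sqrt(radicand)
    refracted = (n1 / n2) * d + ((n1 / n2) * cos1 - cos2) * n_hat
    return reflected, refracted

def exits_bottom(p, d):
    # Equation dot_pr at the outer surface (r = 1, lens at the
    # origin): positive p . d means the ray is leaving the lens;
    # keep it only if it leaves the bottom hemisphere (y < 0).
    return np.dot(p, d) > 0.0 and p[1] < 0.0
\end{verbatim}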
To plot the image produced by rays exiting the bottom hemisphere of the lens, the three-dimensional position of each exiting ray is projected onto a two-dimensional plot and weighted with the square of the amplitude to display the intensity associated with each ray. In this way we are able to create a two-dimensional map of the bottom hemisphere of the lens and obtain a qualitative representation of the position and intensity of the resulting image. We are able to infer the position of halos around the focal point that are due to the discrete lens layers. For a more quantitative examination of the image made on the bottom hemisphere of the lens, we generate a pixelated intensity map. The intensity map is a two-dimensional projection of the three-dimensional positions and the intensities that are stored when the rays exit the bottom of the lens. The map is centered on the focal point of the incident wavefront of light rays and plots a central region that contains a user-provided fraction of the total output intensity. The central region is then normalized to 1 in order to show the relative intensity of the surrounding halos. Halos with sufficient intensity will cause contamination in the case of multiple bright sources and will distort the final image, reducing the quality of the Luneburg lens. \subsection{Optimizing lens parameters} \begin{figure}[!t] \centering \includegraphics[width=13cm,height=13cm,keepaspectratio=true]{luneburg_enc_int_40lyr_multiexp_1024rays_grid} \caption{\small{The enclosed (cumulative) intensity of exit rays versus the angle $\theta$ away from the focal point of the initial wavefront for a 40 layer Luneburg lens with 1024 rays. $\theta$ is plotted in log scale to emphasize the region close to the focal point. Colored lines represent the power-law exponent $i$ in Equation~\ref{Lun_adj}.}} \label{fig_encint} \end{figure} For analysis of the Luneburg lens imaging quality we use a wavefront of 1024 incident parallel rays uniformly gridded in the $x-z$ plane as the initial conditions of a representative model for observing a single star. To obtain the best image quality for the lens we consider the effects of changing the number of lens layers and adjusting the exponent of the Luneburg power law in Equation~\ref{Lun_simp} such that we can manually designate a different exponent $i$ to create a new equation for refractive indices. \begin{equation} n = (2-r^{2})^{i}. \label{Lun_adj} \end{equation} Our chosen metric for image quality is $\theta_{50\%}$, the angular radius of a circle centered on the chief ray within which 50\% of the total output intensity is contained. We adjust the other parameters of the lens to minimize this angle. The enclosed intensity as a function of the angle away from the focal point is analyzed for multiple values of the power-law exponent $i$ over the range 0.30 $\leq i \leq$ 0.70. In Figure~\ref{fig_encint} it is clear that $i = 0.55$ provides the best image quality in the 40 layer case, as it contains the most intensity at small angles away from the focus. At larger angles we expect the enclosed intensity for all values of $i$ to follow roughly the same trend since the intensity contribution from the outer halos is due mostly to extraneous rays that are not sufficiently refracted through the lens and exit quickly without intersecting many lens interfaces. We test the same range of power-law exponents for 5 layer, 10 layer, 20 layer, and 40 layer configurations of the lens.
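As a sketch of how this metric can be computed from the stored exit rays, the following \textsc{python} fragment (our own minimal version, assuming exit positions are unit vectors and the nominal focal direction is known) returns the angle enclosing a given intensity fraction:

\begin{verbatim}
import numpy as np

def theta_enclosing(exit_points, amplitudes, focus, frac=0.5):
    # Angle (degrees) from the focal direction enclosing `frac`
    # of the total output intensity; intensity = amplitude**2.
    intensity = np.asarray(amplitudes)**2
    cosang = np.clip(np.asarray(exit_points) @ np.asarray(focus),
                     -1.0, 1.0)
    theta = np.degrees(np.arccos(cosang))
    order = np.argsort(theta)
    cum = np.cumsum(intensity[order]) / intensity.sum()
    return theta[order][np.searchsorted(cum, frac)]

# theta_50 for a straight-down wavefront focused at the nadir:
# theta_enclosing(points, amps, focus=np.array([0.0, -1.0, 0.0]))
\end{verbatim}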
For each layer configuration, we calculate $\theta_{50\%}$ and record the value for each of the Luneburg power-law exponents. In this way we are able to use both parameters in a single analysis to determine which combination of lens layers and power-law exponents will result in the smallest angle for 50\% enclosed intensity. \begin{figure}[!b] \centering \includegraphics[width=13cm,height=13cm,keepaspectratio=true]{luneburg_theta_vs_exp_multilyr_multiexp_1024rays_grid} \caption{\small{The angle $\theta_{50\%}$ containing 50\% of the total output intensity of exit rays versus the power-law exponent $i$ (Equation~\ref{Lun_adj}) for models with 1024 rays. The colored lines represent different numbers of lens layers.} } \label{fig_angexp} \end{figure} From Figure~\ref{fig_angexp} we can see that for increasing lens layers the optimum angle improves as the Luneburg power law tends toward $i = 0.55$. While the 40 layer configuration provides the sharpest focusing power, the 20 layer configuration performs nearly as well for $i=0.55$. These results are consistent with the expectation that as the number of discrete layers increases and the refractive index $n(r)$ for the lens approaches a continuous gradient, the focusing power should improve. We find that the mathematical description for $n(r)$ is best for $i=0.55$ in Equation~\ref{Lun_adj}, which differs from the theoretical exponent $i = 0.5$ predicted by Luneburg in Equation~\ref{Lun_simp}. We also test the effect of the amplitude cutoff against the completeness of the ray tracing routine. Snell's law is implemented when a ray intersects a lens interface. After the implementation of Snell's law, if the amplitude of the reflected ray or the refracted ray falls below 1\% of the amplitude of the original input ray, then the ray is dropped from the ray tracing routine. An amplitude limit of 1\% corresponds to an intensity limit of 0.01\%. For the 40 layer Luneburg lens configuration, we set the amplitude cutoff at 10\%, 1\%, and 0.1\% and run the model for multiple values of the power-law exponent $i$ over the range 0.30 $\leq i \leq$ 0.70. \begin{figure}[!t] \centering \includegraphics[width=13cm,height=13cm,keepaspectratio=true]{luneburg_int_vs_exp_multilyr_multiexp_1024rays_grid_multiampcut} \caption{\small{The total output intensity of bottom-exiting rays versus the power-law exponent $i$ (Equation~\ref{Lun_adj}) for a 40 layer Luneburg lens model with 1024 rays, where the three lines represent amplitude cutoffs of 10\% (solid), 1\% (dashed), and 0.1\% (dotted). The inset shows a zoomed region for $i=0.55$. There is little improvement between 1\% and 0.1\% amplitude cutoffs. Note that the $y$-axis values differ from the raw amplitude values listed in Table~\ref{tab_eff} because here we plot ray intensity rather than amplitude.} } \label{fig_angexp_1pct} \end{figure} In Figure~\ref{fig_angexp_1pct} we plot the total intensity output of the Luneburg lens versus the power-law exponent $i$ for the three different amplitude cutoff values. There is a significant improvement between the 10\% and 1\% amplitude cutoffs for most values of $i$. Lowering the amplitude cutoff to 0.1\% results in little improvement but a significant increase in the runtime of the code. Table~\ref{tab_eff} compares the amplitude completion of the ray tracing routine for the 40 layer lens configuration.
\begin{table}[!b] \begin{center} \resizebox{14cm}{!}{ \begin{tabular}{lrrrrr} \multicolumn{6}{c}{Ray amplitude completion in the Luneburg lens model} \\ \hline \hline Cutoff & Total input & Total output & Completion & \multicolumn{2}{c}{Amplitude dropped} \\ \hline & & & & Due to cutoff & Exit top hemisphere \\ 10.0\% & 740.00 & 669.44 & 90.5\% & 70.56 & 0.00 \\ 1.0\% & 740.00 & 710.34 & 96.0\% & 26.28 & 3.38 \\ 0.1\% & 740.00 & 724.31 & 97.9\% & 9.92 & 5.77 \\ \hline \end{tabular} } \end{center} \caption{Amplitude completion of a 40 layer Luneburg lens with respect to the total amplitude of rays incident on the lens. Calculations are made with a Luneburg power-law exponent $i$ = 0.55 and 1024 rays in the initial wavefront. Note that not all rays enter the lens in the uniformly gridded initial wavefront.} \label{tab_eff} \end{table} The effect of decreasing the amplitude cutoff is significant for a reduction from 10\% to 1\%, where we see the completion increase from 90.5\% to 96.0\%. The completion only improves by 2\% when the amplitude cutoff is reduced from 1\% to 0.1\%. We conclude that there is little improvement for amplitude cutoff values below 1\%, and we can successfully model a Luneburg lens with a 1\% amplitude cutoff without loss of significant ray data. The analysis performed in this section suggests that the 40 layer Luneburg lens with a power-law exponent $i = 0.55$ from Equation~\ref{Lun_adj} is the optimal set of parameters for best image quality. \subsection{Observing Quality} We are able to determine the resolving power of the Luneburg lens using $\theta_{50\%}$. Analysis of the 40 layer lens configuration with power-law exponent $i=0.55$ and 1024 uniformly gridded initial input rays gives a value $\theta_{50\%}$ = 1.6 degrees. This is the angle as measured from the center of the focal point, meaning that the value must be doubled to obtain the lens imaging resolution, giving the optimized Luneburg lens an angular resolution of $\theta_{res}$ = 3.2 degrees. The lens should be capable of resolving all bright stars that are separated by more than 3.2 degrees on the sky. The central region and any rings containing some fraction of the output intensity must be corrected for the area of the region. We calculate the area of the central region as $\pi r_{50\%}^{2}$ and the area of each ring as $\pi(r_{outer}^{2} - r_{inner}^{2})$ and proceed to divide the summed intensity in each region by the area in which it is contained. After correcting for area, we further normalize by dividing each ring by the area-corrected intensity of the central region out to $r_{50\%}$. By doing so, we are able to use the central region containing 50\% of the total output intensity as a normalized reference point and can investigate the halo intensities relative to this central point. \begin{figure}[!b] \centering \includegraphics[width=8cm,height=8cm,keepaspectratio=true]{luneburg_int_map_40lyr_0d55exp_1024rays_grid} \includegraphics[width=8cm,height=8cm,keepaspectratio=true]{luneburg_int_map_40lyr_0d55exp_1024rays_grid_hist} \caption{\small{({\it Left}) Pixelated intensity map showing the relative intensity of halos due to the discrete nature of the stepped Luneburg lens. Halos are plotted in magnitudes with respect to a central region enclosing 50\% of the output intensity for a 40 layer Luneburg lens with power-law exponent $i=0.55$ and 1024 rays. ({\it Right}) Histogram showing the location and relative magnitude of halos.
The dashed black line indicates the sky brightness background.} } \label{fig_imgmaphist} \end{figure} In Figure~\ref{fig_imgmaphist} we see that the majority of halos have very low relative intensity. Beyond the normalized $r_{50\%}$ central region, most halos range from 4--15 magnitudes relative to the central region, and are unlikely to diminish imaging quality by overlap or smearing. Halos from the first 2--3 rings are stronger, at magnitudes of only 2--4 relative to the central region, which may affect the image quality if there are other nearby bright stars. Any halos with magnitude $>$13 are below our designated sky brightness background and would not affect observations. \subsection{Feasibility analysis} When considering the observational limitations of the Luneburg lens, we are interested in star proximity and the ability of the lens to properly resolve stars that are sufficiently close to one another. With an image resolution of $\theta_{res}$ = 3.2 degrees we expect to lose resolving capability for stars that are separated by less than 3.2 degrees on the sky. Our observations focus on the night sky as seen by the naked eye, therefore we do not consider stars that have apparent magnitudes $>6$. Of particular interest is the separation of $<$6 magnitude stars with respect to the very brightest stars in the night sky. A bright star in close proximity to other stars will be the most difficult to resolve, as the high degree of stellar brightness will create strong halos that diminish resolving ability in addition to the close angular separation. Using the Yale Bright Star Catalog~\cite{Hoffleit91}, we compile a list of all stars with magnitude $<$1 and search for all stars with magnitude $<$6 that are within 3 degrees of angular separation from the magnitude $<$1 stars. \begin{table}[!t] \begin{center} \begin{tabular}{llrrrc} \multicolumn{6}{c}{Close-proximity bright stars} \\ \hline \hline Name & HD identifier & RA (J2000) & DEC (J2000) & $V_{\rm mag}$ & Stellar neighbors \\ & & [hh;mm;ss] & [dd;mm;ss] & & $V_{\rm mag} < 6$; $\theta_{\rm sep} < 3^{\circ}$ \\ \hline Alp Eri & 10144 & 01 37 42.9 & -57 14 12 & 0.46 & 2 \\ 87 Alp Tau & 29139 & 04 35 55.2 & +16 30 33 & 0.85 & 15 \\ 13 Alp Aur & 34029 & 05 16 41.4 & +45 59 53 & 0.08 & 1 \\ 19 Bet Ori & 34085 & 05 14 32.3 & -08 12 06 & 0.12 & 6 \\ 58 Alp Ori & 39801 & 05 55 10.3 & +07 24 25 & 0.50 & 5 \\ Alp Car & 45348 & 06 23 57.1 & -52 41 45 & -0.72 & 4 \\ 9 Alp CMa & 48915 & 06 45 08.9 & -16 42 58 & -1.46 & 8 \\ 10 Alp CMi & 61421 & 07 39 18.1 & +05 13 30 & 0.38 & 4 \\ 67 Alp Vir & 116658 & 13 25 11.6 & -11 09 41 & 0.98 & 2 \\ Bet Cen & 122451 & 14 03 49.4 & -60 22 23 & 0.61 & 1 \\ 16 Alp Boo & 124897 & 14 15 39.7 & +19 10 57 & -0.04 & 2 \\ Alp 1 Cen & 128620 & 14 39 35.9 & -60 50 07 & -0.01 & 5 \\ 21 Alp Sco & 148478 & 16 29 24.4 & -26 25 55 & 0.96 & 3 \\ 3 Alp Lyr & 172167 & 18 36 56.3 & +38 47 01 & 0.03 & 6 \\ 53 Alp Aql & 187642 & 19 50 47.0 & +08 52 06 & 0.77 & 8 \\ \hline \end{tabular} \end{center} \caption{List of all stars with magnitude $<$1 taken from the Yale Bright Star Catalog~\cite{Hoffleit91} and their $<$6 magnitude neighbors within 3 degrees angular separation.} \label{tab_brightstars} \end{table} For the 15 brightest stars with magnitude $<$1, there are a total of 72 neighboring stars with magnitude $<$6 and a separation of $<$3 degrees. This indicates that these stars cannot be resolved by the lens. If we reduce the magnitude threshold for neighboring stars to $<$5, then the number of unresolved close-proximity stars drops to 26.
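The catalog search itself reduces to an angular-separation filter; a compact \textsc{python} sketch is given below (our own helper, with catalog parsing and magnitude pre-filtering left to the caller):

\begin{verbatim}
import numpy as np

def ang_sep_deg(ra1, dec1, ra2, dec2):
    # Angular separation in degrees (all inputs in degrees),
    # via the spherical law of cosines.
    ra1, dec1, ra2, dec2 = map(np.radians, (ra1, dec1, ra2, dec2))
    cossep = (np.sin(dec1) * np.sin(dec2)
              + np.cos(dec1) * np.cos(dec2) * np.cos(ra1 - ra2))
    return np.degrees(np.arccos(np.clip(cossep, -1.0, 1.0)))

def count_neighbors(bright, faint, max_sep=3.0):
    # bright, faint: arrays of (ra_deg, dec_deg) rows, pre-filtered
    # to Vmag < 1 and Vmag < 6 respectively; a bright star present
    # in `faint` should be excluded before counting.
    return [int(np.sum(ang_sep_deg(b[0], b[1],
                                   faint[:, 0], faint[:, 1])
                       <= max_sep))
            for b in bright]
\end{verbatim}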
Furthermore, the quality of the image might be affected by the large difference in magnitude between the primary bright star and its higher magnitude neighbor(s). The magnitudes of the halos closest to the focal point ($<$4) are on par with the magnitudes of the neighboring stars. While we acknowledge that the proximity of two moderately low magnitude stellar neighbors is still a potential source of error in the resolving ability of the Luneburg lens, we believe that close pairs of a bright (low magnitude) star and much fainter (high magnitude) neighbors represent the worst case scenario for obtaining good image quality. The Luneburg lens is likely to lose accurate data for up to 72 stars at $<$6 magnitude, which we consider to be an acceptable level of performance for all-sky imaging. \begin{table}[!b] \begin{center} \begin{tabular}{lc} \multicolumn{2}{c}{Detector specifications and Luneburg lens size} \\ \hline \hline Pixel length & $1.75 \times 10^{-3}$ mm \\ Pixel resolution & $2592 \times 1944$ (5 MP) \\ CCD dimensions & 4.54 mm $\times$ 3.39 mm \\ Pixels/arcmin & 10 \\ Physical length $L$ & $1.75 \times 10^{-2}$ $\frac{\rm mm}{\rm arcmin}$ \\ Lens radius $r$ & 60 mm \\ \hline \end{tabular} \end{center} \caption{Technical specifications of the iPhone 4 camera~\cite{GSMarena,TechSpec} are used to calculate the radius of a Luneburg lens with 10 pixels per arcminute chosen as the minimum detecting quality.} \label{tab_camera} \end{table} One of the considerations of manufacturing such a lens is cost. We aim to keep lens production and implementation inexpensive. For this reason we seek to use common consumer camera arrays as detectors to be placed on the bottom hemisphere of the lens for recording observations. Using the technical specifications of the iPhone 4 camera~\cite{GSMarena,TechSpec} as an example, we are able to approximate the physical size of the Luneburg lens. \begin{equation} \frac{L}{2 \pi r} = \frac{\theta}{360 \ \mathrm{deg}} \label{lens_radius} \end{equation} We choose a relationship between pixels and angular separation -- the example given here is 10 pixels per arcminute -- to set the limit for minimum detecting quality. Given the pixels per arcminute and pixel edge length, we calculate the value for the physical length $L$ per arcminute on the detector. Using Equation~\ref{lens_radius} with $\theta = 1 \ {\rm arcmin} = 0.0167 \ {\rm degrees}$, we are able to calculate the corresponding lens radius $r = 360 \, L / (2 \pi \theta) \approx 60$ mm, as seen in Table~\ref{tab_camera}. One factor not included in this feasibility analysis is the effect of the Moon on the lens, such as image saturation during a full Moon. A correctly modeled lunar brightness profile, which can be approximated as azimuthally symmetric, could be used to subtract away moonlight collected during a full Moon or other bright phases. Thus, we do not consider the presence of the Moon as a negative factor for observing capabilities as it can be corrected via a model lunar brightness profile. \begin{figure}[!b] \centering \includegraphics[width=13cm,height=13cm,keepaspectratio=true]{luneburg_enc_int_40lyr_0d55_1024rays_grid} \caption{\small{Enclosed intensity as a function of angle away from the focal point for an optimized Luneburg lens with 40 layers and a Luneburg power-law exponent of $i$ = 0.55 for 1024 rays.} } \label{fig_encintopt} \end{figure} \section{CONCLUSIONS} The stepped Luneburg lens is a compact, potentially inexpensive camera with no moving parts suitable for all-sky imaging in remote locations.
Using the \textsc{python} programming language, we develop our own ray tracing code that is able to track thousands of rays individually and record their position, direction, and amplitude upon exiting the lens. We successfully model a stepped Luneburg lens with the ray tracing routine and optimize its parameters for the best imaging quality. After the analysis of different combinations of layer numbers, Luneburg power-law exponents for the refractive index, and amplitude cutoff limits, we conclude that the optimal configuration is a lens with 40 layers and a power-law exponent of $i=0.55$. Figure~\ref{fig_encintopt} shows the enclosed intensity as a function of angle for the optimized Luneburg lens. Decreasing the amplitude cutoff limit does not significantly improve the efficiency or the resolution of the lens below a threshold of 1\%; increasing the amplitude cutoff to 10\% results in significantly poorer lens performance. The full image resolution of the lens is $\theta_{res} = 3.2$ degrees for the optimized Luneburg lens parameters. Based on stellar data from the Yale Bright Star Catalog for the magnitude and proximity of bright stars, we conclude that there will be 72 stellar neighbors with magnitude $<$6 and an angular separation of $<$3 degrees that the Luneburg lens will have difficulty properly resolving. We consider this to be an adequate level of performance for all-sky observing. Additionally, we consider the physical limitations of the lens based on the type of camera implemented for detection. We take as an example the CCD chip from the iPhone 4 camera and use its specifications to determine the physical size of the Luneburg lens, which we calculate to be $r$ = 6 cm. Most stars in the night sky brighter than magnitude 6 could be monitored photometrically using a Luneburg lens. The lens could use simple consumer cameras as detection devices for observations, with constraints on the size depending on the desired number of pixels per arcminute on the detector. Factors not considered here, but potentially important, are the feasibility of fabricating a Luneburg lens with as many as 40 glass layers; potential manufacturing errors such as the decentering of lens layers or surface aberrations; and the explicit manufacturing costs of layered glass with different refractive indices.
\section{Introduction} We suppose that we have observed data $(Y_1,X_1),\ldots, (Y_n,X_n)$ from a strictly stationary process $(Y_k,X_k)_{k\in \mathbb{Z}}$ that are assumed to follow a general functional linear regression model of the form \begin{equation}\label{mod} Y_k=\varrho(X_k)+\varepsilon_k. \end{equation} Here $Y_k=(Y_k(t)\colon t\in [0,1])$ is a curve in a normed function space $H_2$, the covariates $X_k$ take values in a normed space $H_1$ and are distributed so that $X_k$ is independent of the model error $\varepsilon_k$, and $\varrho$ is a linear operator mapping $H_1$ to $H_2$. For example, $X_k$ might be a single curve living in the same space as the response, in which case \eqref{mod} describes simple linear function-on-function regression. This setting also includes functional autoregressive models \citep{bosq:2000} when $X_k = Y_{k-1}$. Generally though, $X_k$ might be comprised of several curves, a mixture of curves and scalar covariates, etc., and more detailed assumptions on the nature of the space $H_2$ will follow. Suppose $(Y,X)$ is a generic pair following \eqref{mod}. The goal of this paper is to introduce and study methods to consistently estimate the conditional distribution of $Y$ given $X$, $P(Y\in A|X)$, for some specific sets of interest $A\subset H_2$. By choosing appropriate sets $A$, one may make inference on a wide range of interesting properties of $Y$: \begin{enumerate}[(a),topsep=5pt,itemsep=2pt] \item Often we are interested in some transformation $T$ of the response, and then might consider sets of the form $A=\{y\colon T(y)\in B\}$. For instance, when $T(y)=\lambda(\{t : y(t) \in B\})$, with $\lambda$ denoting standard Lebesgue measure on $[0,1]$, $A$ contains curves that occupy a range of interest for a certain amount of time. More generally, when $T(y)$ is a scalar, then we are often interested in the conditional distribution function $$F(z|X)=P(Y\in T^{-1}(-\infty,z]|X).$$ \item Similarly, when $Z=T(Y)$ is again a scalar, for $p\in (0,1)$, we may wish to estimate the conditional quantile function $q_p(Z|X):=\inf\{z\in \mathbb{R}\colon F(z|X)\geq p\}$. In financial applications, when $p$ is close to zero or one, estimating $q_p(Z|X)$ is related to Value-at-Risk (VaR) estimation. See \cite{kato:2012} and \cite{sang:2020}. \item We might wish to choose $A$ such that it yields a prediction set for $Y$, so that $P(Y\in A_p|X)=p$ for a given $p\in (0,1)$. Estimating $P(Y\in A_p|X)$ can be used to appropriately calibrate $A_p$. See \cite{goldsmith:2013}, \cite{choi:2016}, \cite{liebl:2019}, \cite{hyndman:shang:2009}, and \cite{paparoditis:2020} for a review of methods for constructing prediction sets for functional responses and parameters. \end{enumerate} At this point, when referring to examples (a) and (b), an important remark is necessary. In the case where $Z=T(Y)$ is scalar, it might appear more natural to directly employ some scalar-on-function regression with response variable $Z$. \emph{However, one of the main strengths of the approach we pursue, and a clear distinction from competing methods, is that we first model the entire response curve and then extract the feature of interest. This has the advantage that we can harness the full information contained in the functional responses when estimating the conditional distribution of $Z$.} Aside from interest in the general problem, this work was primarily motivated by the statistical challenge of forecasting aspects of response curves $Y_k$ describing daily electricity prices.
The specific data that we consider consists of hourly electricity prices, demand, and wind energy production in Spain over the period from 2014 to 2019, which includes observations from 2191 days (the data are available at \texttt{www.esios.ree.es}). We project the hourly data onto a basis of 18 twice differentiable B-splines to construct daily price, demand, and wind energy production curves, as illustrated in Figure~\ref{fig:spanishdata}. The price of electricity naturally fluctuates based on supply and demand, and exhibits daily, weekly, and yearly seasonality. The rather predictable variation in demand does not influence the price as much as surges in wind energy production, especially if they occur on days with weak demand. Letting $Y_k$ denote the price curves and $X_k$ the vector of the demand and wind curves, both adjusted for yearly seasonality and trends, we then model $Y_k$ using an FAR(7) model with exogenous variables \begin{equation}\label{e:spainfarx} Y_k = \sum_{i=1}^7 \Psi_i Y_{k-i} + \varrho X_k + \varepsilon_k, \end{equation} where $\Psi_1,...,\Psi_7$ denote autoregressive operators; see \cite{gonzalez:munoz:perez:2018}. The details of this are explained in Section~\ref{s:realdata:sim}, but for now it suffices to acknowledge that this is a regression model of the form \eqref{mod}. For such electricity price curves, the likelihood of their falling within sets of the following type is of particular interest: \begin{Example}[Level sets]\label{E:levelsets} Let \[ A_{\alpha,z} = \big\{ y\in H_2\colon \lambda(t\colon y(t)>\alpha)\leq z \big\} \] for some $z\in [0,1]$ and $\alpha\in \mathbb{R}$. $A_{\alpha,z}$ contains curves that stay above a threshold $\alpha$ for at most a time $z$. \end{Example} Forecasting whether price or demand curves will spend prolonged periods of time above certain levels is useful in anticipating volatility in continuous intraday electricity markets, and planning for peak loads \citep{vilar:2012}. This falls within the scope of the general problem we consider. \begin{figure}[t] \includegraphics[width=\textwidth, trim=0 18 0 2mm,clip]{Spain-price-2014-11-z-0point5.pdf} \includegraphics[width=\textwidth, trim=0 29 0 1mm,clip]{Spain-demand-2014-11.pdf} \includegraphics[width=\textwidth, trim=0 18 0 1mm,clip]{Spain-wind-2014-11.pdf} \includegraphics[width=\textwidth, trim=3 5 0 0mm,clip]{Spain-price-prob-2014-11.pdf} \caption{Spanish electricity data on price, demand, and wind energy production during two weeks in November 2014. Price curves are colored blue or red according to whether or not they lie in the level set $\{y \in H_2: \lambda(t: y(t) > 50) \leq 0.5 \}$. The bar plot on the bottom shows the estimated conditional probability for $Y_k$ to lie in this set, with the decision threshold $1/2$ indicated by a dotted line; whether the event occurred is indicated by black dots.} \label{fig:spanishdata} \end{figure} The literature in functional data analysis on regression models of the form \eqref{mod} is vast, although the most frequent problems considered regarding consistent estimation in model \eqref{mod} are (i) how to find a consistent estimator $\hat\varrho_n$ of $\varrho$, and (ii) how to forecast consistently, i.e.\ to guarantee that $\hat\varrho_n(X)-\varrho(X)\to 0$ suitably in probability.
Moreover, the majority of the literature on the topic of function-on-function linear regression concentrates on the setting when $H_1=H_2=L^2[0,1]$, the separable Hilbert space of square integrable functions on $[0,1]$, equipped with its standard inner product and norm. \cite{ramsay:silverman:2006} for example proposes a double truncation scheme based on functional principal component analysis to estimate $\varrho$ in this setting, and \cite{mas:2007} and \cite{imaizumi:kato:2018} derive a convergence rate for $\| \hat\varrho_n - \varrho \|_\mathcal{S}$ in a ``single-truncation'' estimation scheme based on an increasing (in the sample size) number of principal components, where here $\| \cdot \|_\mathcal{S}$ denotes the Hilbert--Schmidt norm. Similar consistency results for the resulting forecasts in functional linear regression can be found in \cite{crambes:mas:2013}, and under general stationarity conditions and in the FAR setting in \cite{hormann:kidzinski:2015} and \cite{aue:2015}. Estimating the operator $\varrho$ can be viewed as a special case of estimating the conditional mean $E(Y|X)$, and this general problem has also been extensively considered; see \cite{Chiou04functionalresponse}, \cite{ferraty:2012:fonfregressionbootstrap}, and \cite{wang:chiou:muller:review}. The problem of estimating the conditional distribution of $Y$ given $X$ has been comparatively far less studied. Numerous methods have been proposed to estimate the conditional distribution of a scalar response $Y$ with a functional covariate $X$, including \cite{chen:muller:2012:funquantile}, \cite{kato:2012}, \cite{yao:suechee:wang}, \cite{wang:chiou:muller:review}, and \cite{sang:2020}, who propose estimators based on quantile regression, and \cite{ferraty:vieu:2006}, who propose Nadaraya--Watson style kernel-smoothed estimators. Estimating the conditional distribution of $Y$ when $Y$ and $X$ take values in a general function space is largely unexplored to our knowledge, even in the context of model \eqref{mod}. \cite{fernandez:guillas:manteiga:2005:bootstrap} and \cite{paparoditis:2020} develop bootstrap procedures based on functional principal component analysis to produce prediction sets in the context of forecasting with Hilbertian FAR models, which can be viewed as a special case of this problem. For functional data taking values in $L^2[0,1]$, \cite{chen:muller:2016} and \cite{fan:muller:2021} develop methods for estimating the conditional distribution of $Y$ given $X$ assuming $X$ and $Y$ are jointly Gaussian, and that the conditional distribution of the response has sample paths satisfying natural differentiability conditions. A technical problem that is encountered in consistently estimating the conditional probability $P(Y\in A | X)$ is that one could at most expect consistent estimation for continuity sets of the distribution of the response, i.e. sets $A$ for which $P(Y\in \partial A )=0$. This property evidently depends strongly on the choice of the space $H_2$, as well as the norm that it is equipped with. For many interesting examples, the metric on the space $L^2[0,1]$ is too weak to allow for meaningful continuity sets $A$. An illustrative example is given by simple prediction band sets of the form $A=\{y\colon \lambda( t : a(t)<y(t)<b(t))=1\}$, where $a$ and $b$ are continuous functions on $[0,1]$, in which case $\partial A=A$ when $A$ is viewed as a subset of $L^2[0,1]$.
More appropriate spaces to handle many interesting examples in functional data analysis involving path properties of the response, such as level sets, are the spaces $C[0,1]$, the space of continuous functions on $[0,1]$ equipped with the supremum norm, or the Sobolev spaces equipped with their canonical norms; see \cite{brezis:2010}. For these latter spaces, the problem of estimating, and consistently forecasting with, $\varrho$ has been only lightly studied to date, and in specialized settings; see \cite{pumo:1999}, \cite{RUIZMEDINA:2019:banachFAR}, and \cite{bosq:2000} in the context of FAR estimation. Further, the problem of consistently estimating the conditional distribution $P(Y\in A | X)$ in these settings has not been studied, to our knowledge. We also refer the reader to \cite{dette:kokot:aue:2020} for a review of functional data analysis methods in $C[0,1]$. In this paper, we propose natural procedures to estimate $P(Y\in A |X)$, in which we first estimate $\varrho$ with a suitably consistent estimator $\hat{\varrho}_n$, and then either (i) resample the estimated residuals $\hat{\varepsilon}_{k,n} = Y_k - \hat{\varrho}_n(X_k)$ to estimate $P(Y\in A |X)$ with the empirical distribution of $\hat{\varrho}_n(X) + \hat{\varepsilon}_{k,n}$, or (ii) assuming Gaussianity of the model errors $\varepsilon_k$, we estimate $P(Y\in A |X)$ using simulation by modeling $Y$ conditioned on $X$ as a Gaussian process with mean $\hat{\varrho}_n(X)$, and covariance estimated from the residual sequence $\hat{\varepsilon}_{k,n}$. We establish general conditions on the estimator $\hat{\varrho}_n$ in both the settings when $H_2$ is a separable Hilbert space, and when $H_2$ is $C[0,1]$, such that these procedures will lead to consistent estimation of $P(Y\in A |X)$. Subsequent to this, we define an estimator $\hat{\varrho}_n$ that we show satisfies these conditions under regularity assumptions on the operator $\varrho$, and the process $(Y_k,X_k)_{k\in \mathbb{Z}}$, which allow for serial dependence of both the response and covariates. In the space $H_2=C[0,1]$, we introduce a number of examples of sets $A$ of potential interest, compute their boundaries, and establish under what conditions $P(Y\in \partial A )=0$, which can be non-trivial even when $Y$ is a Gaussian process. In several simulation studies and data analyses, the proposed methods generally outperformed competing methods in which $P(Y\in A |X)$ is estimated using functional logistic regression, functional Nadaraya--Watson estimation, or functional quantile regression. Another advantage, which derives from the simple form of our estimators for $P(Y\in A |X)$, is that they satisfy basic properties of a probability measure, e.g.\ they are monotone in $A$. While this may seem like an obvious requirement, it is not necessarily fulfilled by some competing approaches. The rest of the paper is organized as follows. In Section~\ref{sec-1}, we formally introduce the methods described above to estimate $P(Y\in A |X)$, and present results on their consistency, including results on uniform consistency over monotone families of sets $A$ that are relevant in constructing prediction sets with a specified coverage and quantile function estimates.
These results depend on the properties of the estimator $\hat{\varrho}_n$, and we define an estimator based on functional principal component analysis and a single-truncation scheme, and establish that it leads to consistent estimation of $P(Y\in A |X)$ when $H_2$ is a separable Hilbert space and when $H_2=C[0,1]$ in Section~\ref{s:theoretical}. Section~\ref{s:partialA} presents numerous examples of sets $A$ of interest, and a discussion of their boundaries in $C[0,1]$. A number of competing methods are introduced in Section~\ref{s:realdata:sim}, and these are compared and studied with the proposed methods in several simulations studies and real data illustrations. The proofs of all technical results follow these main sections. \section{Estimation procedures and consistency results}\label{sec-1} We let $\int = \int_{0}^{1}$, and for $f,g \in L^2[0,1]$, we use the notation $\langle f , g \rangle = \int f(t)g(t)dt$ to denote the standard inner product on $L^2[0,1]$, with induced norm $\|\cdot\|_{L^2}^2 = \langle \cdot , \cdot \rangle$. In order to consider path properties of functions in $H_2$, we consider the space $C[0,1]$ equipped with the supremum norm $\|f\|_\infty = \sup_{t\in [0,1]} \big| f(t) \big|$. While $L^2[0,1]$ is a separable Hilbert space, $C[0,1]$ is a Banach space, with their norms satisfying $\|\cdot\|_{L^2} \leq \|\cdot\|_{\infty}$. The space $C[0,1]$ may hence be naturally embedded in $L^2[0,1]$. We use the tensor product notation $\otimes$ to denote the operator $a\otimes b(\cdot) = a\langle \cdot,b\rangle$ if $b$ is viewed as an element of a Hilbert space, and the kernel integral operator with kernel $a\otimes b (t,s) = a(t)b(s)$ if $b$ is viewed as an element of $C[0,1]$. We assume that the covariate space $H_1$ is a Hilbert space with norm $\|\cdot\|_{H_1}$. In order to lighten the notation, and when it is clear from the context, we write $\|\cdot\|$ in place of a specific norm on either the space $H_1$ or $H_2$. In the context of Hilbert spaces, we use $\|\cdot\|_\mathcal{S}$ and $\|\cdot\|_1$ to denote the Hilbert--Schmidt norm and the trace norms of operators, respectively. We assume throughout that the covariates $X_k$ and the model errors $\varepsilon_k$ satisfy the following independence condition, which we do not explicitly state in the results below, but take for granted. \begin{Assumption}\label{a:errorind} In model \eqref{mod}, $E\varepsilon_k = 0$, and $\varepsilon_k$ is independent from $(X_j)_{j\leq k}$ for all $k \in \mathbb{Z}$. \end{Assumption} In order to formally describe the methods we use to estimate $P(Y\in A|X)$, we assume that we may consistently estimate $\varrho$ with an estimator $\hat{\varrho}_n$ based on the sample $(Y_1,X_1),...,(Y_n,X_n)$. Specific conditions on this estimator, and examples satisfying these conditions, will follow. The first method we describe is based on applying an i.i.d.\ bootstrap to the estimated residuals. \medskip \noindent\underline{ Algorithm~1, Residual Bootstrap (abbreviated {\bf boot}):} \begin{enumerate} \item Estimate $\varrho$ in \eqref{mod} with $\hat\varrho_n$. \item Calculate the model residuals $\hat\varepsilon_{k,n} = Y_k-\hat\varrho_n(X_k)$. \item Define the estimator of $P(Y\in A|X)$ as \[ \hat{P}_n^\text{B}(Y\in A|X)=\frac{1}{n}\sum_{k=1}^n \mathds{1}\{\hat\varrho_n(X)+\hat\varepsilon_{k,n}\in A\}. \] \end{enumerate} We show below that this estimator is consistent under quite mild conditions. The algorithm {\bf boot} can be applied without specific distributional assumptions on the errors.
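As a minimal illustration, {\bf boot} can be sketched in \textsc{python} as follows, assuming curves are stored on a common time grid as rows of arrays and that a fitted estimator \texttt{rho\_hat} mapping covariates to predicted curves is available (all names are ours, and in practice $\hat{\varrho}_n$ would come from the estimator of Section~\ref{s:theoretical}); the indicator at the end mirrors the level sets of Example~\ref{E:levelsets}.

\begin{verbatim}
import numpy as np

def p_hat_boot(rho_hat, X_sample, Y_sample, x_new, indicator_A):
    # Residual-bootstrap estimate of P(Y in A | X = x_new):
    # empirical frequency of rho_hat(x_new) + residual lying in A.
    residuals = Y_sample - np.array([rho_hat(x) for x in X_sample])
    y_pred = rho_hat(x_new)
    return float(np.mean([indicator_A(y_pred + eps)
                          for eps in residuals]))

def level_set_indicator(alpha, z):
    # 1{y stays above alpha for at most a fraction z of the time},
    # with Lebesgue measure approximated on the grid.
    return lambda y: float(np.mean(y > alpha) <= z)
\end{verbatim}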
If, however, the model errors $\varepsilon_k$ are thought to be Gaussian processes, then we may take this into account in estimating $P(Y\in A |X)$. As the mean of the model errors is zero by assumption, in this case their distribution is determined by their covariance $\Gamma = \var(\varepsilon_k)$, which is defined by \[ \var(\varepsilon_k) = E\big[ \varepsilon_k \otimes \varepsilon_k \big]. \] The above algorithm may then be adapted as follows: \medskip \noindent\underline{ Algorithm~2, Gaussian process simulation (abbreviated {\bf Gauss}):} \begin{enumerate} \item Estimate $\varrho$ in \eqref{mod} with $\hat\varrho_n$. \item Calculate the model residuals $\hat\varepsilon_{k,n} = Y_k-\hat\varrho_n(X_k)$. \item Compute the empirical covariance operator \begin{align} \widehat\Gamma_{\varepsilon,n} = \frac{1}{n}\sum_{k=1}^n (\hat\varepsilon_{k,n}-\bar{\varepsilon}_{\cdot,n})\otimes(\hat\varepsilon_{k,n}-\bar{\varepsilon}_{\cdot,n}), \label{e:gammahat} \end{align} where $\bar{\varepsilon}_{\cdot,n}=\frac{1}{n}\sum_{k=1}^n \hat\varepsilon_{k,n}$. Let $\hat{\nu}_1 \ge \hat{\nu}_2 \ge \cdots $ denote the ordered eigenvalues of $\widehat\Gamma_{\varepsilon,n}$, with corresponding eigenfunctions $(\hat{\psi}_j)_{j\ge 1}$ satisfying $\widehat\Gamma_{\varepsilon,n}(\hat{\psi}_j) = \hat{\nu}_j \hat{\psi}_j$, $\langle \hat{\psi}_j, \hat{\psi}_\ell \rangle = \mathds{1}\{j = \ell\}$. \item Let $\{Z_i, \; i\ge 1\}$ denote a sequence of i.i.d.\ standard normal random variables, independent of the sample $(Y_1,X_1),...,(Y_n,X_n)$, and define \[ \varepsilon^{(n)} = \sum_{j=1}^{\infty}\hat{\nu}_j^{1/2} Z_j \hat{\psi}_j. \] Note that conditionally on the sample, in particular on $\widehat\Gamma_{\varepsilon,n}$, $\varepsilon^{(n)}$ is a Gaussian process with mean zero and covariance operator $\widehat\Gamma_{\varepsilon,n}$. Define the estimator of $P(Y\in A|X)$ as \begin{align}\label{eq:gprob} \hat{P}^\text{G}_{n}(Y\in A|X) =P(\hat\varrho_n(X)+\varepsilon^{(n)}\in A|X). \end{align} The right hand side above can be approximated by Monte-Carlo simulation. To do so, generate an i.i.d.\ sample conditionally on $\widehat\Gamma_{\varepsilon,n}$, $(\varepsilon_k^{(n)})_{k\geq 1}$, distributed as $\varepsilon^{(n)}$, by simulating independent standard Gaussian sequences $\{Z_{i,k}, \; i\ge 1\}_{k\ge1}$ and setting \[ \varepsilon^{(n)}_k = \sum_{j=1}^{\infty}\hat{\nu}_j^{1/2} Z_{j,k} \hat{\psi}_j. \] The right hand side of \eqref{eq:gprob} can be estimated, for a large $M$, by \[ \frac{1}{M}\sum_{k=1}^M \mathds{1}\{\hat\varrho_n(X)+\varepsilon_k^{(n)}\in A\}. \] \end{enumerate} \begin{Remark} {\rm The scaling $1/n$ in the definition of $\widehat\Gamma_{\varepsilon,n}$ does not take into account the degrees of freedom $T_n$ lost in the estimation of the regression operator $\varrho$. It has thus been advocated, for example in \cite{crambes:2016}, to instead divide by $n - T_n$, where $T_n$ is related to the dimension of the dimension-reduction technique used in estimating $\hat{\varrho}_n$. If $ET_n = o(n)$, as is the case for most estimation approaches, the resulting scaling difference is asymptotically negligible. Some authors also propose splitting the sample and estimating the regression operator and the noise covariance operator on separate parts of the sample in order to reduce the bias of the estimator $\widehat\Gamma_{\varepsilon,n}$; see \cite{crambes:mas:2013}. } \end{Remark} \begin{Remark} {\rm Algorithm {\bf Gauss} can be extended to other parametric distributions of the noise.
A notable example of this is the class of infinite-dimensional elliptical distributions, where $\varepsilon_k = \Xi_k \varepsilon^\prime_k$ for two independent random variables: $\varepsilon^\prime_k \in H_2$, which is Gaussian, and $\Xi_k \geq 0$, which follows a known univariate parametric distribution. The following investigation can easily be adapted to this setting. For details on elliptical distributions of functional data, we refer to \cite{boente:2014}. } \end{Remark} We now aim to establish consistency results for these algorithms. In order to keep the results as general as possible and allow for different estimators of $\varrho$, these results are stated in terms of the following consistency properties of $\hat{\varrho}_n$. \begin{Assumption}\label{a:consist} The estimator $\hat\varrho_n$ is such that \begin{enumerate}[(a),topsep=0pt,itemsep=0pt] \item \label{a:consistOut} its out-of-sample prediction is consistent, i.e.\ if $X \stackrel{d}{=} X_1$, and is independent from the sample, then \[ \|\hat\varrho_n(X)-\varrho(X)\| \stackrel{P}{\to} 0, \quad n \to \infty. \] \item \label{a:consistIn} its in-sample prediction is consistent, i.e.\ let $K_n$ be independent from the sample and uniformly distributed on $\{1,\dots,n\}$, then \[ \|\hat\varrho_n(X_{K_n})-\varrho(X_{K_n})\| \stackrel{P}{\to} 0, \quad n \to \infty. \] \end{enumerate} \end{Assumption} In order to establish the consistency of the algorithm {\bf Gauss}, we additionally need conditions on the estimator of the covariance operator of the model errors defined by \eqref{e:gammahat}. We state two conditions depending on whether $H_2$ is a separable Hilbert space, or $H_2 = C[0,1]$. \begin{Assumption}\label{a:consist2} $H_2$ is a separable Hilbert space. The estimator $\hat\varrho_n$ is such that \[ E \Bigg\| \frac 1n \sum_{k=1}^n (\hat\varrho_n - \varrho) X_k \otimes X_k (\hat\varrho_n - \varrho)^* \Bigg\|_1 \to 0, \quad \mbox{as } n\to\infty. \] \end{Assumption} \begin{Assumption}\label{a:consist2C} ~ \begin{enumerate}[(a),topsep=0pt,itemsep=0pt] \item $H_2 = C[0,1]$. The estimator $\hat\varrho_n$ is such that \[ \sup_{t,s\in[0,1]} \Bigg| \frac 1n \sum_{k=1}^n (\hat \varrho_n - \varrho)(X_k)\otimes (\hat \varrho_n - \varrho)(X_k)(t,s) \Bigg| \stackrel{P}{\to} 0, \quad n \to \infty. \] \item The estimated variance of the model errors, \[ V_n^2(t,s) = \var\big( \varepsilon^{(n)}(t) - \varepsilon^{(n)}(s) \big| \widehat\Gamma_{\varepsilon,n} \big) = \widehat\Gamma_{\varepsilon,n}(t,t) - 2 \widehat\Gamma_{\varepsilon,n}(t,s) + \widehat\Gamma_{\varepsilon,n}(s,s) \] satisfies the H\"{o}lder condition \[ V_n^2(t,s) < M_V^2 \, |t-s|^{2\alpha}, \quad t,s\in[0,1], \] for some $0<\alpha\leq 1$, where $M_V$ is a positive random variable with $E M_V < \infty$. \end{enumerate} \end{Assumption} Assumption~\ref{a:consist2C} is a $C[0,1]$ analog of Assumption~\ref{a:consist2}, but with the addition of Assumption~\ref{a:consist2C}~(b) that implicitly demands a degree of continuity of $\varrho(X_k)$ and the model errors $\varepsilon_k$. Under the above assumptions, we can now formulate our main consistency results. \begin{Theorem}\label{t:main1} Suppose that Assumption~\ref{a:consist} holds and that $P(Y\in \partial A)=0$. Then $\hat P^\text{B}_n(Y\in A|X)\stackrel{P}{\to} P(Y\in A|X)$ as $n\to\infty$. \end{Theorem} \begin{Theorem}\label{t:main2} Suppose that Assumption~\ref{a:consist}~\ref{a:consistOut} holds and either Assumption~\ref{a:consist2} or Assumption~\ref{a:consist2C} holds.
Assume that $(\varepsilon_k)_{k\geq 1}$ are i.i.d.\ Gaussian random variables in $H_2$, and that $P(Y\in \partial A)=0$. Then $\hat P^\text{G}_n(Y\in A|X)\stackrel{P}{\to} P(Y\in A|X)$ as $n\to\infty$. \end{Theorem} Theorems~\ref{t:main1} and \ref{t:main2} show that consistent estimation of $P(Y\in A |X)$ can be achieved by both {\bf boot} and {\bf Gauss} when $Y$ takes values in either a separable Hilbert space, or $C[0,1]$, under natural consistency conditions on $\hat{\varrho}_n$, and when $A$ is a continuity set of the response $Y$. We note that these results can be readily extended to sets $A$ that, rather than being fixed, are dependent on the predictor $X$, as well as on the estimator $\hat{\varrho}_n$, so long as there is a certain degree of continuity in relating $\{Y\in A\}$ to $\hat{\varrho}_n(X)$. This is often of interest when constructing prediction sets for the response $Y$, as in the following examples in which it is natural to consider $H_2 = C[0,1]$. \begin{Example}[Pointwise and uniform prediction sets]\label{example:pred} Suppose $a$ and $b$ are positive functions in $C[0,1]$. Given a covariate $X$, let, for $s\in [0,1]$, \[ \hat{A}_{a,b}^{(n)}(s)= \big\{y\in C[0,1] \colon \hat{\varrho}_n(X)(s)- a(s) \le y(s) \le \hat{\varrho}_n(X)(s)+ b(s) \big\} \;\; \text{(Point prediction sets)}, \] and \[ \hat{U}_{a,b}^{(n)}= \big\{y\in C[0,1] \colon \lambda(t : \hat{\varrho}_n(X)(t)- a(t) \le y(t) \le \hat{\varrho}_n(X)(t)+ b(t) )=1 \big\} \;\; \text{(Uniform prediction sets)}. \] These approximate the sets \[ A_{a,b}(s)= \big\{y\in C[0,1] \colon \varrho(X)(s)- a(s) \le y(s) \le \varrho(X)(s)+ b(s) \big\}, \] and \[ U_{a,b}= \big\{y\in C[0,1] \colon \lambda(t : \varrho(X)(t)- a(t) \le y(t) \le \varrho(X)(t)+ b(t) )=1 \big\} . \] \end{Example} \begin{Corollary}\label{cor-predsets} For some $s\in [0,1]$, let $\hat{A}_{a,b}^{(n)}(s)$, $\hat{U}_{a,b}^{(n)}$, $A_{a,b}(s)$, and $U_{a,b}$ be defined in Example~\ref{example:pred}. Suppose that $P(Y \in \partial A_{a,b}(s))=0$. If Assumption~\ref{a:consist} holds, then \begin{equation} \label{e:randomsets} \hat P^\text{B}_n(Y\in \hat{A}_{a,b}^{(n)}(s)|X)\stackrel{P}{\to} P(Y\in A_{a,b}(s)|X),\quad\text{as $n\to\infty$.} \end{equation} If Assumptions~\ref{a:consist}~\ref{a:consistOut} and \ref{a:consist2C} hold, then \eqref{e:randomsets} holds with $\hat P^\text{G}_n$ instead of $\hat P^\text{B}_n$. If $P(Y \in \partial U_{a,b})=0$, the analogous results hold with the sets $\hat{U}_{a,b}^{(n)}$ and $U_{a,b}$. \end{Corollary} \subsection{Uniform consistency over monotone families of sets}\label{s:uniform:mono} For a potentially unbounded interval $[a,b] \subset \overline{\mathbb{R}}$, we call a family $\mathcal{A}=\{A_\xi\colon \xi\in [a,b]\}$ of measurable subsets of $H_2$ monotone if the sets $A_\xi$ are increasing or decreasing in $\xi$. Suppose that $A_\xi$ is increasing (the decreasing case can be handled similarly) and that we are interested in finding \[ \xi_p(X)=\inf\{\xi \in [a,b]\colon P(Y \in A_\xi | X ) \geq p\},\quad p\in (0,1). \] As an example where this problem is relevant, consider a scalar transformation of the response $Z=T(Y)$, and suppose we wish to estimate the conditional quantile of $Z$ given the covariate $X$ \begin{align*} q_p(Z|X)&=\inf\{\xi \in [a,b]\colon P(Z\leq \xi | X ) \geq p\}=\inf\{\xi \in [a,b]\colon P\big(Y \in T^{-1}\big([a,\xi]\big) | X \big) \geq p\}. \end{align*} The sets $A_\xi:= T^{-1}\big([a,\xi]\big)$ evidently define a monotone family.
Consistent scalar-on-function quantile regression can hence be cast as the problem of consistently estimating $\xi_p(X)$ from the sample, which can be done using $\hat{P}_n^\text{B}$ or $\hat{P}_n^\text{G}$. To this end, we consider the estimator \begin{align} \hat \xi_p^\text{B} (X):= \inf \big\{ \xi \in [a,b]\colon\, \hat P^\text{B}_n(Y \in A_\xi | X ) \geq p \big\}. \label{e:predictionxi} \end{align} We note that, based on the definition of $\hat{P}^\text{B}_n$, $p\mapsto \hat\xi_p^\text{B}(X)$ is a non-decreasing function of $p$. The same holds for $\hat \xi_p^\text{G}(X)$, which is defined using $\hat{P}_n^\text{G}$. While this observation is rather trivial, in other approaches to scalar-on-function quantile regression one often has to take special care in order to guarantee monotonicity of estimators of $q_p(Z|X)$; see e.g.\ \cite{kato:2012}. The goal is now to show that $\hat\xi_p^\text{B}(X)\stackrel{P}{\to}\xi_p(X)$ and $\hat\xi_p^\text{G}(X)\stackrel{P}{\to}\xi_p(X)$. In order to do so, we need the following uniform convergence result for the estimated conditional probabilities. \begin{Proposition}\label{p:uniformconv} Let $\{ A_\xi\colon \xi \in [a,b] \}$ be a monotone family of sets such that $P(Y \in A_\xi | X)$ is a.s.\ continuous in $\xi$. Suppose the estimator $\hat P_n(Y\in A_\xi|X)$ is non-decreasing and right-continuous in $\xi$, and satisfies $\hat P_n(Y\in A_\xi|X)\stackrel{P}{\to} P(Y \in A_\xi | X)$ for all $\xi \in [a,b]$. Then \[ \sup_{\xi \in [a,b]} \Big| \hat P_n(Y \in A_\xi | X) - P(Y \in A_\xi | X) \Big| \stackrel{P}{\to} 0, \quad n \to \infty. \] \end{Proposition} We note that both $\hat P_n^\text{B}(Y\in A_\xi|X)$ and $\hat P_n^\text{G}(Y\in A_\xi|X)$ satisfy the conditions of Proposition~\ref{p:uniformconv} under the conditions of Theorems~\ref{t:main1} and \ref{t:main2}. \begin{Corollary}\label{cor:unif} Define $\hat\xi_p(X)$ as in \eqref{e:predictionxi} for some general estimator $\hat P_n(Y \in A_\xi | X)$. Under the assumptions of Proposition~\ref{p:uniformconv} with increasing sets $A_\xi$, we have $P(Y\in A_{\hat\xi_p(X)}|X)\stackrel{P}{\to} p$. If $P(Y \in A_\xi | X)$ is strictly increasing in $\xi$, then $\hat \xi_p(X) \stackrel{P}{\to} \xi_p(X)$. \end{Corollary} \section{Estimation of the regression operator}\label{s:theoretical} In this section we aim to define an estimator $\hat{\varrho}_n$ that satisfies the consistency conditions detailed in Assumptions~\ref{a:consist}, \ref{a:consist2}, and \ref{a:consist2C}. In order to do so, we make the following assumptions on model \eqref{mod}. \begin{Assumption}\label{a:hilbertsetting} \begin{enumerate}[(a),topsep=0pt,itemsep=0pt] \item $H_1$ is a separable Hilbert space. \item The process $(X_k)_{k \in \mathbb{Z}}$ has mean zero, and is $L^4$-$m$-approximable in $H_1$ (see \cite{hormann:kokoszka:2010}). \item The operator $\varrho\colon H_1\to H_2$ is a bounded linear operator. \item The sequence $(\varepsilon_k)_{k \in \mathbb{Z}} $ is a mean zero, i.i.d.\ sequence in $H_2$, and satisfies $E\|\varepsilon_k\|^4<\infty$. \end{enumerate} \end{Assumption} Assumption~\ref{a:hilbertsetting}~(b) entails that $(X_k)$ is a (strongly) stationary and ergodic sequence with $E\|X_k\|_{H_1}^4<\infty$, and allows the $X_k$ to be weakly serially dependent in a certain sense. \cite{hormann:kokoszka:2010} show that many commonly studied stationary time series in function space, such as FAR processes or functional analogs of GARCH processes, are $L^4$-$m$-approximable under suitable moment conditions.
The estimator that we consider is a truncated (functional) principal components based estimator. Let the empirical covariance operator of $X_k$, and the empirical cross-covariance operator between $Y_k$ and $X_k$, be denoted by \[ \widehat C_{XX} = \frac 1 n \sum_{k=1}^n X_k \otimes X_k, \quad\text{and}\quad \widehat C_{YX} = \frac 1 n \sum_{k=1}^n Y_k \otimes X_k. \] Letting $\langle \cdot , \cdot \rangle_{H_1}$ denote the inner product on $H_1$, we note that $\widehat C_{XX}$ admits a non-negative sequence of eigenvalues $\hat \lambda_i$ and eigenfunctions $\hat v_i$ satisfying $\widehat C_{XX}(\hat{v}_i) = \hat{\lambda}_i \hat{v}_i $, $\langle \hat{v}_i , \hat{v}_j \rangle_{H_1} = \mathds{1}\{i=j\}$. We then define \begin{equation}\label{e:rhohat} \hat \varrho_n(x) := \sum_{i=1}^{T_n} \frac{1}{\hat \lambda_i} \, \widehat C_{YX} \, \hat v_i \otimes \hat v_i (x). \end{equation} The estimator \eqref{e:rhohat} truncates only the covariance operator of $X$ in order to obtain a feasible approximation of $\widehat C_{XX}^{-1}$, yielding a so-called ``single-truncated'' estimator. The asymptotic properties of such estimated operators have been studied in, e.g., \cite{mas:2007} and \cite{hormann:kidzinski:2015}. In order to select the truncation parameter $T_n$ in a way that leads to asymptotic consistency of $\hat{\varrho}_n$, we use the following criterion: \begin{equation}\label{e:chooseTn} T_n = \max\big\{j \geq 1 \colon \hat \lambda_j \geq m_n^{-1} \big\}, \quad \text{with }\; m_n \to \infty. \end{equation} Here $m_n$ is a tuning parameter, tending to infinity at a rate specified in the results below. We note that another standard way to select $T_n$ is the percentage of variance explained (PVE) approach, which entails taking \[ T_n = \min\left\{ d : \frac{\sum_{j=1}^{d} \hat{\lambda}_j}{\sum_{j=1}^{\infty} \hat{\lambda}_j } \ge v \right\}, \] where $v$ is a user-specified percentage treated as a tuning parameter. While the criterion in \eqref{e:chooseTn} is more transparent in terms of describing the asymptotic consistency of $\hat{\varrho}_n$, since it gives a direct description of the decay rate of the sequence of eigenvalues $\hat{\lambda}_j$, in applications the PVE criterion is prevalent, due to its ease of interpretation. By choosing the associated tuning parameters appropriately, the two criteria may be made comparable. We now present results which imply Assumptions~\ref{a:consist}--\ref{a:consist2C}, and hence the consistency of the estimators in the algorithms {\bf boot} and {\bf Gauss}. \begin{Proposition}\label{p:main1} Suppose that $H_2$ is a separable Hilbert space, Assumption~\ref{a:hilbertsetting} holds, and we define $\hat\varrho_n$ as in \eqref{e:rhohat} with $m_n = o\big(\sqrt n\big)$. Then Assumption~\ref{a:consist} holds. \end{Proposition} \begin{Proposition}\label{p:main2} Suppose that $H_2$ is a separable Hilbert space, Assumption~\ref{a:hilbertsetting} holds, and that the true regression operator $\varrho$ is Hilbert--Schmidt. If $\hat\varrho_n$ is defined as in \eqref{e:rhohat} with $m_n = o\big(\sqrt n\big)$, then Assumption~\ref{a:consist2} holds. \end{Proposition}
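For illustration, the following is a minimal sketch in Python of the estimator \eqref{e:rhohat} together with the truncation rule \eqref{e:chooseTn} (this sketch is ours and is not part of any analysis code used in the paper; curves are assumed to be discretized on a common grid, and quadrature weights are omitted for brevity).
\begin{verbatim}
import numpy as np

def fit_rho(X, Y, m_n):
    """Single-truncated FPCA estimator of the regression operator.

    X : (n, p) array of discretized covariate curves X_k
    Y : (n, q) array of discretized response curves Y_k
    m_n : tuning parameter in the truncation rule (e:chooseTn)
    Returns a callable x -> rho_hat_n(x).
    """
    n = X.shape[0]
    C_xx = X.T @ X / n                 # empirical covariance of X
    C_yx = Y.T @ X / n                 # empirical cross-covariance
    lam, V = np.linalg.eigh(C_xx)      # eigenvalues in ascending order
    lam, V = lam[::-1], V[:, ::-1]     # reorder to descending
    T_n = max(1, int(np.sum(lam >= 1.0 / m_n)))  # truncation level
    def rho_hat(x):
        scores = (V[:, :T_n].T @ x) / lam[:T_n]  # <x, v_i> / lambda_i
        return (C_yx @ V[:, :T_n]) @ scores      # sum_i (C_yx v_i) <x, v_i> / lambda_i
    return rho_hat
\end{verbatim}
In practice, the fixed threshold $m_n^{-1}$ may be replaced by the scale-invariant variant used in the simulations below, or by the PVE criterion.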
In the case when $H_2 = C[0,1]$, we add the following assumption in addition to Assumption~\ref{a:hilbertsetting}, supposing a degree of smoothness of $\varrho(X)$ and $\varepsilon_k$: \begin{Assumption}\label{a:continuoussetting} Let $H_2 = C[0,1]$ and let $0 < \alpha \leq 1$. \begin{enumerate}[(a),topsep=0pt,itemsep=0pt] \item The model errors $\varepsilon_k$ a.s.\ satisfy the Hölder condition \begin{align} \big| \varepsilon_k(t) - \varepsilon_k(s) \big| &< M_k \, |t-s|^\alpha, \label{e:holdercondition} \end{align} where $M_k$ is a positive random variable, independent of $X_k$, with $E M_k^2 < \infty$. \item For all $x \in H_1$, the regression operator $\varrho$ satisfies \[ \big| \varrho x(t) - \varrho x(s) \big| \leq M_\varrho \, \|x\| \, |t-s|^\alpha, \] where $M_\varrho$ is a finite constant. \end{enumerate} \end{Assumption} Assumption~\ref{a:continuoussetting}~(a) is fulfilled by a wide range of stochastic processes, most notably Brownian motion and fractional Brownian motion. Since $\varrho$ is linear, Assumption~\ref{a:continuoussetting}~(b) is a natural formulation of the Hölder condition for the conditional mean of the response. In particular, it implies that $\varrho$ is a bounded, compact operator. \begin{Remark}{\rm Suppose $H_1=L^2[0,1]$, so that model \eqref{mod} describes function-on-function regression. A frequently employed class of operators $\varrho$ in this setting are kernel integral operators, defined by a continuous kernel $\rho \in C([0,1]^2)$ as \[ \varrho x(t) = \int_0^1 \rho(t,u) x(u) \, du. \] If there exists an $a \in H_1$ such that, almost everywhere, \[ \big| \rho(t,u) - \rho(s,u) \big| < a(u) \, |t-s|^\alpha, \] then one can easily verify, using the Cauchy--Schwarz inequality, that \begin{align*} \big| \varrho x(t) - \varrho x(s) \big| &\leq \|a\| \, \|x\| \, |t-s|^\alpha, \end{align*} and thus Assumption~\ref{a:continuoussetting}~(b) is fulfilled.} \end{Remark} \begin{Proposition}\label{p:mainc1} Suppose that Assumption~\ref{a:hilbertsetting} and Assumption~\ref{a:continuoussetting}~(a) hold, and we define $\hat\varrho_n$ as in \eqref{e:rhohat} with $m_n = o\big( n^{\alpha/2} \big)$. Then Assumption~\ref{a:consist} holds. \end{Proposition} \begin{Proposition}\label{p:mainc2} Suppose that Assumption~\ref{a:hilbertsetting} and Assumption~\ref{a:continuoussetting} hold, and we define $\hat\varrho_n$ as in \eqref{e:rhohat} with $m_n = o\big( n^{\alpha/2} \big)$. Then Assumption~\ref{a:consist2C} holds. \end{Proposition} The proofs of Propositions~\ref{p:main1}--\ref{p:mainc1} are relegated to Section~\ref{s:proofs}, while the proof of Proposition~\ref{p:mainc2} is given in Appendix~\ref{a:additionalproofs}. We conclude this section with some technical discussion. We begin by noting that the sequence $m_n$, which controls how many principal components of $X_k$ are used in forming $\hat{\varrho}_n$, can be of asymptotically higher order when $H_2$ is a Hilbert space than in the setting when $H_2=C[0,1]$ and $\alpha < 1$. In the case where the Hölder exponent in Assumption~\ref{a:continuoussetting} is $\alpha=1$, the responses $Y_k$ are Lipschitz continuous, which implies that they are weakly differentiable. As a result, one may then take $H_2=W$, a separable Hilbert space, leading back to the rate condition $m_n = o(\sqrt{n})$. The order $n^{\alpha/2}$ is sufficient but not sharp. In fact, for $\alpha < 1/2$, a different proof yields that $m_n = o\big( n^{1/(2+\alpha^{-1})} \big)$ also leads to consistency.
In the case of Brownian motion, where $\alpha=1/2$, this demands that $m_n=o\big(n^{1/4}\big)$. This is still of higher order than the rate suggested by \cite{hormann:kidzinski:2015} for consistent estimation of the regression operator in Hilbert spaces. Our second technical remark concerns the choice of $H_1$. Assumption~\ref{a:hilbertsetting}~(a) requires $H_1$ to be a Hilbert space. While typically this is not a restriction, some care needs to be taken in the case of an FAR model. Here, when we study continuous functions, we choose $H_2=C[0,1]$. While it is natural to assume that the covariate and response spaces coincide for an FAR model (i.e.\ requiring $H_1=C[0,1]$, too), this is not necessarily the case. For example, when we consider a kernel integral operator $\varrho$ with a continuous kernel, we may still use $H_1=L^2[0,1]$ and $H_2=C[0,1]$, using the natural embedding of $C[0,1]$ in $L^2[0,1]$. Alternatively, we can resort to the Sobolev space $H_1=W^{1,2}[0,1]$ of once (weakly) differentiable functions in $L^2[0,1]$, equipped with the norm $\|f\|_W = \|f\|_{L^2} + \|f'\|_{L^2} $; see Chapter 8 of \cite{brezis:2010}. The space $W^{1,2}[0,1]$ is a separable Hilbert space, and because $\|\cdot\|_{\infty} \leq \|\cdot\|_{W}$, the space $W^{1,2}[0,1]$ can be embedded in $C[0,1]$. This allows for a more general class of continuous operators (for example, including pointwise evaluations). When combined with the moment conditions on $\|X_k\|_{H_1}$ implicit in Assumption~\ref{a:hilbertsetting}~(b), this can be done so long as the covariates $X_k$ are sufficiently smooth. \section{Some further examples of events $A$}\label{s:partialA} In addition to the level sets and the pointwise or uniform prediction sets mentioned in Examples~\ref{E:levelsets} and \ref{example:pred} above, in this section we list some specific examples of sets $A$ that are of interest for the data that we discuss, and which might be useful in other applications. \begin{Example}[Contrast sets] For some $\gamma\in H_2$ and $a\in\mathbb{R}$, let \[ A = \left\{ y\in H_2\colon \int_0^1 \gamma(t)y(t)dt>a \right \}. \] For example, when $\gamma\equiv 1$, then $A$ is the set of curves which are on average above the level $a$. If $\gamma(t)=2\mathds{1}\{t\leq 1/2\}-1$, or if $\gamma(t)=1/2-t$ and $a=E\int_0^1y(t)dt$, then the set $A$ can be identified with the set of functions exhibiting a decreasing trend. \end{Example} \begin{Example}[Extremal sets]\label{E:extremalsets} Let $H_2=C[0,1]$, let $d\in\mathbb{R}$, and set \[ A = \left\{ y \in H_2\colon \max_{u\in[0,1]} y(u) > d \right\}. \] Then $A$ contains the functions which exceed the threshold $d$ at some point. Note that this is the complement of a boundary set with bounds $\alpha = -\infty$ and $\beta = d$. \end{Example} \begin{Example}[Excursion sets] Let $H_2=C[0,1]$, $d\in\mathbb{R}$ and $c\in (0,1)$. Set \[ A = \left\{ y \in H_2\colon \exists \; 0 \leq a < b \leq 1 \text{ with } b-a \geq c \text{ s.t. } \min_{u\in[a,b]} y(u) > d \right\}. \] Then $A$ consists of the functions which stay strictly above the threshold $d$ uninterruptedly on some interval of length at least $c$. \end{Example} A crucial condition in Theorems~\ref{t:main1} and \ref{t:main2} is that $P(Y\in \partial A)=0$. Below we discuss some examples for which this requirement is fulfilled. For the purpose of illustration, we give details in the case of level sets (Example~\ref{E:levelsets}) and $H_2=C[0,1]$. The other examples can be explored similarly. \begin{Proposition}\label{l:level1} Let $\alpha \in \mathbb{R}$ and $z \in [0,1)$.
We define $A=\{y\in C[0,1]\colon \lambda(y>\alpha)\leq z\}$. The following conditions imply $P(Y\in\partial A)=0$: \begin{align} & (i) \; P\big(\lambda(Y=\alpha)>0\big)=0 \quad\text{and}\quad (ii)\; P\big(\lambda(Y>\alpha)=z\big)=0 && \text{for $z\in (0,1)$},\label{e:condlevel}\\ &P\big(\sup_{t\in [0,1]} Y(t)=\alpha\big)=0 && \text{for $z=0$}.\label{e:condlevel2} \end{align} \end{Proposition} The conditions in \eqref{e:condlevel} and \eqref{e:condlevel2} are satisfied by many well-known processes, including Brownian motion. They are also generally satisfied by continuously differentiable Gaussian processes under standard non-degeneracy conditions. Such processes might be used to model functional data generated by applying standard smoothing operations, for instance using cubic splines or trigonometric polynomials, to raw discrete data. We note that comparable differentiability conditions are assumed in \cite{fan:muller:2021}. The following proposition, whose proof we defer to Section~\ref{s:proofs}, describes these conditions. \begin{Proposition}\label{gauss-lemm} Suppose that $Y$ is a continuously differentiable Gaussian process with covariance kernel $C_Y$. If $C_Y(t,t) > 0$ for all $t\in [0,1]$, then \eqref{e:condlevel}(i) holds. For $\ell \in \mathbb{N}$ and $0\le t_1< \cdots < t_\ell \le 1$, let \[ r_Y(t,s) = \frac{C_Y(t,s)}{[C_Y(t,t) C_Y(s,s)]^{1/2}}, \mbox{ and } R_{t_1,\dots,t_\ell} = \{ r_Y(t_i,t_j)\}_{1\le i,j \le \ell} \in \mathbb{R}^{\ell\times \ell}. \] If, in addition, for all $\ell\in \mathbb{N}$ and $0\le t_1< \cdots < t_\ell \le 1$ there exist constants $c_1, c_2 > 0 $ such that $\det( R_{t_1,\dots,t_\ell}) \ge c_1 \min_{ 1 \le i\ne j \le \ell } | t_i - t_j |^{c_2}$, then \eqref{e:condlevel}(ii) holds. If $Y$ is twice continuously differentiable, and $(Y(t_1),\dots,Y(t_\ell),Y'(t_1),\dots,Y'(t_\ell),Y''(t_1),\dots,Y''(t_\ell))$ has a non-degenerate distribution, then \eqref{e:condlevel2} holds. \end{Proposition} If $A = \{y \in H_2: \langle y, \gamma \rangle > c \}$ is a contrast set with some $\gamma \in H_2$ and $c \in \mathbb{R}$, then from the continuity of the inner product it follows that $\partial A = \{y \in H_2: \langle y, \gamma \rangle = c \}$, both for $H_2=L^2[0,1]$ and for $H_2=C[0,1]$. As for the boundary set $B_{\alpha,\beta} = \{y\in H_2\colon y([0,1]) \subseteq [\alpha, \beta] \}$ with $H_2=C[0,1]$, we have $\partial B_{\alpha,\beta} = \{y\in B_{\alpha,\beta}\colon \sup_{t\in [0,1]} y(t) = \beta \text{ or } \inf_{t\in [0,1]} y(t) = \alpha \}.$ \section{Simulation experiments and data illustrations}\label{s:realdata:sim} In this section we present the results of several simulation experiments and real data analyses that aim to evaluate and compare the performance of the algorithms {\bf boot} and {\bf Gauss}, and to illustrate their application. We begin by defining some alternative methods that may be used to estimate $P(Y\in A |X)$, and we describe two recent procedures proposed for functional quantile regression and for the construction of prediction sets in functional data prediction, respectively. \subsection{Competing methods} A simple method to estimate $P(Y\in A |X)$ is to employ functional binomial regression. This entails positing the model \[ P(Y \in A | X=x) = g\big( \beta_0 + \langle x, \beta \rangle \big) \] for some $\beta_0 \in \mathbb{R}$ and $\beta \in L^2[0,1]$, and a link function $g$ that can be chosen from a variety of possibilities, but is most often the logistic link function or the cumulative distribution function of a standard normal random variable (the ``probit link'').
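As an illustration of this competitor, the following is a minimal sketch in Python of functional binomial regression with a logistic link, fitted on truncated functional principal component scores in the spirit of \cite{muller:stadtmuller:2005} (the sketch is ours; it assumes curves discretized on a common grid and uses {\tt scikit-learn} for the scalar logistic fit).
\begin{verbatim}
import numpy as np
from sklearn.linear_model import LogisticRegression

def fit_functional_glm(X, labels, n_comp=5):
    """Functional logistic regression via FPC scores.

    X      : (n, p) discretized covariate curves
    labels : (n,) indicators 1{Y_k in A} (both classes must occur)
    Returns a callable x -> estimate of P(Y in A | X = x).
    """
    mu = X.mean(axis=0)
    Xc = X - mu
    _, V = np.linalg.eigh(Xc.T @ Xc / len(X))
    V = V[:, ::-1][:, :n_comp]                  # leading eigenfunctions
    clf = LogisticRegression().fit(Xc @ V, labels)
    return lambda x: clf.predict_proba(((x - mu) @ V).reshape(1, -1))[0, 1]
\end{verbatim}
Note that, as discussed next, such a fit has to be repeated for every new set $A$.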
For more details on such models, we refer to \cite{muller:stadtmuller:2005} and \cite{mousavi:sorensen:2017}. One drawback of note in applying logistic regression in this setting is that changing the set $A$ necessitates refitting the model, which can be computationally cumbersome; furthermore, as a consequence, the resulting estimators of $P(Y \in A | X)$ need not be monotone with respect to increasing (or decreasing) sets $A$. An approach to adjusting such estimators in order to restore monotonicity is to use rearrangement or isotonization, as discussed in, e.g., \cite{chernozhukov:2010}. Since the exact relationship between the function $X$ and the event $\{Y \in A\}$ is unknown and difficult to describe in parametric terms, even under model \eqref{mod}, another promising approach is to use nonparametric techniques such as kernel estimators. Generalizing the method found in Section~5.4 of \cite{ferraty:vieu:2006}, the conditional distribution $P(Y \in A | X = x) = E(\mathds{1}\{Y \in A\} | X = x)$ can be estimated by the functional extension of the Nadaraya--Watson estimator \begin{align} \hat P^\text{NW}(Y\in A|X=x) = \frac{ \sum_{i=1}^n K\big( h^{-1} d(x,X_i) \big) \; \mathds{1}\{Y_i \in A\} } { \sum_{i=1}^n K\big( h^{-1} d(x,X_i) \big) }, \label{e:nadwat} \end{align} where $K$ is a kernel function on the nonnegative real numbers, $d$ is a distance measure on $H_1$, and $h > 0$ is a smoothing parameter corresponding to the bandwidth of the kernel. Note that while the choice of $K$ is typically unproblematic, the choice of $d$ is more intricate and is often taken to depend on the data. The bandwidth $h$ governs the trade-off between bias (oversmoothing) and variance (undersmoothing), and is normally taken to decrease with the sample size $n$. \cite{ferraty:vieu:2006} establish consistency conditions for the estimator \eqref{e:nadwat} in the case when the sequence $\{ (X_k,Y_k)\colon k\geq 1 \}$ is $\alpha$-mixing and $Y$ is scalar. When we apply this method below, we take $K$ to be the standard Gaussian kernel and $d$ to be the norm on $H_1$, and we select $h$ using cross-validation. We note that, similarly to estimators based on functional logistic regression, a drawback of these estimators is that if one changes the set $A$, then the bandwidth $h$ should in general be recalibrated, and the resulting estimators need not be monotone in $A$ if the bandwidth $h$ is not held fixed for all sets $A$. Similar options may be derived from the local linear functional estimator, which improves upon the Nadaraya--Watson estimator by including linear terms of the form $\langle x - X_i, \beta \rangle$ in the computation of the weights; see \cite{berlinet:2011}. The $k$-nearest neighbors (kNN) functional estimator is a variation of the Nadaraya--Watson estimator with an adaptive bandwidth, i.e.\ $h$ is the smallest number such that $\big|\{ X_i\colon d(x,X_i) \leq h \}\big| = k$. The kNN estimator has been shown to be consistent for nonparametric regression by \cite{kudraszow:2013}. In order to evaluate the proposed algorithms for the construction of prediction sets, we compared to the method of \citet{paparoditis:2020} in the setting of forecasting FAR(1) processes $Y_k-\mu = \varrho (Y_{k-1}-\mu) + \varepsilon_k$. Subsequent to forming the estimator $\hat{\varrho}_n$ using functional principal component analysis, their method entails performing a (sieve) bootstrap on the functional principal component scores of the residuals $\hat{\varepsilon}_{k,n}$ in order to estimate the distribution of the prediction error.
$Y_{n+1}$ is then forecast by $\widehat Y_{n+1} = \hat{\mu}+ \hat{\varrho}_n(Y_n-\hat\mu)$, and uniform prediction sets for $Y_{n+1}$ are constructed of the form \[ \{ y \in C[0,1] : \widehat Y_{n+1}(t) + L \, \sigma_{n+1}(t) \le y(t) \le \widehat Y_{n+1}(t) + U \, \sigma_{n+1}(t), \mbox{ for all } t\in[0,1] \}, \] where, for a specified coverage level $1-\alpha$, \[ \sigma_{n+1}^2(t) = \widehat\var( \varepsilon^{(n)}(t)), \mbox{ and with } M = \sup_{t\in[0,1]} \frac{ |\varepsilon^{(n)}(t) | }{\sigma_{n+1}(t)}, \quad L = Q_{\alpha/2}( M), \mbox{ and } U = Q_{1-\alpha/2}(M). \] In the setting of scalar-on-function quantile regression, we compare to the method of \cite{sang:2020}, which entails, for a scalar response $T(Y)$, modelling \[ T(Y) = g\big( \beta_0 + \langle x, \beta \rangle \big)+ \varepsilon, \] where $g$ is an unknown link function. The link function $g$ as well as the parameter function $\beta$ are assumed to be linear combinations of splines, and are estimated, in order to estimate the level $p$ quantile of $T(Y)$, by minimizing the check function loss \[ \rho_p(y) = \big(p - \mathds{1}\{y \leq 0\}\big) \, y, \] subject also to a roughness penalty on the functions $g$ and $\beta$. \subsection{Construction of prediction sets}\label{s:pred} Following the simulation experiment considered in \cite{paparoditis:2020}, we construct a time series of continuous functions as follows: \begin{align} Y_k(t) = \int_0^1 \rho(t,s) Y_{k-1}(s) ds + b\cdot Y_{k-2}(t) + B_k(t), \quad 1\leq k \leq n, \; t\in[0,1], \label{e:paparoditis} \end{align} where $\rho(t,s) = 0.34\, e^{(t^2+s^2)/2}$, and $B_k$ is a standard Brownian motion. We fit an FAR(1) model to each simulated sample, where we chose the truncation parameter $T_n$ using the PVE criterion with $v=0.85$. This is the same value as used in \cite{paparoditis:2020}. If $b$ is chosen as $0$, the FAR(1) model is correctly specified, whereas with $b=0.4$ there is a model misspecification that should be detrimental to the quality of the model predictions. Following the method proposed in \cite{paparoditis:2020} and as described above, we constructed uniform prediction sets to forecast each series 1-step ahead, with nominal coverage probabilities of 80\% and 95\%. This was repeated independently 1000 times, with sample sizes $n \in \{ 100,200,400,800\}$. While the model for the forecast is the same for both methods, the difference between our approach and that of \cite{paparoditis:2020} lies in the methods used to estimate the distribution of the noise $\varepsilon^{(n)}$. These results are summarised in Table~\ref{tab:paparoditis} in terms of empirical coverage probabilities over the 1000 replications. In the case $n=100$ and $b=0$, the method {\bf boot} yielded empirical coverage probabilities that were up to 4--6 percentage points below those of the method of \cite{paparoditis:2020}, both being below the nominal level. Apart from this notable exception, the empirical coverage probabilities are comparable to those of \cite{paparoditis:2020}, and were closer to the nominal coverage in 10 out of 16 cases considered. The results of {\bf Gauss} were generally better, which is to be expected since the model errors are Gaussian processes, especially for the nominal coverage probability of 95\%.
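The core of the methods {\bf boot} and {\bf Gauss} is the estimation of $P(Y\in A\,|\,X)$ by adding resampled (or simulated) model errors to the fitted conditional mean. The following minimal Python sketch reflects our reading of the residual-bootstrap variant for discretized curves; all names are ours, and the indicator shown encodes a uniform prediction set.
\begin{verbatim}
import numpy as np

def p_hat_boot(rho_hat_x, residuals, indicator_A):
    """Residual-bootstrap estimate of P(Y in A | X).

    rho_hat_x   : (p,) fitted conditional mean curve rho_hat_n(X)
    residuals   : (n, p) estimated (centred) model residuals
    indicator_A : callable returning 1.0 if a curve lies in A
    """
    return float(np.mean([indicator_A(rho_hat_x + e) for e in residuals]))

def make_uniform_indicator(center, c):
    """Indicator of the uniform set {y : |y(t) - center(t)| <= c for all t}."""
    return lambda y: float(np.all(np.abs(y - center) <= c))
\end{verbatim}
The {\bf Gauss} variant replaces the resampled residuals by an arbitrarily large Monte Carlo sample drawn from a Gaussian process with the estimated error covariance.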
\begin{table} \centering {\small \begin{tabular}{llcccccccc} \hline \multicolumn{2}{l}{Nominal} & \multicolumn{2}{c}{$n=100$} & \multicolumn{2}{c}{$n=200$} & \multicolumn{2}{c}{$n=400$} & \multicolumn{2}{c}{$n=800$} \\ \multicolumn{2}{l}{coverage} & $b=0$ & $b=0.4$ & $b=0$ & $b=0.4$ & $b=0$ & $b=0.4$ & $b=0$ & $b=0.4$ \\ \hline 80\% & {\bf boot} & 0.683 & 0.694 & 0.745 & 0.754 & 0.777 & 0.778 & 0.789 & 0.791 \\ & {\bf Gauss} & 0.703 & \textbf{0.716} & 0.756 & \textbf{0.763} & 0.781 & \textbf{0.783} & 0.791 & \textbf{0.793} \\ & \textit{P., S. (2020)} & \textbf{0.740} & 0.689 & \textbf{0.766} & 0.740 & \textbf{0.791} & 0.768 & \textbf{0.803} & 0.786 \\ \hline 95\% & {\bf boot} & 0.861 & 0.872 & 0.913 & 0.917 & 0.933 & 0.936 & 0.944 & 0.944 \\ & {\bf Gauss} & 0.898 & \textbf{0.904} & \textbf{0.927} & \textbf{0.931} & \textbf{0.940} & \textbf{0.943} & \textbf{0.946} & \textbf{0.946} \\ & \textit{P., S. (2020)} & \textbf{0.902} & 0.856 & 0.918 & 0.899 & 0.927 & 0.913 & 0.936 & 0.924 \\ \hline \end{tabular} } \caption{\label{tab:paparoditis} Empirical coverage probabilities of uniform prediction intervals for the data generating process \eqref{e:paparoditis}, calculated via {\bf boot} and {\bf Gauss}, as well as using the method of \cite{paparoditis:2020}, abbreviated \textit{P., S. (2020)}. } \end{table} \subsection{Comparison to functional GLM and Nadaraya--Watson estimation}\label{ss:pm10data} \begin{figure}[t] \begin{minipage}[b]{0.60\textwidth} \includegraphics[width=\textwidth]{PM10-realdata.pdf} \includegraphics[width=\textwidth]{PM10-simdata.pdf} \end{minipage} \includegraphics[width=0.39\textwidth]{PM10-operator-contour.pdf} \caption{Top left: the raw PM$_{10}$ measurements (blue) with the fitted curves (black). Bottom left: simulated synthetic PM$_{10}$ data (black) with the level $\alpha = \sqrt{50}$ (red) that we considered in the level set case. Right: the kernel $\rho(t,s)$ used in the data generating process.} \label{fig:PM10DGP} \end{figure} In this simulation experiment, we generated synthetic data under model \eqref{mod} in such a way that it resembled a real functional time series derived from daily square-root transformed PM$_{10}$ concentration curves, constructed by smoothing half-hourly measurements of PM$_{10}$. This is done using the function {\tt Data2fd} in the {\tt fda} package with default settings; see \cite{fdapackage}. PM$_{10}$ concentration denotes the concentration in air of respirable coarse particles having a diameter of less than 10\,$\mu$m, and the data that we consider were collected in Graz, Austria over the period from October 1st, 2010 to March 31st, 2011. An illustration of these data is given in Figure~\ref{fig:PM10DGP}, and they are available in the {\tt ftsa} package in {\tt R}; see \cite{hynd:shang:ftsa:2020}. We use these data as a means to devise a realistic data generating process. To this end, we first fit an FAR$(1)$ model to the square-root transformed PM$_{10}$ curves. The estimator for the FAR operator $\varrho$ obtained in this way differs from operators typically used in simulation settings in that the estimated kernel is highly asymmetric, as illustrated in the right hand panel of Figure~\ref{fig:PM10DGP}. With the estimated sample mean and the fitted FAR operator, we then generate synthetic FAR$(1)$ time series samples by drawing model errors $\varepsilon_k$ from a Gaussian distribution, with the covariance operator estimated from the residuals of the FAR(1) model fit to the original data.
This can be done as in the algorithm {\bf Gauss}. The first 30 observations are dropped as a burn-in phase. A snapshot of the raw data in comparison to the synthetic data can be seen in Figure~\ref{fig:PM10DGP}. In this manner we may generate time series of arbitrary sample sizes that are similar to the original PM$_{10}$ data. We generated 1000 independent samples for each sample size $n \in \{50, 100, 250, 1000\}$. Then, for 50 different values of the predictor $Y_0^*$, simulated independently from the stationary distribution of the data generating process, we estimated the conditional probability $P(\lambda(Y_{1}^* > \sqrt{50}) \leq 0.5\,|\, Y_0^*)$ of $Y^*_{1}$ lying in the corresponding level set, for each such sample. For each of the 50 predictors, we also approximated the true probability using Monte Carlo simulation ($n_{\text{MC}} = 10000$) from the data generating process. We compared the estimators from the algorithms {\bf boot} and {\bf Gauss}, as well as those from a logistic functional GLM and from Nadaraya--Watson estimation. The number $T_n$ of principal components used to estimate $\varrho$ was chosen using the following scale-invariant variant of criterion \eqref{e:chooseTn}: \[ T_n = \max\big\{j \geq 1 \colon \hat \lambda_j \geq m_n^{-1} \hat \lambda_1 \big\}, \quad \text{with }\; m_n = 5 n^{0.45}. \] We introduce $\hat \lambda_1$ into the definition of $T_n$ so that the criterion does not depend on the scale of the eigenvalues, yielding a more practicable way of choosing $T_n$. For $n=1000$, this covers approximately 98\% of the variance of the simulated curves in the sense of the PVE criterion. Naturally, less variance is covered for smaller sample sizes. For the logistic GLM, we used the approach suggested by \cite{muller:stadtmuller:2005} and took the truncated Karhunen--Loève expansion as the predictor. In order to keep the methods comparable, we used the same number $T_n$ of principal components for our algorithms and for the functional GLM. We calibrated the bandwidth $h$ for the Nadaraya--Watson estimator using leave-one-out cross-validation on each generated sample. The results in terms of the root mean squared error (RMSE) over the 1000 simulations are displayed in Figure~\ref{fig:PM10RMSElevel}. Because it is difficult to visualize the results for all 50 different predictors, we present boxplots summarizing the RMSE of each method over all predictors $Y_0^*$. More details on the results for a variety of specific values of $Y_0^*$ can also be found in Table~\ref{tab:pm10rmselevel} in the Appendix. \begin{figure} \includegraphics[width=\textwidth]{PM10-RMSE-a-sqrt50-z05.pdf} \caption{ RMSE of $\hat P$ for 50 random predictors $Y_0^*$ and 1000 independent simulations of samples of size $n\in \{ 50, 100, 250, 1000 \}$, based on the estimators {\bf boot}, {\bf Gauss}, functional logistic regression, and the Nadaraya--Watson estimator of the probability $P(\lambda(Y_{1}^* > \sqrt{50}) \leq 0.5\,|\, Y_0^*)$.} \label{fig:PM10RMSElevel} \end{figure} We observed that the algorithms {\bf boot} and {\bf Gauss} exhibited similar predictive performance in both examples and over all sample sizes. These methods clearly outperformed functional logistic regression and Nadaraya--Watson estimation in estimating the conditional probability of level sets. The proposed methods achieved a mean squared error similar to that of functional logistic regression with about a quarter of the sample size.
The performance of the Nadaraya--Watson estimator was poor compared to the other methods considered in both cases, and varied strongly depending on the predictor $Y_0^*$. In Appendix~\ref{s:additionalsim}, we also present results in which we considered contrast sets rather than level sets, in which case the same overall pattern was observed, although the results were more comparable across the four methods. \begin{table} \centering \begin{tabular}{|r|rr|rr|rr|rr|} \hline & \multicolumn{2}{c|}{$n=50$} & \multicolumn{2}{c|}{$n=100$} & \multicolumn{2}{c|}{$n=250$} & \multicolumn{2}{c|}{$n=1000$} \\ & \multicolumn{2}{c|}{$p=0.98$} & \multicolumn{2}{c|}{$p=0.99$} & \multicolumn{2}{c|}{$p=0.996$} & \multicolumn{2}{c|}{$p=0.999$} \\ $Y_0^*$ & \textbf{boot} & \textbf{Gauss} & \textbf{boot} & \textbf{Gauss} & \textbf{boot} & \textbf{Gauss} & \textbf{boot} & \textbf{Gauss} \\ \hline 1 & 0.770 & 0.704 & 0.559 & 0.491 & 0.457 & 0.356 & 0.487 & 0.434 \\ 2 & 0.530 & 0.437 & 0.396 & 0.292 & 0.333 & 0.170 & 0.341 & 0.189 \\ 3 & 0.517 & 0.419 & 0.432 & 0.287 & 0.348 & 0.199 & 0.308 & 0.137 \\ 4 & 0.595 & 0.501 & 0.464 & 0.371 & 0.404 & 0.262 & 0.347 & 0.187 \\ 5 & 0.589 & 0.480 & 0.470 & 0.355 & 0.378 & 0.247 & 0.336 & 0.144 \\ \hline \end{tabular} \caption{\label{tab:pm10varlevel} RMSE of $\hat\alpha_{p_n}$ for 5 different predictors and 1000 replications. We estimate $\alpha_{p_n}$ such that $P(\lambda(Y_{1}^* > \alpha_{p_n}) \leq 0.5 \,|\, Y_0^*) = p_n$, where $p_n = 1-n^{-1}$.} \end{table} Although the estimator {\bf Gauss} performs similarly to {\bf boot} in the above example, it can be expected that {\bf boot} runs into problems when $P(Y \in A|X)$ is close to $0$ or $1$, since {\bf boot} only uses the $n$ estimated model residuals to estimate $P(Y \in A|X)$, whereas with {\bf Gauss} one can generate a Monte Carlo sample of residuals as large as needed to give a non-degenerate estimate of these probabilities, which can be expected to be accurate if the Gaussian assumption is plausible. To highlight this, we present the results of a short simulation study in which, for the probability $p_n=1-1/n$, we aimed to estimate $\alpha_{p_n}$ using {\bf boot} and {\bf Gauss} such that $P(\lambda(Y_{1}^* > \alpha_{p_n}) \leq 0.5 \,|\, Y_0^*) = p_n$. This problem is hence related to Value-at-Risk estimation. We compared the RMSE of $\hat\alpha_{p_n}$ from the two algorithms for 50 different realizations of the predictor $Y_0^*$ that were simulated from the same data generating process. We note that the value of $\alpha_{p_n}$ varies between 7.26 and 11.77, depending on $Y_0^*$ and $p_n$. In Table~\ref{tab:pm10varlevel}, we present the results for a subset of five predictors $Y_0^*$ that were representative of the variability observed in the simulated series. It is apparent from these results that {\bf Gauss} outperforms {\bf boot} in all cases, and that the relative advantage increases with the sample size. Looking at the results for all 50 predictors, the RMSE of $\hat\alpha_{p_n}$ decreases by about 15\% for $n=50$, 22\% for $n=100$, 35\% for $n=250$, and 42\% for $n=1000$. This gives some indication of the difference in performance that can be expected between the two methods when forecasting extreme quantiles or events, whenever the Gaussian assumption is plausible. \subsection{Functional quantile regression}\label{ss:quantilereg} In this application, we compare to the data analysis of \cite{sang:2020}.
As in our previous example, those authors consider the functional time series of daily square-root transformed PM$_{10}$ concentration curves constructed by smoothing half-hourly measurements of PM$_{10}$. The goal of the analysis is to compare forecasts of the quantiles of the maximum values $M_t = \max_{u\in[0,1]} Y_{t}(u)$ (note the relation to Example~\ref{E:extremalsets}), where $Y_t(u)$ is the transformed PM$_{10}$ curve on day $t$ at intraday time $u$. The curve $Y_{t-1}$ is used as the covariate. We model the relationship between $(Y_t,Y_{t-1})$ by an FAR(1) process and apply the method {\bf boot} to estimate the conditional quantile of $M_t$. We select the truncation parameter $T_n$ so as to explain 98\% of the variance in the variables $Y_t$, since for this fixed sample size, tuning $T_n$ by an asymptotic criterion is not meaningful. At a quantile level $p$, we compared these methods by 5-fold cross-validation of the mean check-function loss $\rho_p\big( M_i - \hat q_p(M_i|Y_{i-1}) \big)$. We did this for seven different quantile levels $p \in \{0.05, 0.15, 0.25, 0.50, 0.75, 0.85, 0.95\}$. The experiment was repeated on 50 random splits of the data set. The results are displayed in Figure~\ref{fig:sangcao}. In 87.4\% of the cases, {\bf boot} outperformed the functional single-index quantile regression model of \cite{sang:2020} in terms of the loss considered. This advantage was much smaller for the more central quantiles and became more apparent for the more extreme quantiles. \begin{figure}[t] \centering \includegraphics[width=0.90\textwidth]{SangCaoComparison.pdf} \caption{Performance of {\bf boot} compared to the functional single-index quantile regression model proposed by \cite{sang:2020}. The prediction error is compared using 5-fold cross-validation on 50 random splits of the PM$_{10}$ data set.}\label{fig:sangcao} \end{figure} \subsection{Spanish electricity price data}\label{ss:electricitydata} \begin{table} \centering \begin{tabular}{lrrrrrrrrrr} \hline & $\alpha$ & \hspace{5mm} 30 & 35 & 40 & 45 & 50 & 55 & 60 & 65 & 70 \\ \hline & \textbf{boot} & \textbf{0.03} & \textbf{0.06} & \textbf{0.10} & \textbf{0.13} & \textbf{0.16} & \textbf{0.24} & 0.23 & \textbf{0.19} & \textbf{0.16} \\ $z=0$ & GLM & 0.11 & 0.17 & 0.23 & 0.36 & 0.30 & 0.24 & \textbf{0.22} & 0.21 & 0.24 \\ & N--W & 0.05 & 0.10 & 0.20 & 0.21 & 0.22 & 0.31 & 0.29 & 0.27 & 0.26 \\ \hline & \textbf{boot} & \textbf{0.05} & \textbf{0.07} & \textbf{0.10} & \textbf{0.15} & \textbf{0.15} & \textbf{0.19} & \textbf{0.17} & \textbf{0.16} & \textbf{0.12} \\ $z=\frac 16$ & GLM & 0.22 & 0.18 & 0.22 & 0.25 & 0.18 & 0.19 & 0.17 & 0.19 & 0.24 \\ & N--W & 0.08 & 0.14 & 0.18 & 0.22 & 0.25 & 0.26 & 0.23 & 0.22 & 0.18 \\ \hline & \textbf{boot} & \textbf{0.05} & \textbf{0.10} & \textbf{0.12} & \textbf{0.13} & \textbf{0.20} & \textbf{0.17} & \textbf{0.15} & \textbf{0.13} & \textbf{0.09} \\ $z= \frac 26$ & GLM & 0.25 & 0.15 & 0.26 & 0.23 & 0.23 & 0.17 & 0.23 & 0.20 & 0.39 \\ & N--W & 0.10 & 0.16 & 0.23 & 0.23 & 0.26 & 0.22 & 0.24 & 0.22 & 0.14 \\ \hline & \textbf{boot} & \textbf{0.08} & \textbf{0.10} & \textbf{0.11} & \textbf{0.17} & \textbf{0.20} & \textbf{0.17} & \textbf{0.15} & \textbf{0.11} & 0.07 \\ $z= \frac 36$ & GLM & 0.11 & 0.15 & 0.26 & 0.27 & 0.23 & 0.19 & 0.18 & 0.20 & 0.37 \\ & N--W & 0.14 & 0.17 & 0.26 & 0.24 & 0.27 & 0.23 & 0.23 & 0.17 & \textbf{0.07} \\ \hline & \textbf{boot} & \textbf{0.09} & \textbf{0.11} & \textbf{0.14} & \textbf{0.19} & \textbf{0.22} & \textbf{0.18} & \textbf{0.12} & \textbf{0.08} & 0.03 \\
$z= \frac 46$ & GLM & 0.12 & 0.18 & 0.27 & 0.26 & 0.25 & 0.24 & 0.20 & 0.40 & 0.12 \\ & N--W & 0.16 & 0.20 & 0.23 & 0.26 & 0.28 & 0.25 & 0.19 & 0.13 & \textbf{0.02} \\ \hline & \textbf{boot} & \textbf{0.14} & \textbf{0.17} & \textbf{0.20} & \textbf{0.24} & \textbf{0.23} & \textbf{0.14} & \textbf{0.08} & \textbf{0.02} & \textbf{0.00} \\ $z= \frac 56$ & GLM & 0.21 & 0.26 & 0.25 & 0.26 & 0.27 & 0.17 & 0.21 & 0.13 & 0.10 \\ & N--W & 0.24 & 0.29 & 0.29 & 0.31 & 0.29 & 0.19 & 0.09 & 0.03 & 0.00 \\ \hline \end{tabular} \caption{\label{tab:spaincrossentropy} The cross-entropy of the estimated conditional probability $P(\lambda(Y >\alpha) \leq z \,|\, X)$ for different values of $\alpha$ and $z$, evaluated on the test set. The comparison value GLM refers to a functional logistic regression model with the same predictors, and N--W to the Nadaraya--Watson estimator. The smallest value in each cell is marked in bold font; any apparent ties are merely a result of rounding to two digits.} \end{table} We now return to the Spanish electricity price data that we gave as an introductory example. The goal of this analysis is to compare estimates of the conditional probability that the price curves lie in specified level sets, given the covariates of demand and wind energy production. In order to compare the various methods for doing this, we split the data into a training and a testing set by randomly taking four months from each year and assigning them to the test set, which created a 2:1 split between the training and testing sets. Since we used 6 years of this data, the training set thus consists of 1453 days, and the test set consists of 731 days. Let $Z_t$ denote one of the functional variables electricity price, demand, or wind energy production. These variables were deseasonalized as follows: \[ \widetilde Z_t = Z_t - Z_t^{(Y)} - Z_t^{(W)}, \] where $Z_t^{(Y)}$ is the yearly seasonality, obtained by taking the mean for each day of the year and smoothing the result using a rolling mean with a window size of 21 days, and $Z_t^{(W)}$ is the weekly seasonality, estimated as the mean for each day of the week. For the wind curves, no weekly seasonality was removed. In order to employ the methods {\bf boot} and {\bf Gauss}, we fit the FARX(7) model described in \eqref{e:spainfarx}, using the estimator introduced in Section~\ref{s:theoretical}, with the data in the training set. The truncation parameter $T_n$ was again chosen in order to explain 98\% of the variance of the covariates. In order to compare the estimated conditional probabilities to the realized outcomes on the test set, we used the cross-entropy measure. The cross-entropy of a distribution $P$ relative to a distribution $Q$ is defined as $H(P,Q) = -\mathbb{E}_P\big[ \log(q(Y)) \big]$, where $q$ is the probability mass function of $Q$; see Section 2.8 of \cite{murphy2012machine}. Given the realisations $y_i = \mathds{1}\{Y_i \in A\}$, $i \in \{1,\dots,N\}$, and the corresponding estimated conditional probabilities $\hat{p}_i= \hat{P}(Y_i \in A | X_i)$ in the testing set of size $N$, the plug-in estimator of the cross-entropy on the test set is \[ \hat{H}(\hat{P}) = -\frac 1 N \sum_{i=1}^N \big[ y_i \log\big( \hat{p}_i \big)+ (1-y_i) \log\big(1- \hat{p}_i \big) \big]. \]
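For completeness, a one-function Python sketch of this plug-in estimator follows (ours, not from the original analysis code; the clipping constant {\tt eps} is our addition, guarding against $\log 0$ for degenerate estimates).
\begin{verbatim}
import numpy as np

def cross_entropy(y, p_hat, eps=1e-12):
    """Plug-in test-set cross-entropy of estimated conditional probabilities.

    y     : (N,) array of indicators 1{Y_i in A}
    p_hat : (N,) array of estimates P_hat(Y_i in A | X_i)
    """
    p = np.clip(np.asarray(p_hat, dtype=float), eps, 1 - eps)
    y = np.asarray(y, dtype=float)
    return -np.mean(y * np.log(p) + (1 - y) * np.log1p(-p))
\end{verbatim}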
We considered level sets of the form $A= \{ y \in C[0,1] : \lambda(y > \alpha) \leq z\}$ for various values of $\alpha$ and $z$, and calculated the cross-entropy on the test set for estimates of $P(Y \in A |X)$ obtained using the method {\bf boot}; using functional logistic regression, which was estimated with the same covariates (and PVE criterion) as those used in generating the estimator in {\bf boot}; and using functional Nadaraya--Watson estimation with a Gaussian kernel and the predictors \emph{demand}, \emph{wind} and \emph{lagged price}, where the bandwidth parameters were selected using leave-one-out cross-validation on the training set. We do not present the results for the method {\bf Gauss}, as they are again very similar to those of the method {\bf boot}. The estimated cross-entropies on the test set for each set $A$ considered are presented in Table~\ref{tab:spaincrossentropy}. The smallest value in each cell is marked in bold font. The method {\bf boot} achieved lower values of cross-entropy on the test set compared to the competing methods for most combinations of $\alpha$ and $z$. {\bf boot} had higher estimated cross-entropy in one case compared to the functional logistic regression model, and in two cases compared to functional Nadaraya--Watson estimation. The values of $\alpha$ and $z$ considered were chosen in such a way that most price curves belong to at least one set, and so we do not think this superior performance resulted from the sets $A$ focusing on outcomes that are especially well modelled by the FARX(7) model. \section{Summary} We considered two methods, based on either a residual bootstrap or Gaussian process simulation, to estimate the conditional probability $P(Y\in A|X)$, where $Y$ and $X$ satisfy the functional linear regression model \eqref{mod}. We showed, under mild consistency conditions on the estimated regression operator $\hat{\varrho}_n$, that these methods lead to consistent estimation, in particular in the setting where $Y$ is assumed to be an element of the Banach space $C[0,1]$, which allows for the consideration of sets $A$ that describe more detailed path properties of the response. We put forward one example of an operator estimator $\hat{\varrho}_n$ that has the specified consistency properties under natural regularity conditions on the covariates, which allow for weak serial dependence, and on the choice of tuning parameters. In several simulation experiments and data analyses, we observed that these methods generally outperformed prominent competitors, which are often more complicated to implement. \addcontentsline{toc}{section}{References}
\section{Introduction} With the ever larger data output volumes of modern astronomical observatories, automated data processing pipelines are an indispensable part of any such project. New observatory projects are now aware that data analysis (DA) software needs a significant lead time and has to be budgeted for carefully. But even with adequate financial means, it is nearly impossible for the DA software to be ready {\it and verified} the moment the observatory records its first ``real'' data. Any complex and novel observatory/detector system will have its unexpected features and quirks which cannot be foreseen and simulated ahead of time. As these unexpected features of the real data gradually become apparent and understood, the DA software needs to be adapted and equipped with corresponding heuristics. This is the usual commissioning phase, which is to be kept as short as possible so that scientific results can be produced soon. Usually, a compromise is made and the new observatory/instrument is made available to scientific users with a reduced set of features, which is then gradually expanded as the commissioning of additional features is completed. But even for relatively small sets of new capabilities, commissioning to the point of the release of a fully automated pipeline (PL) can be a long process of many months, if not years, because the work requires know-how which is only available to a small expert team that cannot be arbitrarily enlarged. In this paper, I present an approach to prototyping the automated PL which has helped the ALMA observatory \citep{Cortes2020} since 2012 to shorten the time-to-release of new capabilities by ca.\ one year. Its usefulness is not confined to the realm of radio astronomy. \section{Three-stage development start-up} At the beginning of the construction of a new observatory, the arrival of the first real data is years away. At this point, the data model for the raw data needs to be defined and the creation of simulated raw data prepared. Then the development of the DA software can proceed in the following three stages: {\bf 1 - Scriptable toolkit:} The first layer of functionality which needs to be provided for commissioning scientists, and later for the science users of the observatory, is a kit of DA tools from which an arbitrary sequence of calibration and analysis steps can be constructed. These tools need to be bound to a convenient scripting language, e.g.\ Python. The kit needs to include efficient methods to import raw data into variables/arrays of the scripting language, to perform common operations on the imported data, and to derive calibration information which can subsequently be stored and applied to the raw data, producing calibrated data in a convenient and efficient storage format. Furthermore, tools for data visualization need to be provided, including interactive GUIs for exploring the raw and calibrated data. Finally, tools for transforming the calibrated data into astronomical images can already be developed in this first phase, based purely on simulated data. {\bf 2 - Script generator:} Once the beginning of instrument commissioning comes into sight and the scriptable toolkit has reached some maturity, the development of the second layer of functionality can commence: a tool to create standard data processing scripts for the given instrument -- the {\it script generator} (SG).
This tool can be understood as a prototype PL: it examines a given set of raw data and then outputs a (nearly) complete processing script tailored to the given dataset, using the commands from the scriptable toolkit. The SG already contains many of the heuristics which will later be at work in the automated PL, but (at least initially, in its first releases) it does not attempt to do the full job. Sections of the process which are not yet fully understood, or which are too complex to automate, may still be omitted from the output script draft such that users can complete it manually. \articlefigure[width=.5\textheight]{X9-005_f1.eps}{processgant}{Simplified Gantt chart of the data analysis software development for an arbitrary observatory following the approach described in this paper. $\Delta T$ is the time period by which new capabilities can be released earlier because an SG was employed as a PL prototype. The exact time scale will depend on the project complexity. The continuation of the development of the data model and toolkit is not shown.} The resulting SG tool is a help for commissioning scientists and data analysts. It automates those parts of the data analysis process which are already common knowledge, saving its users time that they can spend on the less well understood parts of the analysis. The development of an SG is not trivial, but it is much simpler than that of a fully fledged PL. Since the SG needs to read the instrument data, it needs to use the scriptable toolkit and should therefore be coded in the scripting language which is bound to it. A first version of the SG can already be produced before the instrument's first light. As soon as real data is available, the SG is rapidly updated with new heuristics. Unit tests of the SG can be constructed from test datasets which exhibit certain instrument features and capabilities. The execution time of an SG is typically much shorter than that of the subsequent execution of the DA script which it produces, because the SG mostly examines the {\it metadata} of the dataset, not the bulk data. The automation of the scripting permits the production of standardized, well-formatted and marked-up scripts for the end users. At the end of the initial commissioning phase, when the observatory starts to take its first science data, the SG, like an expert system for writing data analysis scripts, encapsulates all the data processing knowledge which was accumulated by the commissioning scientists up to this point. The SG can then be used for the standard data processing of the observatory's science data, while the development team can give its full attention to the third stage of development. {\bf 3 - Automated pipeline:} The final stage of the first round of DA software development is the complete automation of the data calibration and science product generation (depending on the nature of the instrument this could be, e.g., imaging, the production of spectra, etc.). The fact that the SG is in place for the time being, providing a reasonably fast means of processing data semi-automatically with the help of a number of data analysts, buys time for the PL development team. This is the most labour-intensive phase of the project w.r.t.\ data processing. Three teams work in parallel: (1) The commissioning scientists continue to monitor the most recent raw data and find further improvements to the processing heuristics, quickly updating them in the SG.
(2) Data analysts use the SG to generate data processing scripts, which they run to produce the calibrated data and derived data products for the observatory archive and its users. (3) The PL development team works on automating the process in order to speed it up and to reduce the number of necessary data analysts, in preparation for the ramping up of the output data volume of the observatory. Thorough testing on a diverse selection of datasets is needed. Once the first official data processing PL is released, the work of the data analysts begins to change from mostly working with the SG to more and more double-checking the PL products. As confidence in the PL grows, the fraction of the total data volume which is handled by it approaches 100\%. Now the observatory can afford to further increase its output data volume. The SG is then only used in exceptional cases, {\it and} during the commissioning of new capabilities. \section{Gradual addition of further capabilities} For ground-based observatories, the set of capabilities can be extended indefinitely. Every time a new capability is added, the commissioning scientists can fall back on semi-automatic data processing with the help of the SG. They develop new heuristics for the new type of data and add them to the features of the SG. Once the new capability is released to the observatory users, the observatory data analysts will at first use the SG again to process this subset of the observatory output. At the same time, the PL development team can start to enable the PL to handle the new type of data. After one development cycle, the PL has learned to handle the new capability. In this way, observatory users can be given access to well calibrated data and good derived data products from recently added observatory upgrades with a minimum delay. \section{The process in real life: ALMA} The data analysis software development approach described above is the path which the ALMA observatory followed on its way into science operations and through many capability expansions. The raw data model and format chosen by ALMA is the ALMA Science Data Model (ASDM, \citet{2006ASPC..351..627V}). ALMA's scriptable data analysis toolkit is CASA \citep{2007ASPC..376..127M, 2020ASPC..527..267E}, which uses the scripting language Python. The ALMA raw data was calibrated during the first years of operations, starting in 2012, exclusively using the Calibration Script Generator (CSG) \citep[see][]{2014SPIE.9152E..0JP}, a tool also written in Python. Then, in 2014, the first ALMA calibration PL was released (the documentation of this and subsequent PL versions can be found at \url{https://almascience.org/processing/science-pipeline}; the latest is \citet{ALMAPipe2021}). Since then, the CSG and the calibration PL have co-existed. ALMA data is calibrated by the calibration PL whenever possible. For new observing modes, the CSG is upgraded first and used for ca.\ one year of operations while the PL is being upgraded correspondingly. After that, the PL takes over the processing of the new mode. Since ALMA also aims to offer high-quality images to its users, there is a second processing step after calibration to produce these. This was initially done using imaging script templates. Then a second script generator was created, the Imaging Script Generator (ISG). In 2016, the first ALMA PL with automated imaging was released. Today, 95\% of the ALMA data are PL-calibrated and imaged \citep{2020SPIE11449E..1TN}.
The CSG and the ISG were released for public use in 2021 (see \url{https://confluence.alma.cl/display/EAPR/ALMA+Data+Analysis+Utilities}).
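To make the script generator concept concrete, the following toy Python sketch (entirely ours; the function and toolkit names are invented and bear no relation to CASA or the actual ALMA CSG/ISG) shows the basic pattern: inspect dataset metadata, apply heuristics, and emit a draft processing script, leaving a manual TODO where automation is not yet available.
\begin{verbatim}
def generate_script(metadata: dict) -> str:
    """Toy script generator: metadata in, draft toolkit script out."""
    lines = ["import toolkit",
             f"data = toolkit.load({metadata['path']!r})"]
    # Heuristics keyed on metadata, encoding accumulated commissioning knowledge.
    if metadata.get("has_phase_cal"):
        lines.append("cal = toolkit.derive_phase_calibration(data)")
        lines.append("data = toolkit.apply(data, cal)")
    else:
        lines.append("# TODO: phase calibration not yet automated -- complete manually")
    lines.append("toolkit.export(data, 'calibrated.out')")
    return "\n".join(lines)

print(generate_script({"path": "obs_001.asdm", "has_phase_cal": True}))
\end{verbatim}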
\section{Introduction}\label{sec: introduction} Throughout the paper, we work over the complex numbers. The minimal model conjecture is one of the main problems in birational geometry. \begin{conjecture}[Minimal model conjecture]\label{conj:logmmp} For a $\mathbb{Q}$-factorial dlt pair $(X,B)$, if $K_X+B$ is pseudo-effective, then $(X,B)$ has a log terminal model. \end{conjecture} Conjecture \ref{conj:logmmp} is known when $\dim X\le 4$ \cite{Sho09, Birkar10}, or when $(X,B)$ is a $\mathbb{Q}$-factorial klt pair with a big boundary $B$ \cite{BCHM10}. But when $\dim X\ge 5$, Conjecture \ref{conj:logmmp} is still widely open. Birkar related the minimal model conjecture to the existence of sections for pseudo-effective adjoint divisors (i.e.\ the non-vanishing conjecture) \cite{Bir07, Birkar10, Birkar11}. The minimal model conjecture is also related to the existence of Zariski decompositions for adjoint divisors \cite{Birkarhuweak14}, or, even weaker, to the existence of \emph{weak} Zariski decompositions for adjoint divisors \cite{Birkarweak12}. The latter means that $K_X+B\equiv P+N$, where $P$ is a nef divisor and $N$ is an effective divisor. Note that if $K_X+B$ is numerically equivalent to an effective divisor $N$, then it automatically admits a weak Zariski decomposition with $P=0$. In \cite{Birkarhuweak14}, Birkar and Hu asked whether the existence of weak Zariski decompositions for adjoint pairs implies the minimal model conjecture. This is the starting point of the current paper. However, it seems that the natural setting of Birkar--Hu's question lies in a larger category -- the generalized polarized pairs (g-pairs) -- and this observation leads to the analogous conjecture on the existence of minimal models for g-pairs (see Conjecture \ref{conj:glogmmp} below). A g-pair $(X, B+M)$ consists of an ``ordinary'' log pair $(X, B)$ plus an auxiliary nef part $M$. A rudimentary model of such pairs had emerged in \cite{Birkarweak12, Birkarhuweak14}, but the actual definition only appeared in \cite{BZ16} (although they came from different sources: the former came from the weak Zariski decomposition, while the latter came from the canonical bundle formula). A pair (resp.\ g-pair) is said to be pseudo-effective if $K_X+B$ (resp.\ $K_X+B+M$) is pseudo-effective. We use ``$/Z$'' to denote a pair relative to the variety $Z$, and the abbreviation ``NQC'' below stands for ``nef $\mathbb{Q}$-Cartier combinations'' (see Definition \ref{def: NQC}). See also Definition \ref{def: NQC} for the meaning of a birational NQC weak Zariski decomposition. The NQC pairs are those which behave well in the Minimal Model Program (MMP). Hence we state the following conjectures for such pairs. \begin{conjecture}[Minimal model conjecture for g-pairs]\label{conj:glogmmp} For a $\mathbb{Q}$-factorial NQC g-dlt pair $(X/Z,B+M)$, if $K_X+B+M$ is pseudo-effective$/Z$, then $(X/Z,B+M)$ has a g-log terminal model$/Z$. \end{conjecture} \begin{conjecture}[Birational weak Zariski decomposition conjecture for g-pairs]\label{conj: NQCbirweakzar} Let $(X/Z,B+M)$ be a $\mathbb{Q}$-factorial NQC g-dlt pair. If $K_X+B+M$ is pseudo-effective$/Z$, then it admits a birational NQC weak Zariski decomposition$/Z$. \end{conjecture} \begin{remark} By Proposition \ref{prop: g-log minimal model implies NQC zarski decomposition}, Conjecture \ref{conj:glogmmp} implies Conjecture \ref{conj: NQCbirweakzar}. \end{remark} Under the NQC assumption, we can answer the aforementioned question of Birkar--Hu in the generalized polarized category.
Under the NQC assumption, we can answer the aforementioned question of Birkar-Hu in the generalized polarized category. \begin{theorem}\label{thm: weak zariski equiv mm} The birational weak Zariski decomposition conjecture for g-pairs (Conjecture \ref{conj: NQCbirweakzar}) is equivalent to the minimal model conjecture for g-pairs (Conjecture \ref{conj:glogmmp}). \end{theorem} Besides, we show that Conjecture \ref{conj:glogmmp} is a consequence of the termination conjecture for ordinary log pairs. \begin{theorem}\label{thm: ter lc implies g-log minimal model} Assume the termination of flips for $\mathbb{Q}$-factorial dlt pairs$/Z$. Then any pseudo-effective $\mathbb{Q}$-factorial NQC g-dlt pair $(X/Z,B+M)$ has a g-log terminal model. \end{theorem} As for the termination of the g-MMP, we show the following result. \begin{theorem}\label{thm: weak zariski scaling ample} Assume the birational weak Zariski decomposition conjecture (Conjecture \ref{conj: NQCbirweakzar}). Let $(X/Z,B+M)$ be a pseudo-effective $\mathbb{Q}$-factorial NQC g-dlt pair. Then any sequence of g-MMP on $(K_X+B+M)$ with scaling of an ample divisor$/Z$ terminates. \end{theorem} We briefly describe the idea of the proof of Theorem \ref{thm: weak zariski equiv mm}. The nontrivial part is to show that Conjecture \ref{conj: NQCbirweakzar} implies Conjecture \ref{conj:glogmmp}. Suppose that $K_X+B+M \equiv P+N$ is a weak Zariski decomposition with $P$ nef and $N \geq 0$. As was pointed out in \cite[\S 6]{Birkarhuweak14}, one only needs to consider the case $\operatorname{Supp} N \subseteq \operatorname{Supp}\lfloor B\rfloor$. In this case, we run a special g-MMP on $(K_X+B+M)$ with scaling of a divisor (not necessarily effective) such that the g-MMP is $N$-negative and, in each step, $N_i+\nu_i P_i$ is nef, where \[ \nu_i \coloneqq \inf\{t \geq 1 \mid N_i+tP_i\text{~is nef}\} \] is the nef threshold. Notice that once $\nu_i=1$, then $K_{X_i}+B_i+M_i \equiv P_i+N_i$ is nef, and we are done. Thus the idea is to decrease $\nu_i$ until it reaches $1$. In fact, as long as $\nu_i>1$, we can get a smaller $\nu_{i+1}$. After this, we use the special termination and induction on the dimension to conclude that the sequence $\{\nu_i\}$ cannot be infinite. The special termination still holds in this setting (see Theorem \ref{prop: special termination 2}) by an appropriate adaptation of the argument in \cite{Birkar12}. Finally, by putting these together, we can prove the result. \begin{remark} By different approaches, Hacon and Moraga independently obtained some of the above results in certain general forms, \cite{HM18weak}. They use ideas from \cite{Bir07}, while our proof is based on ideas in \cite{Birkar11,Birkar12,Birkarhuweak14}. Furthermore, our results allow boundaries with real coefficients, while the results in \cite{HM18weak} require the boundaries to have rational coefficients.\end{remark} The paper is organized as follows. In Section \ref{sec: preliminaries}, we collect the relevant definitions. In Section \ref{sec: LMMP}, we first elaborate on the MMP for g-pairs (g-MMP) developed in \cite{BZ16}, and then prove some standard results in this setting. We also introduce the g-MMP with scaling of an NQC divisor. In Section \ref{sec: special termination}, we establish the special termination result for the g-MMP with scaling. The proofs of the theorems are given in Section \ref{sec: proof}. \medskip \noindent\textbf{Acknowledgements}. We would like to thank Caucher Birkar; the paper is deeply influenced by his ideas. We thank Chen Jiang for showing us a simple proof of the length of extremal rays for g-pairs.
We also thank the participants of the Birational Geometry Seminar at BICMR/Peking University for their interest in this work, especially Yifei Chen and Chuyu Zhou. J. H. thanks Caucher Birkar for the invitation to the University of Cambridge, where the paper was written; thanks also go to his advisors Gang Tian and Chenyang Xu for constant support and encouragement. Z. L. thanks Keiji Oguiso for the invitation to the University of Tokyo, where some key ideas were conceived. This work is partially supported by NSFC Grant No.11601015. \section{Preliminaries}\label{sec: preliminaries} \subsection{Generalized polarized pairs} \begin{definition}[Generalized polarized pair]\label{def: g-pair} A \emph{generalized polarized pair (g-pair)} over $Z$ consists of a normal variety $X$ equipped with projective morphisms \[ \tilde X \xrightarrow{f}X \to Z, \] where $f$ is birational and $\tilde X$ is normal, an $\mathbb{R}$-boundary $B \geq 0$ on $X$, and an $\mathbb{R}$-Cartier divisor $\tilde M$ on $\tilde X$ which is nef$/Z$, such that $K_{X}+B+M$ is $\mathbb{R}$-Cartier, where $M\coloneqq f_*\tilde M$. We say that $B$ is the boundary part and $M$ is the nef part. \end{definition} For convenience, when the base $Z$, the boundary part and the nef part are clear from the context, we will simply say that $(X,B+M)$ is a g-pair. Notice that, in contrast to \cite{BZ16}, we denote the generalized polarized pair by $(X, B+M)$ instead of $(X', B'+M')$. Let $g:X'\to \tilde{X}$ be a birational morphism such that $X'\to X$ is a log resolution of $(X,B)$. Let $$K_{X'}+B'+M'=g^{*}(K_X+B+M),$$ where $M'=g^{*}\tilde{M}$. We say that $(X',B'+M')\to X$ is a \emph{log resolution} of $(X,B+M)$. By replacing $(\tilde{X},\tilde{B}+\tilde{M})$ with $(X', B'+M')$, we may assume that $\tilde X\to X$ is a log resolution of $(X,B+M)$. In the same fashion, $\tilde X$ can be chosen as a sufficiently high model of $X$. In particular, if there exists a variety $Y$ birational to $X$, we can always assume that there exists a morphism from $\tilde X$ to $Y$ which commutes with $X \dashrightarrow Y$. Many definitions/notions for ordinary log pairs have counterparts for generalized polarized pairs. For convenience, we use the prefix ``g-'' to denote the corresponding notions. For example, one can define the \emph{generalized log discrepancy} (g-log discrepancy) of a prime divisor $E$ over $X$: let $\tilde X$ be a high enough model which contains $E$, and let \[ K_{\tilde X}+\tilde B+\tilde M=f^*(K_{X}+B+M). \] Then the g-log discrepancy of $E$ is defined as (see \cite[Definition 4.1]{BZ16}) \[ a(E, X, B+M)=1-{\rm mult}_E\tilde B. \] A g-lc place is a divisor $E$ on a birational model of $X$ such that $a(E, X, B+M)=0$. A g-lc center is the image of a g-lc place, and the g-lc locus is the union of all the g-lc centers. \medskip We say that $(X,B+M)$ is generalized lc (g-lc) (resp. generalized klt (g-klt)) if the g-log discrepancy of any prime divisor is $\geq 0$ (resp. $>0$). Moreover, as $\tilde M$ is a nef divisor, if $M$ is $\mathbb{R}$-Cartier, then by the negativity lemma (see \cite[Lemma 3.39]{KM98}), $f^* M=\tilde M+E$ with $E \geq 0$ an exceptional divisor. In particular, this implies that if $K_{X}+B$ is $\mathbb{R}$-Cartier, then the log discrepancy of a divisor $E$ with respect to $(X, B)$ is greater than or equal to the g-log discrepancy of $E$ with respect to $(X, B+M)$. The definition of generalized dlt (g-dlt) is subtle. \begin{definition}[G-dlt]\label{def:g-dlt} Let $(X,B+M)$ be a g-pair.
We say that $(X,B+M)$ is \emph{g-dlt} if it is g-lc and there is a closed subset $V\subset X$ ($V$ can be equal to $X$) such that \begin{enumerate} \item $X\backslash V$ is smooth and $B|_{X\backslash V}$ is a simple normal crossing divisor, \item if $a(E, X, B+M)=0$, then the center of $E$ satisfies $\operatorname{Center}_X(E) \not\subset V$ and $\operatorname{Center}_X(E)\backslash V$ is a lc center of $(X\backslash V, B|_{X\backslash V})$. \end{enumerate} \end{definition} \begin{remark}\label{rmk: klt} If $(X,B+M)$ is a $\mathbb{Q}$-factorial g-dlt pair, then $X$ is klt. \end{remark} Our definition of g-dlt is slightly different from the definition in \cite[page 13]{Bir16a}. We will show that our definition of g-dlt is preserved under adjunction and under running the MMP. \begin{remark}\label{rmk: dlt} Another possible definition of g-dlt is as follows: a g-pair $(X,B+M)$ is g-dlt if there exists a log resolution $\pi: X'\to X$ of $(X,B+M)$ such that $a(E,X,B+M)>0$ for every $\pi$-exceptional divisor $E\subset X'$. For ordinary log pairs (i.e. $\tilde M=0$), the two definitions are equivalent (see \cite[Theorem 2.44]{KM98}). However, for g-pairs, it is not known whether the two definitions agree. \end{remark} The adjunction formula for g-lc pairs is given in \cite[Definition 4.7]{BZ16}. \begin{definition}[Adjunction formula for g-pairs]\label{def: g-adjunction} Let $(X/Z,B+M)$ be a g-dlt pair with data $\tilde X \xrightarrow{f} X \to Z$ and $\tilde M$. Let $S$ be a component of $\lfloor B \rfloor$ and $\tilde S$ its birational transform on $\tilde X$. We may assume that $f$ is a log resolution of $(X,B+M)$. Write \[ K_{\tilde X} +\tilde B+\tilde M=f^*(K_{X} +B +M), \] then \[ K_{\tilde S} +B_{\tilde S} +M_{\tilde S} \coloneqq (K_{\tilde X} +\tilde B+\tilde M)|_{\tilde S}, \] where $B_{\tilde S} = (\tilde B-\tilde S)|_{\tilde S}$ and $M_{\tilde S} =\tilde M|_{\tilde S}$. Let $g$ be the induced morphism $\tilde S\to S$. Set $B_{S} = g_*B_{\tilde S}$ and $M_{S} =g_*M_{\tilde S}$. Then we get the equality \[ K_{S}+B_{S}+M_{S} = (K_{X}+B+M)|_{S}, \] which is referred to as the \emph{generalized adjunction formula}. \end{definition} Suppose that $\tilde M=\sum \mu_i \tilde M_i$, where $\tilde M_i$ is a nef$/Z$ Cartier divisor for each $i$, and let $B=\sum b_j B_j$ be the prime decomposition of the $\mathbb{R}$-divisor $B$. Let $\bm{b}=\{b_j\}$, $\bm{\mu}=\{\mu_i\}$ be the coefficient sets. For a set of real numbers $\Gamma$, set \begin{equation}\label{eq: S} \mathbb{S}(\Gamma) \coloneqq \{1-\frac{1}{m}+\sum_{j} \frac{r_j\gamma_j}{m}\le 1 \mid m\in\mathbb{Z}_{>0},r_j\in\mathbb{Z}_{\ge0}, \gamma_j \in \Gamma\}\cup\{1\}. \end{equation} Then the coefficients of $B_{S}$ belong to the set $\mathbb{S}(\bm{b},\bm{\mu}) \coloneqq \mathbb S(\bm{b} \cup \bm{\mu})$ (see \cite[Proposition 4.9]{BZ16}).
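For instance, $\mathbb{S}(\{1\})=\{1-\frac{1}{m} \mid m\in\mathbb{Z}_{>0}\}\cup\{1\}=\{0,\frac{1}{2},\frac{2}{3},\frac{3}{4},\dots\}\cup\{1\}$ is the standard coefficient set arising from adjunction for ordinary log pairs: when all $\gamma_j=1$, the constraint $1-\frac{1}{m}+\sum_j\frac{r_j}{m}\le 1$ forces $\sum_j r_j\in\{0,1\}$, which yields exactly the values $1-\frac{1}{m}$ and $1$.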
\begin{lemma}\label{lem:adjgdlt} Let $(X/Z,B+M)$ be a g-dlt pair with data $\tilde X \xrightarrow{f} X \to Z$ and $\tilde M$. Let $S$ be a component of $\lfloor B \rfloor$, and let $B_S,M_S$ be the divisors in the g-adjunction formula (see Definition \ref{def: g-adjunction}). Then $(S,B_S+M_S)$ is still g-dlt. \end{lemma} \begin{proof} We use the notation in Definition \ref{def: g-adjunction}. Let $V$ be the closed subset $V\subset X$ in Definition \ref{def:g-dlt}, and let $V_S=V\cap S$. It is clear that $S\backslash V_S$ is smooth and $B|_{S\backslash V_S}$ is a simple normal crossing divisor. If $a(E,S,B_S+M_S)=0$, then $\operatorname{Center}_{\tilde S}(E)$ is a stratum of $(\tilde{S},B_{\tilde S})$, and thus a stratum of $({\tilde X}, \tilde B)$. Let $E'$ be a g-lc place of $(X,B+M)$ such that $\operatorname{Center}_{\tilde S}(E)=\operatorname{Center}_{\tilde X}(E')$. Since $(X,B+M)$ is g-dlt, $\operatorname{Center}_{X}(E') \not\subset V$ and $\operatorname{Center}_{X}(E')\backslash V$ is a lc center of $(X\backslash V,B|_{X\backslash V})$. Thus, $\operatorname{Center}_{S}(E)\backslash V_S$ is a lc center of $(S\backslash V_S,B_S|_{S\backslash V_S})$. \end{proof} \begin{remark} In general, $K_S+B_S$ may not be $\mathbb{R}$-Cartier, and thus $(S,B_S)$ may not be dlt. In particular, $(S, B_S+M_S)$ may not be g-dlt in the sense of \cite{Bir16a}. This is the main reason that we do not use the definition of g-dlt from \cite{Bir16a}. \end{remark} The following proposition is similar to \cite[Proposition 3.9.2]{Fujino07}. \begin{proposition}\label{prop: intersection on g-lc centers} Let $(X/Z,B+M)$ be a g-dlt pair with data $\tilde X \xrightarrow{f} X \to Z$ and $\tilde M$. Suppose that $\tilde M=\sum \mu_i \tilde M_i$, where $\tilde M_i$ is a nef$/Z$ Cartier divisor for every $i$, and let $B=\sum b_j B_j$ be the prime decomposition of the $\mathbb{R}$-divisor $B$. Let $V$ be a g-lc center of $(X,B+M)$. Then there exists a g-dlt pair $(V, B_V+M_V)$ such that \[ (K_X+B+M)|_{V}=K_V+B_{V}+M_{V}, \] where $M_{V}$ is the push forward of $\tilde{M}|_{\tilde{V}}$ on $V$, and $\tilde{V}$ is the birational transform of $V$ on $\tilde{X}$. Moreover, the coefficients of $B_{V}$ belong to the set $\mathbb{S}(\bm{b},\bm{\mu})$. \end{proposition} \begin{proof} Let $k$ be the codimension of $V$. By the definition of g-dlt, $V$ is an irreducible component of $V_{1}\cap V_{2}\cap \ldots\cap V_{k}$ for some components $V_{i}$ of $\lfloor B\rfloor$. Under the notation of \eqref{eq: S}, a straightforward computation shows that $\mathbb{S}(\bm{b},\bm{\mu})=\mathbb{S}(\mathbb{S}(\bm{b},\bm{\mu}))$ (for example, see \cite[Proposition 3.4.1]{HMX14}). Then the claim follows from applying Lemma \ref{lem:adjgdlt} $k$ times. \end{proof} \subsection{G-log minimal models and g-log terminal models}\label{sec: MM and Mori fibre space} The notions of log minimal/terminal models still make sense in the generalized polarized setting. Following Shokurov \cite{Sho92}, certain extractions of g-lc places are allowed for g-log minimal models. First, if $f: X \dashrightarrow Y$ is a birational map and $B$ is an effective divisor on $X$, we define \begin{equation}\label{eq: B_Y} B_Y \coloneqq f_*B +E, \end{equation} where $f_*B$ is the birational transform of $B$ on $Y$, and $E$ is the sum of the reduced exceptional divisors of $f^{-1}$. \begin{definition}[G-log minimal model \& g-log terminal model]\label{def: g-log minimal model and g-log terminal model} Let $(X/Z, B+M)$ be a g-pair with data $\tilde X \to X$ and nef part $M$. Then a g-pair $(Y/Z, B_Y+M_Y)$ is called a \emph{g-log minimal model} of $(X/Z, B+M)$ if \begin{enumerate} \item there is a birational map $X \dashrightarrow Y$, \item $B_Y$ is defined as in \eqref{eq: B_Y}, and $M_Y$ is the pushforward of $\tilde M$ (we can always assume that there exists a morphism $\tilde X \to Y$), \item $K_Y+B_Y+M_Y$ is nef, \item $(Y/Z, B_Y+M_Y)$ is $\mathbb{Q}$-factorial g-dlt with data $\tilde X \to Y$ and nef part $M_Y$, \item $a(D, X, B+M)<a(D,Y, B_Y+M_Y)$ for any divisor $D$ on $X$ which is exceptional over $Y$.
\end{enumerate} Furthermore, if $X \dashrightarrow Y$ is a birational contraction (i.e. there is no divisor on $Y$ which is exceptional over $X$), we say that $(Y/Z, B_Y+M_Y)$ is a \emph{g-log terminal model} of $(X/Z, B+M)$. \end{definition} \begin{remark}\label{rmk: extract lc places} Just as in the case of log pairs, a g-log minimal model can only extract g-lc places. That is, any divisor $E\subset Y$ which is exceptional over $X$ satisfies $a(E, X, B+M)=0$. \end{remark} \subsection{Weak Zariski decompositions}\label{sec: Zariski decomposition} On a normal variety $X$ over $Z$ (we write this by $X/Z$), an $\mathbb{R}$-Cartier divisor $D$ is said to admit a \emph{weak Zariski decomposition} if \[D \equiv N+P/Z,\] with $N \geq 0$ and $P$ a nef$/Z$ $\mathbb{R}$-Cartier divisor (see \cite[Definition 1.3]{Birkarweak12}). Unlike Zariski decompositions, weak Zariski decompositions may not be unique. \begin{definition}\label{def: bir zariski decomposition} Let $X/Z$ be a variety and $D$ an $\mathbb{R}$-Cartier divisor. We say that $D$ admits a \emph{birational weak Zariski decomposition} if there exists a birational morphism $f: Y \to X$ from a normal variety $Y$ such that $f^*D$ admits a weak Zariski decomposition. \end{definition} Notice that a birational weak Zariski decomposition is called a weak Zariski decomposition in \cite[Definition 1.3]{Birkarweak12}. For an lc pair $(X, \Delta)$, the non-vanishing conjecture asserts that $K_X+\Delta \sim_\mathbb{R} N$ for some effective divisor $N$. This implies that $K_X+\Delta$ admits a weak Zariski decomposition by taking $P=0$. For (weak) Zariski decompositions, the most important case is when $D$ equals the adjoint divisor $K_X+B$ (or $K_X+B+M$). In the sequel, when we speak of the ``existence of a (weak) Zariski decomposition'' without referring to a divisor, it should be understood that the decomposition is for the adjoint divisor. \begin{remark} Zariski proved that on a smooth projective surface, an effective divisor $D$ can be decomposed as a sum of a nef divisor and an effective divisor with some extra properties, \cite{Zariski62}; this is known as the Zariski decomposition. There are various generalizations to higher dimensions; see \cite{Nakayamazariski} for more details. An arbitrary divisor may not admit a weak Zariski decomposition, \cite{John14}. But for the adjoint divisor $K_X+B$, the existence of a weak Zariski decomposition is a consequence of the existence of a minimal model, which is highly plausible. \end{remark} \subsection{Nef $\mathbb{Q}$-Cartier combinations (NQC)}\label{sec: NQC} We need a technical assumption to guarantee that certain g-MMPs on g-pairs behave as in the case of ordinary log pairs (see Section \ref{subsection: length of extremal rays and its applications}). Here the abbreviation ``NQC'' stands for ``nef $\mathbb{Q}$-Cartier combinations''. \begin{definition}\label{def: NQC} We have the following definitions concerning decompositions of nef$/Z$ $\mathbb{R}$-Cartier divisors in various settings. \begin{enumerate} \item We say that an $\mathbb{R}$-Cartier divisor $M$ is \emph{NQC} over $Z$ if \[ M\equiv\sum_i r_i M_i/Z, \] where $r_i \in \mathbb{R}_{>0}$ and the $M_i$ are $\mathbb{Q}$-Cartier nef$/Z$ divisors. \item A g-pair $(X/Z, B+M)$ with data $\tilde X \xrightarrow{f}X \to Z$ and $\tilde M$ is said to be an \emph{NQC g-pair} if $\tilde M$ is NQC. \item We say \emph{NQC g-lc, NQC g-klt}, etc., if an NQC g-pair is g-lc, g-klt, etc.
\item We say that a g-pair $(X/Z, B+M)$ admits a \emph{birational NQC weak Zariski decomposition$/Z$} if there exists a birational morphism $g: Y \to X/Z$ such that $g^*(K_X+B+M)\equiv P+N/Z$, where $N \geq 0$ and $P$ is NQC over $Z$. \end{enumerate} \end{definition} We will avoid repeating ``over $Z$'' when the base $Z$ is clear from the context. By definition, the NQC property is preserved under g-MMPs and generalized adjunctions. The NQC assumption excludes pathological phenomena caused by the nef part. In \cite{BZ16}, Birkar and Zhang proved the ACC for g-lc thresholds and the global ACC for NQC pairs (though the name was not given there). In the current paper, we need the g-lc pairs to be NQC in order to run certain special g-MMPs. In Proposition \ref{prop: g-log minimal model implies NQC zarski decomposition}, we show that the existence of a g-log minimal model for an NQC g-lc pair implies the existence of a birational NQC weak Zariski decomposition for this g-pair. \section{MMP for generalized polarized pairs}\label{sec: LMMP} \subsection{MMP for generalized polarized pairs}\label{subsection: MMP for g-pairs} For g-lc pairs, the cone theorem, the existence of flips and the termination of flips are still expected to hold. \begin{conjecture}[Cone theorem for g-lc pairs] Let $(X,B+M)$ be a g-lc pair. We have the following claims. \begin{enumerate} \item There are countably many curves $C_j\subset X$ such that $0<-(K_X+B+M)\cdot C_j\le 2\dim X$, and $$\overline{\rm NE}(X)=\overline{\rm NE}(X)_{(K_X+B+M)\ge0}+\sum\mathbb{R}_{\ge0}[C_j].$$ \item Let $F\subset \overline{\rm NE}(X)$ be a $(K_X+B+M)$-negative extremal face. Then there is a unique morphism ${\rm cont}_{F}:X\to Y$ to a projective variety such that $({\rm cont}_{F})_{*}\mathcal{O}_X=\mathcal{O}_Y$, and an irreducible curve $C\subset X$ is mapped to a point by ${\rm cont}_{F}$ if and only if $[C]\in F$. \item Let $F$ and ${\rm cont}_{F}:X\to Y$ be as in $(2)$. Let $L$ be a line bundle on $X$ such that $L\cdot C=0$ for every curve $C$ with $[C]\in F$. Then there is a line bundle $L_Y$ on $Y$ such that $L\simeq {\rm cont}_{F}^{*}L_Y$. \end{enumerate} \end{conjecture} \begin{definition}[Flips for g-pairs] Let $(X,B+M)$ be a g-lc pair. A \emph{$(K_X+B+M)$-flipping contraction} is a proper birational morphism $f:X\to Y$ such that $\operatorname{Exc}(f)$ has codimension at least two in $X$, $-(K_X+B+M)$ is $f$-ample and the relative Picard group has rank $\rho(X/Y)=1$. A g-lc pair $(X^{+},B^{+}+M^{+})$ together with a proper birational morphism $f^{+}:X^{+}\to Y$ is called a \emph{$(K_X+B+M)$-flip} of $f$ if \begin{enumerate} \item $B^{+}, M^{+}$ are the birational transforms of $B, M$ on $X^{+}$, respectively, \item $K_{X^{+}}+B^{+}+M^{+}$ is $f^{+}$-ample, and \item $\operatorname{Exc}(f^{+})$ has codimension at least two in $X^{+}$. \end{enumerate} For convenience, we call the induced birational map, $X\dashrightarrow X^{+}$, a $(K_X+B+M)$-flip. \end{definition} \begin{conjecture}[Existence of flips for g-lc pairs] For a g-lc pair, the flip of a flipping contraction always exists. \end{conjecture} \begin{conjecture}[Termination of flips for g-lc pairs] There is no infinite sequence of flips for g-lc pairs. \end{conjecture} Although the MMP for g-pairs is not established in full generality, some important cases can be derived from the standard MMP. We elaborate on these results, which were developed in \cite[\S 4]{BZ16}. Let $(X/Z,B+M)$ be a $\mathbb{Q}$-factorial g-lc pair, and let $A$ be a general ample$/Z$ divisor.
\begin{quote} ($\star$) Suppose that for any $0<\epsilon\ll1$, there exists a boundary $\Delta_{\epsilon} \sim_\mathbb{R} B+M+\epsilon A/Z$ such that $(X,\Delta_{\epsilon})$ is klt. \end{quote} Under the assumption ($\star$), we can run a g-MMP$/Z$ on $(K_X+B+M)$, although the termination is not known. In fact, let $R$ be an extremal ray$/Z$ such that $(K_X+B+M)\cdot R<0$. For $0<\epsilon\ll 1$, we have $(K_X+B+M+\epsilon A)\cdot R<0$. By assumption, there exists $\Delta_{\epsilon} \sim_\mathbb{R} B+M+\epsilon A/Z$ such that $(X,\Delta_{\epsilon})$ is klt and $(K_X+\Delta_{\epsilon})\cdot R<0$. Now $R$ can be contracted, and its flip exists if the contraction is a flipping contraction. If we obtain a g-log minimal model or a g-Mori fiber space, we stop the process. Otherwise, let $X\dashrightarrow Y$ be the divisorial contraction or the flip; then $(Y,B_{Y}+M_{Y})$ is naturally a g-lc pair. Moreover, for any $0<\epsilon\ll 1$, $(Y,\Delta_{\epsilon,Y})$ is klt. Repeating this process gives the g-MMP$/Z$. The usual notion of the g-MMP with scaling of the general divisor $A$ also makes sense under assumption ($\star$) (see \cite{BZ16}). The following lemma shows that assumption ($\star$) is satisfied in two cases. As a result, we may run a g-MMP for such g-pairs. \begin{lemma}\label{lem:glcklt} Let $(X/Z,B+M)$ be a g-lc pair, and let $A$ be an ample$/Z$ divisor. Suppose that either (i) $(X, B+M)$ is g-klt, or (ii) there exists a boundary $C$ such that $(X,C)$ is klt. Then there exists a boundary $\Delta \sim_\mathbb{R} B+M+A/Z$ such that $(X,\Delta)$ is klt. Moreover, if $X$ is $\mathbb{Q}$-factorial, we may run a g-MMP on $K_X+B+M$. \end{lemma} \begin{proof} Suppose that $(X, B+M)$ is g-klt. We have \[ K_{\tilde X}+\tilde B+\tilde M+f^*(A) =f^*(K_{X}+B+M+A), \] where $\tilde M+f^*(A)$ is big and nef. Hence for $k \in \mathbb{N}$, there exist an ample$/Z$ divisor $H_k\ge0$ and an effective divisor $E$ such that $\tilde{M}+f^*(A)\sim_\mathbb{R} H_k+\frac {1}{k}E$. For general $H_k$ and $k \gg 1$, $K_{\tilde X}+\tilde B+H_k+\frac{1}{k}E$ is sub-klt. Let \[ \Delta\coloneqq f_*(\tilde B+H_k+\frac 1 k E)\sim_\mathbb{R} B+M+A. \] Since \[ K_{\tilde X}+\tilde B+\tilde M+f^*(A) \sim_\mathbb{R} K_{\tilde X}+\tilde B+H_k+\frac{1}{k}E, \] we have \[ K_{\tilde X}+\tilde B+H_k+\frac{1}{k}E = f^*(K_{X}+\Delta), \] and $(X,\Delta)$ is klt. Suppose now that (ii) holds. By assumption, $B+M-C$ is $\mathbb{R}$-Cartier, and there exists $0<\epsilon\ll1$ such that $\epsilon(B-C+M)+A/2$ is ample. Let $H$ be a general ample divisor such that $\epsilon H\sim_{\mathbb{R}}\epsilon(B-C+M)+A/2$ and $(X,C+H)$ is klt. Thus $(X, (\epsilon C+(1-\epsilon) B)+(\epsilon H+(1-\epsilon)M))$ is g-klt with boundary part $\epsilon C+(1-\epsilon) B$ and nef part $\epsilon H+(1-\epsilon)M$. Besides, \begin{align*} &K_X+(\epsilon C+(1-\epsilon) B)+(\epsilon H+(1-\epsilon)M)\\ \sim_{\mathbb{R}}&\epsilon(K_X+C+H)+(1-\epsilon)(K_X+B+M)\\ \sim_{\mathbb{R}}&K_X+B+M+A/2. \end{align*} Applying case (i) to $(X, (\epsilon C+(1-\epsilon) B)+(\epsilon H+(1-\epsilon)M))$ with the ample divisor $A/2$, we obtain the claim in case (ii). \end{proof} \begin{remark} As a simple corollary, suppose that $X$ is $\mathbb{Q}$-factorial klt and $(X/Z,B+M)$ is g-lc; then there are countably many extremal rays $R/Z$ such that $(K_X+B+M)\cdot R<0$. \end{remark} A g-MMP for a g-dlt pair preserves g-dltness. \begin{lemma}\label{lem:dltpreserved} Let $(X/Z,B+M)$ be a $\mathbb{Q}$-factorial g-dlt pair.
Let $g: X\dashrightarrow Y/Z$ be either the divisorial contraction of a $(K_X+B+M)$-negative extremal ray or a $(K_X+B+M)$-flip. Let $B_Y=g_{*}(B)$ and $M_Y=g_{*}(M)$; then $(Y,B_Y+M_Y)$ is also $\mathbb{Q}$-factorial g-dlt. \end{lemma} \begin{proof} Fix a general ample$/Z$ divisor $H$. As $X$ is klt (see Remark \ref{rmk: klt}), by Lemma \ref{lem:glcklt} there exist $0<\epsilon\ll 1$ and $\Delta_{\epsilon}$ such that $\Delta_{\epsilon}\sim_{\mathbb{R}} B+M+\epsilon H$ and $(X,\Delta_{\epsilon})$ is $\mathbb{Q}$-factorial klt. Moreover, $g$ is also either a divisorial contraction of a $(K_X+\Delta_{\epsilon})$-negative extremal ray or a $(K_X+\Delta_{\epsilon})$-flip. By \cite[Propositions 3.36 and 3.37]{KM98}, $Y$ is $\mathbb{Q}$-factorial. To show that g-dltness is preserved, we use an argument similar to \cite[Lemma 3.44]{KM98}. Let $V$ be the closed subset $V\subset X$ in Definition \ref{def:g-dlt}. When $g$ is a divisorial contraction, set $V_Y= g(V \cup \operatorname{Exc}(g))$. Then $X \backslash (V\cup \operatorname{Exc}(g)) \simeq Y \backslash V_Y$, and thus $Y\backslash V_Y$ is smooth and $B_Y|_{Y\backslash V_Y}$ is a simple normal crossing divisor. By the negativity lemma, for any divisor $E$ we have $a(E, Y, B_Y+M_Y) \geq a(E, X, B+M) \geq 0$, and $a(E, Y, B_Y+M_Y)> a(E, X, B+M)$ when $\operatorname{Center}_X(E)\subset \operatorname{Exc}(g)$ (see \cite[Lemma 3.38]{KM98}). When $a(E, Y, B_Y+M_Y)=0$, we have $a(E, X, B+M)=0$ and $\operatorname{Center}_X(E)\not\subset \operatorname{Exc}(g)$. Thus $\operatorname{Center}_X(E) \not\subset V \cup \operatorname{Exc}(g)$ by the definition of g-dlt. Hence $\operatorname{Center}_Y(E) \not\subset V_Y$. This shows that $(Y, B_Y+M_Y)$ is g-dlt. When $g$ is a flip, let $c_1: X \to W/Z$ and $c_2: Y \to W/Z$ be the corresponding contractions, and let $L_1, L_2$ be the contraction loci of $c_1, c_2$, respectively. Set $V_X = V \cup L_1 \cup c_1^{-1}(c_2(L_2))$ and $V_Y = c_2^{-1}(c_1(V)) \cup L_2 \cup c_2^{-1}(c_1(L_1))$. Then $X \backslash V_X \simeq Y \backslash V_Y$, and thus $Y\backslash V_Y$ is smooth and $B_Y|_{Y\backslash V_Y}$ is a simple normal crossing divisor. By the negativity lemma, for any divisor $E$ we have $a(E, Y, B_Y+M_Y) \geq a(E, X, B+M) \geq 0$, and $a(E, Y, B_Y+M_Y)> a(E, X, B+M)$ when $\operatorname{Center}_X(E)\subset L_1$ or $\operatorname{Center}_Y(E)\subset L_2$ (see \cite[Lemmas 3.38 and 3.44]{KM98}). When $a(E, Y, B_Y+M_Y)=0$, we have $a(E, X, B+M)=0$ and $\operatorname{Center}_Y(E)\not\subset L_2 \cup c_2^{-1}(c_1(L_1))$. Besides, by the definition of g-dlt, $\operatorname{Center}_X(E) \not\subset V$. Thus $\operatorname{Center}_Y(E) \not\subset V_Y$. This shows that $(Y, B_Y+M_Y)$ is g-dlt. \end{proof} Let $f: X \to Y$ be a projective morphism of varieties and let $D$ be an $\mathbb{R}$-Cartier divisor on $X$. Then $D$ is called a \emph{very exceptional} divisor of $f$ (\cite[Definition 3.2]{Sho03}) if (1) $f(D) \subsetneq Y$, and (2) for any prime divisor $P$ on $Y$ there is a prime divisor $Q$ on $X$ which is not a component of $D$ but $f(Q)=P$. Notice that if $f$ is birational, then any exceptional divisor is very exceptional. The point is that the negativity lemma also holds for very exceptional divisors (\cite[Lemma 3.3]{Birkar12}).
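For instance, let $f: X \to Y$ be a fibration from a smooth projective surface onto a curve, and suppose that the fibre over some point $y \in Y$ is reducible with two components $C_1$ and $C_2$. Then $D=C_1$ is very exceptional: over $y$ one can take $Q=C_2$, while over any other point any component of the corresponding fibre works. In contrast, the whole fibre $D=C_1+C_2$ is not very exceptional, since every prime divisor mapping to $y$ is a component of $D$.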
The following proposition, which generalizes \cite[Theorem 3.5]{Birkar12} to the setting of g-lc pairs, is an easy consequence of the negativity lemma. \begin{proposition}\label{prop: termination for very exceptional divisor} Let $(X/Z, B+M)$ be a g-lc pair with $X$ klt such that $K_X+B+M\equiv D_1-D_2$, where $D_1\geq 0$ and $D_2\ge0$ have no common components. Suppose that $D_1$ is a very exceptional divisor over $Z$. Then any g-MMP$/Z$ on $K_X+B+M$ with scaling of an ample divisor$/Z$ either terminates with a Mori fiber space or contracts $D_1$ after finitely many steps. Moreover, if $D_2=0$, then the g-MMP terminates with a model $Y$ such that $K_Y+B_Y+M_Y \equiv 0/Z$. \end{proposition} \begin{proof} Let $A$ be an ample$/Z$ divisor. By Lemma \ref{lem:glcklt}, we can run a g-MMP$/Z$ with scaling of $A$.
Let $\nu_i=\inf\{t \in\mathbb{R}_{\geq0} \mid K_{X_i}+B_i+M_i+tA_i \text{~is~nef~over~}Z\}$ be the nef threshold at each step of the scaling, and set $\mu = \lim \nu_i$. If $\mu>0$, then the g-MMP is also a g-MMP on $K_X+B+M+\mu A$. By Lemma \ref{lem:glcklt}, there exists a boundary $\Delta$ such that $K_X+B+M+\mu A \sim_\mathbb{R} K_X+\Delta$ and $(X, \Delta)$ is klt with $\Delta$ big. Then the $(K_X+\Delta)$-MMP with scaling terminates by \cite[Corollary 1.4.2]{BCHM10}. In this case, without loss of generality, we can assume that $K_X+B+M$ is nef$/Z$. If $\mu=0$, then after finitely many steps we can assume that the g-MMP consists only of flips; thus $K_X+B+M$ is a limit of movable$/Z$ $\mathbb{R}$-Cartier divisors. In either case, for any prime divisor $S$ on $X$ and a very general curve $C\subset S$, we have $(K_X+B+M)\cdot C =(D_1-D_2)\cdot C\geq 0$. Since $D_1$ is a very exceptional divisor over $Z$, the negativity lemma (\cite[Lemma 3.3]{Birkar12}) gives $D_2-D_1 \geq 0$; as $D_1$ and $D_2$ have no common components, this implies that $D_1=0$. Hence the g-MMP contracts $D_1$ after finitely many steps. When $D_2=0$, on a model $Y$ such that $D_1$ is contracted, we have $K_Y+B_Y+M_Y \equiv 0/Z$. \end{proof} The proof of \cite[Lemma 4.5]{BZ16} gives the existence of g-dlt modifications. \begin{proposition}[G-dlt modification {\cite[Lemma 4.5]{BZ16}}]\label{prop: dlt} Let $(X, B + M)$ be a g-lc pair with data $\tilde X \xrightarrow{f} X \to Z$ and $\tilde M$. Then, perhaps after replacing $f$ with a higher resolution, there exist a $\mathbb{Q}$-factorial g-dlt pair $(X', B' + M')$ with data $\tilde X \xrightarrow{g} X' \to Z$ and $\tilde M$, and a projective birational morphism $\phi: X' \to X$ such that $K_{X'}+B'+M' = \phi^*(K_{X}+B+M)$. Moreover, each exceptional divisor of $\phi$ is a component of $\lfloor B'\rfloor$. We call $(X', B' + M')$ a g-dlt modification of $(X,B+M)$. \end{proposition} \begin{proof} We may assume that $f$ is a log resolution of $(X,B+M)$. Let $E_i$ be the irreducible exceptional divisors of $f$. We have \[ K_{\tilde{X}}+\tilde{B}+E+\tilde{M}=f^*(K_{X}+B+M)+E\equiv E/X, \] where $E=\sum a_iE_i\ge0$ and $a_i=a(E_i, X, B+M)$ is the g-log discrepancy. We can run a g-MMP$/X$ on $(K_{\tilde{X}}+\tilde{B}+E+\tilde{M})$ with scaling of an ample divisor. By Proposition \ref{prop: termination for very exceptional divisor}, the g-MMP terminates with a model $X'$ on which $E$ is contracted. Thus $K_{X'}+B'+M'=\phi^*(K_{X}+B+M)$. As $(\tilde{X}, \tilde{B}+E+\tilde{M})$ is g-dlt, by Lemma \ref{lem:dltpreserved}, $(X', B' + M')$ is also $\mathbb{Q}$-factorial g-dlt. By the construction of $E$, each exceptional divisor of $\phi$ is a component of $\lfloor B'\rfloor$. \end{proof} Although the MMP is expected to hold for g-pairs, the abundance conjecture, the finite generation of canonical rings and the non-vanishing conjecture all fail for g-pairs (see \cite[\S 3]{Birkarhuweak14} for discussions). However, for non-vanishing one can still ask the question in the numerical sense. In general, the abundance conjecture does not hold even in the numerical sense. \begin{example} Let $X$ be $\mathbb{P}^2$ blown up at $9$ points in sufficiently general position, and let $M=-2K_X$. Then $K_X+M=-K_X$ is nef, but there is no semiample divisor $N$ such that $K_X+M\equiv N$, \cite{bauer2004}. \end{example} We ask the following question. \begin{conjecture}[Weak non-vanishing \& weak abundance for g-pairs] Let $(X/Z,B+M)$ be a $\mathbb{Q}$-factorial NQC g-dlt pair.
Suppose that $K_X+B+M$ is nef. Then (1) there exists an effective $\mathbb{R}$-divisor $N$ such that $K_X+B+M\equiv N$; (2) there exist $0\le t\le 1$ and a semi-ample $\mathbb{R}$-divisor $D$ such that $K_X+B+tM\equiv D$. \end{conjecture} \subsection{Length of extremal rays}\label{subsection: length of extremal rays and its applications} The bound on the length of extremal rays also holds for g-pairs. Following \cite[Definition 1]{Sho09}, we define extremal curves. \begin{definition}[Extremal curve]\label{def: extremal curve} An irreducible curve $C$ on $X/Z$ is called \emph{extremal} if it generates an extremal ray $R = \mathbb{R}_+[C]$ of the Kleiman-Mori cone $\overline{\rm NE}(X/Z)$ and has the minimal degree for this ray (with respect to a fixed ample divisor). \end{definition} By definition, there exists an ample$/Z$ divisor $H$ such that \[ H \cdot C = \min\{H\cdot\Gamma \mid [\Gamma]\in R\}. \] We have the following result on the length of extremal rays. We thank Chen Jiang for showing us the following simple proof. \begin{proposition}[The length of extremal rays for g-pairs]\label{prop: length of extremal rays for g-pair} Let $X$ be a $\mathbb{Q}$-factorial klt variety, and let $(X/Z,B+M)$ be a g-lc pair. Then for any $(K_X+B+M)$-negative extremal ray $R/Z$, there exists a curve $C$ such that $[C]\in R$ and $$0<-(K_X+B+M)\cdot C \leq 2\dim X.$$ \end{proposition} \begin{proof} Let $C$ be any extremal curve such that $[C]\in R$. By definition, there exists an ample$/Z$ divisor $H$ such that \[ H \cdot C = \min\{H\cdot\Gamma \mid [\Gamma]\in R\}. \] By Lemma \ref{lem:glcklt}, for any $1 \gg \epsilon>0$ there exists a klt pair $(X, \Delta_\epsilon)$ with $K_X+\Delta_\epsilon \sim_\mathbb{R} K_X+B+M+\epsilon H$ such that $R$ is also a $(K_X+\Delta_\epsilon)$-negative extremal ray. By Kawamata's bound on the length of extremal rays, \cite{Kawamata91}, there exists a rational curve $\Gamma_\epsilon$ such that $[\Gamma_\epsilon]\in R$ and $$0<-(K_X+B+M+\epsilon H) \cdot \Gamma_\epsilon \leq 2\dim X.$$ By the definition of an extremal curve, we have $\frac{H \cdot C}{H \cdot \Gamma_{\epsilon}} \leq 1$; thus \begin{align*} -(K_X+B+M+\epsilon H) \cdot C &= -((K_X+B+M+\epsilon H) \cdot \Gamma_\epsilon )(\frac{H \cdot C}{H \cdot \Gamma_{\epsilon}})\\ & \leq 2\dim X. \end{align*} Letting $\epsilon \to 0$, we finish the proof. \end{proof} \subsection{Shokurov's polytope}\label{subsection: Shokurov polytope} In this subsection, we fix the following setup. Let $X$ be a $\mathbb{Q}$-factorial klt variety, and let $(X/Z, B + M)$ be an NQC g-lc pair with data $\tilde X \xrightarrow{f} X \to Z$ and $\tilde M$. Suppose $\tilde M=\sum_j \mu_j \tilde{M}_j$, where each $\tilde{M}_j$ is a nef$/Z$ $\mathbb{Q}$-Cartier divisor and $\mu_j\ge 0$. Let $M_j$ be the pushforward of $\tilde{M}_j$ on $X$, and let $B=\sum_i b_iB_i$ be the prime decomposition. Consider the vector space \[ V \coloneqq (\oplus_i \mathbb{R} B_i) \oplus (\oplus_j \mathbb{R} M_j). \] If $x_i, y_j \geq 0$, then $(X/Z, \sum_i x_i B_i + \sum_j y_j M_j)$ is a g-pair with data $\tilde X \to X$ and the nef divisor $\sum_j y_j \tilde M_j$. For $\Delta=\sum d_iB_i$ and $N=\sum \nu_j M_j$, set the Euclidean norm \[ ||B+M-\Delta-N||=(\sum_i (b_i-d_i)^2+\sum_j (\mu_j-\nu_j)^2)^{1/2}. \] The set of NQC g-lc pairs \[ \mathcal{L}(B,M)\coloneqq \{\sum_i x_i B_i + \sum_j y_j M_j \in V \mid (X, \sum_{i} x_i B_i + \sum_j y_j M_j) \text{~is g-lc~}\} \] is a rational polytope (possibly unbounded). In fact, we may assume that $f:\tilde{X}\to X$ is a log resolution of $(X/Z,B+M)$.
Given any $\Delta+N\in V$ with $\Delta \in (\oplus_i \mathbb{R}_{\ge0} B_i)$ and $N \in (\oplus_j \mathbb{R}_{\ge0} M_j)$, if we write \[ K_{\tilde{X}}+\tilde{\Delta}+\tilde{N}=f^{*}(K_X+\Delta+N), \] then the coefficients of $\tilde{\Delta}$ are \emph{rational} affine linear functions of the coefficients of $\Delta$ and $N$. The g-lc condition is exactly the condition that these coefficients lie in $[0,1]$. Hence these rational affine linear functions cut out a rational polytope. \begin{lemma}\label{lem:lengthbdd} Under the above notation, let $X$ be a $\mathbb{Q}$-factorial klt variety and $(X/Z,B+M)$ an NQC g-lc pair. (1) There exists a positive real number $\alpha>0$ such that for any extremal curve $\Gamma/Z$, if $(K_X+B+M)\cdot \Gamma>0$, then $(K_X+B+M)\cdot \Gamma>\alpha$. (2) There exists a positive number $\delta>0$ such that if $\Delta+N\in \mathcal{L}(B,M)$ satisfies $||(\Delta-B)+(N-M)||<\delta$ and $(K_X+\Delta+N)\cdot R\le 0$ for an extremal ray $R/Z$, then $(K_X+B+M)\cdot R\le 0$. \end{lemma} \begin{proof} (1) Because $\mathcal{L}(B, M)$ is a rational polytope, there exist $r,m \in \mathbb{N}$, $a_j \in \mathbb{R}_{\geq 0}$ and $D_j \in (\oplus_i \mathbb{Q}_{\ge0} B_i)$, $N_j \in (\oplus_j \mathbb{Q}_{\ge0} M_j)$ such that $\sum_{j=1}^r a_j=1$, \[ (K_X+B+M)=\sum_{j=1}^r a_j(K_X+D_j+N_j), \] $(X/Z,D_j+N_j)$ is NQC g-lc, and $m(K_X+D_j+N_j)$ is Cartier. By Proposition \ref{prop: length of extremal rays for g-pair} and the definition of an extremal curve (see Definition \ref{def: extremal curve}), if $(K_X+D_j+N_j)\cdot \Gamma<0$, then $(K_X+D_j+N_j)\cdot \Gamma\ge -2\dim X$. Hence, if $(K_X+D_j+N_j)\cdot \Gamma\le 1$, then, since $m(K_X+D_j+N_j)$ is Cartier, there are only finitely many possibilities for the intersection number $(K_X+D_j+N_j)\cdot \Gamma$. Therefore there are also only finitely many intersection numbers $(K_X+B+M) \cdot \Gamma <1$, and the existence of $\alpha$ is clear. (2) Let $\mathbb B(B+M, 1) \subset V$ be the ball centered at $B+M$ with radius $1$. Because $\mathcal{L}(B,M)$ is a rational polytope which may be unbounded, we choose a \emph{bounded} rational polytope $\mathcal{L}'(B,M) \supset \mathcal{L}(B,M) \cap \mathbb B(B+M, 1)$. First, choose $\delta<1$. Let $\Delta'+N'$ be the intersection point of the line connecting $B+M$ and $\Delta+N$ with the boundary of $\mathcal{L}'(B,M)$, such that $\Delta+N$ lies inside the interval between $\Delta'+N'$ and $B+M$ (if this line lies on the boundary of $\mathcal{L}'(B,M)$, we choose $\Delta'+N'$ to be the furthest intersection point). There exist $r,s \in \mathbb{R}_{\geq 0}$ such that $r+s=1$ and $\Delta+N=r(B+M)+s(\Delta'+N')$. Suppose that there is an extremal ray $R/Z$ such that $(K_X+\Delta+N)\cdot R\le 0$ and $(K_X+B+M)\cdot R> 0$. Let $\Gamma$ be an extremal curve of $R$. By (1), there exists $\alpha>0$ such that $(K_X+B+M)\cdot \Gamma>\alpha$. If \[ r>\frac{2s\dim X}{\alpha}, \text{~or equivalently~} r>\frac{2\dim X}{2\dim X+\alpha}, \] then by the proof of Proposition \ref{prop: length of extremal rays for g-pair} we have \begin{align*} (K_X+\Delta+N)\cdot \Gamma=&r(K_X+B+M)\cdot \Gamma +s(K_X+\Delta'+N')\cdot\Gamma\\ >&r\alpha-2s\dim X>0, \end{align*} which is a contradiction. From the above discussion, we can construct the desired $\delta$ as follows. Let $l>0$ be the minimal non-zero distance from $B+M$ to the boundary of $\mathcal{L}'(B,M)$. We choose $0<\delta \ll1$ such that \[ \frac{l-\delta}{l}>\frac{2\dim X}{2\dim X+\alpha}.
\] Then, by the choice of $l$, we see that $r \geq \frac{l'-\delta}{l'}\geq \frac{l-\delta}{l}$, where $l'$ is the distance from $B+M$ to $\Delta'+N'$. This $\delta$ satisfies the requirement. \end{proof} \begin{example}\label{rem:reasonNQC} Without the NQC assumption, Lemma \ref{lem:lengthbdd}(1) no longer holds true. Indeed, let $E$ be a general elliptic curve and $X=E\times E$. Fix a point $P\in E$, and consider the divisor classes $f_1=[\{P\}\times E]$, $f_2=[E\times\{P\}]$, $\delta=[\Delta]$, where $\Delta\subset E\times E$ is the diagonal. According to \cite[Lemma 1.5.4]{LazarsfeldPositivity1}, $N=f_1+\sqrt{2}f_2+(\sqrt{2}-2)\delta$ is nef. It is not hard to show that for any $\epsilon>0$ there exists a curve $C$ such that $0<N\cdot C<\epsilon$ (see \cite{Rnefmathoverflow0508}). \end{example} Let $\dim ((\oplus_i \mathbb{R} B_i) \oplus (\oplus_j \mathbb{R} M_j))=d$. For $k \in \mathbb{Q}$, let \[ [0, k]^d \coloneqq [0, k] \times \cdots \times [0,k] \subset \mathbb{R}^d.\] If $\{R_t\}_{t\in T}$ is a family of extremal rays of $\overline{\rm NE}(X/Z)$, set \[ \mathcal{N}_{T}=\{\Delta+N\in \mathcal{L}(B,M) \mid (K_X+\Delta+N)\cdot R_t\ge0~\forall~ t\in T\}. \] By the same argument as in \cite{Birkar11} (cf. the original proof in \cite{Sho09}), we have the following result for NQC g-lc pairs. \begin{proposition}\label{le: decomposition to nef Cartier divisors} Under the above notation, let $X$ be a $\mathbb{Q}$-factorial klt variety and $(X/Z,B+M)$ an NQC g-lc pair. Then the set \[ \mathcal{N}_{T} \cap [0, k]^d \] is a rational polytope for any $k\in\mathbb{Q}$. In particular, if $K_X+B+M$ is nef$/Z$, then there exist NQC g-lc pairs $(X, D_i+N_i)$ with nef part $N_i$ and boundary part $D_i$, and $a_i \in \mathbb{R}_{>0}$, such that \[ K_X+B+M=\sum_i a_i (K_X+D_i+N_i)\] with $\sum_{i} a_i=1$. Moreover, there exists $m\in \mathbb{N}$ such that $m(K_X+D_i+N_i)$ is a nef$/Z$ Cartier divisor for each $i$. \end{proposition} \begin{proof} By definition, $\mathcal{N}_{T} \cap [0, k]^d$ is just \[ \{\Delta+N\in \mathcal{L}(B,M) \cap [0, k]^d \mid (K_X+\Delta+N)\cdot R_t\ge0~\forall~ t\in T\}, \] and $\mathcal{L}(B,M) \cap [0, k]^d$ is a bounded rational polytope. Since $\mathcal{N}_{T} \cap [0, k]^d$ is compact, by Lemma \ref{lem:lengthbdd}(2) there are \[ (\Delta_1+N_1),\ldots,(\Delta_n+N_n)\in \mathcal{N}_{T}\cap [0, k]^d \] and $\delta_1,\ldots,\delta_n>0$ such that $\mathcal{N}_{T} \cap [0, k]^d$ is covered by \[ \mathcal{B}_i=\{\Delta+N \in \mathcal{N}_{T} \cap [0, k]^d \mid ||\Delta+N-(\Delta_i+N_i)||<\delta_i\}, \quad 1 \leq i \leq n. \] Moreover, if $\Delta+N\in\mathcal{B}_i$ with $(K_X+\Delta+N)\cdot R_t<0$ for some $t\in T$, then $(K_X+\Delta_i+N_i)\cdot R_t=0$. Let \[ T_i=\{t\in T \mid (K_X+\Delta+N)\cdot R_t<0 \text{~for some~}\Delta+N\in \mathcal{B}_i\}. \] Then $(K_X+\Delta_i+N_i)\cdot R_t=0$ for each $t\in T_i$. Moreover, we have \[ \mathcal{N}_T \cap [0, k]^d=\bigcap_{i=1}^n (\mathcal{N}_{T_i}\cap [0, k]^d). \] It suffices to show that each $\mathcal{N}_{T_i}\cap [0, k]^d$ is a rational polytope. By replacing $T$ with $T_i$ and $\Delta+N$ with $\Delta_i+N_i$, we may assume that there exists $\Delta+N$ such that $(K_X+\Delta+N)\cdot R_t=0$ for any $t\in T$. Because \[ \{\Delta+N \in \mathcal{L}(B,M) \mid (K_X+\Delta+N) \cdot R_t = 0 ~\forall~t\in T\} \] is a rational polytope, we can further assume that $\Delta+N$ is a rational point. We do induction on the dimension of the polytope. If $\dim (\mathcal{L}(B, M)\cap [0, k]^d)=1$, then the statement is straightforward to verify.
If $\dim(\mathcal{L}(B, M)\cap [0, k]^d)>1$, let $\mathcal{L}^1,\ldots,\mathcal{L}^p$ be the proper faces of $\mathcal{L}(B, M)\cap [0, k]^d$. By induction on the dimension, $\mathcal{N}_{T}^i \coloneqq \mathcal{N}_{T}\cap \mathcal{L}^i$ is a rational polytope. If $\Delta' + N' \in \mathcal{N}_T\cap [0, k]^d$, then the line connecting $\Delta+N$ and $\Delta' + N'$ intersects some $\mathcal{L}^i$ at a point $\Delta'' + N''$; moreover, we can assume that $\Delta' + N'$ lies inside the line segment between $\Delta+N$ and $\Delta'' + N''$. Because $(K_X+\Delta+N)\cdot R_t=0$, we have $(K_{X}+\Delta''+N'')\cdot R_t \geq 0$ for any $t\in T$. Thus $\Delta''+N'' \in \mathcal{N}_{T}^i$. This shows that $\mathcal{N}_T\cap [0, k]^d$ is the convex hull of $\Delta+N$ and all the $\mathcal{N}_T^i$, which is again a rational polytope. \end{proof} Let $D$ be a divisor on $X$. We say that a divisorial/flipping contraction $f: X \to Y$ is \emph{$D$-trivial} if $D \cdot C =0$ for any contracted curve $C$. \begin{lemma}\label{prop: K trivial} Let $(X/Z, (B+A)+M)$ be a $\mathbb{Q}$-factorial NQC g-lc pair with boundary part $B+A$ and nef part $M$. Suppose that $X$ is klt, $(X/Z,B+M)$ is g-lc and $K_X+B+M$ is nef. Then there exists $\delta_0>0$ such that for any $\delta \in (0, \delta_0)$, any sequence of g-MMP$/Z$ on $(K_X+B+\delta A+M)$ is $(K_X+B+M)$-trivial. \end{lemma} \begin{proof} By Proposition \ref{le: decomposition to nef Cartier divisors}, there exist NQC g-lc pairs $(X, B'_k +M_k')$ such that $K_X+B+M = \sum a_k (K_X+B'_k +M_k')$ with $\sum a_k =1$, $a_k >0$. Moreover, $K_X+B'_k +M_k'$ is a nef$/Z$ $\mathbb{Q}$-Cartier divisor for each $k$; thus there exists $m\in\mathbb{N}$ such that each $m(K_X+B'_k +M_k')$ is Cartier. Let \[ \alpha\coloneqq \min_k\{\frac{a_k}{m}\} \text{~and~} \delta_0\coloneqq\frac{\alpha}{2\dim X+\alpha}. \] Choose $\delta \in (0,\delta_0)$, and let $C$ be an extremal curve of a $(K_X+B+M+\delta A)$-negative extremal ray. If $(K_X+B+M) \cdot C>0$, then $(K_X+B+M) \cdot C \geq \alpha$, and by the length of extremal rays (Proposition \ref{prop: length of extremal rays for g-pair}), \begin{align*} &(K_X+B+M+\delta A)\cdot C\\ =&\delta(K_X+B+M+A)\cdot C+(1-\delta)(K_X+B+M)\cdot C\\ \ge& -2\delta\dim X+(1-\delta)\alpha>0, \end{align*} a contradiction. Thus any $(K_X+B+M+\delta A)$-flip or divisorial contraction, $f: X \dashrightarrow Y/Z$, is $(K_X+B+M)$-trivial. As each $K_X+B'_k +M_k'$ is nef, $f$ is also $(K_X+B'_k +M_k')$-trivial, and thus $m(K_Y+B'_{Y,k}+M'_{Y,k})\coloneqq mf_{*}(K_X+B'_k +M_k')$ is nef and Cartier. We can repeat the above argument on $Y$. This proves the claim. \end{proof} The dual of the above result is the following lemma. \begin{lemma}\label{prop: P trivial} Let $(X/Z, B+M)$ be a g-lc pair with $X$ a $\mathbb{Q}$-factorial klt variety. Suppose that $P$ is an NQC divisor$/Z$. Then for any $\beta \gg 1$, any sequence of g-MMP$/Z$ on $(K_X+B+M+\beta P)$ is $P$-trivial. \end{lemma} \begin{proof} Since $P$ is NQC, there exists $\alpha>0$ such that for any curve $C/Z$, if $P\cdot C\neq 0$, then $P\cdot C>\alpha$. Set $d=\dim X$ and choose $\beta>\frac{2d}{\alpha}$. Suppose that $C$ is an extremal curve such that $(K_X+B+M+\beta P)\cdot C<0$. If $P\cdot C\neq0$, then by the length of extremal rays (Proposition \ref{prop: length of extremal rays for g-pair}) we have \begin{align*} (K_X+B+M+\beta P)\cdot C&=(K_X+B+M)\cdot C+\beta P\cdot C\\ &\ge -2d+\beta\cdot\alpha>0. \end{align*} This is a contradiction, and thus $P\cdot C=0$. Just as in Lemma \ref{prop: K trivial}, by the $P$-triviality we can continue this process, and $\alpha$ is independent of the g-MMP. \end{proof}
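\begin{remark} To illustrate where such an $\alpha$ comes from: if $P\equiv\sum_i r_i P_i/Z$ with $r_i>0$ and $mP_i$ Cartier and nef$/Z$ for all $i$, then $P_i\cdot C\in\frac{1}{m}\mathbb{Z}_{\ge 0}$ for every curve $C/Z$, so either $P\cdot C=0$ or $P\cdot C\ge\frac{1}{m}\min_i\{r_i\}$; hence $\alpha=\frac{1}{2m}\min_i\{r_i\}$ works. By Example \ref{rem:reasonNQC}, no such $\alpha$ need exist for an arbitrary nef divisor. \end{remark}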
\subsection{G-MMP with scaling of an NQC divisor} \label{subsection: scalingnqcdiv} In this subsection, we define a g-MMP with scaling of a divisor $Q=E+P$, where $E$ is an effective divisor and $P$ is the pushforward of an NQC divisor. Notice that $P$ may not be an effective divisor. To emphasize this special property, we coin the name ``g-MMP with scaling of an NQC divisor'', although $Q$ also contains an effective part (i.e. $E$). \begin{lemma}\label{lem:MMPscalingNQC} Let $X$ be a $\mathbb{Q}$-factorial klt variety, and let $(X/Z,B+M)$ be an NQC g-lc pair with boundary part $B$ and nef part $M$. Let $E$ be an effective divisor on $X$ and $P$ the pushforward of an NQC divisor from a sufficiently high model; set $Q=E+P$. Suppose that $(X/Z,(B+E)+(M+P))$ is NQC g-lc with boundary part $B+E$ and nef part $M+P$, and that $K_X+(B+E)+(M+P)$ is nef$/Z$. Then either $K_X+B+M$ is nef$/Z$, or there is an extremal ray $R/Z$ such that $(K_X+B+M)\cdot R<0$ and $(K_X+B+M+\nu Q)\cdot R=0$, where \[ \nu\coloneqq\inf\{t\ge0 \mid K_X+B+M+tQ \text{~is nef}/Z\}. \] In particular, $K_X+B+M+\nu Q$ is nef$/Z$. \end{lemma} \begin{proof} Suppose that $K_X+B+M$ is not nef$/Z$. Let $\{R_i\}_{i\in\mathcal{I}}$ be the set of $(K_X+B+M)$-negative extremal rays$/Z$, and let $\Gamma_i$ be an extremal curve of $R_i$. As $\mathcal{L}(B, M)$ is a rational polytope, by Proposition \ref{prop: length of extremal rays for g-pair} there are $r_1,\ldots,r_s \in \mathbb{R}_{>0}$ and $m \in \mathbb{N}$ such that \[-2\dim X\le (K_X+B+M)\cdot\Gamma_i=\sum_{j=1}^s\frac{r_j n_{i,j}}{m}<0,\] where $-2m(\dim X)\le n_{i,j}\in\mathbb{Z}$. By Proposition \ref{le: decomposition to nef Cartier divisors}, there are $r'_1,\ldots,r'_t \in \mathbb{R}_{>0}$ and $m \in \mathbb{N}$ (after replacing the above $m$ by a sufficiently divisible multiple) such that \[(K_X+B+E+M+P)\cdot\Gamma_i=\sum_{k=1}^t \frac{r'_kn'_{i,k}}{m},\] where $n'_{i,k}\in\mathbb{Z}_{\geq 0}$. Since the $n_{i,j}$ are bounded from below and $\sum_{j}r_j n_{i,j}<0$, the $n_{i,j}$ are also bounded from above; hence $\{n_{i,j}\}$ is a finite set, and so is $\{\sum_{j}r_j n_{i,j}\}$. Moreover, $\sum_{k} r'_kn'_{i,k}$ belongs to a DCC set, where DCC stands for the descending chain condition. Let \[\nu_i\coloneqq\frac{-(K_X+B+M)\cdot \Gamma_i}{Q\cdot \Gamma_i}.\] Then \[\frac{1}{\nu_i}=\frac{\sum_{k} r'_kn'_{i,k}}{-\sum_{j}r_j n_{i,j}}+1\] belongs to a DCC set. Hence there exists a maximal element $\nu=\nu_{i_0}$ in the set $\{\nu_i\}_{i\in\mathcal{I}}$. Then \[(K_X+B+M+\nu Q)\cdot \Gamma_i\ge0\] for any $i\in\mathcal{I}$, and for the extremal curve $\Gamma_{i_0}$ we have $(K_X+B+M+\nu Q)\cdot \Gamma_{i_0}=0$. \end{proof} \begin{definition}[G-MMP with scaling of an NQC divisor]\label{defn:MMPsP} Under the assumptions and notation of Lemma \ref{lem:MMPscalingNQC}, we define the g-MMP with scaling of an NQC divisor as follows. (1) If $K_X+B+M$ is nef$/Z$, we stop. (2) If $K_X+B+M$ is not nef$/Z$, there exists an extremal ray $R$ as in Lemma \ref{lem:MMPscalingNQC}. By Lemma \ref{lem:glcklt}, we can contract $R$. \begin{itemize} \item If the contraction is a Mori fiber space, we stop. \item If the contraction is a divisorial (resp. flipping) contraction, let $X\dashrightarrow X_1$ be the corresponding contraction (resp. flip). Let $(X_1,B_1+M_1+\nu Q_1)$ be the birational transform of $(X, B+M+\nu Q)$. We can continue the previous process with $(X_1,B_1+M_1+\nu Q_1)$ in place of $(X, B+M+\nu Q)$.
In fact, since the contraction is $(K_X+B+M+\nu Q)$-trivial, $K_{X_1}+B_1+M_1+\nu Q_1$ is nef$/Z$.\end{itemize} By doing this, we obtain a (possibly infinite) sequence of varieties $X_i$ and corresponding nef thresholds $\nu_i\coloneqq \inf\{t\ge0 \mid K_{X_i}+B_i+M_i+tQ_i \text{~is nef}/Z\}.$ \end{definition} \begin{remark}\label{rk: decreasing sequence} By definition, the nef thresholds satisfy $\nu_i \geq \nu_{i+1}$ for each $i$. \end{remark} \subsection{Lifting the sequence of flips} \label{subsection: Lifting a sequence of flips with quasi-scaling} We use the same notation as in Section \ref{subsection: scalingnqcdiv}. Suppose that a sequence of g-MMP$/Z$ with scaling of $Q=E+P$ on $K_{X}+B+M$ consists only of flips, $X_i\dashrightarrow X_{i+1}/Z_i$, where $X_i \to Z_i$ is the flipping contraction. Let $h_0:(X'_0/Z,B'_0+M'_0)\to X_0$ be a g-dlt modification of $(X_0,B_0+M_0)$ (see Proposition \ref{prop: dlt}). Pick an ample$/Z_0$ divisor $H\ge0$ such that $$K_{X_0}+B_0+M_0+H\sim_{\mathbb{R}}0/Z_0,$$ and $(X_0,B_0+M_0+H)$ is g-lc. By Lemma \ref{lem:glcklt}, $(X_0,\Delta_0)$ is klt for some boundary $\Delta_0\sim_{\mathbb{R}} B_0+M_{0}+\epsilon H/Z_0$. According to the proof of Lemma \ref{lem:glcklt}, we may choose $\Delta_0\ge0$ such that $h_0^{*}(K_{X_0}+\Delta_0)=K_{X'_0}+\Delta'_0$ for some effective divisor $\Delta'_0$, and $({X'_0},\Delta'_0)$ is klt. Now run an MMP$/Z_0$ on $K_{X'_0}+\Delta'_0$ with scaling of $h_0^{*}(H)$. By \cite[Corollary 1.4.2]{BCHM10}, the MMP terminates with a log terminal model, $X'_0\dashrightarrow X'_1$. By construction, we have \begin{align*} (1-\epsilon)(K_{X'_0}+B'_{0}+M'_{0})&\sim_{\mathbb{R}} (1-\epsilon)h_0^{*}(K_{X_0}+B_{0}+M_{0})\\ &\sim_{\mathbb{R}}h_0^{*}(K_{X_0}+B_{0}+M_{0}+\epsilon H)/Z_0\\ &\sim_{\mathbb{R}}K_{X'_0}+\Delta'_0/Z_0. \end{align*} Thus this MMP is also a g-MMP$/Z_0$ on $K_{X'_0}+B'_{0}+M'_{0}$, and hence $K_{X'_1}+B'_{1}+M'_{1}$ is nef$/Z_0$. We define $Q_0'=h_0^{*}(Q_0)$ as follows. Suppose that \[ W \xrightarrow{p} X_0' \xrightarrow{h_0} X_0 \] is a sufficiently high log resolution such that $P_0=(h_0 \circ p)_* P_W$ for an NQC divisor $P_W$ on $W$. Then, by the negativity lemma, $P_W+F=(h_0 \circ p)^*P_0$ with $F \geq 0$. Set \begin{equation}\label{eq: E', P'} E_0'\coloneqq h_0^*E_0+p_*F \text{~and~} P_0' \coloneqq p_*P_W, \end{equation} and \begin{equation}\label{eq: Q'} Q_0'\coloneqq E_0'+P_0'=h_0^*(E_0)+p_*(p^*\circ h_0^*(P_0))=h_0^*(E_0+P_0)=h_0^*(Q_0). \end{equation} Because $\rho(X_0/Z_0)=1$, we have $Q_0\equiv aH/Z_0$ for some $a>0$. Thus the g-MMP$/Z_0$ $X'_0 \dashrightarrow X'_1$ is also a g-MMP$/Z_0$ on $K_{X'_0}+B'_{0}+M'_{0}$ with scaling of $Q_0'$. Because $X_0$ and $X_1$ are isomorphic in codimension $1$ and $K_{X_1}+B_{1}+M_{1}$ is ample$/Z_0$, $({X_1}, B_{1}+M_{1})$ is a g-log canonical model of $({X_0}, B_0+M_0)$ over $Z_0$ (here a g-log canonical model means a g-log terminal model with $K_{X_1}+B_{1}+M_{1}$ ample$/Z_0$). Thus there exists a morphism $h_1: X'_1\to X_1$ such that $K_{X'_1}+B'_{1}+M'_{1}=h_1^*(K_{X_1}+B_{1}+M_{1})$, which is also a g-dlt modification of $({X_1}, B_{1}+M_{1})$. We can continue the above process with $X_1, X'_1$, etc. in place of $X_0, X_0'$, etc.
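The construction can be summarized in the following diagram, where the vertical maps are the g-dlt modifications $h_i: X'_i \to X_i$, the lower row is the original sequence of flips, and each upper map is itself a composition of finitely many steps of a g-MMP: \[ \begin{array}{ccccccc} X'_0 & \dashrightarrow & X'_1 & \dashrightarrow & X'_2 & \dashrightarrow & \cdots\\ \downarrow & & \downarrow & & \downarrow & & \\ X_0 & \dashrightarrow & X_1 & \dashrightarrow & X_2 & \dashrightarrow & \cdots \end{array} \]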
From the above, we obtain a sequence of g-MMP$/Z$ on $(K_{X'_0}+B'_{0}+M'_{0})$ with scaling of $Q_0'$. The reason is as follows. A priori, the g-MMP $X'_i \dashrightarrow X'_{i+1}$ with scaling of $Q_i'$ is over $Z_i$ \emph{rather than over $Z$}. We denote this g-MMP$/Z_i$ by \[ X_i'=Y_0 \dashrightarrow Y_1 \dashrightarrow\cdots \dashrightarrow Y_{k}=X_{i+1}', \] and let $\nu_j'$, $0 \leq j \leq k$, be the corresponding nef thresholds$/Z_i$. Since $K_{X_i'}+B_i'+M_i'+\nu_iQ_i' \equiv 0/Z_i$, we have $\nu_j'=\nu_i$ for $0 \leq j \leq k-1$. Thus \[ \nu_j' = \inf\{t \mid K_{Y_j}+B_{Y_j}+M_{Y_j}+tQ_{Y_{j}} \text{~is nef over~} Z\} \] for $0 \leq j \leq k-1$. This shows that the g-MMP with scaling of $Q_i'$ is also over $Z$. In this way, we lift the original g-MMP with scaling to a new g-MMP with scaling. The advantage is that each $(X'_i, B_i'+M'_i)$ is $\mathbb{Q}$-factorial and g-dlt. \section{Special termination for g-MMP with scaling}\label{sec: special termination} It is crucial to observe that some termination results still hold for the g-MMP with scaling of an NQC divisor. The following is a variation of \cite[Theorem 1.9]{Birkar12}. \begin{theorem}\label{thm: gmmtermination} Under the assumptions and notation of Definition \ref{defn:MMPsP}, suppose that there is a g-MMP with scaling of $Q$, and let $\mu = \lim_{j \to \infty} \nu_j$. If $\mu\neq \nu_j$ for every $j$, and $(X/Z,(B+\mu E)+(M+\mu P))$ has a g-log minimal model, then the g-MMP terminates. \end{theorem} This theorem is proved in several steps. \begin{proposition}\label{prop: a special termination 1} Under the above notation, Theorem \ref{thm: gmmtermination} holds if there is a birational map $\phi:X\dashrightarrow Y/Z$ between $\mathbb{Q}$-factorial varieties satisfying: \begin{enumerate} \item the induced map $X_i\dashrightarrow Y$ is an isomorphism in codimension one for every $i$, \item $(Y/Z,(B_Y+\mu E_Y)+(M_Y+\mu P_Y))$ is a g-log minimal model of $(X/Z,(B+\mu E)+(M+\mu P))$, \item there is a reduced divisor $A \geq 0$ on $X$ whose components are movable divisors and generate $N^1(X/Z)$, \item there exists $\epsilon>0$ such that $(X/Z,(B+E+\epsilon A)+(M+P))$ is g-dlt with boundary part $(B+E + \epsilon A)$ and nef part $(M+P)$, \item there exists $\delta>0$ such that $(Y/Z, (B_Y+(\mu+\delta)E_Y + \epsilon A_Y) + (M_Y+(\mu+\delta)P_Y))$ is g-dlt with boundary part $(B_Y+(\mu+\delta) E_Y + \epsilon A_Y)$ and nef part $(M_Y+(\mu+\delta) P_Y)$. \end{enumerate} \end{proposition} \begin{proof} Suppose that the g-MMP does not terminate. Pick $j\gg 1$ so that $\nu_{j-1} >\nu_{j}$. Then $X\dashrightarrow X_j$ is a partial g-MMP$/Z$ on $K_{X}+B+M+\nu_{j-1}Q$. It is also a partial g-MMP$/Z$ on $K_{X}+B+M+\nu_{j-1}Q+\epsilon A$ after replacing $\epsilon$ with a smaller number. In particular, $(X_j/Z,(B_j+\nu_{j-1}E_j+\epsilon A_j)+(M_j+\nu_{j-1}P_j))$ is $\mathbb{Q}$-factorial g-dlt, where $A_j$ is the birational transform of $A$ on $X_j$. As $j \gg1$, after reindexing we may assume that the g-MMP consists only of flips$/Z$, starting with $(X_1/Z,B_1+M_1)=(X/Z,B+M)$. Moreover, by replacing $B+M$ with $B+M+\mu Q$, we may assume that $\mu=0$. Possibly after choosing a smaller $\epsilon$ again, by Lemma \ref{prop: K trivial} we may assume that any sequence of g-MMP$/Z$ on $(K_{X_{j}}+(B_j+\nu_{j-1} E_j + \epsilon A_j)+(M_j+\nu_{j-1} P_j))$ is a sequence of $(K_{X_j}+(B_j+\nu_{j-1}E_j)+(M_j+\nu_{j-1} P_j))$-flops. By assumption, $K_{X_j}+(B_j+\nu_{j-1} E_j +\epsilon A_j)+(M_j+\nu_{j-1}P_j)$ is a limit of movable$/Z$ $\mathbb{R}$-divisors.
Since the components of $A_j$ generate $N^1(X_j/Z)$, there exist a general ample$/Z$ divisor $H$ and an effective divisor $H' < A_j$ such that $A_j \sim_\mathbb{R} H+H'$, and $(X_j/Z, (B_j+\nu_{j-1} E_j+\epsilon H' + \epsilon H)+(M_j+\nu_{j-1} P_j))$ is g-dlt. By Lemma \ref{lem:glcklt}, there exists a klt pair $(X_j, \Delta_j)$ such that \[ K_{X_j}+\Delta_j \sim_\mathbb{R} K_{X_j}+(B_j+\nu_{j-1} E_j + \epsilon H' + \epsilon H)+(M_j+\nu_{j-1} P_j). \] By \cite{BCHM10}, we may run an MMP$/Z$ with scaling of an ample divisor on $K_{X_j}+\Delta_j$, which is the same as an MMP$/Z$ on $(K_{X_j}+(B_j+\nu_{j-1} E_j + \epsilon A_j)+(M_j+\nu_{j-1} P_j))$. It terminates with a g-log minimal model $(T/Z, (B_T+\nu_{j-1} E_T+\epsilon A_T)+ (M_T+ \nu_{j-1} P_T))$. Notice that $X_j$ and $T$ are isomorphic in codimension $1$, and $K_T+(B_T+\nu_{j-1} E_T)+ (M_T+ \nu_{j-1} P_T)$ is nef$/Z$. Again, since the components of $A_T$ generate $N^1(T/Z)$, we can choose $0 < D_T \leq A_T$ such that $K_{T}+B_T+M_T+\nu_{j-1}Q_T+\epsilon D_T$ is ample$/Z$. Moreover, $K_T+B_T+M_T+\nu_{j-1} Q_T$ is nef$/Z$ by the choice of $\epsilon$. For the same reason, possibly by choosing smaller $\nu_j$ and $\epsilon$, we can run a g-MMP$/Z$ on $(K_Y+B_Y+M_Y+\nu_{j-1}Q_Y+\epsilon D_Y)$ with scaling of an ample divisor, and get a g-log minimal model, $(Y', B_{Y'}+M_{Y'}+\nu_{j-1} Q_{Y'}+\epsilon D_{Y'})$, such that both $K_{Y'}+B_{Y'}+M_{Y'}+\nu_{j-1} Q_{Y'}+\epsilon D_{Y'}$ and $K_{Y'}+B_{Y'}+M_{Y'}$ are nef (see Lemma \ref{prop: K trivial}). Because $Y$ and $Y'$ are $\mathbb{Q}$-factorial varieties which are isomorphic in codimension $1$ and $K_{T}+B_T+M_T+\nu_{j-1}Q_T+\epsilon D_T$ is ample$/Z$, we have $Y'=T$. Hence, both \[ K_T+B_T+M_T+\nu_{j-1} Q_T \text{~and~} K_T+B_T+M_T \] are nef$/Z$. Since $\nu_{j-1}>\nu_{j}>\mu=0$, $K_T+B_T+M_T+\nu_{j}Q_T$ is also nef$/Z$. Let $r: U \to X_j$ and $s: U \to T$ be a common log resolution. By the negativity lemma, we have \begin{align*} r^*(K_{X_{j}}+B_{j}+M_{j}) &> s^*(K_T+B_T+M_T),\\ r^*(K_{X_{j}}+B_{j}+M_{j}+\nu_{j-1} Q_{j}) &= s^*(K_T+B_T+M_T+\nu_{j-1} Q_T),\\ r^*(K_{X_{j}}+B_{j}+M_{j}+\nu_{j} Q_{j}) &= s^*(K_T+B_T+M_T+\nu_{j} Q_T). \end{align*} Subtracting the last two equalities gives $r^*Q_{j}=s^*Q_T$; substituting this into the second equality then forces $r^*(K_{X_{j}}+B_{j}+M_{j})=s^*(K_T+B_T+M_T)$, contradicting the first relation. \end{proof} \begin{proof}[Proof of Theorem \ref{thm: gmmtermination}] Let $(Y/Z,(B_Y+\mu E_Y)+(M_Y+\mu P_Y))$ be the g-log minimal model of $(X/Z,(B+\mu E)+(M+\mu P))$ with corresponding map $\phi: X \dashrightarrow Y$. As in Proposition \ref{prop: a special termination 1}, we may assume that $\mu=0$ and that the g-MMP$/Z$ only consists of flips, $X_i\dashrightarrow X_{i+1}$. Because there are finitely many g-lc centers, we can assume that no g-lc centers are contracted in the sequence. Moreover, choose $\nu_{i-1}>\nu_{i}$; then for any birational morphism $f: W \to X_i$, we can write \begin{equation}\label{eq: pullback Q} f^*Q_i = f^*(E_i+P_i)=\tilde E_i+P_{W,i}+\Theta_{W,i}, \end{equation} with $\tilde E_i$ the birational transform of $E_i$. The meanings of $P_{W,i}, \Theta_{W,i}$ are as follows (cf. \eqref{eq: E', P'}). By definition, we can assume that $P_i=q_*P'$, where $q: W' \to X_i$ is a sufficiently high model and $P'$ is an NQC divisor. By taking a common log resolution, we can assume that there also exists a morphism $p: W' \to W$. Then we set $P_{W,i}=p_*P'$. By $E_i \geq 0$ and the negativity lemma, $\Theta_{W,i} \geq 0$ is an $f$-exceptional divisor. Because $(X_{i}, B_{i}+M_{i}+\nu_{i-1}Q_{i})$ is g-lc, there is no g-lc place of $(X_{i}, B_{i}+M_{i}+\nu_{i}Q_{i})$ contained in $\operatorname{Supp} \Theta_{W,i}$.
We may replace $(X/Z,B+M)$ with $(X_i/Z,B_i+M_i)$ and $Q$ with $\nu_iQ_i$. Step 1. Let $f: W \to X$ and $g: W \to Y$ be a sufficiently high common log resolution of $(X/Z,(B+E)+(M+P))$ and $(Y/Z,B_Y+M_Y+Q_Y)$. We have \begin{equation}\label{eq: common resolution} \begin{split} F &\coloneqq f^*(K_{X}+B+M) -g^*(K_Y+B_Y+M_Y), \text{~and}\\ F' &\coloneqq K_W+B_W+M_W - f^*(K_{X}+B+M), \end{split} \end{equation} where $B_W$ is defined as in \eqref{eq: B_Y}. Then $F$ and $F'$ are effective exceptional divisors over $Y$ and $X$, respectively. By the definition of g-log minimal model, $F'$ is also exceptional over $Y$. Let $E_W$ be the birational transform of $E$ on $W$, and let $P_W$ be the nef$/Z$ divisor corresponding to $P$ on $W$. Set $Q_W=P_W+E_W$. We have \begin{equation}\label{eq: W} K_W+B_W+M_W \equiv F+F'/Y. \end{equation} By Proposition \ref{prop: termination for very exceptional divisor}, we can run a g-MMP$/Y$ on $(K_W+B_W+M_W)$ with scaling of an ample divisor, and it terminates with a model $Y'$ such that $F+F'$ is contracted. Thus $(Y',B_{Y'}+M_{Y'})$ is a g-dlt modification of $(Y, B_Y+M_Y)$. Step 2. We prove that $\phi: X\dashrightarrow Y$ does not contract any divisor. Otherwise, let $D$ be a prime divisor on $X$ which is contracted by $\phi$, and let $D_W$ be the birational transform of $D$ on $W$. Since $a(D,X,B+M)<a(D,Y,B_Y+M_Y)$, $D_W$ is a component of $F$. In Step 1, the g-MMP$/Y$ on $(K_W+B_W+M_W)$ contracts $D_W$. We will get a contradiction as follows. Let $\nu_i$ be sufficiently small so that $W\dashrightarrow Y'$ is a partial g-MMP$/Z$ on $K_W+B_W+M_W+\nu_i Q_W$. Since $(X/Z,B+M+\nu_iQ)$ is g-lc, $$K_W+B_W+M_W+\nu_i Q_W-f^{*}(K_X+B+M+\nu_i Q)$$ is effective and exceptional over $X$. On the other hand, since $X\dashrightarrow X_i$ is a partial g-MMP$/Z$ on $K_X+B+M+\nu_i Q$, we have $$f^{*}(K_X+B+M+\nu_i Q)\ge N,$$ where \[ N=p_{*}q^{*}(K_{X_{i}}+B_i+M_i+\nu_i Q_i) \] for some common log resolution $p:W'\to W$, $q:W'\to X_i$. Since $K_{X_{i}}+B_i+M_i+\nu_i Q_i$ is nef$/Z$, $N$ is a pushforward of a nef divisor. In particular, $N$ is a limit of movable$/Z$ divisors. We have \[ K_W+B_W+M_W+\nu_i Q_W=N+G, \] where $G \geq 0$ is exceptional over $X$. Here we use the fact that $X$ and $X_i$ are isomorphic in codimension one. Since $G$ is exceptional$/X$, $D_W$ is not a component of $G$. For the g-MMP in Step 1, if $D_W$ were contracted by an extremal contraction of a curve $C$, we would have $(K_W+B_W+M_W+\nu_i Q_W) \cdot C<0$. But $N \cdot C \geq0$ and $G \cdot C\geq 0$. Thus $D_W$ cannot be contracted. This is a contradiction. Step 3. From $W$, we construct a g-dlt modification of $(X, B+M)$. Let \[ F'' \coloneqq K_W+(B_W+E_W)+(M_W+P_W) -f^*(K_{X}+(B+E)+(M+P)), \] which is effective and exceptional over $X$. We run a g-MMP$/X$ on $K_W+(B_W+E_W)+(M_W+P_W)$ which terminates with a model $h: X' \to X$ and contracts $F''$. This $h$ is a g-dlt modification of $(X, (B+E)+(M+P))$. Let \[ K_{X'}+(B'+E')+(M'+P') =h^*(K_{X}+(B+E)+(M+P)), \] where $E'$ is the strict transform of $E$ and $P'$ is the pushforward of $P_W$. By the assumption made in the paragraph before Step 1, for \[ Q'\coloneqq h^*(E+P)=E'+P'+\Theta' \] as in \eqref{eq: pullback Q}, there is no g-lc place of $({X}, B+M)$ contained in $\operatorname{Supp}\Theta'$. Thus $\Theta'=0$. Hence $h$ is also a g-dlt modification of $({X}, B+M)$, that is, \[ K_{X'}+B'+M' =h^*(K_{X}+B+M). \] In particular, $h$ extracts all the g-lc places of $(X, B+M)$ on $W$.
Because $\phi^{-1}: Y \dashrightarrow X$ can only extract g-lc places of $(X,B+M)$ (see Remark \ref{rmk: extract lc places}), we see that these divisors are all on $X'$. Step 4. By Subsection \ref{subsection: Lifting a sequence of flips with quasi-scaling}, we can lift the sequence $X_i\dashrightarrow X_{i+1}/Z_i$ to a g-MMP$/Z$ on $K_{X'}+B'+M'$ with scaling of $Q'$. Hence, each $({X_i'}, B_i'+M_i')$ is $\mathbb{Q}$-factorial and g-dlt. Step 5. Possibly by replacing $X'$ with $X'_i$ for $i\gg 1$, we show that $X'$ and $Y'$ are also isomorphic in codimension $1$, and that $(Y'/Z,B_{Y'}+M_{Y'})$ is a g-log minimal model of $(X/Z,B+M)$. First, we show that $Y'\dashrightarrow X'$ does not contract any divisor. Suppose that $D\subset Y'$ is a prime divisor which is exceptional over $X'$. If $D$ is on $Y$, then $a(D,X,B+M)=0$ as $D$ is exceptional over $X$. Thus, by Step 3, $D$ is on $X'$, a contradiction. If $D$ is exceptional over $Y$, then, as $(Y',B_{Y'}+M_{Y'})$ is a g-dlt modification of $(Y, B_Y+M_Y)$, we have $a(D,Y,B_Y+M_Y)=0$. This implies that $a(D,X,B+M)=0$, and again we get a contradiction from Step 3. Next, we show that $X' \dashrightarrow Y'$ does not contract any divisor. Possibly by replacing $X'$ with $X'_i$ for $i\gg 1$, we may assume that the g-MMP$/Z$ on $(K_{X'}+B'+M')$ with scaling of $Q'$ only consists of flips. By using the same method as in Step 2, it suffices to show that $(Y'/Z,B_{Y'}+M_{Y'})$ is a g-log minimal model of $(X/Z,B+M)$. Thus we only need to compare g-log discrepancies. Suppose that $D\subset X'$ is a prime divisor which is exceptional over $Y'$. Since $X$ and $Y$ are isomorphic in codimension $1$, $D$ is exceptional over $X$. Hence $a(D, X', B'+M')=a(D, X, B+M)=0$. If $a(D,Y',B_{Y'}+M_{Y'})=0$, then $a(D,Y,B_Y+M_Y)=0$. Thus the birational transform of $D$ cannot be a component of $F+F'$ in \eqref{eq: common resolution}, and it cannot be contracted over $Y'$. This is a contradiction. Therefore, $a(D,Y',B_{Y'}+M_{Y'})>0$, which implies that $(Y'/Z,B_{Y'}+M_{Y'})$ is a g-log minimal model of $(X/Z,B+M)$. Step 6. Let $A\ge0$ be a reduced divisor on $W$ whose components are general ample$/Z$ divisors that generate $N^1(W/Z)$. Since $X'$ is obtained by running some g-MMP on $K_W+B_W+M_W+Q_W$, this g-MMP is also a partial g-MMP on $K_W+B_W+M_W+Q_W+\epsilon A$ for any $1 \gg \epsilon>0$. In particular, $(X'/Z,(B'+E'+\epsilon A')+(M'+P'))$ is g-dlt, where $A'$ is the birational transform of $A$. For similar reasons, we can choose $1\gg\epsilon,\delta>0$ so that $(Y'/Z,(B_{Y'}+\delta E_{Y'}+\epsilon A_{Y'})+(M_{Y'}+\delta P_{Y'}))$ is also g-dlt. Now, by Proposition \ref{prop: a special termination 1}, the g-MMP$/Z$, $X'_i \dashrightarrow X'_{i+1}$, terminates. This implies that the original g-MMP$/Z$, $X_i \dashrightarrow X_{i+1}$, also terminates. This finishes the proof. \end{proof} We introduce the notion of difficulty for g-pairs before proving the special termination. \begin{definition}[Difficulty for g-pairs] Let $(X,B+M)$ be a $\mathbb{Q}$-factorial g-dlt pair with data $\tilde X \xrightarrow{f} X \to Z$ and $\tilde M$. For $\mathbb{R}$-divisors $B, \tilde M$, assume that $B=\sum b_j B_j$ is the prime decomposition of $B$, and $\tilde M=\sum \mu_i \tilde M_i$ with $\tilde M_i$ a nef$/Z$ Cartier divisor for each $i$. Let $\bm{b}=\{b_j\}$, $\bm{\mu}=\{\mu_i\}$.
Recall that \[\mathbb{S}(\bm{b},\bm{\mu})= \{1-\frac{1}{m}+\sum_{j} \frac{r_jb_j}{m}+\sum_{i}\frac{s_i \mu_i}{m} \leq 1\mid m\in\mathbb{Z}_{>0},r_j,s_i\in\mathbb{Z}_{\ge0}\}\cup\{1\}.\] Let $S$ be a g-lc center of $(X,B+M)$; then \[ K_{S}+B_{S}+M_{S} = (K_{X}+B+M)|_{S} \] is defined as in Proposition \ref{prop: intersection on g-lc centers}. The \emph{difficulty} of the g-pair $(X,B+M)$ is defined to be \begin{align*} &d_{\bm{b},\bm{\mu}}(S,B_S+M_S)\\ =&\sum_{\alpha\in \mathbb{S}(\bm{b},\bm{\mu})}\#\{E \mid a(E,S,B_S+M_S)<1-\alpha,\operatorname{Center}_{S}(E)\nsubseteq \lfloor B_S \rfloor\}\\ &+\sum_{\alpha\in\mathbb{S}(\bm{b},\bm{\mu})}\#\{E \mid a(E,S,B_S+M_S)\le1-\alpha, \operatorname{Center}_{S}(E)\nsubseteq \lfloor B_S \rfloor\}. \end{align*} \end{definition} \begin{remark} Notice that $d_{\bm{b},\bm{\mu}}(S,B_S+M_S)$ is slightly different from \cite[Definition 4.2.9]{Fujino07} (cf. \cite[7.5.1 Definition]{Fli92}): in the second summand, we also include those $E$ whose g-log discrepancy \emph{equals} $1-\alpha$. By doing this, the standard argument can be simplified (cf. \cite[Proposition 4.2.14]{Fujino07} and the argument below). Just as for log pairs, $d_{\bm{b},\bm{\mu}}(S,B_S+M_S)<+\infty$ (cf. \cite[4.12.2 Lemma]{Fli92}). \end{remark} \begin{theorem}\label{prop: special termination 2} Under the assumptions and notation of Definition \ref{defn:MMPsP}, we run a g-MMP$/Z$ with scaling of $Q$ on $K_X+B+M$. Assume the existence of g-log minimal models for pseudo-effective NQC g-lc pairs in dimensions $\le \dim X-1$. Suppose that $\nu_i>\mu$ for every $i$, where $\mu=\lim \nu_i$ (in particular, the g-MMP is an infinite sequence). Then, after finitely many steps, the flipping locus is disjoint from the birational transform of $\lfloor B \rfloor$. \end{theorem} \begin{proof} We follow the proof of \cite[Theorem 4.2.1]{Fujino07}. Because the number of g-lc centers of a g-lc pair is finite, we may assume that, after finitely many steps, the flipping locus contains no g-lc centers. Thus $\phi_i: X_i\dashrightarrow X_{i+1}$ induces an isomorphism of $0$-dimensional g-lc centers for every $i$. We show that $\phi_i$ induces an isomorphism of every g-lc center by induction on the dimension $d$ of the g-lc centers. The theorem then follows from the case $d=\dim X -1$. Now, for each $d \leq k-1$, we assume that $\phi_i$ induces an isomorphism of every $d$-dimensional g-lc center. Let $S$ be a $k$-dimensional g-lc center of $(X,B+M)$, and let $S_i$ be the birational transform of $S$ on $X_i$. By the adjunction formula (Proposition \ref{prop: intersection on g-lc centers}), $(K_{X_i}+B_i+M_i)|_{S_i}=K_{S_i}+B_{S_i}+M_{S_i}$, and the coefficients of $B_{S_i}$ belong to the set $\mathbb{S}(\bm{b},\bm{\mu})$. By the induction hypothesis, after finitely many flips, $\phi_i$ induces an isomorphism on $\lfloor B_{S_i}\rfloor$, and thus $\operatorname{Center}_{S_i}E\subseteq \lfloor B_{S_i}\rfloor$ if and only if $\operatorname{Center}_{S_{i+1}}E\subseteq \lfloor B_{S_{i+1}}\rfloor$. By the negativity lemma, $a(E,S_i,B_{S_i}+M_{S_i})\le a(E,S_{i+1},B_{S_{i+1}}+M_{S_{i+1}})$. Hence, \[ d_{\bm{b},\bm{\mu}}(S_i,B_{S_i}+M_{S_i})\ge d_{\bm{b},\bm{\mu}}(S_{i+1},B_{S_{i+1}}+M_{S_{i+1}}). \] Moreover, if $S_i$ and $S_{i+1}$ are not isomorphic in codimension $1$, then the above inequality is strict. In fact, if there exists a divisor $E \subset S_i$ which is not on $S_{i+1}$, then $E$ is counted by the second summand in $d_{\bm{b},\bm{\mu}}(S_i,B_{S_i}+M_{S_i})$, while it is not counted in $d_{\bm{b},\bm{\mu}}(S_{i+1},B_{S_{i+1}}+M_{S_{i+1}})$.
The argument is similar when $E$ is on $S_{i+1}$ but not on $S_i$. For $i\gg 1$, we can assume that the difficulties are constant, and thus $S_i$ and $S_{i+1}$ are isomorphic in codimension $1$. This is the advantage of introducing the above difficulty: in \cite[Proposition 4.2.14]{Fujino07}, the case that $S_i \to T_i$ is a divisorial contraction but $S_{i+1} \to T_i$ is a small contraction cannot be excluded by the difficulty therein. Let $T$ be the normalization of the image of $S_1$ (hence the image of any $S_i$) in $Z$, and let $T_i$ be the normalization of the image of $S_i$ in $Z_i$. In general, $S_i\dashrightarrow S_{i+1}/T_i$ may not be a $(K_{S_i}+B_{S_i}+M_{S_i})$-flip$/T$. However, we can use the same argument as in Subsection \ref{subsection: Lifting a sequence of flips with quasi-scaling} to construct a sequence of g-MMP$/T$ with scaling of an NQC divisor over some g-dlt modifications, $S_i' \dashrightarrow S_{i+1}'$. For simplicity, we just sketch the argument below. Because $K_{X_1}+B_1+M_1+\nu_1 Q_1 \equiv 0/Z_1$, we have \[ K_{S_1}+B_{S_1}+M_{S_1}+\nu_1 Q_{S_1} \equiv 0/T_1, \] where $Q_{S_1} = Q_1|_{S_1}$ is defined inductively as follows (cf. \eqref{eq: E', P'} and \eqref{eq: Q'}). Suppose that $\pi: \tilde X_1 \to X_1$ is a model of $X_1$ such that $P_1=\pi_*\tilde P_1$ with $\tilde P_1$ an NQC divisor. Then we have $\pi^*P_1 = \tilde P_1 +F$ with $F \geq 0$. Notice that $S_1$ is an irreducible component of $V_1 \cap \cdots \cap V_{n-k}$, where $V_i \subset \lfloor B_1 \rfloor$ (see Proposition \ref{prop: intersection on g-lc centers}). We first define $Q_{V_1}$. Let $\pi$ also denote the induced morphism $\tilde V_1 \to V_1$. Then $\pi^*(P_1|_{V_1}) = \tilde P_1|_{\tilde V_1}+F|_{\tilde V_1}$. Here $\tilde P_1|_{\tilde V_1}$ is still an NQC divisor, and $F|_{\tilde V_1}$ is an effective divisor. Because $\nu_1>0$, no component of $E_1$ is contained in $\lfloor B_1 \rfloor$, and thus $E_1|_{V_1} \geq 0$. Now set \[ E_{V_1} = E_1|_{V_1} + \pi_*(F|_{\tilde V_1}) \text{~and~} P_{V_1} = \pi_*(\tilde P_1|_{\tilde V_1}), \] and \[ Q_{V_1} \coloneqq E_{V_1} + P_{V_1}= \pi_*\big(\pi^*(E_1|_{V_1}+P_1|_{V_1})\big) = Q_1|_{V_1}.\] We can repeat the above process to define $Q_{S_1}$ and $P_{S_1}$. Notice that $P_{S_1}$ is a pushforward of an NQC divisor. Let $K_{S'_1}+ B_{S'_1}+M_{S'_1} = h_1^*(K_{S_1}+ B_{S_1}+M_{S_1})$ be a $\mathbb{Q}$-factorial g-dlt modification of $(S_1, B_{S_1}+M_{S_1})$. By the same argument as in Subsection \ref{subsection: Lifting a sequence of flips with quasi-scaling}, we can run a g-MMP$/T_1$ on $K_{S'_1}+ B_{S'_1}+M_{S'_1}$ with scaling of $Q_{S_1'}$. It terminates with $(S'_2, B_{S'_2}+M_{S'_2})$, which is a g-dlt modification of $(S_2, B_{S_2}+M_{S_2})$. We can continue this process on $K_{S'_2}+B_{S'_2}+M_{S'_2}$. This gives a sequence of g-MMP$/T$ with scaling of $Q_{S_1'}$. If this sequence does not terminate, then by assumption there exists a g-log minimal model$/T$ for $K_{S'_1}+B_{S'_1}+M_{S'_1}+\mu Q_{S'_1}$. By Theorem \ref{thm: gmmtermination}, the g-MMP terminates, a contradiction. Hence the g-MMP$/T$ terminates; that is, $(S'_i, B_{S_i'}+M_{S_i'})= (S'_{i+1}, B_{S_{i+1}'}+M_{S_{i+1}'})$ for $i\gg 1$. This implies that $S_i \simeq S_{i+1}$ by \cite[Lemma 4.2.16]{Fujino07}. \end{proof}
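As a purely illustrative aside (not part of the mathematical argument), the coefficient set $\mathbb{S}(\bm{b},\bm{\mu})$ appearing in the difficulty above is explicitly enumerable once the data $\bm{b},\bm{\mu}$ and cutoffs on $m$ and $r_j,s_i$ are fixed; a minimal Python sketch, with hypothetical coefficient data, is:
\begin{verbatim}
from itertools import product

def S_set(b, mu, m_max=4, r_max=4):
    # Elements 1 - 1/m + sum_j r_j*b_j/m + sum_i s_i*mu_i/m that are <= 1,
    # truncated at m <= m_max and r_j, s_i <= r_max; 1 is always included.
    coeffs = list(b) + list(mu)
    vals = {1.0}
    for m in range(1, m_max + 1):
        for rs in product(range(r_max + 1), repeat=len(coeffs)):
            v = 1 - 1/m + sum(r * c for r, c in zip(rs, coeffs)) / m
            if v <= 1:
                vals.add(round(v, 12))
    return sorted(vals)

# hypothetical data: one boundary coefficient 1/2, one nef coefficient 1/3
print(S_set(b=[0.5], mu=[1/3]))
\end{verbatim}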
\section{Proofs of the main results}\label{sec: proof} A birational NQC weak Zariski decomposition can be obtained from a g-log minimal model. \begin{proposition}\label{prop: g-log minimal model implies NQC zarski decomposition} Let $(X/Z,B+M)$ be an NQC g-lc pair. If $(X/Z,B+M)$ has a g-log minimal model, then $(X/Z,B+M)$ admits a birational NQC weak Zariski decomposition. \end{proposition} \begin{proof} Let $(Y/Z,B_Y+M_Y)$ be a g-log minimal model of $(X/Z,B+M)$. By Proposition \ref{le: decomposition to nef Cartier divisors}, there exist $\mathbb{Q}$-Cartier nef$/Z$ divisors $M_i$ and $\mu_i\in\mathbb{R}_{>0}$ such that \[ K_{Y}+B_Y+M_Y=\sum \mu_i M_i. \] Let $p:W\to X$ and $q:W\to Y$ be a common resolution of $X\dashrightarrow Y$; then \begin{align*} p^{*}(K_X+B+M)&=q^{*}(K_Y+B_Y+M_Y)+E\\ &=\sum \mu_i q^{*}(M_i)+E, \end{align*} with $E \geq 0$. This is a birational NQC weak Zariski decomposition of $(X/Z,B+M)$. \end{proof} This shows one direction of Theorem \ref{thm: weak zariski equiv mm}. For the other direction, we first show the existence of g-log minimal models instead of g-log terminal models (see Definition \ref{def: g-log minimal model and g-log terminal model}). \begin{definition}[\cite{Birkarweak12} Definition 2.1]\label{def: theta} Let $(X/Z, B+M)$ be a g-pair with boundary part $B$ and nef part $M$, let $f:W\to X$ be a projective birational morphism from a normal variety, and let $N$ be an effective $\mathbb{R}$-divisor on $W$. Let $f_{*}N=\sum_{i\in I} a_i N_i$ be the prime decomposition. We define \[ \theta(X/Z,B+M,N) \coloneqq \#\{i\in I \mid N_i \text{~is not a component of~} \lfloor B\rfloor\}. \] \end{definition} \begin{definition} A g-pair $(X/Z, B+M)$ is called \emph{log smooth} if $X$ is smooth, with data $X \xrightarrow{{\rm id}} X \to Z$ and $M$ (in particular, $M$ is nef$/Z$), and \[ \operatorname{Supp}(B)\cup\operatorname{Supp}(M) \] is a simple normal crossing divisor. \end{definition} \begin{theorem}\label{thm: weak zariski implies mm} Suppose that Conjecture \ref{conj: NQCbirweakzar} holds in dimensions $\le d$. Then g-log minimal models exist for pseudo-effective NQC g-lc pairs of dimensions $\le d$. \end{theorem} \begin{proof} Step 1. It is enough to show Theorem \ref{thm: weak zariski implies mm} in the log smooth case (cf. \cite[Remark 2.6]{Birkar10} or \cite[Remark 2.8]{Birkar12}). In fact, let $(X/Z,B+M)$ be an NQC g-lc pair.
Let $\pi:(W/Z,B_W+M_W)\to X$ be a log resolution of $(X,B+M)$, where $B_W$ is defined as in \eqref{eq: B_Y} and $M_W$ is an NQC divisor. Thus \[ K_W+B_W+M_W=\pi^{*}(K_X+B+M)+F, \] with $F\ge0$ an exceptional divisor. Let $(Y/Z,B_Y+M_Y)$ be a g-log minimal model of $(W/Z,B_W+M_W)$ and let $D$ be a prime divisor on $X$ which is contracted over $Y$. Then \[ a(D,X,B+M)=a(D,W,B_W+M_W)<a(D,Y,B_Y+M_Y). \] This implies that $(Y/Z,B_Y+M_Y)$ is also a g-log minimal model of $(X/Z,B+M)$ (see Definition \ref{def: g-log minimal model and g-log terminal model}). Assume that $\pi: W \to X$ is a sufficiently high model such that $\pi^{*}(K_X+B+M)=N+P/Z$ is an NQC weak Zariski decomposition, where $P$ is an NQC divisor and $N$ is effective. Then \[ K_W+B_W+M_W=\pi^{*}(K_X+B+M)+F=(N+F)+P/Z. \] This is an NQC weak Zariski decomposition$/Z$ for $K_W+B_W+M_W$. Moreover, \[ \theta(X/Z,B+M,N)=\theta(W/Z,B_W+M_W,N)=\theta(W/Z,B_W+M_W,N+F). \] Thus we may assume that $(X,B+M)$ is log smooth with $M$ an NQC divisor, and that $K_X+B+M \equiv P+N/Z$ is an NQC weak Zariski decomposition. Moreover, by induction on dimensions, we can assume that Theorem \ref{thm: weak zariski implies mm} holds in dimensions $\leq d-1$. In the following, we prove Theorem \ref{thm: weak zariski implies mm} by induction on $\theta(X,B+M,N)$. Step 2. We show Theorem \ref{thm: weak zariski implies mm} when $\theta(X,B+M,N)=0$. By definition, $\theta(X,B+M,N)=0$ implies that $\operatorname{Supp}\lfloor B \rfloor \supset \operatorname{Supp} N$. By Lemma \ref{lem:glcklt}, for any $\beta_0 > 0$, we can run a g-MMP$/Z$ on $(K_X+B+M+\beta_0 P)$ with scaling of an ample divisor. By Proposition \ref{prop: P trivial}, for $\beta_0\gg 1$, such a g-MMP$/Z$ is $P$-trivial. Thus it is also a g-MMP$/Z$ on $(K_X+B+M)$. Moreover, since $K_X+B+M \equiv P+N/Z$ and $P$ is nef$/Z$, the contracted locus is contained in the birational transform of $\operatorname{Supp} N \subseteq \operatorname{Supp} \lfloor B \rfloor$. Because $M+\beta_0 P$ is NQC, the above g-MMP$/Z$ terminates by Theorem \ref{prop: special termination 2}. Let $(X_1, B_1+M_1+\beta_0 P_1)$ be the corresponding g-log minimal model, with $K_{X_1}+B_1+M_1 \equiv N_1+P_1/Z$. Moreover, $P_1$ is nef$/Z$. Next, we run a special kind of g-MMP$/Z$ on $(K_{X_1}+B_1+M_1)$ with scaling of $P_1$ as follows. Suppose that we have constructed $(X_i,B_i+M_i)$. Let \[ \nu_i = \inf\{t \geq 0 \mid K_{X_i}+B_i+M_i+t P_i \text{~is nef~}/Z\}. \] (i) If $\nu_i=0$, then $({X_i}, B_i+M_i)$ is a g-log minimal model, and we are done. (ii) If $0<\nu_i<\nu_{i-1}$ (we set $\nu_0=+\infty$), then by Lemma \ref{lem:MMPscalingNQC} there exists an extremal ray $R$ such that $(K_{X_i}+B_i+M_i)\cdot R<0$ and $(K_{X_i}+B_i+M_i+\nu_iP_i)\cdot R=0$. We contract $R$ and get a divisorial contraction or a flipping contraction. Let $X_i\dashrightarrow X_{i+1}$ be the corresponding divisorial contraction or flip. (iii) If $\nu_i=\nu_{i-1}>0$, choose $\beta_i<\nu_i$ sufficiently close to $\nu_i$ ($\beta_i$ will be determined in the discussion below), and run a g-MMP$/Z$ with scaling of an ample$/Z$ divisor $H$ on $(K_{X_i}+B_i+M_i+\beta_i P_i)$. We claim that this g-MMP$/Z$ is also a g-MMP$/Z$ with scaling of $P_i$, and that it terminates. Let $(X_{i+1}, B_{i+1}+M_{i+1})$ be the resulting g-pair. In particular, $\nu_{i+1} \leq \beta_i<\nu_i$.
\begin{proof}[Proof of the Claim in (iii)] For any $\nu_i>\beta_i>0$, we have \begin{equation}\label{eq: K+B+M+betaP} \begin{split} &\frac{\nu_i}{\nu_i-\beta_{i}}(K_{X_i}+B_i+M_i+\beta_i P_i)\\ =&(K_{X_i}+B_i+M_i)+\frac{\beta_i}{\nu_i-\beta_i}(K_{X_i}+B_i+M_i+\nu_i P_i). \end{split} \end{equation} If $\beta_i$ is sufficiently close to $\nu_i$, then $\frac{\beta_i}{\nu_i-\beta_i}$ is sufficiently large. For a g-MMP$/Z$ with scaling of an ample$/Z$ divisor on $(K_{X_i}+B_i+M_i+\beta_i P_i)$, by \eqref{eq: K+B+M+betaP}, it is also a g-MMP$/Z$ on $(K_{X_i}+B_i+M_i)+\frac{\beta_i}{\nu_i-\beta_i}(K_{X_i}+B_i+M_i+\nu_i P_i)$. By Proposition \ref{le: decomposition to nef Cartier divisors}, $K_{X_i}+B_i+M_i+\nu_i P_i$ is an NQC divisor$/Z$. Hence by Proposition \ref{prop: P trivial}, for a sufficiently large $\frac{\beta_i}{\nu_i-\beta_i}$, this g-MMP$/Z$ is $(K_{X_i}+B_i+M_i+\nu_i P_i)$-trivial. Since $\beta_i<\nu_i$, it is also a g-MMP$/Z$ on $(K_{X_i}+B_i+M_i)$ with scaling of $P_i$. In fact, if $Y$ is an intermediate variety in this g-MMP$/Z$, then for a contracting curve $\Gamma$, \[ (K_{Y}+B_Y+M_Y+\beta_i P_Y)\cdot \Gamma<0 \text{~and~} (K_{Y}+B_Y+M_Y+\nu_i P_Y) \cdot \Gamma=0, \] and thus $(K_{Y}+B_Y+M_Y)\cdot \Gamma<0$ and $P_Y \cdot \Gamma>0$. Next, we show that the g-MMP$/Z$ terminates. Because \begin{align*} K_{X_i}+B_i+M_i&=N_i+P_i\\ &=\frac{1}{1+\nu_i}(N_i+(1+\nu_i)P_i)+\frac{\nu_i}{1+\nu_i}N_i\\ &=\frac{1}{1+\nu_i}(K_{X_i}+B_i+M_i+\nu_i P_i)+\frac{\nu_i}{1+\nu_i}N_i, \end{align*} a flipping curve intersects the birational transform of $N_i$ negatively. Thus the flipping locus is contained in the birational transform of $\operatorname{Supp} N_i \subseteq \operatorname{Supp} \lfloor B_i\rfloor$. Suppose that the g-MMP$/Z$ does not terminate. Because the g-MMP$/Z$ is run with scaling of an ample$/Z$ divisor, the corresponding nef thresholds $\nu_j'$ satisfy $\nu_j' \neq \lim_j \nu_j'$ for every $j$. Otherwise, either $\nu_j'=0$ for some $j$ and the g-MMP$/Z$ terminates, or $\mu'= \lim \nu_j'>0$; in the latter case the g-MMP$/Z$ can be viewed as a g-MMP$/Z$ on $K_{X_i}+B_i+M_i+\beta_iP_i+\mu' H$, and it terminates by Lemma \ref{lem:glcklt} and \cite[Corollary 1.4.2]{BCHM10}. However, by Theorem \ref{prop: special termination 2}, the above g-MMP$/Z$ terminates. This is a contradiction. \end{proof} By applying (i)-(iii), we obtain a g-MMP$/Z$ on $K_{X_1}+B_1+M_1$ with scaling of $P_1$, \[ \dashrightarrow X_i = Y_i^{1} \dashrightarrow \cdots \dashrightarrow Y_i^{k_i} = X_{i+1} \dashrightarrow. \] Let $\tilde\nu_j$ be the corresponding nef thresholds. Then either the g-MMP$/Z$ terminates, or \[ \tilde \nu_j>\lim_j\tilde \nu_j = \lim_i\nu_i \] for every $j$. Moreover, as $K_{Y^j_i}+B_{Y^j_i}+M_{Y^j_i}=N_{Y^j_i}+P_{Y^j_i}$ and $P_{Y^j_i} \cdot \Gamma>0$ for a contracting curve $\Gamma$, we have $N_{Y^j_i} \cdot \Gamma<0$. Thus the flipping locus is contained in the birational transform of $\operatorname{Supp} N_i \subseteq \operatorname{Supp} \lfloor B_i\rfloor$. By Theorem \ref{prop: special termination 2} again, the g-MMP$/Z$ terminates. This finishes the proof of the $\theta(X,B+M,N)=0$ case. Step 3. Next we show the induction step. The argument is identical to \cite[Proof of Theorem 1.5]{Birkarweak12}, except that we deal with g-pairs. First, for a divisor $D=\sum_i d_i D_i$, we write $D^{\leq 1} \coloneqq \sum_i \min\{d_i, 1\}D_i$. Suppose that Theorem \ref{thm: weak zariski implies mm} does not hold. We assume that $\theta(X,B+M,N) \geq 1$ is the minimal number such that $(X/Z,B+M)$ does not have a g-log minimal model. By Step 1, we can assume that $(X,B+M)$ is log smooth.
Since $\theta(X,B+M,N) \geq 1$, \[ \alpha \coloneqq \min\{t>0 \mid \lfloor (B+tN)^{\leq 1} \rfloor \neq \lfloor B \rfloor\} \] is a finite number. Let $C$ be the divisor such that $(B+\alpha N)^{\leq 1} = B+C$. Thus $\operatorname{Supp} C \subseteq \operatorname{Supp} N$, and \begin{equation}\label{eq: components of C} \theta(X,B+M,N) = \#\{\text{components of~}C\}. \end{equation} Let $A \geq 0$ satisfy $\alpha N = C+A$; then $\operatorname{Supp} A \subseteq \operatorname{Supp} \lfloor B \rfloor$, and $A=(B+\alpha N)-(B+\alpha N)^{\leq 1}$. Because $\theta(X, (B+C)+M,N+C)<\theta(X, B+M,N)$, by the induction hypothesis, $(X, (B+C)+M)$ has a g-log minimal model, $(Y,(B+C)_Y+M_Y)$. Notice that $(X, (B+C)+M)$ is a g-lc pair with boundary part $B+C$ and nef part $M$. Let $g: U \to X$, $h: U \to Y$ be a sufficiently high log resolution; then \begin{equation}\label{eq: common U} g^*(K_X+(B+C)+M)=h^*(K_Y+ (B+C)_Y+M_Y)+N', \end{equation} with $N' \geq 0$ and $h$-exceptional. Let \[ P' \coloneqq h^*(K_Y+ (B+C)_Y+M_Y); \] then it is nef$/Z$ and NQC by Proposition \ref{le: decomposition to nef Cartier divisors}. Thus $P'+N'$ is an NQC weak Zariski decomposition$/Z$ for $g^*(K_X+(B+C)+M)$. We have \[ g^{*}(N+C)-N'=h^{*}(K_Y+ (B+C)_Y+M_Y)-g^{*}P. \] Since $h^{*}(K_Y+ (B+C)_Y+M_Y)-g^{*}P$ is anti-nef$/Y$, by the negativity lemma, $N' \leq g^*(N+C)$. As $\operatorname{Supp} C \subseteq \operatorname{Supp} N$, we have $\operatorname{Supp} N'\subseteq \operatorname{Supp} g^{*}N$. By the above, we have \[ \begin{split} (1+\alpha)g^*(K_X+B+M) &= g^*(K_X+B+M)+\alpha g^*P+\alpha g^*N\\ &=g^*(K_X+B+M)+\alpha g^*P+ g^*C+ g^*A\\ & = P'+N'+\alpha g^*P+ g^*A. \end{split} \] Thus, \[g^*(K_X+B+M) = \frac{1}{1+\alpha}(P'+\alpha g^*P)+\frac{1}{1+\alpha}(N'+g^*A).\] Set \[ P'' \coloneqq \frac{1}{1+\alpha}(P'+\alpha g^*P) \text{~and~} N''\coloneqq \frac{1}{1+\alpha}(N'+g^*A); \] then \[ g^*(K_X+B+M)=P''+N'' \] is a birational NQC weak Zariski decomposition$/Z$ for $K_X+B+M$. Since $\alpha N=C+A$, we have $\operatorname{Supp} N'' \subseteq \operatorname{Supp} g^*N$, and thus $\operatorname{Supp} g_* N'' \subseteq \operatorname{Supp} N$. Because $\theta(X, B+M,N)$ is minimal, $\theta(X, B+M,N) = \theta(X, B+M,g_{*}N'')$, and every component of $C$ is also a component of $g_*N''$ according to \eqref{eq: components of C}. Because $A$ and $C$ do not have common components, we have $\operatorname{Supp} C \subseteq \operatorname{Supp} g_*N'$, and thus $C$ is exceptional over $Y$ by \eqref{eq: common U}. Hence by definition $(B+C)_Y=B_Y$, and $P' = h^*(K_Y+B_Y+M_Y)$. We will compare the g-log discrepancies below. Let $G \geq 0$ be the largest divisor such that $G \leq g^{*}C$ and $G \leq N'$. Set $\tilde C = g^{*}C-G$ and $\tilde N' = N' -G$. By \eqref{eq: common U}, we have \begin{equation}\label{eq: tilde C} g^*(K_X+B+M)+\tilde C = P'+\tilde N'. \end{equation} (i) If $\tilde C$ is exceptional over $X$, then, because $g^*(K_X+B+M)-P'=\tilde N'-\tilde C$ is anti-nef over $X$, the negativity lemma gives $\tilde N'-\tilde C \geq 0$, which implies $\tilde C=0$ as $\tilde C$ and $\tilde N'$ have no common components. From \eqref{eq: tilde C}, \[ g^*(K_X+B+M)= P'+\tilde N' \] is a birational NQC weak Zariski decomposition$/Z$ for $K_X+B+M$. Moreover, \[ \begin{split} &g^*(K_X+B+M)-h^*(K_Y+B_Y+M_Y)\\ =&\sum_D \left(a(D, Y, B_Y+M_Y)-a(D, X, B+M)\right)D\\ =&\tilde N', \end{split} \] where $D$ runs over the prime divisors on $U$. (ia) Suppose that $\operatorname{Supp} g_*\tilde N' = \operatorname{Supp} g_*N'$.
Then by \eqref{eq: common U}, $\operatorname{Supp} \tilde N'$ contains the birational transform of all the prime exceptional$/Y$ divisors on $X$. Hence $(Y, B_Y+M_Y)$ is also a g-log minimal model of $(X, B+M)$, a contradiction. (ib) Hence we can assume that $\operatorname{Supp} g_*\tilde N' \subsetneq \operatorname{Supp} g_*N'$. Thus, $$\operatorname{Supp}(g_{*}N'-g_{*}G)=\operatorname{Supp} g_{*}\tilde N'\subsetneq \operatorname{Supp} g_{*} N'\subseteq \operatorname{Supp} N.$$ Since $G$ is the largest divisor such that $G \leq g^{*}C$ and $G \leq N'$, some component of $C$ is not a component of $g_*\tilde N'$. By \eqref{eq: components of C}, we have \[ \theta(X/Z, B+M, \tilde N') <\theta(X/Z, B+M, N), \] which contradicts the minimality of $\theta(X/Z, B+M, N)$. (ii) Hence $\tilde C$ is not exceptional over $X$. Let $\beta>0$ be the smallest number such that \[ \tilde A \coloneqq \beta g^*N-\tilde C \] satisfies $g_*\tilde A \geq 0$. Then there exists a component $D$ of $g_*\tilde C$ which is not a component of $g_*\tilde A$. We have \[ \begin{split} (1+\beta)g^*(K_X+B+M) &= g^*(K_X+B+M) +\tilde C+\tilde A+\beta g^*P\\ &=P'+\beta g^*P+\tilde N' + \tilde A. \end{split} \] By the negativity lemma, we have $\tilde N' + \tilde A \geq 0$. Let \[ P''' \coloneqq\frac{1}{1+\beta}(P'+\beta g^*P) \text{~and~} N'''\coloneqq\frac{1}{1+\beta}(\tilde N' + \tilde A); \] then \[ g^*(K_X+B+M) = P'''+N''' \] is a birational NQC weak Zariski decomposition of $K_X+B+M$. By construction, $D$ is a component of $\operatorname{Supp} g_{*}\tilde C\subseteq \operatorname{Supp} C \subseteq \operatorname{Supp} N$. Since $\operatorname{Supp}\tilde C \cap \operatorname{Supp}\tilde N' = \emptyset$, $D$ is not a component of $g_{*}\tilde N'$. Thus, $D$ is not a component of \[ \operatorname{Supp}( g_*N''')=\operatorname{Supp}(g_{*}\tilde N')\cup\operatorname{Supp} (g_{*}\tilde {A}). \] Hence \[ \theta(X, B+M,N)>\theta(X,B+M,N'''), \] which again contradicts the minimality of $\theta(X/Z, B+M, N)$. \end{proof} \begin{proof}[Proof of Theorem \ref{thm: weak zariski scaling ample}] Let $\nu_i$ be the nef thresholds in the g-MMP$/Z$ with scaling of an ample$/Z$ divisor $A$ (see Definition \ref{defn:MMPsP}). By Lemma \ref{lem:glcklt}, for any $\epsilon>0$, there exists a klt pair $(X,\Delta_{\epsilon})$ such that $\Delta_{\epsilon}\sim_{\mathbb{R}} B+M+\epsilon A/Z$. If $\lim \nu_i = \mu>0$, then this g-MMP$/Z$ is also an MMP$/Z$ on $K_X+\Delta_{\mu}$, where $\Delta_{\mu} \sim_\mathbb{R} B+M+\mu A/Z$. By \cite[Corollary 1.4.2]{BCHM10}, it terminates. Hence we have $\mu=0$. If this g-MMP$/Z$ does not terminate, we have $\nu_i>\mu=0$ for all $i$. By Theorem \ref{thm: weak zariski implies mm}, $(X/Z,B+M)$ has a g-log minimal model $(Y/Z,B_Y+M_Y)$. Hence the g-MMP terminates by Theorem \ref{thm: gmmtermination}, a contradiction. \end{proof} \begin{proof}[Proof of Theorem \ref{thm: weak zariski equiv mm}] The g-minimal model conjecture (Conjecture \ref{conj:glogmmp}) implies the birational weak Zariski decomposition conjecture (Conjecture \ref{conj: NQCbirweakzar}) by Proposition \ref{prop: g-log minimal model implies NQC zarski decomposition}. For the other direction, Theorem \ref{thm: weak zariski scaling ample} implies that any g-MMP$/Z$ with scaling of an ample divisor terminates. The resulting model is a g-log terminal model, as a g-MMP$/Z$ does not extract divisors.
\end{proof} \begin{proof}[Proof of Theorem \ref{thm: ter lc implies g-log minimal model}] First, we show that $(X/Z,B+M)$ has a g-log minimal model (see Definition \ref{def: g-log minimal model and g-log terminal model}). By Step 1 of the proof of Theorem \ref{thm: weak zariski implies mm}, we can assume that $(X/Z,B+M)$ is log smooth. By Lemma \ref{prop: P trivial}, there exists $\beta \gg1$ such that a g-MMP$/Z$ with scaling of an ample divisor on $(K_X+B+M+\beta M)$ is $M$-trivial. Thus, this g-MMP$/Z$ is also an MMP$/Z$ on $K_X+B$ for the dlt pair $(X,B)$. By assumption, it terminates with a model $Y$. Since $(X/Z,B+M)$ is pseudo-effective, the map $X\dashrightarrow Y/Z$ is birational and $K_Y+B_Y+(\beta+1) M_Y$ is nef. By Lemma \ref{lem:MMPscalingNQC}, we can run a g-MMP$/Z$ on $(K_Y+B_Y+M_Y)$ with scaling of $M_Y$. This g-MMP is also an MMP$/Z$ on $K_Y+B_Y$. By assumption, it terminates with $Y'$. Because the sequence $X \dashrightarrow Y \dashrightarrow Y'$ is also a g-MMP$/Z$ on $K_X+B+M$, $(Y'/Z,B_{Y'}+M_{Y'})$ is the desired g-log minimal model. To pass from a g-log minimal model to a g-log terminal model, we use the same argument as in the proof of Theorem \ref{thm: weak zariski equiv mm}. In fact, the existence of a g-log minimal model implies the existence of a birational NQC weak Zariski decomposition by Proposition \ref{prop: g-log minimal model implies NQC zarski decomposition}; then, by Theorem \ref{thm: weak zariski scaling ample}, any g-MMP$/Z$ with scaling of an ample divisor terminates. The resulting model is a g-log terminal model. \end{proof} \bibliographystyle{abbrv}
{'timestamp': '2019-01-29T02:26:46', 'yymm': '1806', 'arxiv_id': '1806.01234', 'language': 'en', 'url': 'https://arxiv.org/abs/1806.01234'}
\section{Introduction} In complex many-body systems, initially localized information quickly spreads throughout the entire system --- a process known as information scrambling. Though information is ultimately conserved, it gets encoded into global entanglement among many degrees of freedom, and hence becomes inaccessible by local measurement. Information scrambling was originally studied in the context of black hole physics \cite{Hayden2007Black,Kitaev2015,Maldacena2016Bound}, and has since emerged as a field with a wide-ranging impact across different areas in physics, e.g., quantum chaos in many-body systems \cite{Rozenbaum2017Lyapunov,Roberts2017Chaos,Cotler2017Chaos,Lin2018Out,Garciaa2018Chaos,Yan2020Information,Yan2020Recovery}, phase transitions \cite{Sahu2019Scrambling,Choi2020Quantum}, and quantum machine learning \cite{Holmes2021Barren}. Considerable effort has also been made in probing this effect in various experimental systems \cite{Landsman2019Verified,Li2017Measuring,Garttner2017Measuring}. Information scrambling is usually measured by the temporal decay of the out-of-time order correlators (OTOCs) \cite{Kitaev2015,Larkin1969,Swingle2018Unscrambling}, defined as \begin{equation} \langle W^\dag(t)V^\dag W(t)V \rangle, \end{equation} where the average is taken over a given quantum state. $W$ and $V$ are local operators, usually considered to act on distinct subsystems. $W(t)$ is the Heisenberg evolution of $W$, which becomes a global operator as scrambling proceeds, causing decay of the OTOCs. However, it is difficult to distinguish between scrambling and decoherence: the latter causes leakage of the system information to the environment and, in general, induces decay of the OTOC as well. Protocols that measure the OTOC often involve forward and backward evolution of the system, which in practice do not exactly match each other due to operational errors. This can also cause decay of the OTOC signals. Understanding the behavior of scrambling in the presence of decoherence and operational errors is an active area of ongoing research in the field \cite{Swingle2018Resilience,Knap2018Entanglement,Syzranov2018Out,Zhang2019Information,Gonzalez019Out,Yoshida2019Disentangling,Joshi2020Quantum,Touil2021Information,Dominguez2021Dynamics,Zanardi2021Information}. In this line of effort, the first experiment that provided a positive verification of scrambling was performed with trapped ions \cite{Landsman2019Verified}, using a teleportation-based protocol \cite{Yoshida2019Disentangling}. This approach uses two copies of the system and an entangled input state (Bell state) between the copies, requiring sophisticated engineering of the system and therefore hindering its practical applications. More recently, the field has witnessed an increasing number of studies of information scrambling in various experimental platforms, e.g., superconductors \cite{Zhao2021Probing}, trapped ions \cite{Joshi2020Quantum}, and cloud-based quantum computers \cite{Braumuller2021Probing,Mi2021Information,Geller2021Quantum}. However, it remains a challenge to design a simple and robust protocol for benchmarking the true signals of scrambling against a noisy background. A solution to this problem is therefore in demand and would benefit the development of the field, especially its experimental studies. In this work, we propose a solution to this task. Our approach is based on a novel quantum butterfly effect \cite{Yan2020Recovery,Sinitsyn2020Quantum} that can unambiguously distinguish scrambling from non-scrambling dynamics.
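Before proceeding, it may help to fix conventions with a minimal numerical sketch of the OTOC defined above. The following Python/NumPy snippet (illustrative only, with hypothetical sizes and operator choices; it is not the protocol of this work) evaluates $\langle W^\dag(t)V^\dag W(t)V\rangle$ in the infinite-temperature state $\rho=\mathbb{I}/d$ for a Haar-random evolution, for which the OTOC is expected to be small:
\begin{verbatim}
import numpy as np

def haar_unitary(d, rng):
    # QR of a complex Gaussian matrix, with phases fixed, is Haar random
    z = rng.standard_normal((d, d)) + 1j * rng.standard_normal((d, d))
    q, r = np.linalg.qr(z)
    return q @ np.diag(np.diag(r) / np.abs(np.diag(r)))

n = 4                                      # illustrative system size
d = 2 ** n
rng = np.random.default_rng(0)

X = np.array([[0, 1], [1, 0]], dtype=complex)
def on_qubit(op, k):                       # embed op on qubit k of n qubits
    out = np.array([[1.0 + 0j]])
    for j in range(n):
        out = np.kron(out, op if j == k else np.eye(2))
    return out

W, V = on_qubit(X, 0), on_qubit(X, n - 1)  # local operators on distinct sites
U = haar_unitary(d, rng)                   # stand-in for a scrambling U_t
Wt = U.conj().T @ W @ U                    # Heisenberg-evolved W(t)
print(np.trace(Wt.conj().T @ V.conj().T @ Wt @ V).real / d)  # near zero
\end{verbatim}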
Due to the global entanglement generated by scrambling, information becomes more robust against local perturbations, and hence, when scrambled and disturbed by local perturbations, it can still be partially recovered through a reversed unscrambling process. In contrast, decoherence and errors only temper this effect and weaken the signal produced by true scrambling. In the following, we develop the underlying theory, introduce the benchmarking protocol, and apply it to a model problem of fast scrambling in the presence of decoherence. \begin{figure}[b!] \centering \includegraphics[width=\columnwidth]{fig_butterfly.pdf} \caption{In the Lorenz picture of the butterfly effect, one compares two trajectories evolved under the same Hamiltonian, but with slightly different initial conditions. In the Bradbury picture, the perturbation is applied in the past. These two pictures are equivalent in the classical world but have subtle differences in quantum dynamics.} \label{fig:butterfly} \end{figure} In chaotic classical dynamics, small perturbations in the initial conditions can trigger dramatic changes in the time evolution. This effect is well known as the Lorenz butterfly effect. A few years before Lorenz, Bradbury in his famous story \cite{bradbury1952} introduced a variant of the butterfly effect. In this picture, one starts with an initial state in the present, evolves backward in time, applies a small perturbation, and then travels forward back to the present. These two pictures offer equivalent descriptions of the same classical butterfly effect (Fig.~\ref{fig:butterfly}). However, they exhibit subtle differences in quantum dynamics \cite{Yan2020Recovery,Cao2021Origin}: the overlap between the two trajectories (wavefunctions) in the Lorenz picture remains a constant during time evolution, due to the unitarity of (isolated) quantum dynamics. On the other hand, in Bradbury's picture, the overlap between the two states at time $t=0$ --- the initial input state and the final output state after the backward and forward evolution loop --- does decay as a function of the evolution time. Moreover, asymptotically, the output state always contains partial information of the initial one, with the amount of information determined by the type of perturbation. This is in sharp contrast with classical chaotic dynamics, which on average smear all the initial information over the entire accessible phase space. For these reasons, we propose to call this phenomenon the quantum anti-butterfly effect. More precisely, the perturbation in the `past' can be described by a general quantum channel $\Lambda$: \begin{equation}\label{eq:perturbation} \rho \rightarrow \Lambda(\rho) = \sum_k M^\dagger_k \rho M_k,\quad \sum_k M_kM_k^\dagger = \mathbb{I}, \end{equation} where $\mathbb{I}$ is the identity operator, and $M_k$ are the Kraus operators. The initial state $\rho$, after the aforementioned Bradbury process, becomes \begin{equation}\label{eq:outstate} \rho(t) = U_t\Lambda(U^\dag_t \rho U_t)U^\dag_t, \end{equation} where $U_t$ is the evolution operator for a time $t$. This process is also recognized as the quantum twirling channel of the perturbation $\Lambda$ \cite{Dankert2009Exact}. The fidelity of the output state of the twirling channel, with respect to the initial state, is \begin{equation}\label{eq:fidelity} F(t)\equiv {\rm Tr}\left[\rho(t)\cdot\rho\right] = \sum_k {\rm Tr} \left[M^\dag_k(t)\rho M_k(t)\rho\right].
\end{equation} This quantity is the central object of the protocol developed in the following. After a long time evolution with a chaotic Hamiltonian, when the evolution operator becomes sufficiently random, the asymptotic state can on average be described as \begin{equation}\label{eq:asymstate} \rho_{\rm as} = p\rho + (1-p)\frac{\mathbb{I}}{d}, \end{equation} where $d$ is the dimension of the total Hilbert space and the probability $p$ is determined by the error channel, namely, \begin{equation}\label{eq:probasym} p = \frac{\sum_k {\rm Tr} M_k {\rm Tr}M^\dagger_k-1}{d^2 - 1}. \end{equation} The corresponding asymptotic value of the fidelity (\ref{eq:fidelity}) has the simple expression \begin{equation}\label{eq:asymfidelity} F_{\rm as} = p{\rm Tr}\left[\rho^2\right] + (1-p)/d. \end{equation} At this point, we would like to emphasize several remarks:\par i) The asymptotic state (\ref{eq:asymstate}) is obtained by averaging over an ensemble of random unitaries with respect to the Haar measure \footnote{Technically, the same result can be obtained for an ensemble of unitaries which is as random as a unitary 2-design, since the average is performed for an expression in terms of the second moment of the unitary.}. However, the fluctuation from the mean is exponentially small in the size of the total system. Consequently, for a single unitary randomly drawn from such an ensemble, deviations from this averaged behavior are exponentially suppressed. Therefore, we can treat (\ref{eq:asymstate}) as the asymptotic state for a typical complex evolution. \par ii) The fidelity (\ref{eq:fidelity}) is identified as a sum of special types of OTOCs between the state $\rho$ and the Kraus operators. Hence, scrambling causes decay of the fidelity in the same manner as for the OTOCs. This particular type of OTOC has been used \cite{Garttner2017Measuring} and demonstrated to have various benefits \cite{Swan2019Unifying}. As will be shown in the following, this quantity can also acquire a large universal asymptotic value (\ref{eq:asymfidelity}), which responds further to decoherence and errors. This unique feature lies at the core of our protocol for singling out information scrambling. Inspired by the above theory of the anti-butterfly effect, we propose the following protocol to detect and benchmark information scrambling. In this protocol, the total system is sent through a Bradbury process, and only the fidelity of a subsystem is measured: \begin{enumerate} \item[1.] Initialize the total system such that a small subsystem is prepared in state $\rho_S$. \item[2.] Evolve the system forward for a time $t$; apply a perturbation, and then evolve the system backward for the same time $t$. \item[3.] Measure the same subsystem, and evaluate its fidelity with respect to the initial state $\rho_S$. \end{enumerate} We choose to measure the fidelity of a subsystem, since evaluating that of the total system is generally infeasible for large systems. One can verify that the asymptotic value of the fidelity (\ref{eq:asymfidelity}) also applies to any subsystem. For example, for a (large) spin-$1/2$ system with scrambling dynamics, we prepare a single target spin in any pure state and choose the perturbation channel as a projective measurement on any single spin. The fidelity of the target spin, computed with (\ref{eq:asymfidelity}) and (\ref{eq:probasym}), saturates to $\sim 0.75$.
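As a sanity check of (\ref{eq:outstate})--(\ref{eq:asymfidelity}) --- illustrative only, with hypothetical sizes and seeds --- one can implement the twirling channel numerically with a Haar-random $U_t$ and a projective $z$-measurement on one spin, and confirm that the target-spin fidelity lands near $p+(1-p)/2\approx 0.75$ up to finite-size fluctuations:
\begin{verbatim}
import numpy as np

def haar_unitary(d, rng):                  # as in the sketch above
    z = rng.standard_normal((d, d)) + 1j * rng.standard_normal((d, d))
    q, r = np.linalg.qr(z)
    return q @ np.diag(np.diag(r) / np.abs(np.diag(r)))

n, d = 5, 2 ** 5
rng = np.random.default_rng(1)

def on_qubit(op, k):                       # embed op on spin k of n spins
    out = np.array([[1.0 + 0j]])
    for j in range(n):
        out = np.kron(out, op if j == k else np.eye(2))
    return out

psi0 = np.zeros(d, dtype=complex); psi0[0] = 1.0   # target spin 0 in |0>
rho = np.outer(psi0, psi0.conj())

# Kraus operators of a projective Z-measurement on spin 1
kraus = [on_qubit(np.diag([1, 0]).astype(complex), 1),
         on_qubit(np.diag([0, 1]).astype(complex), 1)]

U = haar_unitary(d, rng)                   # stand-in for a scrambling U_t
sigma = U.conj().T @ rho @ U               # backward leg of the loop
sigma = sum(M @ sigma @ M for M in kraus)  # perturbation in the "past"
rho_t = U @ sigma @ U.conj().T             # forward leg of the twirling channel

# reduced state of the target spin (trace out spins 1..n-1)
rho_S = rho_t.reshape(2, d // 2, 2, d // 2).trace(axis1=1, axis2=3)
print(rho_S[0, 0].real)                    # ~0.75 for scrambling dynamics
\end{verbatim}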
On the other hand, in the presence of decoherence and errors, the asymptotic state of the target spin will become random, hence resulting in $\sim 0.5$ fidelity. If we choose to measure the fidelity of the total system, the corresponding values of the asymptotic fidelity would be $\sim 0.5$ and zero for ideal scrambling and decoherence, respectively. We also note the similarity between our approach and the randomized benchmarking (RB) protocol \cite{Emerson2005Scalable,Magesan2011Scalable} for quantum computers. Indeed, our approach inherits the main benefits of RB, i.e., it is scalable and is independent of errors in state initialization and readout. In practice, subsystem state initialization and measurement may not be directly accessible. For these systems, the corresponding steps in the protocol can be adapted as \begin{enumerate} \item[1'.] Measure a local observable with a discrete set of outcomes. \item[3'.] Measure the same observable, and evaluate the coincidence rate, i.e., the probability that the two measurements produce the same outcome. \end{enumerate} It can be verified that the coincidence rate is identical to the fidelity of the same subsystem on which the measurements are performed. \begin{figure}[t!] \centering \includegraphics[width=\columnwidth]{fig_otoc.pdf} \caption{Measurements of the OTOC. (a) Quantum circuit for the interference protocol (left) and structure of a single evolution step of the fast scrambling model (right). (b) OTOC for the cases of scrambling and decoherence (Table~\ref{table}), which produce similar signals that can hardly be distinguished.} \label{fig:otoc} \end{figure} As an illustration, we study a fast scrambling model recently proposed in Ref.~\cite{Belyansky2020Minimal}. We perform simulations of this model under decoherence and compare our protocol with a standard OTOC measurement. The evolution unitary in the fast scrambling model consists of repetitive layers of circuit evolution. Each layer (Fig.~\ref{fig:otoc}a, right) is composed of Haar-random single-qubit unitaries, immediately followed by a global entangling gate, which for $n$ qubits is given by \begin{equation} \exp{\left(-i\frac{g}{2\sqrt{n}}\sum_{i<j}Z_iZ_j\right)}, \end{equation} where $g$ is a constant parameter that controls the scrambling strength. This building block can be viewed as a Trotterization of the evolution generated by a spin Hamiltonian with strong random local fields (generating the random single-qubit rotations) and all-to-all two-body couplings (creating the global entangling gate). To simulate the effect of a noisy environment, we introduce errors in each layer of the single-qubit gates --- after each single-qubit Haar-random gate, a Pauli $X$-gate and a Pauli $Z$-gate are applied independently with probability $p$. Note that this error channel of the system qubits can be extended to a unitary evolution in an enlarged Hilbert space including ancillary qubits. Hence, the error model describes a decoherence process as well. We now have two parameters, $g$ and $p$, that control the strength of scrambling and decoherence, respectively. Table~\ref{table} lists the parameters we considered, namely, two cases of ideal scrambling (S1 and S2), two cases of strong decoherence (D1 and D2), and a case (I) where scrambling and decoherence become comparable and compete with each other. \begin{table}[h] \caption{\label{table}Cases compared for the fast scrambling model. The rate of scrambling increases with the value of $g$.
The strength of the decoherence increases with the value of $p$.} \begin{ruledtabular} \begin{tabular}{ c c c c c c } cases & S1 & S2 & D1 & D2 & I\\ \hline $g$ & 1 & 2 & 0.5 & 0.5 & 1\\ $p$ & 0 & 0 & 0.025 & 0.1 & 0.001 \\ \end{tabular} \end{ruledtabular} \end{table} The OTOC measurement is achieved using the interference protocol developed in Ref.~\cite{Swingle2016Measuring}, as shown in the quantum circuit diagram in Fig.~\ref{fig:otoc}a. The intermediate $X$-gate is placed on the $i$-th qubit $q_i$. The final measurement of the average value $\langle\sigma^0_x\rangle$ (or $\langle\sigma^0_y\rangle$) on the ancillary qubit $q_0$ then gives the real (or imaginary) part of the OTOC $\langle \sigma^0_x(t)\sigma^i_x\sigma^0_x(t)\sigma^i_x\rangle$, where $t$, in analogy to time, is the number of layers in the forward (and hence the backward) evolution. \begin{figure*}[t!] \centering \includegraphics[width=\textwidth]{fig_recovery.pdf} \caption{Measurement of the recovery signal (fidelity) of the benchmark protocol. (a) Schematic of the quantum circuit. (b) Fidelity decay for the cases of scrambling and decoherence (Table~\ref{table}). (c,~d) Fidelity as a function of the number of layers ($t_2$) in the backward evolution, for fixed numbers of layers $t_1$ in the forward evolution. (e,~f) Density plots for the fidelity.} \label{fig:recovery} \end{figure*} Figure~\ref{fig:otoc}b compares the evolution of the OTOC for ideal scrambling (without noise) and for weak scrambling with strong decoherence, which exhibit similar decay curves. Hence, these two situations are practically indistinguishable by OTOC measurements. We now examine the performance of our benchmark protocol for the fast scrambling model under the same conditions. The circuit diagram for this protocol is shown in Fig.~\ref{fig:recovery}a. Here, all the qubits are prepared in the computational basis state $|0\rangle$. The recovery signal is obtained by measuring the fidelity of the first qubit $q_1$ with respect to its initial state. We also specify the intermediate perturbation between the forward and backward evolution as a projective measurement on a single qubit (other than qubit 1) along the $Z$-axis. Note that since the evolution unitary contains Haar-random single-qubit gates in each layer, the measurement in a fixed direction is equivalent to random projective measurements. As discussed in the foregoing section, for this particular perturbation channel, the expected value of the fidelity (\ref{eq:fidelity}) is $0.75$ for ideal scrambling unitaries. On the other hand, in the case of strong decoherence, when the qubits eventually lose their coherence information, the fidelity would be trivially $0.5$. Hence, the cases of scrambling with and without decoherence exhibit distinct asymptotic values of fidelity. This is clearly demonstrated in Fig.~\ref{fig:recovery}b. To visualize the emergence of the recovery signal produced by scrambling, we also performed simulations with different numbers of layers in the forward and backward evolution. That is, the fidelity is measured after $t_1$ layers of forward and $t_2$ layers of backward evolution, and if $t_2>t_1$, the additional $t_2-t_1$ layers in the backward evolution are chosen independently at random. Figures~\ref{fig:recovery}(c,~d) depict the fidelity as a function of $t_2$ for fixed values of $t_1$. The density plots for the fidelity scanned through various $t_1$ and $t_2$ are shown in Fig.~\ref{fig:recovery}(e,~f).
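For concreteness, a single layer of this fast-scrambling circuit is straightforward to assemble as a dense matrix. The sketch below (illustrative only; the function names are ours) exploits the fact that the global entangler is diagonal, with $\sum_{i<j}z_iz_j=\left((\sum_i z_i)^2-n\right)/2$ on a computational basis state with $Z$-eigenvalues $z_i=\pm1$, and unravels the $X$/$Z$ errors stochastically layer by layer:
\begin{verbatim}
import numpy as np

def haar_1q(rng):
    z = rng.standard_normal((2, 2)) + 1j * rng.standard_normal((2, 2))
    q, r = np.linalg.qr(z)
    return q @ np.diag(np.diag(r) / np.abs(np.diag(r)))

def layer(n, g, p, rng):
    # One layer: noisy Haar single-qubit gates, then the global ZZ entangler.
    X = np.array([[0, 1], [1, 0]], dtype=complex)
    Z = np.diag([1.0 + 0j, -1.0])
    U = np.array([[1.0 + 0j]])
    for _ in range(n):
        u = haar_1q(rng)
        if rng.random() < p: u = X @ u     # Pauli-X error with probability p
        if rng.random() < p: u = Z @ u     # Pauli-Z error with probability p
        U = np.kron(U, u)
    d = 2 ** n
    bits = (np.arange(d)[:, None] >> np.arange(n)[None, :]) & 1
    s = (1 - 2 * bits).sum(axis=1)         # sum of Z eigenvalues per basis state
    phase = np.exp(-1j * g / (2 * np.sqrt(n)) * (s ** 2 - n) / 2)
    return phase[:, None] * U              # diagonal entangler times local gates

# stacking t layers gives the (noisy) evolution used in both protocols
rng = np.random.default_rng(2)
U_t = np.eye(2 ** 4, dtype=complex)
for _ in range(10):
    U_t = layer(4, g=1.0, p=0.001, rng=rng) @ U_t
\end{verbatim}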
The recovery signal emerges in a finite window around the peak $t_1=t_2$. The width of the peak reflects the time scale of local dissipation. For the current fast scrambling model, due to the random single-qubit rotations in every layer, the recovery signal disappears as soon as $t_2$ is one layer away from $t_1$. The above simulations demonstrate that our protocol can unambiguously distinguish between scrambling and decoherence. In general, both of these two competing factors contribute to the decay of the fidelity, but they make the fidelity saturate to different values. This gives rise to a two-stage decay: in the early scrambling stage, the fidelity decay is influenced by both scrambling and decoherence, until the information is fully scrambled and the fidelity reaches the saturation value (\ref{eq:asymfidelity}). In the latter decoherence stage, the fidelity further decays to a smaller value, at a rate determined purely by decoherence. The appearance of the two-stage decay indicates the presence of scrambling. \begin{figure}[h!] \centering \includegraphics[width=\columnwidth]{fig_benchmark.pdf} \caption{Fidelity decay for case I (Table~\ref{table}) with both scrambling and decoherence. Squares are numerical data. The red solid curve is the best fit to the ansatz (\ref{eq:ansatz}). To visualize the two stages of the fidelity decay, we also plotted (black dashed) the long-time pure-exponential part of the ansatz (\ref{eq:ansatz}), $a_2\exp{(-\lambda_dt)}+F^d_{\rm as}$, which clearly departs from the early decay. Insets: $F(t)-F^d_{\rm as}$ in semi-log scale, which show a single exponential decay at long times, and a sum of two exponential decays at early times. At the very beginning, the fidelity decay takes a quadratic form \cite{Yan2021Decoherence}.} \label{fig:benchmark} \end{figure} Suppose that $\lambda_s$ and $\lambda_d$ are the exponential decay rates corresponding to scrambling and decoherence, respectively. In the extreme cases of strong scrambling and strong decoherence, the decay of the fidelity can be described as \begin{equation} F(t) = \begin{cases} \left(1-F^s_{\rm as}\right) e^{-\lambda_s t} + F^s_{\rm as}, & \lambda_s \gg \lambda_d\\ \left(1-F^d_{\rm as}\right) e^{-\lambda_d t} + F^d_{\rm as}, & \lambda_s \ll \lambda_d \end{cases} \end{equation} where $F^s_{\rm as}$ and $F^d_{\rm as}$ are the asymptotic values of the fidelity induced by scrambling and decoherence, respectively. In the intermediate regime, where scrambling and decoherence are comparable, we propose an ansatz for the fidelity decay: \begin{equation}\label{eq:ansatz} F(t)=\left(a_1 e^{-\lambda_s t} + a_2 \right)e^{-\lambda_d t} + F^d_{\rm as}. \end{equation} With this, we can fit the observed fidelity at long times (when the system is sufficiently scrambled and the first exponential term vanishes) to a pure exponential function and extract the decoherence rate $\lambda_d$. The scrambling rate $\lambda_s$ can then be extracted by fitting the early-time fidelity to the ansatz (\ref{eq:ansatz}). We applied this procedure to case I (Table~\ref{table}), where both scrambling and decoherence contribute substantially to the fidelity decay. Figure~\ref{fig:benchmark} shows simulations of the fidelity decay, which is described accurately by the ansatz (\ref{eq:ansatz}). This allows us to separate the scrambling rate $\lambda_s = 0.216$ and the decoherence rate $\lambda_d = 0.040$, and hence positively verify the presence of scrambling.
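The two-step fitting procedure just described is straightforward to implement. A minimal sketch with synthetic (hypothetical) data follows: it first fits the late-time tail to the pure exponential $a_2e^{-\lambda_dt}+F^d_{\rm as}$, and then fits the full curve to the ansatz (\ref{eq:ansatz}) with $\lambda_d$ and $F^d_{\rm as}$ held fixed:
\begin{verbatim}
import numpy as np
from scipy.optimize import curve_fit

# synthetic (hypothetical) fidelity data mimicking a two-stage decay
rng = np.random.default_rng(3)
t = np.arange(60.0)
F = ((0.25 * np.exp(-0.2 * t) + 0.25) * np.exp(-0.04 * t) + 0.5
     + 0.002 * rng.standard_normal(t.size))

# step 1: fit the late-time tail to a2*exp(-ld*t) + Fas
t_late = 30             # chosen after the scrambling transient has died out
tail = lambda t, a2, ld, Fas: a2 * np.exp(-ld * t) + Fas
(a2, ld, Fas), _ = curve_fit(tail, t[t_late:], F[t_late:], p0=(0.3, 0.05, 0.5))

# step 2: fit the full curve to the ansatz with ld, Fas held fixed
ansatz = lambda t, a1, a2, ls: (a1 * np.exp(-ls * t) + a2) * np.exp(-ld * t) + Fas
(a1, a2, ls), _ = curve_fit(ansatz, t, F, p0=(0.25, a2, 0.2))
print("lambda_s =", ls, " lambda_d =", ld)
\end{verbatim}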
The scrambling rate is determined purely by the underlying scrambling dynamics, and should not be altered by the intermediate perturbation. We verified this with different types of perturbations (\ref{eq:perturbation}) and recovery probabilities (\ref{eq:probasym}). The extracted scrambling rates agree to two decimal places. \vspace{5pt} To summarize, we have developed a protocol to benchmark information scrambling. This approach unambiguously distinguishes information scrambling from false positive signals produced by decoherence and operational errors in experiments. It can also be used to quantify the degree of scrambling against a noisy background. Our method requires only a single loop of forward and backward evolution, and hence can be applied to any system in which the dynamics can be time-reversed. \begin{acknowledgements} This work was carried out under support of the U.S. Department of Energy (DOE), Office of Science, Basic Energy Sciences, Materials Sciences and Engineering Division, Condensed Matter Theory Program. BY also acknowledges support from the Center for Nonlinear Studies and the U.S. DOE under the LDRD program in Los Alamos. JH was supported by the U.S. DOE through a quantum computing program sponsored by the Los Alamos National Laboratory (LANL) Information Science \& Technology Institute. This research used quantum computing resources provided by the LANL Institutional Computing Program, which is supported by the U.S. DOE National Nuclear Security Administration under Contract No. 89233218CNA000001. \end{acknowledgements}
\section{Introduction}\label{intro} ~~~We consider the large time behavior of a viscous incompressible flow around a rotating rigid body in $\mathbb{R}^3$. Assume that both a compact rigid body $\mathscr{O}$ and a viscous incompressible fluid occupying the outside of $\mathscr{O}$ are initially at rest; the body then starts to rotate with an angular velocity which gradually increases until it reaches a small terminal one at a certain finite time, after which it is kept fixed. We then show that the fluid motion converges to a steady solution obtained by Galdi \cite{galdi2003} as time $t\rightarrow \infty$ (Theorem \ref{thm1} in Subsection \ref{maintheorem2}). This was conjectured by Hishida \cite[Section 6]{hishida2013}, but it has remained open. Such a question is called the starting problem; it was originally raised by Finn \cite{finn1965}, with the rotation replaced by a translation of the body. Finn's starting problem was first studied by Heywood \cite{heywood1972}; since his paper, a stationary solution is said to be attainable if the fluid motion converges to it as $t\rightarrow \infty$. Later on, by using Kato's approach \cite{kato1984} (see also Fujita and Kato \cite{fuka1964}) together with the $L^q$-$L^r$ estimates for the Oseen equation established by Kobayashi and Shibata \cite{kosh1998}, Finn's starting problem was completely solved by Galdi, Heywood and Shibata \cite{gahesh1997}. \par Let us introduce the mathematical formulation. Let $\mathscr{O}\subset \mathbb{R}^3$ be a compact and connected set with non-empty interior. The motion of $\mathscr{O}$ mentioned above is described in terms of the angular velocity \begin{align} \omega(t)=\psi(t)\omega_0,\quad \omega_0=(0,0,a)^{\top} \end{align} with a constant $a\in \mathbb{R}$, where $\psi$ is a function on $\mathbb{R}$ satisfying the following conditions: \begin{align}\label{psidef} \psi\in C^1(\mathbb{R};\mathbb{R}),\quad |\psi(t)|\leq 1~~{\rm{for}}~t\in\mathbb{R}, ~~\psi(t)=0~~{\rm{for}}~t\leq 0,~~ \psi(t)=1~~{\rm{for}} ~t\geq 1. \end{align} Here and hereafter, $(\cdot)^\top$ denotes the transpose. Then the domain occupied by the fluid can be expressed as $D(t)=\{y=O(t)x;\,x\in D\}$, where $D=\mathbb{R}^3\setminus \mathscr{O}$ is assumed to be an exterior domain with smooth boundary $\partial D$ and \begin{align*} O(t)=\left( \begin{array}{ccc} \cos\Psi(t)&-\sin\Psi(t)&0\\ \sin\Psi(t)&\cos\Psi(t)&0\\ 0&0&1 \end{array}\right),\quad \Psi(t)=\int_0^t \psi(s)a\,ds. \end{align*} We consider the initial boundary value problem for the Navier-Stokes equation \begin{align}\label{NS1} \left\{ \begin{array}{r@{}c@{}ll} \partial_t w+w\cdot \nabla_y w &{}={}&\Delta_y w-\nabla_y \pi,&y\in D(t),t>0,\\ \nabla_y\cdot w&{}={}&0,&y\in D(t),t\geq 0,\\ w|_{\partial D(t)}&{}={}&\psi(t)\omega_0\times y,&t\geq 0,\\ w(y,t)&{}\rightarrow{}&0&{\rm{as}}~|y|\rightarrow \infty,\\ w(y,0)&{}={}&0,&y\in D, \end{array}\right. \end{align} where $w=(w_1(y,t),w_2(y,t),w_3(y,t))^\top$ and $\pi=\pi(y,t)$ denote the unknown velocity and pressure of the fluid, respectively. To reduce the problem to an equivalent one in the fixed domain $D$, we take the frame $x=O(t)^\top y$ attached to the body and change the unknown functions as $u(x,t)=O(t)^\top w(y,t),~p(x,t)=\pi(y,t)$.
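For the reader's convenience, we sketch the classical computation behind this reduction; we record it only to fix the signs of the additional terms. Since $O(t)$ is a rotation about the axis of $\omega_0$, we have $\frac{d}{dt}O(t)z=\psi(t)\,\omega_0\times O(t)z$ for every $z\in\mathbb{R}^3$, as well as $O(t)^\top(\omega_0\times z)=\omega_0\times O(t)^\top z$. Hence the chain rule applied to $w(y,t)=O(t)u(O(t)^\top y,t)$ gives
\begin{align*}
\partial_t w(y,t)=\psi(t)\,\omega_0\times w(y,t)
+O(t)\big[\partial_t u(x,t)-\big((\psi(t)\,\omega_0\times x)\cdot\nabla\big)u(x,t)\big],
\quad x=O(t)^\top y,
\end{align*}
while $\Delta_y w=O(t)\Delta u$, $(w\cdot\nabla_y)w=O(t)(u\cdot\nabla)u$ and $\nabla_y\pi=O(t)\nabla p$. Applying $O(t)^\top$ to the first equation of (\ref{NS1}) thus produces exactly the two additional terms $(\psi(t)\omega_0\times x)\cdot\nabla u-\psi(t)\omega_0\times u$ appearing below.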
Then the problem (\ref{NS1}) is reduced to \begin{align}\label{NS2} \left\{ \begin{array}{r@{}c@{}ll} \partial_t u+u\cdot \nabla u &{}={}&\Delta u+(\psi(t)\omega_0\times x)\cdot \nabla u -\psi(t)\omega_0\times u-\nabla p, &x\in D,t>0,\\ \nabla\cdot u&{}={}&0,\hspace{2cm} x\in D,t\geq 0,\\ u|_{\partial D}&{}={}&\psi(t)\omega_0\times x,\hspace{0.3cm}t\geq 0,\\ u&{}\rightarrow{}&0\hspace{2.2cm}{\rm{as}}~|x|\rightarrow \infty,\\ u(x,0)&{}={}&0,\hspace{2cm} x\in D. \end{array}\right. \end{align} \par The purpose of this paper is to show that (\ref{NS2}) admits a global solution which tends to a solution $u_s$ of the stationary problem \begin{align}\label{sta} \left\{ \begin{array}{r@{}c@{}ll} u_s\cdot \nabla u_s&{}={}&\Delta u_s+(\omega_0\times x)\cdot \nabla u_s -\omega_0\times u_s-\nabla p_s,&x\in D,\\ \nabla\cdot u_s&{}={}&0,\hspace{1.3cm}x\in D,\\ u_s|_{\partial D}&{}={}&\omega_0\times x,\\ u_s&{}\rightarrow{}&0\hspace{1.5cm}{\rm{as}}~|x|\rightarrow \infty. \end{array}\right. \end{align} The rate of convergence in $L^r$ with $r\in (3,\infty]$ is also studied. In \cite{galdi2003}, Galdi proved that if $|\omega_0|$ is sufficiently small, problem (\ref{sta}) has a unique smooth solution $(u_s,p_s)$ with the pointwise estimates \begin{align}\label{pointwise} |u_s(x)|\leq \frac{C|\omega_0|}{|x|},\quad |\nabla u_s(x)|+|p_s(x)| \leq \frac{C|\omega_0|}{|x|^2}. \end{align} We note that the decay rate (\ref{pointwise}) is scale-critical; it is also captured in terms of the Lorentz space (weak Lebesgue space) $L^{3,\infty}$. This was in fact done by Farwig and Hishida \cite{fahi2007}, even for an external force in a Lorentz--Sobolev space of order $-1$. \par Let us mention some difficulties of our problem and how we overcome them in this paper. In \cite{gahesh1997}, the $L^q$-$L^r$ estimates for the Oseen semigroup play an important role. In the rotational case with constant angular velocity, Hishida and Shibata \cite{hish2009} also established the $L^q$-$L^r$ estimates of the semigroup generated by the Stokes operator with the additional term $(\omega_0\times x)\cdot \nabla-\omega_0\times$. If we use this semigroup as in \cite{gahesh1997}, we have to treat the term $(\psi(t)-1)(\omega_0\times x)\cdot\nabla v$, where $v=u-\psi(t)u_s$; this term, however, can no longer be regarded as a perturbation of the semigroup on account of the unbounded coefficient $\omega_0\times x$. In this paper, we make use of the evolution operator $\{T(t,s)\}_{t\geq s\geq 0}$ on the solenoidal space $L^q_{\sigma}(D)~(1<q<\infty)$, which is a solution operator to the linear problem \begin{align}\label{EV1} \left\{ \begin{array}{r@{}c@{}ll} \partial_t u&{}={}&\Delta u+(\psi(t)\omega_0\times x)\cdot \nabla u-\psi(t)\omega_0\times u-\nabla p,&x\in D,t>s,\\ \nabla\cdot u&{}={}&0,\hspace{0.3cm} x\in D,t\geq s,\\ u|_{\partial D}&{}={}&0,\hspace{0.3cm} t> s,\\ u&{}\rightarrow{}&0\hspace{0.45cm}{\rm{as}}~|x|\rightarrow \infty,\\ u(x,s)&{}={}&f,\hspace{0.3cm} x\in D. \end{array}\right. \end{align} Hansel and Rhandi \cite{harh2014} succeeded in proving the generation of this evolution operator with the $L^q$-$L^r$ smoothing rate. They constructed the evolution operator in their own way since the corresponding semigroup is not analytic (Hishida \cite{hishida1999}, Farwig and Neustupa \cite{fane2007}). Recently, Hishida \cite{hishida2018,hishidapre} developed the $L^q$-$L^r$ decay estimates of the evolution operator, see Section 3.
With those estimates, we solve the integral equation which the perturbation from the stationary solution $u_s$ obeys. However, it is difficult to carry out the analysis in the standard Lebesgue spaces on account of the scale-critical pointwise estimates (\ref{pointwise}). Thus, we first construct a solution of the weak formulation in the framework of Lorentz spaces by the strategy due to Yamazaki \cite{yamazaki2000}. We next identify this solution with a local solution possessing better regularity in a neighborhood of each time. The latter procedure is in fact adopted from Kozono and Yamazaki \cite{koya1998}. Furthermore, we derive the $L^{\infty}$ decay, which is not observed in \cite{gahesh1997}. When the stationary solution possesses the scale-critical rate $O(1/|x|)$, Koba \cite{koba2017} first derived the $L^{\infty}$ decay of the perturbation with a less sharp rate in the context of stability analysis, see also Remark \ref{rmk3}. Although he used both the $L^1$-$L^r$ estimates of the Oseen semigroup $T(t)$ and the $L^q$-$L^r$ estimates (yielding the $L^q$-$L^\infty$ estimates) of the composite operator $T(t)P{\rm{div}}\,$, where $P$ denotes the Fujita-Kato projection (see Subsection \ref{notation}), it turns out that either of them is enough to accomplish the proof. In this paper, we employ only the $L^1$-$L^r$ estimates of the adjoint evolution operator $T(t,s)^*$ to simplify the argument. \par The paper is organized as follows. In Section 2 we introduce the notation and give the main theorems. Section 3 is devoted to some preliminary results on the stationary problem and the evolution operator. In Section 4 we give the proof of the main theorems. \section{Main theorems} \label{maintheorem} ~~~In this section, we first introduce some notation and, after that, give our main theorems. \subsection{Notation}\label{notation} ~~~We introduce some function spaces. Let $D\subset \mathbb{R}^3$ be an exterior domain with smooth boundary. By $C^{\infty}_0(D)$, we denote the set of all $C^{\infty}$ functions with compact support in $D$. For $1\leq q\leq \infty$ and a nonnegative integer $m$, $L^q(D)$ and $W^{m,q}(D)$ denote the standard Lebesgue and Sobolev spaces, respectively. We write the $L^q$ norm as $\|\cdot\|_q$. The completion of $C_0^\infty(D)$ in $W^{m,q}(D)$ is denoted by $W_0^{m,q}(D)$. Let $1<q<\infty$ and $1\leq r\leq \infty$. Then the Lorentz spaces $L^{q,r}(D)$ are defined by \begin{align*} L^{q,r}(D)=\{f~\text{Lebesgue measurable}\mid\|f\|^*_{q,r}<\infty\}, \end{align*} where \begin{align*} \|f\|^*_{q,r}= \begin{cases} \left(\displaystyle\int_0^{\infty}\big(t \mu(\{x\in D\mid|f(x)|>t\})^{\frac{1}{q}}\big)^r\frac{dt}{t}\right) ^{\frac{1}{r}} &1\leq r<\infty,\\ \displaystyle\sup_{t>0} t\mu(\{x\in D\mid|f(x)|>t\})^{\frac{1}{q}}&r=\infty \end{cases} \end{align*} and $\mu(\cdot)$ denotes the Lebesgue measure on $\mathbb{R}^3$. The space $L^{q,r}(D)$ is a quasi-normed space, and it becomes a Banach space when equipped with a norm $\|\cdot\|_{q,r}$ equivalent to $\|\cdot\|^*_{q,r}$. The real interpolation functor is denoted by $(\cdot,\cdot)_{\theta,r}$; we then have \begin{align*} L^{q,r}(D)=\left(L^{q_0}(D),L^{q_1}(D)\right)_{\theta,r}, \end{align*} where $1\leq q_0<q<q_1\leq \infty$ and $0<\theta<1$ satisfy $1/q=(1-\theta)/q_0+\theta/q_1$, while $1\leq r\leq \infty$, see Bergh-L\"{o}fstr\"{o}m \cite{belobook1976}. We note that if $1\leq r<\infty$, the dual of the space $L^{q,r}(D)$ is $L^{q/(q-1),r/(r-1)}(D)$.
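As a simple illustration of how the scale-critical rate in (\ref{pointwise}) is captured by these norms, take $f(x)=1/|x|$ on $\mathbb{R}^3$; the same bound on the exterior domain $D$ follows a fortiori by restriction. Then
\begin{align*}
\mu(\{x\in\mathbb{R}^3\mid |f(x)|>t\})=\mu(\{|x|<1/t\})=\frac{4\pi}{3}\,t^{-3},
\quad
\|f\|^*_{3,\infty}=\sup_{t>0}\,t\left(\frac{4\pi}{3}\,t^{-3}\right)^{\frac{1}{3}}
=\left(\frac{4\pi}{3}\right)^{\frac{1}{3}}<\infty,
\end{align*}
while $f\notin L^3(\mathbb{R}^3)$. This computation lies behind the memberships $u_s\in L^{3,\infty}(D)$ and, similarly, $\nabla u_s\in L^{\frac{3}{2},\infty}(D)$ derived from (\ref{pointwise}) in Section \ref{preliminary}.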
It is well known that if $1\leq r<\infty$, the space $C^{\infty}_{0}(D)$ is dense in $L^{q,r}(D)$, while the space $C^{\infty}_{0}(D)$ is not dense in $L^{q,\infty}(D)$. \par We next introduce some solenoidal function spaces. Let $C_{0,\sigma}^\infty(D)$ be the set of all $C_0^{\infty}$-vector fields $f$ which satisfy ${\rm{div}}\, f=0$ in $D$. For $1<q<\infty$, $L^{q}_{\sigma}(D)$ denotes the completion of $C_{0,\sigma}^\infty(D)$ in $L^q(D)$. For every $1<q<\infty$, we have the following Helmholtz decomposition: \begin{align*} L^q(D)=L^q_{\sigma}(D)\oplus\{\nabla p\in L^q(D)\mid p\in L^q_{\rm{loc}}(\overline{D})\}, \end{align*} see Fujiwara and Morimoto \cite{fumo1977}, Miyakawa \cite{miyakawa1982}, and Simader and Sohr \cite{siso1992}. Let $P_q$ denote the Fujita-Kato projection from $L^q(D)$ onto $L^q_{\sigma}(D)$ associated with the decomposition. We remark that the adjoint operator of $P_q$ coincides with $P_{q/(q-1)}$. We simply write $P=P_q$\,. By real interpolation, it is possible to extend $P$ to a bounded operator on $L^{q,r}(D)$. We then define the solenoidal Lorentz spaces $L^{q,r}_{\sigma}(D)$ by \begin{align*} L^{q,r}_{\sigma}(D)=PL^{q,r}(D)= \left(L^{q_0}_\sigma(D),L^{q_1}_\sigma(D)\right)_{\theta,r}, \end{align*} where $1<q_0<q<q_1<\infty$ and $0<\theta<1$ satisfy $1/q=(1-\theta)/q_0+\theta/q_1$, while $1\leq r\leq \infty$, see Borchers and Miyakawa \cite{bomi1995}. We then have the duality relation $L^{q,r}_{\sigma}(D)^*=L_{\sigma}^{q/(q-1),r/(r-1)}(D)$ for $1<q<\infty$ and $1\leq r<\infty$. We denote various constants by $C$ and they may change from line to line. A constant depending on $A,B,\cdots$ is denoted by $C(A,B,\cdots)$. Finally, if there is no confusion, we use the same symbols for denoting spaces of scalar-valued functions and those of vector-valued ones. \subsection{Main theorems}\label{maintheorem2} ~~~It is reasonable to look for a solution to (\ref{NS2}) of the form \begin{align*} u(x,t)=v(x,t)+\psi(t)u_s,\quad p(x,t)=\phi(x,t)+\psi(t)p_s. \end{align*} Then the perturbation $(v,\phi)$ satisfies the following initial boundary value problem \begin{align} \left\{ \begin{array}{r@{}c@{}ll} \partial_t v&{}={}&\Delta v+(\psi(t)\omega_0\times x)\cdot\nabla v -\psi(t)\omega_0\times v\\ &&\hspace{4.3cm}+(Gv)(x,t)+H(x,t)-\nabla \phi, &x\in D,t>0,\\ \nabla\cdot v&{}={}&0,\hspace{0.3cm} x\in D,t\geq 0,\\ v|_{\partial D}&{}={}&0,\hspace{0.3cm}t> 0,\label{a}\\ v&{}\rightarrow{}&0\hspace{0.47cm}{\rm{as}}~|x|\rightarrow \infty,\\ v(x,0)&{}={}&0,\hspace{0.3cm} x\in D, \end{array}\right. \end{align} where \begin{align} &(Gv)(x,t)=-v\cdot\nabla v-\psi(t)v\cdot\nabla u_s-\psi(t)u_s\cdot \nabla v, \label{g}\\ &H(x,t)=\psi(t)(\psi(t)-1)\{-u_s\cdot \nabla u_s-\omega_0 \times u_s+(\omega_0\times x)\cdot\nabla u_s\} -\psi'(t)u_s.\label{h} \end{align} In what follows, we concentrate on problem (\ref{a}) instead of (\ref{NS2}). In fact, if we obtain a solution $v$ of (\ref{a}) which converges to $0$ as $t\rightarrow \infty$, then the solution $u$ of (\ref{NS2}) converges to $u_s$ as $t\rightarrow \infty$. By using the evolution operator $\{T(t,s)\}_{t\geq s\geq 0}$ on $L^q_{\sigma}(D)~(1<q<\infty)$ associated with (\ref{EV1}), problem (\ref{a}) is converted into \begin{align}\label{integraleq} v(t)=&\int_0^t T(t,\tau)P[(Gv)(\tau)+H(\tau)]\,d\tau. \end{align} \par We are now in a position to give our attainability theorem.
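Before stating it, let us record the computation behind (\ref{g}) and (\ref{h}). Substituting $u=v+\psi(t)u_s$ and $p=\phi+\psi(t)p_s$ into the first equation of (\ref{NS2}), and using the first equation of (\ref{sta}) multiplied by $\psi(t)$ to eliminate $\psi(t)(\Delta u_s-\nabla p_s)$, we arrive at
\begin{align*}
\partial_t v=\Delta v&+(\psi\omega_0\times x)\cdot\nabla v-\psi\omega_0\times v-\nabla\phi
-v\cdot\nabla v-\psi\, v\cdot\nabla u_s-\psi\, u_s\cdot\nabla v\\
&+\psi(\psi-1)\big\{-u_s\cdot\nabla u_s-\omega_0\times u_s+(\omega_0\times x)\cdot\nabla u_s\big\}-\psi'(t) u_s,
\end{align*}
in which the three quadratic terms form $(Gv)(x,t)$ and the remaining terms, which involve $u_s$ alone, form $H(x,t)$. In particular, (\ref{psidef}) yields $H(\cdot,t)=0$ for $t\geq 1$, which is why the integrals involving $H$ in Section \ref{proof} extend only over $(0,\min\{1,t\})$.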
\begin{thm}\label{thm1} Let $\psi$ be a function on $\mathbb{R}$ satisfying (\ref{psidef}) and put $\alpha:=\displaystyle\max_{t\in\mathbb{R}}|\psi'(t)|.$ For $q\in (6,\infty)$, there exists a constant $\delta(q)>0$ such that if $(\alpha+1)|a|\leq \delta,$ problem (\ref{integraleq}) admits a solution $v$ which possesses the following properties: \begin{align*} ({\rm{i}})\, v\in BC_{w^*}\big((0,\infty);L^{3,\infty}_{\sigma}(D)\big),~\, \|v(t)\|_{3,\infty}\rightarrow 0~\, {\rm{as}}~ t\rightarrow 0,~\, \sup_{0<t<\infty}\|v(t)\|_{3,\infty}\leq C(\alpha+1)|a|, \end{align*} where $BC_{w^*}(I;X)$ is the set of bounded and weak-$\ast$ continuous functions on the interval $I$ with values in $X$, the constant $C$ is independent of $a$ and $\psi$; \begin{align*} &({\rm{ii}})\,v\in C\big((0,\infty);L^r_{\sigma}(D)\big) \cap C_{w^*}\big((0,\infty);L^\infty(D)\big),~ \nabla v\in C_w\big((0,\infty);L^r(D)\big)~{\rm for}~r\in (3,\infty);\\ &({\rm{iii}})\,({\rm{Decay}})\quad \|v(t)\|_{r}=O(t^{-\frac{1}{2}+\frac{3}{2r}})\quad {\rm{as}}~t\rightarrow \infty \quad {\rm{for~all~}} r\in(3,q),\\ &\hspace{2.585cm}\|v(t)\|_{q,\infty} =O(t^{-\frac{1}{2}+\frac{3}{2q}})\quad {\rm{as}}~t\rightarrow \infty,\\ &\hspace{2.585cm}\|v(t)\|_{r}=O(t^{-\frac{1}{2}+\frac{3}{2q}}) \quad {\rm{as}}~t\rightarrow \infty \quad {\rm{for~all~}} r\in(q,\infty]. \end{align*} \end{thm} \begin{rmk} We can obtain the $L^q$ decay of $v(t)$ like $O(t^{-1/2+3/(2q)}\log t)$ as $t\rightarrow\infty$, but it is not clear whether $\|v(t)\|_q=O(t^{-1/2+3/(2q)})$ holds. \end{rmk} To prove Theorem \ref{thm1}, the key step is to construct a solution of the weak formulation \begin{align}\label{NS6} (v(t),\varphi)=\int_0^t \big(v(\tau)\otimes v(\tau) &+\psi(\tau)\{v(\tau)\otimes u_s+u_s\otimes v(\tau)\}, \nabla T(t,\tau)^*\varphi\big)\,d\tau\nonumber\\& +\int_0^t(H(\tau),T(t,\tau)^*\varphi)\,d\tau, \quad \forall \varphi\in C^{\infty}_{0,\sigma}(D) \end{align} as in Yamazaki \cite{yamazaki2000}, where $T(t,\tau)^*$ denotes the adjoint of $T(t,\tau)$ and, here and in what follows, $(\cdot,\cdot)$ stands for various duality pairings. In this paper, a function $v$ is called a solution of (\ref{NS6}) if $v\in L^{\infty}_{\rm{loc}}\big([0,\infty);L^{3,\infty}_{\sigma}(D)\big)$ satisfies (\ref{NS6}) for a.e. $t$. By following Yamazaki's approach, we can easily see that the solution obtained in Theorem \ref{thm1} is unique in the small, see Proposition \ref{thm2}. In the following theorem, we give another result on the uniqueness without assuming the smallness of solutions. \begin{thm}\label{thm3} Let $q\in (3,\infty).$ Then there exists a constant $\widetilde{\delta}>0$ independent of $q$ and $\psi$ such that if $|a|\leq \widetilde{\delta}$, problem (\ref{NS6}) admits at most one solution within the class \begin{align*} \big\{v\in L^{\infty}_{\rm{loc}} \big([0,\infty);L^{3,\infty}_{\sigma}(D)\big)\cap L^{\infty}_{\rm{loc}} \big(0,\infty;L^q_\sigma(D)\big)\,\big|\, \lim_{t\rightarrow 0}\|v(t)\|_{3,\infty}=0\big\}. \end{align*} \end{thm} \begin{rmk} Theorem \ref{thm3} asserts that if the angular velocity is small enough and if $\widetilde{v}$ is a solution within the class above which is not necessarily small, then it coincides with the solution obtained in Theorem \ref{thm1}. \end{rmk} \section{Preliminary results}\label{preliminary} ~~~~In this section, we prepare some results on the stationary solutions and the evolution operator. For the stationary problem (\ref{sta}), Galdi \cite{galdi2003} proved the following result. 
\begin{prop}[\cite{galdi2003}]\label{galdi2003} There exists a constant $\eta\in(0,1]$ such that if $|\omega_0|=|a|\leq \eta$, the stationary problem (\ref{sta}) admits a unique solution $(u_s,p_s)$ with the estimate \begin{align*} \sup_{x\in D}\big\{(1+|x|)|u_s(x)|\big\}+\sup_{x\in D}\big\{(1+|x|^2)(|\nabla u_s(x)|+|p_s(x)|)\big\}\leq C|a|, \end{align*} where the constant $C$ is independent of $a$. \end{prop} From now on, we assume that the angular velocity $\omega_0=(0,0,a)^\top$ always satisfies $|\omega_0|=|a|\leq \eta$. Proposition \ref{galdi2003} then yields \begin{align*} u_s\in L^{3,\infty}(D)\cap L^{\infty}(D),\quad \nabla u_s\in L^{\frac{3}{2},\infty}(D)\cap L^{\infty}(D), \quad |x|\nabla u_s\in L^{3,\infty}(D)\cap L^\infty(D) \end{align*} and \begin{align}\label{hstaest2} H(t)\in L^{3,\infty}(D),\quad \|H(t)\|_{3,\infty}\leq C(a^2+\alpha|a|) \end{align} for all $t>0$. Here, $H(t)$ is defined by (\ref{h}) and $\alpha=\displaystyle\max_{t\in\mathbb{R}}|\psi'(t)|.$ \par We next collect some results on the evolution operator associated with (\ref{EV1}). We define the linear operator $L(t)$ by \begin{align*} \mathscr{D}_q(L(t))&=\{ u\in L^q_{\sigma}(D)\cap W_0^{1,q}(D)\cap W^{2,q}(D)\mid (\omega_0\times x)\cdot \nabla u\in L^q(D)\},\\ L(t)u&=-P[\Delta u+ (\psi(t)\omega_0\times x)\cdot \nabla u- \psi(t)\omega_0 \times u]. \end{align*} Then the problem (\ref{EV1}) is formulated as \begin{align}\label{EV2} \partial_t u+L(t)u=0,\quad t\in (s,\infty);\quad u(s)=f \end{align} in $L^q_{\sigma}(D)$. We can see that (\ref{psidef}) implies \begin{align}\label{globalholder} \psi(t)\omega_0\in C^\theta ([0,\infty);\mathbb{R}^3)\cap L^{\infty}(0,\infty;\mathbb{R}^3) \end{align} for all $\theta\in (0,1)$. In fact, we have \begin{align}\label{supholder} \sup_{0\leq t<\infty}|\psi(t)\omega_0|=|a|,\quad \sup_{0\leq s<t<\infty}\frac{|\psi(t)\omega_0-\psi(s)\omega_0|}{(t-s)^{\theta}}\leq |a| \max_{t\in \mathbb{R}}|\psi'(t)| \end{align} for all $\theta\in (0,1)$. We fix, for instance, $\theta=1/2.$ Assuming merely local H\"{o}lder continuity of the angular velocity, Hansel and Rhandi \cite{harh2014} proved the following proposition (see also Hishida \cite{hishidapre} concerning assertion 1). They did not derive assertion 4, but it follows directly from real interpolation; for completeness, we give its proof. \begin{prop}[\cite{harh2014}]\label{harhprop} Let $1<q<\infty$. Suppose (\ref{psidef}). The operator family $\{L(t)\}_{t\geq 0}$ generates a strongly continuous evolution operator $\{T(t,s)\}_{t\geq s\geq 0}$ on $L^q_{\sigma}(D)$ with the following properties: \begin{enumerate} \item Let $q\in(3/2,\infty)$ and $s\geq 0$. For every $f\in Z_q(D)$ and $t\in (s,\infty),$ we have $T(t,s)f\in Y_q(D)$ and $T(\cdot,s)f\in C^1((s,\infty); L^q_{\sigma}(D))$ with \begin{align*} \partial_t T(t,s)f+L(t)T(t,s)f=0,\quad t\in(s,\infty) \end{align*} in $L^q_{\sigma}(D)$, where \begin{align*} &Y_q(D)=\{u\in L^q_{\sigma}(D)\cap W_0^{1,q}(D) \cap W^{2,q}(D)\mid|x|\nabla u\in L^q(D)\},\\ &Z_q(D)=\{u\in L^q_\sigma(D)\cap W^{1,q}(D)\mid|x|\nabla u\in L^q(D)\}. \end{align*} \item For every $f\in Y_q(D)$ and $t>0$, we have $T(t,\cdot)f\in C^1\big([0,t]; L^q_{\sigma}(D)\big)$ with \begin{align*} \partial_s T(t,s)f=T(t,s)L(s)f,\quad s\in [0,t] \end{align*} in $L^q_{\sigma}(D)$. \item Let $1<q\leq r<\infty$ and $m,\mathcal{T}\in (0,\infty)$.
There is a constant $C=C(q,r,m,\mathcal{T},D)$ such that \begin{align}\label{gradT} \|\nabla T(t,s)f\|_{r} \leq C(t-s)^{-\frac{3}{2}(\frac{1}{q}-\frac{1}{r})-\frac{1}{2}} \|f\|_{q} \end{align} holds for all $0\leq s<t\leq \mathcal{T}$ and $f\in L^{q}_{\sigma}(D)$ whenever \begin{align}\label{condition m} (1+\max_{t\in \mathbb{R}}|\psi'(t)|\,)|a|\leq m \end{align} is satisfied. \item Let $1<q<r<\infty$, $1\leq\rho_1,\rho_2\leq\infty ~{\rm{and}}~ m,\mathcal{T}\in (0,\infty)$. There is a constant $C=C(q,r,\rho_1,\rho_2,m,\mathcal{T},D)$ such that \begin{align}\label{gradT2} \|\nabla T(t,s)f\|_{r,\rho_2} \leq C(t-s)^{-\frac{3}{2}(\frac{1}{q}-\frac{1}{r})-\frac{1}{2}} \|f\|_{q,\rho_1} \end{align} holds for all $0\leq s<t\leq \mathcal{T}$ and $f\in L^{q,\rho_1}_{\sigma}(D)$ whenever (\ref{condition m}) is satisfied. \end{enumerate} \end{prop} \hspace{-0.6cm}{\bf{Proof of assertion 4.}}~ We choose $r_0,r_1$ such that $1<q<r_0<r<r_1<\infty.$ From assertion 3 and real interpolation, we have \begin{align} &\|\nabla T(t,s)f\|_{r_0,\rho_1}\leq C(t-s)^{-\frac{3}{2}(\frac{1}{q}-\frac{1}{r_0})-\frac{1}{2}} \|f\|_{q,\rho_1},\label{lqlr00}\\ &\|\nabla T(t,s)f\|_{r_1,\rho_1}\leq C(t-s)^{-\frac{3}{2}(\frac{1}{q}-\frac{1}{r_1})-\frac{1}{2}} \|f\|_{q,\rho_1}.\label{lqlr1} \end{align} By the reiteration theorem for real interpolation (see for instance \cite[Theorem 3.5.3]{belobook1976}), we obtain \begin{align}\label{realinterpolation} L_\sigma^{r,\rho_2}(D)= (L_{\sigma}^{r_0,\rho_1}(D),L_{\sigma}^{r_1,\rho_1}(D))_{\beta,\rho_2},\quad \|u\|_{r,\rho_2}\leq C\|u\|_{r_0,\rho_1}^{1-\beta}\|u\|^{\beta}_{r_1,\rho_1},\quad \frac{1}{r}=\frac{1-\beta}{r_0}+\frac{\beta}{r_1} \end{align} which, combined with (\ref{lqlr00}) and (\ref{lqlr1}), yields (\ref{gradT2}).\qed \vspace{0.2cm} \par We know that the adjoint operator $T(t,s)^*$ is also a strongly continuous evolution operator and satisfies the backward semigroup property \begin{align*} T(\tau,s)^*T(t,\tau)^*=T(t,s)^*\quad (t\geq \tau\geq s\geq 0),\quad T(t,t)^*=I, \end{align*} see Hishida \cite[Subsection 2.3]{hishida2018}. Under the assumption (\ref{globalholder}) with some $\theta\in(0,1)$, Hishida \cite{hishida2018,hishidapre} established the following $L^q$-$L^r$ decay estimates. Assertion 3 is not found there but can be proved in the same way as above. We note that the idea for deducing (\ref{intesti}) below is due to Yamazaki \cite{yamazaki2000} once assertion 5 is available. \begin{prop}[\cite{hishida2018,hishidapre}]\label{hishidaprop} \quad Let $m\in (0,\infty)$ and suppose (\ref{psidef}). \begin{enumerate} \item Let $1<q\leq r\leq\infty~(q\ne\infty)$. Then there exists a constant $C=C(m,q,r,D)$ such that \begin{align} &\|T(t,s)f\|_{r}\leq C(t-s)^{-\frac{3}{2}(\frac{1}{q}-\frac{1}{r})}\|f\|_{q}\label{lqlr0}\\ &\|T(t,s)^*g\|_{r}\leq C(t-s)^{-\frac{3}{2}(\frac{1}{q}-\frac{1}{r})}\|g\|_{q}\label{ad0} \end{align} hold for all $t> s\geq 0$ and $f,g\in L^q_{\sigma}(D)$ whenever (\ref{condition m}) is satisfied. \item Let $1<q\leq r<\infty, ~1\leq \rho\leq \infty$. Then there exists a constant $C=C(m,q,r,\rho,D)$ such that \begin{align} &\|T(t,s)f\|_{r,\rho}\leq C(t-s)^{-\frac{3}{2}(\frac{1}{q}-\frac{1}{r})} \|f\|_{q,\rho}\label{lqlr}\\ &\|T(t,s)^*g\|_{r,\rho} \leq C(t-s)^{-\frac{3}{2}(\frac{1}{q}-\frac{1}{r})}\|g\|_{q,\rho}\label{ad} \end{align} hold for all $t> s\geq 0$ and $f,g\in L^{q,\rho}_{\sigma}(D)$ whenever (\ref{condition m}) is satisfied. \item Let $1<q<r<\infty$, $1\leq \rho_1,\rho_2\leq \infty$.
Then there exists a constant $C=C(m,q,r,\rho_1,\rho_2,D)$ such that \begin{align} &\|T(t,s)f\|_{r,\rho_2}\leq C(t-s)^{-\frac{3}{2}(\frac{1}{q}-\frac{1}{r})}\|f\|_{q,\rho_1}\label{lqlr2}\\ &\|T(t,s)^*g\|_{r,\rho_2}\leq C(t-s)^{-\frac{3}{2}(\frac{1}{q}-\frac{1}{r})}\|g\|_{q,\rho_1}\label{ad2} \end{align} hold for all $t> s\geq 0$ and $f,g\in L^{q,\rho_1}_{\sigma}(D)$ whenever (\ref{condition m}) is satisfied. \item Let $1<q\leq r\leq 3$. Then there exists a constant $C=C(m,q,r,D)$ such that \begin{align} &\|\nabla T(t,s)f\|_{r}\leq C(t-s)^{-\frac{3}{2}(\frac{1}{q}-\frac{1}{r})-\frac{1}{2}} \|f\|_{q}\label{grad0}\\ &\|\nabla T(t,s)^*g\|_{r}\leq C(t-s)^{-\frac{3}{2}(\frac{1}{q}-\frac{1}{r})-\frac{1}{2}} \|g\|_{q}\label{adgrad0} \end{align} hold for all $t> s\geq 0$ and $f,g\in L^{q}_{\sigma}(D)$ whenever (\ref{condition m}) is satisfied. \item Let $1<q\leq r\leq 3,~1\leq \rho<\infty$. Then there exists a constant $C=C(m,q,r,\rho,D)$ such that \begin{align} &\|\nabla T(t,s)f\|_{r,\rho}\leq C(t-s)^{-\frac{3}{2}(\frac{1}{q}-\frac{1}{r})-\frac{1}{2}} \|f\|_{q,\rho}\label{grad}\\ &\|\nabla T(t,s)^*g\|_{r,\rho}\leq C(t-s)^{-\frac{3}{2}(\frac{1}{q}-\frac{1}{r})-\frac{1}{2}} \|g\|_{q,\rho}\label{adgrad} \end{align} hold for all $t> s\geq 0$ and $f,g\in L^{q,\rho}_{\sigma}(D)$ whenever (\ref{condition m}) is satisfied. \item Let $1<q\leq r\leq 3$ with $1/q-1/r=1/3$. Then there exists a constant $C=C(m,q,D)$ such that \begin{align}\label{intesti} \int_0^t\|\nabla T(t,s)^*g\|_{r,1}\,ds\leq C\|g\|_{q,1} \end{align} holds for all $t>0$ and $g\in L^{q,1}_{\sigma}(D)$ whenever (\ref{condition m}) is satisfied. \end{enumerate} \end{prop} To prove the $L^{\infty}$ decay estimate in Theorem \ref{thm1}, we also prepare the following $L^1$-$L^r$ estimates. The estimates below for data in $C_0^\infty(D)^3$ are enough for later use, but it is clear that, for instance, the composite operator $T(t,s)P$ extends to a bounded operator from $L^1(D)$ to $L^r_\sigma(D)$ with the same estimate. \begin{lem}\label{lem,infty,1} Let $m\in (0,\infty)$ and suppose (\ref{psidef}). \begin{enumerate} \item Let $1<r<\infty$. Then there is a constant $C=C(m,r,D)>0$ such that \begin{align} &\|T(t,s)Pf\|_r\leq C(t-s)^{-\frac{3}{2}(1-\frac{1}{r})}\|f\|_1,\label{l1lr}\\ &\|T(t,s)^*Pg\|_r\leq C(t-s)^{-\frac{3}{2}(1-\frac{1}{r})}\|g\|_1\label{l1lrad} \end{align} for all $t>s\geq 0$ and $f,g\in C^\infty_0(D)^3$ whenever (\ref{condition m}) is satisfied. \item Let $1<r<\infty$ and $1\leq\rho\leq \infty$. Then there is a constant $C=C(m,r,\rho,D)>0$ such that \begin{align} &\|T(t,s)Pf\|_{r,\rho}\leq C(t-s)^{-\frac{3}{2}(1-\frac{1}{r})}\|f\|_1,\label{l1lr2}\\ &\|T(t,s)^*Pg\|_{r,\rho}\leq C(t-s)^{-\frac{3}{2}(1-\frac{1}{r})}\|g\|_1\label{l1lr2ad} \end{align} for all $t>s\geq 0$ and $f,g\in C^\infty_0(D)^3$ whenever (\ref{condition m}) is satisfied. \item Let $1<r\leq 3,\,1\leq\rho<\infty$. Then there is a constant $C=C(m,r,\rho,D)>0$ such that \begin{align} \|\nabla T(t,s)Pf\|_{r,\rho}\leq C(t-s)^{-\frac{3}{2}(1-\frac{1}{r})-\frac{1}{2}}\|f\|_1, \label{gradl1lr}\\ \|\nabla T(t,s)^*Pg\|_{r,\rho}\leq C(t-s)^{-\frac{3}{2}(1-\frac{1}{r})-\frac{1}{2}}\|g\|_1\label{gradl1lrad} \end{align} for all $t>s\geq 0$ and $f,g\in C^{\infty}_0(D)^3$ whenever (\ref{condition m}) is satisfied. \end{enumerate} \end{lem} \begin{proof} The proof is based on a simple duality argument (see Koba \cite[Lemma 2.15]{koba2017}); we give it for completeness. Let $1<r<\infty$ and $1/r+1/r'=1$.
By using (\ref{ad0}), we see that \begin{align}\label{argue} |(T(t,s)Pf,\varphi)|=|(f,T(t,s)^*\varphi)|\leq \|f\|_1\|T(t,s)^*\varphi\|_\infty \leq C(t-s)^{-\frac{3}{2}(1-\frac{1}{r})}\|f\|_1\|\varphi\|_{r'} \end{align} for all $\varphi \in L^{r'}_\sigma(D)$, which implies (\ref{l1lr}). We next show (\ref{l1lr2}). We fix $q$ such that $1<q<r$. Combining the estimate (\ref{lqlr2}) with (\ref{l1lr}), we have \begin{align*} \|T(t,s)Pf\|_{r,\rho}\leq C(t-s)^{-\frac{3}{2}(\frac{1}{q}-\frac{1}{r})} \left\|T\left(\frac{t+s}{2},s\right)Pf\right\|_q \leq C(t-s)^{-\frac{3}{2}(1-\frac{1}{r})}\|f\|_1. \end{align*} Finally, in view of (\ref{grad}) and (\ref{l1lr2}), we have \begin{align*} \|\nabla T(t,s)Pf\|_{r,\rho}\leq C(t-s)^{-\frac{1}{2}} \left\|T\left(\frac{t+s}{2},s\right)Pf\right\|_{r,\rho} \leq C(t-s)^{-\frac{3}{2}(1-\frac{1}{r})-\frac{1}{2}}\|f\|_1 \end{align*} which implies (\ref{gradl1lr}). The proof for the adjoint $T(t,s)^*$ is accomplished in the same way. \end{proof} \section{Proof of the main theorems}\label{proof} ~~~In this section we prove the main theorems (Theorem~\ref{thm1} and Theorem~\ref{thm3}). We first give some key estimates and then show Theorem~\ref{thm3}. After that, following Yamazaki \cite{yamazaki2000}, we construct a solution with some decay properties for (\ref{NS6}) and then derive the $L^\infty$ decay of the solution. We finally identify the solution above with a local solution possessing better regularity for the integral equation (\ref{integraleq}) in a neighborhood of each time $t>0$. \par Let us define the function spaces \begin{align*} &X=\big\{v\in BC_{w^*}\big((0,\infty);L^{3,\infty}_{\sigma}(D)\big) \,\big|\,\lim_{t\rightarrow 0} \|v(t)\|_{3,\infty}=0\big\},\\ &X_q=\big\{v\in X\,\big|\,t^{\frac{1}{2}-\frac{3}{2q}}v \in BC_{w^*}\big((0,\infty);L_{\sigma}^{q,\infty}(D) \big)\big\},~~3<q<\infty. \end{align*} Both are Banach spaces endowed with norms $\|\cdot\|_X=\|\cdot\|_{X,\infty}$ and $\|\cdot\|_{X_q}=\|\cdot\|_{X_q,\infty}$, respectively, where \begin{align*} &\|v\|_{X,t}=\sup_{0<\tau<t}\|v(\tau)\|_{3,\infty},\quad \|v\|_{X_q,t}=\|v\|_{X,t} +[v]_{q,t},\quad [v]_{q,t}=\sup_{0<\tau<t} \tau^{\frac{1}{2}-\frac{3}{2q}}\|v(\tau)\|_{q,\infty} \end{align*} for $t\in (0,\infty].$ \begin{lem}\label{key} \begin{enumerate} \item ~Let $v,w\in X$ and set \begin{align*} &\braket{\mathcal{I}(v,w)(t),\varphi}:= \int_0^t (v(\tau)\otimes w(\tau), \nabla T(t,\tau)^*\varphi)\,d\tau,\\ &\braket{\mathcal{J}(v)(t),\varphi}:= \int_0^t (\psi(\tau)\{v(\tau)\otimes u_s+u_s\otimes v(\tau)\}, \nabla T(t,\tau)^*\varphi)\,d\tau \end{align*} for all $\varphi\in C_{0,\sigma}^{\infty}(D)$. Then $\mathcal{I}(v,w),\mathcal{J}(v)\in X$ and there exists a positive constant $C$ such that \begin{align}\label{e1} \|\mathcal{I}(v,w)\|_{X,t}\leq C\|v\|_{X,t}\|w\|_{X,t},\quad\quad \|\mathcal{J}(v)\|_{X,t}\leq C\|u_s\|_{3,\infty}\|v\|_{X,t} \end{align} hold for any $v,w\in X$ and $t\in(0,\infty]$. \item ~Let $q\in(3,\infty)$. If $v\in X_q$, $w\in X$, then $\mathcal{I}(v,w),\mathcal{J}(v)\in X_q$ and there exists a positive constant $C=C(q)$ such that \begin{align}\label{e2} \|\mathcal{I}(v,w)\|_{X_q,t}\leq C\|v\|_{X_q,t}\|w\|_{X,t}, \quad \|\mathcal{J}(v)\|_{X_q,t}\leq C\|u_s\|_{3,\infty}\|v\|_{X_q,t} \end{align} hold for every $v\in X_q$, $w\in X$ and $t\in(0,\infty]$. \item ~We set \begin{align*} \braket{\mathcal{K}(t),\varphi}:=\int_0^t (H(\tau),T(t,\tau)^*\varphi)\,d\tau \end{align*} for $\varphi\in C_{0,\sigma}^\infty(D)$. Let $q\in(3,\infty)$. 
Then $\mathcal{K}\in X_q$ and there exist positive constants $C$ independent of $q$ and $C'=C'(q)$ such that \begin{align}\label{kest} \|\mathcal{K}\|_{X,t}\leq C(a^2+\alpha|a|),\quad \|\mathcal{K}\|_{X_q,t}\leq C'(a^2+\alpha|a|) \end{align} hold for every $t\in (0,\infty]$. \end{enumerate} \end{lem} \begin{proof} Estimates (\ref{e1}) and (\ref{e2}) can be proved in the same way as done by Yamazaki \cite[Lemma 6.1]{yamazaki2000}, see also Hishida and Shibata \cite[Section 8]{hish2009}; however, we briefly give the proof of (\ref{e1})$_1$ and (\ref{e2})$_1$. By (\ref{intesti}), we have \begin{align*} |\braket{\mathcal{I}(v,w)(t),\varphi}|\leq \|v\|_{X,t}\|w\|_{X,t} \int_0^t\|\nabla T(t,\tau)^*\varphi\|_{3,1}\,d\tau \leq C\|v\|_{X,t}\|w\|_{X,t}\|\varphi\|_{\frac{3}{2},1}, \end{align*} which yields (\ref{e1})$_1$. We choose $r$ such that $1/3+1/q+1/r=1$ to find \begin{align*} |\braket{\mathcal{I}(v,w)(t),\varphi}|\leq [v]_{q,t}\|w\|_{X,t} \int_0^t\tau^{-\frac{1}{2}+\frac{3}{2q}} \|\nabla T(t,\tau)^*\varphi\|_{r,1}\,d\tau= [v]_{q,t}\|w\|_{X,t} \left(\int_0^{\frac{t}{2}}+\int_{\frac{t}{2}}^t\right). \end{align*} In view of (\ref{adgrad}), we have \begin{align*} \int_0^{\frac{t}{2}}\leq C\int_0^{\frac{t}{2}}\tau^{-\frac{1}{2} +\frac{3}{2q}}(t-\tau)^{-1}\,d\tau\|\varphi\|_{q',1} \leq Ct^{-\frac{1}{2}+\frac{3}{2q}}\|\varphi\|_{q',1}, \end{align*} where $1/q+1/q'=1$, whereas (\ref{intesti}) implies \begin{align*} \int_{\frac{t}{2}}^t \leq \left(\frac{t}{2}\right)^{-\frac{1}{2}+\frac{3}{2q}} \int_0^t\|\nabla T(t,\tau)^*\varphi\|_{r,1}\,d\tau \leq Ct^{-\frac{1}{2}+\frac{3}{2q}}\|\varphi\|_{q',1} \end{align*} from which, together with (\ref{e1})$_1$, we obtain (\ref{e2})$_1$. The estimate (\ref{e1}) leads us to \begin{align*} \lim_{t\rightarrow 0}\|\mathcal{I}(v,w)(t)\|_{3,\infty}=0,\quad \lim_{t\rightarrow 0}\|\mathcal{J}(v)(t)\|_{3,\infty}=0. \end{align*} \par Let us consider the weak-$\ast$ continuity of $\mathcal{I}(v,w)$ with values in $L^{3,\infty}_{\sigma}(D)$ (resp. $L^{q,\infty}_{\sigma}(D)$) when $v\in X$ (resp. $v\in X_q$), $w\in X$. Here, we need a different argument from \cite{yamazaki2000} because of the non-autonomous character as well as the non-analyticity of the corresponding semigroup. Since $C_{0,\sigma}^{\infty}(D)$ is dense in $L_{\sigma}^{\kappa,1}(D)$ ($\kappa=3/2,\,q'$) and since we know (\ref{e1}) and (\ref{e2}), it suffices to show that \begin{align}\label{w*contiI} |\braket{\mathcal{I}(v,w)(t)-\mathcal{I}(v,w)(\sigma),\varphi}| \rightarrow 0~~ {\rm{as}} ~~\sigma\rightarrow t \end{align} for all $0<t<\infty$ and $\varphi\in C^{\infty}_{0,\sigma}(D)$. Let $0<\sigma<t$. By using the backward semigroup property, we have \begin{align*} |\braket{\mathcal{I}(v,w)(t)-\mathcal{I}(v,w)(\sigma),\varphi}|&\leq \int_0^\sigma|(v(\tau)\otimes w(\tau), \nabla T(\sigma,\tau)^*(T(t,\sigma)^*\varphi-\varphi))|\,d\tau \\&\hspace{1.5cm} +\int_\sigma^t |(v(\tau)\otimes w(\tau),\nabla T(t,\tau)^*\varphi)|\,d\tau =:I_1+I_2. \end{align*} The estimate (\ref{intesti}) yields \begin{align*} I_1&\leq \|v\|_{X,t}\|w\|_{X,t}\int_0^\sigma\| \nabla T(\sigma,\tau)^*(T(t,\sigma)^*\varphi-\varphi)\|_{3,1}\,d\tau\\ &\leq C\|v\|_{X,t}\|w\|_{X,t}\, \|T(t,\sigma)^*\varphi-\varphi\|_{\frac{3}{2},1}\rightarrow 0 \quad{\rm{as}}~ \sigma \rightarrow t. \end{align*} Furthermore, (\ref{adgrad}) yields \begin{align*} I_2\leq \|v\|_{X,t}\|w\|_{X,t}\int_\sigma^t \|\nabla T(t,\tau)^*\varphi\|_{3,1}\,d\tau \leq C\|v\|_{X,t}\|w\|_{X,t}(t-\sigma)^{\frac{1}{2}}\|\varphi\|_{3,1} \rightarrow 0 \quad {\rm{as}}~\sigma \rightarrow t.
\end{align*} We can discuss the other case $0<t<\sigma$ similarly, and thus we obtain (\ref{w*contiI}). In the same manner, we can obtain the desired weak-$\ast$ continuity of $\mathcal{J}$. We thus conclude assertions 1 and 2. \par We next consider $\mathcal{K}(t)$. We use (\ref{ad}) as well as (\ref{hstaest2}) to obtain \begin{align*} |\braket{\mathcal{K}(t),\varphi}|\leq C(a^2+\alpha|a|) \int_0^{\min\{1,t\}}\|T(t,\tau)^*\varphi\|_{\frac{3}{2},1}\,d\tau \leq C(a^2+\alpha|a|)\min\{1,t\}\|\varphi\|_{\frac{3}{2},1} \end{align*} for $\varphi\in C_{0,\sigma}^\infty(D)$ and $t>0$, which yields $\mathcal{K}(t)\in L^{3,\infty}_\sigma(D)$ with \begin{align*} \|\mathcal{K}\|_{X,t}\leq C(a^2+\alpha|a|) \quad {\rm{for}} ~t\in(0,\infty],\quad \lim_{t\rightarrow 0} \|\mathcal{K}(t)\|_{3,\infty}=0. \end{align*} To derive the estimate $[\mathcal{K}]_{q,t}\leq C(a^2+\alpha|a|)$, we consider two cases: $0<t\leq 2$ and $t\geq 2$. For $0<t\leq2$, (\ref{ad}) yields \begin{align*} |\braket{\mathcal{K}(t),\varphi}| \leq C(a^2+\alpha|a|) \int_0^t\|T(t,\tau)^*\varphi\|_{\frac{3}{2},1}\,d\tau \leq C(a^2+\alpha|a|)\|\varphi\|_{q',1}. \end{align*} For $t\geq 2$, we have \begin{align*} |\braket{\mathcal{K}(t),\varphi}| \leq C(a^2+\alpha|a|) \int_0^1\|T(t,\tau)^*\varphi\|_{\frac{3}{2},1}\,d\tau \leq C(a^2+\alpha|a|)\, t^{-\frac{1}{2}+\frac{3}{2q}}\|\varphi\|_{q',1}. \end{align*} We thus obtain (\ref{kest}). It remains to show the weak-$\ast$ continuity. To this end, it is sufficient to show that \begin{align}\label{w*contiJ} |\braket{\mathcal{K}(t)-\mathcal{K}(\sigma),\varphi}|\rightarrow 0~~ {\rm{as}} ~\sigma\rightarrow t \end{align} for all $t\in (0,\infty)$ due to (\ref{kest}). To prove (\ref{w*contiJ}), we suppose $0<\sigma<t$. We use the backward semigroup property to observe \begin{align*} \braket{\mathcal{K}(t)-\mathcal{K}(\sigma),\varphi} =\int_0^\sigma\big(H(\tau),T(\sigma,\tau)^*(T(t,\sigma)^* \varphi-\varphi)\big)\,d\tau +\int_\sigma^t (H(\tau),T(t,\tau)^*\varphi)\,d\tau. \end{align*} By applying (\ref{ad}), we find that \begin{align*} &\left| \int_0^\sigma\big(H(\tau),T(\sigma,\tau)^*(T(t,\sigma)^*\varphi-\varphi) \big)\,d\tau \right| \leq C(a^2+\alpha|a|)\, \sigma\,\|T(t,\sigma)^*\varphi-\varphi\|_{\frac{3}{2},1} \rightarrow 0 \quad {\rm{as}} ~\sigma\rightarrow t,\\ &\left| \int_\sigma^t (H(\tau),T(t,\tau)^*\varphi)\,d\tau \right| \leq C(a^2+\alpha|a|)(t-\sigma)\|\varphi\|_{\frac{3}{2},1} \rightarrow 0\quad {\rm{as}} ~\sigma\rightarrow t. \end{align*} The other case $t<\sigma$ is discussed similarly. Hence we have (\ref{w*contiJ}). The proof is complete. \end{proof} \hspace{-0.6cm}{\bf{Proof of Theorem \ref{thm3}.}}~ The idea of the proof goes back to Fujita and Kato \cite[Theorem 3.1]{fuka1964}. Let $v_1$ and $v_2$ be two solutions of (\ref{NS6}).
Then we have \begin{align} (v_1(t)-v_2(t),\varphi)=&\int_0^t \big(v_1(\tau)\otimes(v_1(\tau)-v_2(\tau))+ (v_1(\tau)-v_2(\tau)) \otimes v_2(\tau)\nonumber\\&+\psi(\tau)(v_1(\tau)-v_2(\tau))\otimes u_s+ \psi(\tau)u_s\otimes(v_1(\tau)-v_2(\tau)), \nabla T(t,\tau)^*\varphi\big)\,d\tau\label{v1v2} \end{align} for $\varphi\in C_{0,\sigma}^\infty(D).$ By applying (\ref{e1}) to (\ref{v1v2}) and by Proposition \ref{galdi2003}, we have \begin{align*} \|v_1-v_2\|_{X,t}& \leq C(\|v_1\|_{X,t}+\|v_2\|_{X,t}+\|u_s\|_{3,\infty}) \|v_1-v_2\|_{X,t}\\ &\leq C(\|v_1\|_{X,t}+\|v_2\|_{X,t}+|a|)\|v_1-v_2\|_{X,t}. \end{align*} Suppose \begin{align*} |a|<\frac{1}{2C}=:\widetilde{\delta}. \end{align*} Since $\|v_j(t)\|_{3,\infty}\rightarrow 0$ as $t\rightarrow 0$ ($j=1,2$), one can choose $t_0>0$ such that $C(\|v_1\|_{X,t_0}+\|v_2\|_{X,t_0})<1/2$, which implies $v_1=v_2$ on $(0,t_0]$. Hence, (\ref{v1v2}) is written as \begin{align}\label{v1v22} (v_1(t)-v_2(t),\varphi)= &\int_{t_0}^t\big(v_1(\tau)\otimes(v_1(\tau)-v_2(\tau))+ (v_1(\tau)-v_2(\tau)) \otimes v_2(\tau)\nonumber\\& +\psi(\tau)(v_1(\tau)-v_2(\tau))\otimes u_s+ \psi(\tau)u_s\otimes(v_1(\tau)-v_2(\tau)), \nabla T(t,\tau)^*\varphi\big)\,d\tau. \end{align} We fix $\mathcal{T}\in (t_0,\infty)$ and set $[v]_{q,t_0,t}=\displaystyle\sup_{t_0\leq\tau\leq t}\|v(\tau)\|_q$ for $t\in (t_0,\mathcal{T})$. It follows from (\ref{v1v22}) that \begin{align}\label{v1v2r} [v_1-v_2]_{q,t_0,t}\leq C_*(t-t_0)^{\frac{1}{2}-\frac{3}{2q}} [v_1-v_2]_{q,t_0,t}\,,\quad t\in (t_0,\mathcal{T}), \end{align} where $C_*=C_*(t_0,\mathcal{T})=C\big([v_1]_{q,t_0,\mathcal{T}}+[v_2]_{q,t_0,\mathcal{T}}+2\|u_s\|_q\big).$ In fact, the estimate (\ref{adgrad0}) yields \begin{align*} \int_{t_0}^t\big|&\big(v_1(\tau)\otimes(v_1(\tau)-v_2(\tau)), \nabla T(t,\tau)^*\varphi\big)\big|\,d\tau\\ &\leq C[v_1]_{q,t_0,\mathcal{T}}[v_1-v_2]_{q,t_0,t} \int_{t_0}^t\|\nabla T(t,\tau)^*\varphi\|_{(1-\frac{2}{q})^{-1}}\,d\tau\\ &\leq C[v_1]_{q,t_0,\mathcal{T}} [v_1-v_2]_{q,t_0,t}(t-t_0)^{\frac{1}{2}-\frac{3}{2q}} \|\varphi\|_{(1-\frac{1}{q})^{-1}} \end{align*} for all $\varphi\in C_{0,\sigma}^{\infty}(D)$ and $t\in (t_0,\mathcal{T})$. Since the other terms in (\ref{v1v22}) are treated similarly, we obtain (\ref{v1v2r}). We take \begin{align*} \xi=\min\left\{\left(\frac{1}{2C_*}\right)^ {(\frac{1}{2}-\frac{3}{2q})^{-1}},\mathcal{T}-t_0\right\} \end{align*} which leads us to $v_1=v_2$ on $(0,t_0+\xi).$ Replacing $t_0$ by $t_0+\xi,t_0+2\xi,\cdots$, we can repeat the same argument. Hence, $v_1=v_2$ on $(0,\mathcal{T})$. Since $\mathcal{T}$ is arbitrary, we conclude $v_1=v_2$. \qed \vspace{0.2cm} \par To prove Theorem \ref{thm1}, we begin by constructing a solution of (\ref{NS6}) by applying Lemma \ref{key}. \begin{prop}\label{thm2} Let $\psi$ be a function on $\mathbb{R}$ satisfying (\ref{psidef}). We put $\alpha=\displaystyle\max_{t\in\mathbb{R}}|\psi'(t)|$. \begin{enumerate} \item There exists $\delta_1>0$ such that if $(\alpha+1)|a| \leq \delta_1$, problem (\ref{NS6}) admits a unique solution within the class \begin{align*} \big\{v\in BC_{w^*}\big((0,\infty);L^{3,\infty}_{\sigma}(D)\big)\,\big| \,&\lim_{t\rightarrow 0}\|v(t)\|_{3,\infty}=0,\\& \sup_{0<\tau<\infty}\|v(\tau)\|_{3,\infty} \leq C(\alpha+1)|a|\big\}, \end{align*} where $C>0$ is independent of $a$ and $\psi$. \item Let $3<q<\infty$.
Then there exists $\delta_2(q)\in(0,\delta_1]$ such that if $(\alpha+1)|a|\leq \delta_2$, \begin{align*} t^{\frac{1}{2}-\frac{3}{2q}}v \in BC_{w^*}\big((0,\infty);L_{\sigma}^{q,\infty}(D)\big), \end{align*} where $v(t)$ is the solution obtained above. \end{enumerate} \end{prop} \begin{proof} We first show assertion 1 by the contraction mapping principle. Given $v\in X$, we define \begin{align*} \braket{(\Phi v)(t),\varphi}=\text{the RHS of (\ref{NS6})},\quad \varphi\in C_{0,\sigma}^\infty(D). \end{align*} Lemma \ref{key} implies that $\Phi v\in X$ with \begin{align} &\|\Phi v\|_{X}\leq C_1\|v\|_{X}^2+C_2|a|\|v\|_{X} +C_3(a^2+\alpha |a|),\label{thm1pf1}\\ &\|\Phi v-\Phi w\|_{X} \leq (C_1\|v\|_{X}+C_1\|w\|_{X}+C_2|a|)\|v-w\|_{X}\label{thm1pf2} \end{align} for every $v,w\in X$. Here, $C_1,C_2,C_3$ are constants independent of $v,w,a$ and $\psi$. Hence, if we take $a$ satisfying \begin{align*} (\alpha+1)|a|<\min \left\{\frac{1}{2C_2},\frac{1}{16C_1C_3},\eta\right\}=:\delta_1, \end{align*} where $\eta\in (0,1]$ is a constant given in Proposition \ref{galdi2003}, then we obtain a unique solution $v$ within the class \begin{align*} \{v\in X\mid\|v\|_X\leq 4C_3(\alpha+1)|a|\} \end{align*} which completes the proof of assertion 1. \par We next show assertion 2. By applying Lemma \ref{key}, we see that $\Phi v\in X_q$ and that (\ref{thm1pf1})--(\ref{thm1pf2}) hold with the $X$ norm replaced by the $X_q$ norm and the constants $C_i~(i=1,2,3)$ replaced by some $\widetilde{C}_i(q)\,(\geq C_i)$. If we assume \begin{align}\label{acondiq} (\alpha+1)|a|<\min\left\{\frac{1}{2\widetilde{C}_2}, \frac{1}{16\widetilde{C}_1\widetilde{C}_3} ,\eta\right\}=:\delta_2\,(\leq \delta_1), \end{align} we can obtain a unique solution $\hat{v}$ within the class \begin{align*} \{v\in X_q\mid\|v\|_{X_q} \leq 4\widetilde{C}_3(\alpha+1)|a|\}. \end{align*} Under the condition (\ref{acondiq}), let $v$ be the solution obtained in assertion 1. Then we have (\ref{v1v2}) with $v_1$, $v_2$ replaced by $v$ and $\hat{v}$. By applying (\ref{e1}), we see that \begin{align*} \|v-\hat{v}\|_X\leq \big\{C_1(\|v\|_X+\|\hat{v}\|_X)+C_2|a|\big\}\|v-\hat{v}\|_X \leq \big\{8\,\widetilde{C}_1\widetilde{C}_3(1+\alpha)|a| +\widetilde{C}_2|a|\big\}\|v-\hat{v}\|_X. \end{align*} Furthermore, the condition (\ref{acondiq}) yields \begin{align*} 8\,\widetilde{C}_1\widetilde{C}_3(1+\alpha)|a|+\widetilde{C}_2|a|<1 \end{align*} which leads us to $v=\hat{v}$. The proof is complete. \end{proof} We note that Proposition \ref{thm2} implies \begin{align}\label{cw*r} t^{\frac{1}{2}-\frac{3}{2r}}v\in BC_{w}\big((0,\infty);L^r_{\sigma}(D)\big) \end{align} for all $r\in (3,q)$ by the interpolation inequality \begin{align*} \|f\|_r\leq C\|f\|_{3,\infty}^{1-\beta}\|f\|_{q,\infty}^\beta, \quad \frac{1}{r}=\frac{1-\beta}{3}+\frac{\beta}{q}, \end{align*} see (\ref{realinterpolation}). \par Let $q\in (6,\infty)$; then the solution obtained in Proposition \ref{thm2} also fulfills the following decay properties. \begin{prop}\label{linftydecay} \par Let $\psi$ be a function on $\mathbb{R}$ satisfying (\ref{psidef}) and put $\alpha:=\displaystyle\max_{t\in\mathbb{R}}|\psi'(t)|$. Suppose that $6<q<\infty$. Then, under the same condition as in the latter part of Proposition \ref{thm2}, the solution $v$ obtained in Proposition \ref{thm2} satisfies $v(t)\in L^r(D)~(t>0)$ with \begin{align}\label{q<decay} \|v(t)\|_{r}=O(t^{-\frac{1}{2}+\frac{3}{2q}}) \quad {\rm{as}}~t\rightarrow \infty \end{align} for $r\in (q,\infty]$.
\end{prop} \begin{proof} We first show (\ref{q<decay}) with $r=\infty$, that is, $v(t)\in L^\infty(D)$ for $t>0$ with \begin{align}\label{inftydecay} \|v(t)\|_{\infty}=O(t^{-\frac{1}{2}+\frac{3}{2q}}) \quad {\rm{as}}~t\rightarrow \infty. \end{align} We note by continuity that $C^\infty_{0,\sigma}(D)$ can be replaced by $PC_0^{\infty}(D)$ as the class of test functions in (\ref{NS6}). Hence, it follows that \begin{align}\label{thm1pf6} \sup_{\varphi\in C_0^{\infty}(D),\|\varphi\|_1\leq 1}|(v(t),\varphi)| \leq N_1+N_2+N_3+N_4, \end{align} where \begin{align*} &N_1=\sup_{\varphi\in C_0^{\infty}(D),\|\varphi\|_1\leq 1} \int_0^t |(v(\tau)\otimes v(\tau),\nabla T(t,\tau)^*P\varphi)|\,d\tau,\\ &N_2=\sup_{\varphi\in C_0^{\infty}(D),\|\varphi\|_1\leq 1} \int_0^t |(v(\tau)\otimes u_s,\nabla T(t,\tau)^*P\varphi)|\,d\tau,\\ &N_3=\sup_{\varphi\in C_0^{\infty}(D),\|\varphi\|_1\leq 1} \int_0^t |(u_s\otimes v(\tau),\nabla T(t,\tau)^*P\varphi)|\,d\tau,\\ &N_4= \sup_{\varphi\in C_0^{\infty}(D),\|\varphi\|_1\leq 1} \int_0^t |(H(\tau),T(t,\tau)^*P\varphi)|\,d\tau. \end{align*} We begin by considering $N_1$. In view of (\ref{gradl1lrad}), we have \begin{align*} \int_0^t|(v(\tau)\otimes v(\tau),\nabla T(t,\tau)^*P\varphi)|\,d\tau &\leq C[v]_{q,\infty}^2 \int_0^t\tau^{-1+\frac{3}{q}} \left\| \nabla T(t,\tau)^*P\varphi\right\|_{(1-\frac{2}{q})^{-1},1}\,d\tau\\ &\leq C[v]_{q,\infty}^2\int_0^t \tau^{-1+\frac{3}{q}}(t-\tau)^{-\frac{3}{q}-\frac{1}{2}}\,d\tau \,\|\varphi\|_1\\& \leq C[v]_{q,\infty}^2t^{-\frac{1}{2}}\|\varphi\|_1 \end{align*} for all $\varphi\in C_0^{\infty}(D)$ and $t>0$. Here, the integrability is ensured because of $q\in (6,\infty)$. Hence we obtain \begin{align}\label{thm1pf7} N_1\leq Ct^{-\frac{1}{2}}\quad {\rm{for}} ~t>0. \end{align} We next consider $N_2$. By applying (\ref{gradl1lrad}), it follows that \begin{align*} \int_0^t|(v(\tau)\otimes u_s,\nabla T(t,\tau)^*P\varphi)|\,d\tau&\leq [v]_{q,\infty}\|u_s\|_{q,\infty}\int_0^t\tau^{-\frac{1}{2}+\frac{3}{2q}} \|\nabla T(t,\tau)^*P\varphi\|_{(1-\frac{2}{q})^{-1},1}\,d\tau \nonumber\\ &\leq C [v]_{q,\infty}\|u_s\|_{q,\infty}t^{-\frac{3}{2q}}\|\varphi\|_1 \end{align*} for $t>0$. We thus have \begin{align} N_2\leq Ct^{-\frac{3}{2q}}\quad {\rm{for}} ~t>0. \end{align} We next derive a faster decay rate for $N_2$. To this end, we split the integral into \begin{align}\label{split} \int_0^t |(v(\tau)\otimes u_s,\nabla T(t,\tau)^*P\varphi)|\,d\tau =\int_0^{\frac{t}{2}}+\int^{t-1}_{\frac{t}{2}}+\int_{t-1}^t \end{align} for $t>2$. We apply (\ref{gradl1lrad}) again to find \begin{align*} \int_0^{\frac{t}{2}}& \leq \|u_s\|_{3,\infty}\|v\|_X \int_0^{\frac{t}{2}}\|\nabla T(t,\tau)^*P\varphi\|_{3,1}\,d\tau \leq Ct^{-\frac{1}{2}}\|\varphi\|_1,\\ \int_{\frac{t}{2}}^{t-1}&\leq\|u_s\|_{3,\infty}[v]_{q,\infty} \int_{\frac{t}{2}}^{t-1}\tau^{-\frac{1}{2}+\frac{3}{2q}} \|\nabla T(t,\tau)^*P\varphi\|_{(1-\frac{1}{3}-\frac{1}{q})^{-1},1}\,d\tau \leq Ct^{-\frac{1}{2}+\frac{3}{2q}}\|\varphi\|_1 \end{align*} and \begin{align*} \int_{t-1}^{t}\leq \|u_s\|_{q,\infty}[v]_{q,\infty} \int_{t-1}^t\tau^{-\frac{1}{2}+\frac{3}{2q}}\| \nabla T(t,\tau)^*P\varphi\|_{(1-\frac{2}{q})^{-1},1}\,d\tau \leq Ct^{-\frac{1}{2}+\frac{3}{2q}}\|\varphi\|_1 \end{align*} for all $\varphi\in C_0^{\infty}(D)$ and $t>2$. Summing up the estimates above, we are led to \begin{align}\label{thm1pf8} N_2\leq Ct^{-\frac{1}{2}+\frac{3}{2q}}\quad {\rm{for}} ~t>2.
\end{align} Similarly, we have \begin{align} &N_3\leq Ct^{-\frac{3}{2q}}\quad {\rm{for}} ~t>0,\\ &N_3\leq Ct^{-\frac{1}{2}+\frac{3}{2q}}\quad {\rm{for}} ~t>2. \end{align} It is easily seen from (\ref{psidef}), (\ref{hstaest2}) and (\ref{l1lr2ad}) that \begin{align*} \int_0^t |(H(\tau),T(t,\tau)^*P\varphi)|\,d\tau &\leq C(a^2+\alpha|a|)\int_0^{\min\{1,t\}} \|T(t,\tau)^*P\varphi\|_{\frac{3}{2},1}\,d\tau\\ &\leq C(a^2+\alpha|a|)\int_0^{\min\{1,t\}} (t-\tau)^{-\frac{1}{2}}\,d\tau\,\|\varphi\|_1 \end{align*} for all $\varphi\in C_0^{\infty}(D)$ and $t>0$, which yields \begin{align} &N_4\leq Ct^{\frac{1}{2}}\quad {\rm{for}} ~t>0,\\ &N_4\leq Ct^{-\frac{1}{2}}\quad {\rm{for}} ~t>2.\label{thm1pf9} \end{align} Combining (\ref{thm1pf6})--(\ref{thm1pf9}) yields $v(t)\in L^\infty(D)$ ($t>0$) and (\ref{inftydecay}). In view of the interpolation relation \begin{align*} (L^{q,\infty}(D),L^{\infty}(D))_{1-\frac{q}{r},r}=L^r(D),\quad \|f\|_r\leq C\|f\|^{\frac{q}{r}}_{q,\infty} \|f\|^{1-\frac{q}{r}}_\infty,\quad q<r<\infty, \end{align*} we obtain (\ref{q<decay}) for $r\in (q,\infty]$ as well. This completes the proof. \end{proof} \begin{rmk}\label{rmk3} When the stationary solution possesses the scale-critical rate $O(1/|x|)$, the $L^\infty$ decay of the perturbation with the less sharp rate $O(t^{-\frac{1}{2}+\varepsilon})$ was derived first by Koba \cite{koba2017} in the context of stability analysis, where $\varepsilon>0$ is arbitrary. If we look only at the $L^\infty$ decay rate, our rate is comparable with his since $q\in (6,\infty)$ is arbitrary. However, we are not able to prove Proposition \ref{linftydecay} by his method. This is because he does not split the integrals in $N_2$ and $N_3$, so that the rate of $L^\infty$ decay is slower than that of the $L^{q,\infty}$ decay. From this point of view, Proposition \ref{linftydecay} is regarded as a slight improvement of his result. \end{rmk} We next show that the solution $v$ obtained in Proposition \ref{thm2} actually satisfies the integral equation (\ref{integraleq}) by identifying $v$ with a local solution $\widetilde{v}$ of (\ref{integraleq}) in a neighborhood of each time $t>0$. To this end, we need the following lemma on the uniqueness. The proof is similar to the argument in the second half (after (\ref{v1v22})) of the proof of Theorem \ref{thm3}, and we therefore omit it. \begin{lem}\label{uniqueness} Let $3<r<\infty,~0\leq t_0<t_1<\infty$ and $v_0\in L^r_{\sigma}(D)$. Then the problem \begin{align}\label{NS7} (v(t),\varphi)=(v_0,T(t,t_0)^*\varphi) &+\int_{t_0}^t\big(v(\tau)\otimes v(\tau) +\psi(\tau)\{v(\tau)\otimes u_s+u_s\otimes v(\tau)\}, \nabla T(t,\tau)^*\varphi\big)\,d\tau\nonumber\\& +\int_{t_0}^t(H(\tau),T(t,\tau)^*\varphi)\,d\tau,\quad \forall \varphi\in C^{\infty}_{0,\sigma}(D) \end{align} on $(t_0,t_1)$ admits at most one solution within the class $L^{\infty}(t_0,t_1;L_{\sigma}^r(D))$. Here, $H$ is given by (\ref{h}). \end{lem} Given $v_0\in L^r_{\sigma}(D)$ with $r\in (3,\infty)$, let us construct a local solution of the integral equation \begin{align}\label{intt0} v(t)=T(t,t_0)v_0+\int_{t_0}^t &T(t,\tau)P[(Gv)(\tau)+H(\tau)]\,d\tau, \end{align} where $G$ and $H$ are defined by (\ref{g}) and (\ref{h}), respectively.
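Let us also recall how (\ref{intt0}) is related to (\ref{a}): by assertion 2 of Proposition \ref{harhprop} we have, at least formally for a sufficiently regular $v$ solving $\partial_\tau v+L(\tau)v=P[(Gv)+H]$ on $(t_0,t_1)$ with $v(t_0)=v_0$,
\begin{align*}
\frac{d}{d\tau}\,T(t,\tau)v(\tau)
=T(t,\tau)L(\tau)v(\tau)+T(t,\tau)\partial_\tau v(\tau)
=T(t,\tau)P[(Gv)(\tau)+H(\tau)],
\end{align*}
and integration over $(t_0,t)$ yields (\ref{intt0}); this is the usual Duhamel principle, adapted to the present non-autonomous setting.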
For $0\leq t_0<t_1<\infty$ and $r\in (3,\infty)$, we define the function space \begin{align} &Y_r(t_0,t_1)=\big\{v\in C\big([t_0,t_1];L^{r}_{\sigma}(D)\big) \,\big|\, (\cdot-t_0)^{\frac{1}{2}}\nabla v(\cdot)\in BC_w\big((t_0,t_1];L^r(D)\big)\big\} \end{align} which is a Banach space equipped with the norm \begin{align} &\|v\|_{Y_r(t_0,t_1)}=\sup_{t_0\leq\tau\leq t_1}\|v(\tau)\|_r +\sup_{t_0<\tau\leq t_1} (\tau-t_0)^{\frac{1}{2}}\|\nabla v(\tau)\|_r\label{Yr} \end{align} and set \begin{align} &U_1(v,w)(t)=\int_{t_0}^t T(t,\tau)P[v(\tau)\cdot \nabla w(\tau)]\,d\tau,\quad U_2(v)(t)=\int_{t_0}^t T(t,\tau)P[\psi(\tau)v(\tau)\cdot \nabla u_s]\,d\tau,\nonumber\\ &U_3(v)(t)=\int_{t_0}^t T(t,\tau) P[\psi(\tau)u_s\cdot \nabla v(\tau)]\,d\tau,\quad U_4(t)=\int_{t_0}^t T(t,\tau)PH(\tau)\,d\tau.\label{Udef} \end{align} \begin{lem}\label{localkey} Let $3<r<\infty$ and $0\leq t_0<t_1\leq t_0+1$. Suppose that $v,w\in Y_r(t_0,t_1)$. Then $U_1(v,w),U_2(v),U_3(v),U_4\in Y_r(t_0,t_1)$. Furthermore, there exists a constant $C=C(r,t_0)$ such that \begin{align} &\|U_1(v,w)\|_{Y_r(t_0,t)}\leq C(t-t_0)^{\frac{1}{2}-\frac{3}{2r}} \|v\|_{Y_r(t_0,t)}\|w\|_{Y_r(t_0,t)},\label{Y1}\\ &\|U_2(v)\|_{Y_r(t_0,t)}\leq C(t-t_0)^{1-\frac{3}{2r}}\|\nabla u_s\|_r \|v\|_{Y_r(t_0,t)},\label{Y2}\\ &\|U_3(v)\|_{Y_r(t_0,t)}\leq C(t-t_0)^{\frac{1}{2}-\frac{3}{2r}} \|u_s\|_r\|v\|_{Y_r(t_0,t)},\label{Y3}\\ &\|U_4\|_{Y_r(t_0,t)} \leq C(t-t_0)^{\frac{1}{2}+\frac{3}{2r}}(a^2+\alpha|a|)\label{Y4} \end{align} for all $t\in (t_0,t_1].$ \end{lem} \begin{proof} In view of (\ref{lqlr0}), we have \begin{align} \|U_1(t)\|_r\leq C\int_{t_0}^t (t-\tau)^{-\frac{3}{2r}} \|v(\tau)\|_r\|\nabla w(\tau)\|_r\,d\tau \leq C(t-t_0)^{\frac{1}{2}-\frac{3}{2r}} \|v\|_{Y_r(t_0,t)}\|w\|_{Y_r(t_0,t)}.\label{Y11} \end{align} Furthermore, (\ref{gradT}) with $\mathcal{T}=t_0+1$ yields \begin{align}\label{Y12} \|\nabla U_1(t)\|_r \leq C(t-t_0)^{-\frac{3}{2r}}\|v\|_{Y_r(t_0,t)} \|w\|_{Y_r(t_0,t)}. \end{align} By (\ref{Y11}) and (\ref{Y12}), we obtain (\ref{Y1}). Similarly, we can show (\ref{Y2})--(\ref{Y4}). We note that the estimate (\ref{Y4}) follows from (\ref{gradT2}) with $\mathcal{T}=t_0+1$ together with (\ref{hstaest2}). \par We next show the continuity of $U_1$ with respect to $t$. Let $t_2\in [t_0,t_1]$. If $t_2<t$, we have \begin{align*} U_1(t)-U_1(t_2)&=\int_{t_0}^{t_2}(T(t,t_2)-1) T(t_2,\tau)P[v(\tau)\cdot\nabla w(\tau)]\,d\tau +\int_{t_2}^t T(t,\tau)P[v(\tau)\cdot \nabla w(\tau)]\,d\tau\\&=:U_{11}(t)+U_{12}(t). \end{align*} Lebesgue's convergence theorem yields that $\|U_{11}(t)\|_r\rightarrow 0$ as $t\rightarrow t_2$, while \begin{align*} \|U_{12}(t)\|_r\leq C(t-t_2)^{\frac{1}{2}-\frac{3}{2r}} \|v\|_{Y_r(t_0,t_1)}\|w\|_{Y_r(t_0,t_1)} \rightarrow 0 \quad {\rm{as}}~t\rightarrow t_2. \end{align*} To discuss the case $t<t_2$, we need the following device. Let $(t_0+t_2)/2\leq \tilde{t}<t<t_2$, where $\tilde{t}$ will be determined later; then \begin{align*} U_1(t)-U_1(t_2)=\left(\int_{t_0}^{\tilde{t}}+\int_{\tilde{t}}^t\right) T(t,\tau)&P[v(\tau)\cdot \nabla w(\tau)]\,d\tau\\ &-\left(\int_{t_0}^{\tilde{t}}+\int_{\tilde{t}}^{t_2}\right) T(t_2,\tau)P[v(\tau)\cdot \nabla w(\tau)]\,d\tau.
\end{align*} We observe that \begin{align*} \int_{\tilde{t}}^t&\|T(t,\tau)P[v(\tau)\cdot\nabla w(\tau)]\|_r\,d\tau +\int_{\tilde{t}}^{t_2} \|T(t_2,\tau)P[v(\tau)\cdot\nabla w(\tau)]\|_r\,d\tau\\& \leq C\|v\|_{Y_r(t_0,t_1)}\|w\|_{Y_r(t_0,t_1)}\left( \int_{\tilde{t}}^t(t-\tau)^{-\frac{3}{2r}} (\tau-t_0)^{-\frac{1}{2}}\,d\tau+ \int_{\tilde{t}}^{t_2}(t_2-\tau)^{-\frac{3}{2r}} (\tau-t_0)^{-\frac{1}{2}}\,d\tau\right)\\ &\leq\frac{2C}{1-\frac{3}{2r}}\|v\|_{Y_r(t_0,t_1)}\|w\|_{Y_r(t_0,t_1)} \left(\frac{t_0+t_2}{2}-t_0\right)^{-\frac{1}{2}} (t_2-\tilde{t}\,)^{1-\frac{3}{2r}}. \end{align*} For any $\varepsilon>0$, we choose $\tilde{t}$ such that \begin{align*} \frac{2C}{1-\frac{3}{2r}}\|v\|_{Y_r(t_0,t_1)}\|w\|_{Y_r(t_0,t_1)} \left(\frac{t_0+t_2}{2}-t_0\right)^{-\frac{1}{2}} (t_2-\tilde{t}\,)^{1-\frac{3}{2r}}<\varepsilon \end{align*} which yields \begin{align*} \|U_1(t)-U_1(t_2)\|_r&\leq \int_{t_0}^{\tilde{t}} \left\|\big(T(t,\tau)-T(t_2,\tau)\big) P[v(\tau)\cdot\nabla w(\tau)]\right\|_r\,d\tau+\varepsilon \quad {\rm{for~}}\tilde{t}<t<t_{2} \end{align*} and therefore, \begin{align}\label{111} \limsup_{t\rightarrow t_2}\|U_1(t)-U_1(t_2)\|_r \leq \limsup_{t\rightarrow t_2}\int_{t_0}^{\tilde{t}} \left\|\big(T(t,\tau)-T(t_2,\tau)\big) P[v(\tau)\cdot\nabla w(\tau)]\right\|_r\,d\tau+\varepsilon. \end{align} Since $\big\|\big(T(t,\tau)-T(t_2,\tau)\big) P[v(\tau)\cdot\nabla w(\tau)]\big\|_r= \big\|\big(T(t,\tilde{t}\,)-T(t_2,\tilde{t}\,)\big) T(\tilde{t},\tau)P[v(\tau)\cdot\nabla w(\tau)]\big\|_r$ tends to $0$ as $t\rightarrow t_2$ for $t_0<\tau<\tilde{t}$, it follows from Lebesgue's convergence theorem that the integral term in (\ref{111}) tends to $0$ as $t\rightarrow t_{2}$. Since $\varepsilon>0$ is arbitrary, we have \begin{align}\label{Y1conti} U_1\in C\big([t_0,t_1];L^r_{\sigma}(D)\big). \end{align} Furthermore, we find $\nabla U_1\in C_w\big((t_0,t_1];L^r(D)\big)$ on account of (\ref{Y12}) and (\ref{Y1conti}) together with the relation \begin{align*} (\nabla U_1(t)-\nabla U_1(t_2),\varphi) =-(U_1(t)-U_1(t_2),\nabla \cdot\varphi) \end{align*} for all $t_2\in (t_0,t_1]$ and $\varphi\in C_0^{\infty}(D)^{3\times 3}$. Since $U_2$, $U_3$ and $U_4$ can be treated similarly, the proof is complete. \end{proof} The following proposition provides a local solution of (\ref{intt0}). \begin{prop}\label{localexistence} Let $3<r<\infty$, $t_0\geq 0$ and $v_0\in L^r_{\sigma}(D)$. There exists $t_1\in (t_0,t_0+1]$ such that (\ref{intt0}) admits a unique solution $v\in Y_r(t_0,t_1)$. Moreover, the length of the existence interval can be estimated from below by \begin{align*} t_1-t_0\geq \zeta(\|v_0\|_r), \end{align*} where $\zeta(\cdot):[0,\infty)\rightarrow (0,1)$ is a non-increasing function defined by (\ref{zeta}) below. \end{prop} \begin{proof} We put \begin{align*} (\Psi v)(t)=\text{the RHS of (\ref{intt0})}. \end{align*} By applying Lemma \ref{localkey}, we have \begin{align*} &\|\Psi v\|_{Y_r(t_0,t)}\leq (C_1\|v\|_{Y_r(t_0,t)}^2 +C_2\|v\|_{Y_r(t_0,t)} +C_3)(t-t_0)^{\frac{1}{2}-\frac{3}{2r}}+C_4\|v_0\|_r,\\ &\|\Psi v-\Psi w\|_{Y_r(t_0,t)}\leq \{C_1(\|v\|_{Y_r(t_0,t)} +\|w\|_{Y_r(t_0,t)})+C_2\}(t-t_0)^{\frac{1}{2}-\frac{3}{2r}} \|v-w\|_{Y_r(t_0,t)} \end{align*} for all $t\in(t_0,t_0+1]$ and $v,w\in Y_r(t_0,t)$. We note that the constants $C_i$ may depend on $\|u_s\|_r$, $\|\nabla u_s\|_r$, $\alpha$ and $a$.
We choose $t_1\in(t_0,t_0+1]$ such that \begin{align} &C_2(t_1-t_0)^{\frac{1}{2}-\frac{3}{2r}}<\frac{1}{2},\label{t12}\\ &8C_1(t_1-t_0)^{\frac{1}{2}-\frac{3}{2r}} \{C_3(t_1-t_0)^{\frac{1}{2}-\frac{3}{2r}}+C_4\|v_0\|_r\} <\frac{1}{2}\label{t13} \end{align} which imply \begin{align}\label{t11} &\lambda:=\big\{1-C_2(t_1-t_0)^{\frac{1}{2}-\frac{3}{2r}}\big\}^2 -4C_1(t_1-t_0)^{\frac{1}{2}-\frac{3}{2r}} \big\{C_3(t_1-t_0)^{\frac{1}{2}-\frac{3}{2r}}+C_4\|v_0\|_r\big\}>0. \end{align} We set \begin{align*} &\Lambda:=\frac{1-C_2(t_1-t_0)^{\frac{1}{2}-\frac{3}{2r}} -\sqrt{\lambda}}{2C_1(t_1-t_0)^{\frac{1}{2}-\frac{3}{2r}}} <4(C_3+C_4\|v_0\|_r),\\ &Y_{r,\Lambda}(t_0,t_1):=\{v\in Y_r(t_0,t_1)\mid\|v\|_{Y_r(t_0,t_1)} \leq \Lambda\}. \end{align*} Then we find that the map $\Psi:Y_{r,\Lambda}(t_0,t_1)\rightarrow Y_{r,\Lambda}(t_0,t_1)$ is well-defined and also contractive. Hence we obtain a local solution. Indeed, the conditions (\ref{t12}) and (\ref{t13}) are satisfied whenever \begin{align*} t_1-t_0<\min\left\{1,\left(\frac{1}{2C_2}\right) ^{(\frac{1}{2}-\frac{3}{2r})^{-1}}, \left(\frac{1}{16C_1(C_3+C_4\|v_0\|_r)}\right) ^{(\frac{1}{2}-\frac{3}{2r})^{-1}}\right\}. \end{align*} Thus, it is possible to take $t_1$ such that \begin{align}\label{zeta} t_1-t_0\geq \frac{1}{2}\min\left\{1,\left(\frac{1}{2C_2}\right) ^{(\frac{1}{2}-\frac{3}{2r})^{-1}}, \left(\frac{1}{16C_1(C_3+C_4\|v_0\|_r)}\right) ^{(\frac{1}{2}-\frac{3}{2r})^{-1}}\right\}=:\zeta(\|v_0\|_r). \end{align} The proof is complete. \end{pf} \begin{lem}\label{betterregular} Let $3<r<\infty$, $t_0\geq 0$ and $v_0\in L^r_\sigma(D).$ The local solution $v$ obtained in Proposition \ref{localexistence} also possesses the following properties: \begin{align}\label{regular} v\in C\big((t_0,t_1];L^\kappa_\sigma(D)\big) \cap C_{w^*}\big((t_0,t_1];L^\infty(D)\big) \end{align} for every $\kappa\in (r,\infty)$ and \begin{align} \nabla v\in C_w\big((t_0,t_1];L^\gamma(D)\big) \end{align} for every $\gamma\in(r,\infty)$ satisfying \begin{align}\label{rlrelation} \frac{2}{r}-\frac{1}{\gamma}<\frac{1}{3}. \end{align} \end{lem} \begin{pf} By using (\ref{lqlr0}) and (\ref{lqlr2}) and the semigroup property, we find $v(t)\in L^{\infty}(D)$ with \begin{align}\label{inftybdd} \|v(t)\|_\infty\leq C(t-t_0)^{-\frac{3}{2r}}\big\{\|v_0\|_r+ \|v\|^2_{Y_r(t_0,t_1)}+ \|v\|_{Y_r(t_0,t_1)}(\|u_s\|_r+\|\nabla u_s\|_r)+(a^2+\alpha|a|) \big\} \end{align} for all $t\in (t_0,t_1]$. Moreover, for each $t_2\in(t_0,t_1]$, we know from $v\in C([t_0,t_1];L^r_\sigma(D))$ that \begin{align} \big(v(t),\varphi\big)-\big(v(t_2),\varphi\big) \rightarrow 0\quad {\rm as}~t\rightarrow t_2 \end{align} for all $\varphi\in C_0^{\infty}(D)$, which combined with (\ref{inftybdd}) yields $v\in C_{w^*}\big((t_0,t_1];L^\infty(D)\big)$. Since \begin{align*} \|v(t)-v(t_2)\|_\kappa\leq \|v(t)-v(t_2)\|^{\frac{r}{\kappa}}_r \|v(t)-v(t_2)\|_\infty^{1-\frac{r}{\kappa}} \end{align*} for $\kappa\in(r,\infty)$ and $t_2\in(t_0,t_1]$, it follows from (\ref{inftybdd}) that \begin{align}\label{contikappa1} v\in C\big((t_0,t_1];L_\sigma^\kappa(D)\big) \quad{\rm for}~\kappa\in (r,\infty).
\end{align} \par The estimates (\ref{gradT}) and (\ref{gradT2}) with $\mathcal{T}=t_0+1$ imply that, if we assume (\ref{rlrelation}), we have $\nabla v(t)\in L^\gamma (D)$ with \begin{align} \|\nabla v(t)\|_\gamma\leq C(t-t_0)^{-\frac{3}{2}(\frac{1}{r}-\frac{1}{\gamma})-\frac{1}{2}}\big\{ \|v_0\|_r+\|v\|^2_{Y_r(t_0,t_1)} &+\|v\|_{Y_r(t_0,t_1)}(\|u_s\|_r+\|\nabla u_s\|_r)\nonumber\\& +(a^2+\alpha|a|)\big\}\label{nablabdd} \end{align} for all $t\in (t_0,t_1]$. Here, we note that (\ref{rlrelation}) is needed for the estimates of $\nabla U_1$ and $\nabla U_3$ given in (\ref{Udef}). On account of (\ref{contikappa1}), (\ref{nablabdd}) and \begin{align*} (\nabla v(t)-\nabla v(t_2),\varphi) =-(v(t)-v(t_2),\nabla\cdot\varphi) \end{align*} for all $t_2\in (t_0,t_1]$ and $\varphi\in C_0^\infty(D)^{3\times 3},$ we find the weak continuity of $\nabla v$ with values in $L^{\gamma}(D)$. The proof is complete. \end{pf} We close the paper by completing the proof of Theorem \ref{thm1}. \vspace{0.2cm}\\ \noindent{\bf Proof of Theorem \ref{thm1}}\quad It remains to show that the solution $v$ obtained in Proposition \ref{thm2} also satisfies (\ref{integraleq}) with \begin{align}\label{contikappa} v\in C\big((0,\infty);L^\kappa_\sigma(D)\big)\cap C_{w^*}\big((0,\infty);L^\infty(D)\big),\quad \nabla v\in C_w\big((0,\infty);L^\kappa(D)\big) \end{align} for all $3<\kappa<\infty$. Let $t_*\in (0,\infty).$ By applying Proposition \ref{localexistence} and Lemma \ref{betterregular} with $r=6$, we see that for each $t_0\in [t_*/2,t_*)$, there exists $\widetilde{v}\in Y_6(t_0,t_1)$ which satisfies (\ref{intt0}), and therefore (\ref{NS7}), with $v_0=v(t_0)$ such that \begin{align*} \widetilde{v}\in C\big((t_0,t_1];L^\kappa_\sigma(D)\big)\cap C_{w^*}\big((t_0,t_1];L^\infty(D)\big),\quad \nabla\, \widetilde{v}\in C_w\big((t_0,t_1];L^\kappa(D)\big) \end{align*} for all $\kappa\in [6,\infty)$. Moreover, the length of the existence interval can be estimated by \begin{align*} t_1-t_0\geq \zeta(\|v(t_0)\|_6)\geq \zeta\left(C_5\left(\frac{t_*}{2}\right)^{-\frac{1}{4}}\right) =:\varepsilon, \end{align*} where $\zeta(\cdot)$ is the non-increasing function in Proposition \ref{localexistence}, since \begin{align*} \|v(t)\|_6\leq C_5\left(\frac{t_*}{2}\right)^{-\frac{1}{4}} \end{align*} for all $t\geq t_*/2$; see (\ref{cw*r}). We note that the solution $v$ obtained in Proposition \ref{thm2} also satisfies (\ref{NS7}) with $v_0=v(t_0)$, since $C^\infty_{0,\sigma}(D)$ can be replaced by $L^{6/5}_\sigma(D)$ as the class of test functions in (\ref{NS6}). Let us take $t_0:=\max\{t_*/2,t_*-\varepsilon/2\}$ so that $t_*\in (t_0,t_1)$, on which interval $v=\widetilde{v}$ on account of Lemma \ref{uniqueness}. Since $t_*$ is arbitrary, we conclude (\ref{contikappa}) for $\kappa\in [6,\infty)$. By applying Proposition \ref{localexistence} with $r\in (3,6)$, it is likewise proved that the solution belongs to the class (\ref{contikappa}) for $\kappa\in (3,6)$ as well. The proof is complete.\qed \vspace{0.3cm}\\ \noindent{\bf{Acknowledgment.}}~~The author would like to thank Professor Toshiaki Hishida for valuable discussions that improved this paper.
\section{Introduction} A complete polynomial invariant able to uniquely distinguish between rooted trees has recently been introduced in \cite{liu2021tree}. Motivated by the problem of analyzing and comparing tree shapes in a phylogenetic context, this polynomial (to which we will refer as the \emph{Liu polynomial}) has been used both to define a similarity measure on rooted tree shapes and to estimate parameters and models \textit{via} its coefficients \cite{liu2020polynomial}. Moreover, its generalization from trees to networks (by analyzing the set of embedded spanning trees in the network) has also been used to study the properties of randomly generated networks \cite{janssen2021comparing}. We note that the word ``invariant'' is used here in its traditional sense, and not the one used in algebraic geometry approaches to phylogenetics, in which phylogenetic invariants for an evolutionary model along a tree are the polynomials which vanish on the expected frequencies of base patterns at the leaves \cite{cavender1987invariants}. Throughout this article, a \emph{(complete) invariant} of a set $A$ is a function $f:A\to B$ with the property that $x\sim_A y$ if and only if $f(x)\sim_B f(y)$, where $B$ is some other set (such as the set of polynomials), and $\sim_A$ and $\sim_B$ are equivalence relations in the respective sets. A multitude of (non-polynomial) invariants have been defined for specific subclasses of phylogenetic networks. To name just a few, the $\mu$-vectors, which store the number of paths from nodes to leaves, characterize (among others) tree-child networks \cite{cardona2008comparison} and orchard networks (without stacks) \cite{bai2021defining}; the set of displayed trees characterizes regular networks \cite{willson2010regular}; and the induced trinets (minimal subnetworks induced by triples of leaves) characterize (among others) level-$2$ networks \cite{van2014trinets} and orchard networks \cite{semple2021trinets}. In this paper we show how a polynomial invariant can be defined for rooted phylogenetic networks, generalizing the Liu polynomial invariant for trees. In order to do so, we consider phylogenetic networks and a labelled version of them, called internally labelled phylogenetic networks, where we keep the labels on the leaves and also (bijectively) label the reticulations. In fact, internally labelled phylogenetic networks form a subset of a more general set of networks, which we call internally multi-labelled phylogenetic networks, or IMLN's. In these networks the presence of elementary nodes is allowed, and leaf, reticulation and elementary nodes are all labelled. Then, if we denote by $\textrm{PN}$ the set of all phylogenetic networks (up to isomorphism) and by $\textrm{ILPN}$ the set of all internally labelled phylogenetic networks (up to isomorphism), the map $\Phi:\textrm{ILPN}\to\textrm{PN}$ that sends each internally labelled phylogenetic network to the phylogenetic network obtained by ``forgetting'' all the internal labels (on reticulations) is obviously well defined; therefore, for each $N\in\textrm{PN}$, $\Phi^{-1}(N)$ is the set of all the internally labelled phylogenetic networks that share its topology (its fiber, in mathematical terms). The aim of this paper is to define a polynomial $p$ that uniquely characterizes these fibers and, in so doing, also characterizes the phylogenetic networks beneath them. This paper is organized as follows.
In the Definitions section we introduce the three main graph structures under study: phylogenetic networks, internally labelled phylogenetic networks and internally multi-labelled phylogenetic networks (or IMLN's). We also define the concept of isomorphism for these structures. The Folding and unfolding section studies a process that unfolds an IMLN into a tree (an IMLT) and its reverse, folding, which recovers the initial IMLN. The key result of this section is the characterization of an IMLN by an IMLT (Corollary~\ref{cor:fu}). The next section is dedicated to the definition and study of an extension of the Liu polynomial to IMLN's. If $N$ is an IMLN on a set of leaves labelled by $X$, the assigned polynomial $p(N)$ has $|X|+r+1$ variables, where $r$ is the number of reticulations in the network. This section is further divided into several subsections. The first one studies a special type of path in IMLN's (composed only of reticulation or elementary nodes), called a strong path. Roughly speaking, these paths allow us to define an equivalence relation between IMLN's, and we prove that two IMLN's share the same polynomial if, and only if, they are equivalent (Theorem~\ref{thm:imlt-inj}). The second gives a sufficient condition on the space of phylogenetic networks (which we call separability) for the derived internally labelled phylogenetic networks to be completely characterized by the polynomial. The lemmas proved in this part allow us to prove the main result (Theorem~\ref{teo:pol-iso}) in the third part; that is, that the polynomial is a complete invariant in the set of internally labelled separable phylogenetic networks up to isomorphism. The fourth subsection proves that orchard networks are separable, and so are characterized by the polynomial introduced in this paper (Theorem~\ref{thm:sep-orch}). Finally, in the last part, we show how the obtained results can be applied to an unlabelled version of networks, in the sense that we forget the labelling of the leaves, reducing the polynomial to $r+2$ variables (Proposition~\ref{prop:unlabelled}). \section{Definitions} In this section we introduce the mathematical notation that will be used in the rest of the paper. Throughout this paper, $X$ will denote a non-empty finite set (of taxa). Commonly, we will use $X=\{x_1,\ldots, x_n\}$, and we will allow ourselves to see each member of $X$ as an irreducible polynomial in $\ZZ[x_1,\ldots,x_n]$; i.e., we will consider the labels of the leaves in our networks to be polynomials of the form $x_i$ for $i\in\{1,\ldots, n\}$. \begin{defn}\label{d:rooted.binary.net} A \emph{rooted binary phylogenetic network $N=(V,E)$ on $X$}, or simply a \emph{phylogenetic network} on $X$, is a rooted directed acyclic graph with no parallel arcs satisfying the following conditions: \begin{enumerate} \item[(i)] any node with out-degree zero (a \emph{leaf}) has in-degree one, and the set of nodes with out-degree zero, denoted by $L(N)$, is identified with $X$ \textit{via} a bijection $\varphi: L(N)\to X$; \item[(ii)] the root is the only node with in-degree zero, and has out-degree two; \item[(iii)] any other node has either in-degree one and out-degree two (a \emph{tree} node), or in-degree two and out-degree one (called a \emph{reticulation} node). \end{enumerate} We shall consider the leaves and the root to be tree nodes.
\end{defn} \begin{defn}\label{d:IMLN} A \emph{rooted binary internally multi-labelled phylogenetic network $N=(V,E)$ on $X$}, or simply an \emph{IMLN} on $X$, is a rooted directed acyclic graph with no parallel arcs satisfying the following conditions: \begin{enumerate} \item[(i)] any node with out-degree zero (a \emph{leaf}) has in-degree one, and the set of nodes with out-degree zero, denoted by $L(N)$, is identified with $X$ \textit{via} a surjection $\varphi: L(N)\to X$; \item[(ii)] the root is the only node with in-degree zero, and it can have out-degree one (in which case we shall say it is an \emph{elementary} node) or two (a \emph{tree} node); \item[(iii)] any other node has either in-degree one and out-degree two (again, a \emph{tree} node), or in-degree two and out-degree one (called a \emph{reticulation} node), or in-degree one and out-degree one (again, an \emph{elementary} node); \item[(iv)] if $R(N)$ denotes the set of reticulation nodes and $E(N)$ the set of elementary nodes of $N$, then there exists a labelling function $\ell:R(N) \cup E(N)\to \{\lambda_1,\ldots, \lambda_r\}$ such that its restriction to $R(N)$ is injective and such that, if $u\in R(N)$ and $v\in E(N)$, then $\ell(u)\neq \ell(v)$. \end{enumerate} \end{defn} \begin{defn} A \emph{rooted binary internally multi-labelled phylogenetic tree $T=(V,E)$ on $X$}, or simply an \emph{IMLT} on $X$, is an IMLN without reticulation nodes. \end{defn} We will consider the labels $\lambda_1,\ldots,\lambda_r$ to be irreducible polynomials in $\ZZ[x_1,\ldots,x_n,\lambda_1,\ldots,\lambda_r]$. Notice that Definition~\ref{d:IMLN} implies that IMLN's are a recursive structure in the following sense: given any IMLN $N$, for any $u\in V(N)$, the subgraph rooted at $u$ is still an IMLN. This is not the case in general for phylogenetic networks. If an IMLN (whose root has out-degree two) has no elementary nodes and the labelling of its leaves is a bijection, then, by definition, it becomes a phylogenetic network once the labelling $\ell$ on reticulations is suppressed. Also, if we consider a phylogenetic network and we add a labelling bijection $\ell:R(N)\to \{\lambda_1,\ldots, \lambda_r\}$, it becomes an IMLN. In order to reflect this correspondence, we introduce the following definition. \begin{defn} An \emph{internally labelled phylogenetic network} $N$ on $X$ is an IMLN on $X$ without elementary nodes and where the maps $\varphi: L(N)\to X$ and $\ell:R(N)\to \{\lambda_1,\ldots, \lambda_r\}$ are bijections. \end{defn} In order to formally define the concept of isomorphism between a pair of phylogenetic networks or between a pair of IMLN's, we consider the alternative notation, $(V,E,\varphi)$ and $(V,E,\varphi, \ell)$, to reflect the labelling functions, respectively. \begin{defn} Two phylogenetic networks $N_1=(V_1,E_1,\varphi_1)$ and $N_2=(V_2,E_2,\varphi_2)$ on $X$ are \emph{isomorphic} if there exists a bijection $f: V_1 \rightarrow V_2$ such that $\varphi_1(x)=\varphi_2(f(x))$ for all $x \in L(N_1)$, and $(u,v) \in E_1$ if and only if $(f(u), f(v)) \in E_2$. \end{defn} \begin{defn} Two IMLN's $N_1=(V_1,E_1,\varphi_1,\ell_1)$ and $N_2=(V_2,E_2,\varphi_2,\ell_2)$ on $X$ are \emph{isomorphic} if there exists a bijection $f: V_1 \rightarrow V_2$ such that $\varphi_1(x)=\varphi_2(f(x))$ for all $x \in L(N_1)$, $\ell_1(x)=\ell_2(f(x))$ for all $x \in R(N_1) \cup E(N_1)$, and $(u,v) \in E_1$ if and only if $(f(u), f(v)) \in E_2$. \end{defn} That is, a graph isomorphism that preserves the labels of both the reticulation and elementary nodes.
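To fix ideas, the following minimal Python sketch (ours, and not part of the formal development; the dictionary-based encoding and the helper name \texttt{is\_imln} are illustrative assumptions) checks the degree and labelling conditions of Definition~\ref{d:IMLN} on a candidate graph. Acyclicity, the absence of parallel arcs and the surjectivity of $\varphi$ are not checked, for brevity.
\begin{verbatim}
from collections import defaultdict

def is_imln(edges, leaf_label, internal_label):
    # edges: list of arcs (parent, child); leaf_label maps leaves to X;
    # internal_label maps reticulation/elementary nodes to the lambda_j's.
    out_deg, in_deg = defaultdict(int), defaultdict(int)
    nodes = set()
    for u, v in edges:
        out_deg[u] += 1
        in_deg[v] += 1
        nodes |= {u, v}
    roots = [v for v in nodes if in_deg[v] == 0]
    if len(roots) != 1 or out_deg[roots[0]] not in (1, 2):
        return False                     # condition (ii)
    for v in nodes:
        i, o = in_deg[v], out_deg[v]
        if o == 0 and not (i == 1 and v in leaf_label):
            return False                 # condition (i): leaves
        if (o == 1 and i > 2) or (o == 2 and i > 1) or o > 2:
            return False                 # condition (iii)
    reticulations = {v for v in nodes if out_deg[v] == 1 and in_deg[v] == 2}
    elementary = {v for v in nodes if out_deg[v] == 1 and in_deg[v] <= 1}
    if not (reticulations | elementary) <= set(internal_label):
        return False                     # condition (iv): all labelled
    ret_labels = {internal_label[v] for v in reticulations}
    if len(ret_labels) != len(reticulations):
        return False                     # injective on reticulations
    return not ret_labels & {internal_label[v] for v in elementary}
\end{verbatim}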
\section{Folding and unfolding}\label{sec:fold} Following \cite{huber2016folding}, a phylogenetic network can be ``unfolded'' in a specific manner to obtain a multi-labelled tree, that is, in terms of the previous definitions, a particular IMLT without elementary nodes. Moreover, in some cases, this process can be reversed, and the multi-labelled tree can be ``folded'', recovering the initial network. A phylogenetic network cannot in general be characterized by a multi-labelled tree, and this correspondence is valid only for the subclass of $FU$-stable phylogenetic networks \cite{huber2016folding}. In this section, however, we prove that an internally labelled phylogenetic network can be uniquely characterized by an IMLT obtained by a sequence of ``unfoldings'' at its reticulation nodes. Roughly speaking, considering the reticulations of an IMLN in a specific order, it is possible to sequentially duplicate the subnetworks descending from these nodes until an IMLT is obtained. Let $N$ be a (generic) IMLN, and $R(N)$ the set of its reticulation nodes. The relation of being a descendant of another node induces a partial order over $R(N)$, which we will denote by $\leq_R$. That is, for any two nodes $u,v \in R(N)$, $u\leq_R v$ if, and only if, there exists a directed path from $v$ to $u$. Let $R_{\min}(N)$ be the set of the minimal elements of $R(N)$ under this order, i.e. reticulation nodes such that none of their descendants are also reticulation nodes. \begin{lem}\label{lema:min_tree} Let $N$ be an IMLN and $u \in R_{\min}(N)$. Then the graph rooted at $u$ is an IMLT. \end{lem} \begin{pf} If $u \in R_{\min}(N)$, then there is no path in $N$ from $u$ to another reticulation. This means that there are no reticulations in the graph rooted at $u$, and therefore it is an IMLT. \end{pf} Let $N$ be an IMLN, and consider $u\in R_{\min}(N)$ (so that $u$ is labelled by an element in $\{\lambda_1, \ldots, \lambda_r\}$). Let $v_1,v_2$ be its parents, noting that $v_1\neq v_2$ since parallel arcs are excluded. Define $U(N, u)$ to be the \textit{unfolded IMLN of $N$ at $u$}, obtained by the following algorithm: \begin{enumerate} \item delete the edges $(v_1,u)$ and $(v_2,u)$; \item duplicate $N(u)$, the IMLT rooted at $u$, including all its labels; \item add an edge from $v_1$ to one of the resulting copies of $u$, and an edge from $v_2$ to the remaining copy of $u$. \end{enumerate} \begin{rmk}\label{rem:unfolded-paths} Notice that the process of unfolding preserves paths in the following sense: if $N'$ is obtained from $N$ by unfolding $N$ at some node $u$, then any path between two nodes in $N'$ comes from an existing path in $N$; and, \textit{vice versa}, any path between two nodes in $N$ corresponds to a path in $N'$. Notice, however, that a path in $N$ might very well correspond to two different paths in $N'$, and so this assignment is not injective. \end{rmk} \begin{cor} Let $N$ be an IMLN, and $u\in R_{\min}(N)$. Then $U(N, u)$ is an IMLN. \end{cor} Let $N$ be an IMLN. We say that a sequence $(u_1,\ldots, u_k)$ of nodes in $R(N)$ is \emph{compatible} if $u_1\in R_{\min}(N)$ and, setting $N_{u_1}=U(N,u_1)$ and $N_{u_{i+1}} = U(N_{u_i}, u_{i+1})$, we have $u_{i+1}\in R_{\min}(N_{u_i})$ for every $i\in\{1,\ldots,k-1\}$; we call $(N, N_{u_1}, N_{u_2},\ldots, N_{u_k})$ the associated sequence of IMLN's. Then, if $(u_1,\ldots, u_k)$ is compatible, there is no path from $u_i$ to $u_j$ when $j>i$; i.e., the sequence is non-decreasing under the partial order $\leq_R$ induced by the network over $R(N)$.
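Before studying how different compatible sequences relate, we give a short computational sketch of one unfolding step $U(N,u)$ (ours, and merely illustrative: the dictionary encoding of a network and the helper names are assumptions of the sketch, and no validity checks are performed). It follows the three steps of the algorithm above, reusing the original subtree $N(u)$ as one of the two copies.
\begin{verbatim}
import itertools

_fresh = itertools.count()  # supply of fresh node identifiers

def copy_subtree(children, label, root):
    # Return the root of a fresh copy of the IMLT rooted at `root`,
    # registering the copied nodes and their labels in place.
    new = (root, next(_fresh))
    if root in label:
        label[new] = label[root]
    children[new] = [copy_subtree(children, label, c)
                     for c in children.get(root, [])]
    return new

def unfold_at(children, label, u):
    # One unfolding step U(N, u); u is assumed to lie in R_min(N).
    v1, v2 = [p for p, cs in children.items() if u in cs]  # the two parents
    children[v1].remove(u)       # step 1: delete the arcs (v1,u) and (v2,u)
    children[v2].remove(u)
    u_copy = copy_subtree(children, label, u)  # step 2: duplicate N(u)
    children[v1].append(u)       # step 3: one copy of u under each parent;
    children[v2].append(u_copy)  # u and u_copy are now elementary nodes

# Toy example: tree nodes a and b are the parents of the reticulation u.
children = {"rho": ["a", "b"], "a": ["u", "x2"], "b": ["u", "x3"],
            "u": ["x1"], "x1": [], "x2": [], "x3": []}
label = {"u": "lambda_1", "x1": "x_1", "x2": "x_2", "x3": "x_3"}
unfold_at(children, label, "u")  # the result is the IMLT U(N, u)
\end{verbatim}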
\begin{lem}\label{lem:unf-order} Let $N$ be an IMLN and $u_1, u_2\in R_{\min}(N)$. Then, $$ U(U(N, u_1), u_2) = U(U(N, u_2), u_1). $$ \end{lem} \begin{pf} This follows from Lemma \ref{lema:min_tree} and the steps of the unfolding algorithm. If $u_1 \in R_{\min}(N)$, then $ u_2\in R_{\min}(U(N,u_1))$; otherwise there would be a reticulation node $u'$ in $R(U(N,u_1))$ and a path from $u_2$ to $u'$ in $U(N,u_1)$, and so in $N$, which is a contradiction. Then, by Lemma \ref{lema:min_tree}, the graph rooted at $u_2$ in $U(N,u_1)$ is an IMLT. Since $u_2$ is not a node in any of the copies of the IMLT rooted at $u_1$ in the construction of $U(N,u_1)$, there is no intersection between the copies from $u_1$ and the copies from $u_2$. Since the same argument holds if we start with $u_2$, the result follows. \end{pf} Lemma \ref{lem:unf-order} can be extended, following the same arguments, to any set of reticulations $\{u_1,\ldots, u_k\}$ all of which are in $R_{\min}(N)$, since there will be no intersection between the created copies of IMLT's. Let $N$ be an IMLN. We define an equivalence relation $\equiv$ in the set of compatible sequences of elements of $R(N)$ as follows: $$ (u_1, u_2, \ldots, u_k) \equiv (v_1, v_2, \ldots, v_{k'}) \Leftrightarrow \{u_1, u_2, \ldots, u_k\}= \{v_1, v_2, \ldots, v_{k'}\}.$$ That is, we say that two compatible sequences are \emph{equivalent} if they are composed of the same set of nodes. A \emph{$\leq_R$-chain} in an IMLN $N$ is a chain under the $\leq_R$ order defined on $R(N)$ (or a subset of it); that is, a subset of reticulations such that $u_1 \leq_R \cdots \leq_R u_s$. A \emph{$\leq_R$-antichain} in an IMLN $N$ is an antichain under the $\leq_R$ order; i.e., a subset of reticulations of $N$ which are pairwise incomparable ($u_i \not\leq_R u_j$ and $u_j \not\leq_R u_i$ if $u_i \neq u_j$) under the $\leq_R$ order. In the next lemma we prove that if we consider a $\leq_R$-chain in an IMLN $N$, then there is a single way to traverse these nodes in a compatible sequence, from bottom to top. On the other hand, if we consider a $\leq_R$-antichain, then every way to traverse these nodes is valid to form a compatible sequence. \begin{lem}\label{lem:chains} Let $N$ be an IMLN and $S=\{v_1,\ldots, v_r\} \subseteq R(N)$. Then \begin{itemize} \item[(a)] If $v_1 \leq_R v_2 \leq_R \cdots \leq_R v_r$ is a $\leq_R$-chain, then $v_i$ must precede $v_j$ in every compatible sequence containing $S$ if $i<j$. \item[(b)] If $S$ is a $\leq_R$-antichain, then every possible ordering of its nodes produces a compatible sequence composed of the nodes of $S$. \end{itemize} \end{lem} \begin{pf} We first prove (a). If $v_1 \leq_R v_2 \leq_R \cdots \leq_R v_r$ is a $\leq_R$-chain, then there is a path from $v_j$ to $v_i$ if $i<j$. Therefore, if there existed a path from $v_i$ to $v_j$, it would produce a cycle in $N$; but this is not possible because $N$ is an IMLN, and so in particular acyclic. This means that there is no path from $v_i$ to $v_j$ when $i<j$. Consequently, if $i<j$, $v_i$ must precede $v_j$ in every compatible sequence containing $S$. \\ Now we prove (b). Let $v$ and $v'$ be two nodes in $S$. If $v$ precedes $v'$ in a sequence, there cannot be a path from $v$ to $v'$; otherwise $v'\leq_R v$. If $v'$ precedes $v$ in a sequence, there cannot be a path from $v'$ to $v$; otherwise $v\leq_R v'$. Since $S$ is a $\leq_R$-antichain, both cases yield compatible sequences.
\end{pf} \begin{cor}\label{cor:unf-order} Let $N$ be an IMLN and $(u_1, u_2, \ldots, u_k) \equiv (v_1, v_2, \ldots, v_{k})$ a pair of equivalent compatible sequences of elements of $R(N)$. Let $(N, N_{u_1}, N_{u_2},\ldots, N_{u_k})$ and $(N, N'_{v_1}, N'_{v_2},\ldots, N'_{v_k})$ be the associated sequences of IMLN's of the corresponding compatible sequences. Then $N_{u_k}$ and $N'_{v_k}$ are isomorphic. \end{cor} \begin{pf} For $k=1$ there is nothing to prove, since $u_1=v_1$. For $k=2$, if $u_1,u_2 \in R_{\min}(N)$, there is nothing to prove, because $(u_1,u_2)$ and $(u_2,u_1)$ are compatible sequences and Lemma \ref{lem:unf-order} applies. If $(u_1,u_2)$ is a compatible sequence and $u_1 \leq_R u_2$, then it must be that $(v_1,v_2)=(u_1, u_2)$ (and not $(v_1,v_2)=(u_2, u_1)$), since $u_2\notin R_{\min}(N)$. \\ The general situation for $k\geq 3$ demands a different approach. Let $s_1=(u_1, u_2, \ldots, u_k)$ and $s_2=(v_1, v_2, \ldots, v_{k})$. Since $s_1 \equiv s_2$, we have $\{u_1, u_2, \ldots, u_k\}= \{v_1, v_2, \ldots, v_{k}\} \subseteq R(N)$. Let $A=\{u_1, u_2, \ldots, u_k\}$. We then iteratively apply the following process to prove the result. Let $A'=\{u\in A: u \in R_{\min}(N)\}$. Note that $A'$ is not empty, since $u_1$ and $v_1$ (which could be equal) are in $R_{\min}(N)$. Then, let $s_1^{A'}$ be the sequence obtained from $s_1$ by moving all the nodes in $A'$ to the first positions (in such a way that if $u_i, u_j \in A'$ with $i<j$, then the node $u_i$ appears before $u_j$ in $s_1^{A'}$), leaving the rest of the nodes unchanged. Note that $s_1^{A'}$ is compatible by construction and $s_1^{A'} \equiv s_1$. A similar process can be repeated to obtain $s_2^{A'} \equiv s_2$. Note that the set of nodes of $A'$ occupying the first $|A'|$ positions is exactly the same in both $s_1^{A'}$ and $s_2^{A'}$, and that it is a $\leq_R$-antichain; however, these nodes may not appear in the same order in both sequences. \\ Let $u^*$ be the last node (the rightmost) in $s_1^{A'}$ such that $u^* \in A'$. Now let $\hat{s_2}^{A'}$ be the compatible sequence equivalent to $s_2^{A'}$ obtained by leaving all positions unchanged except that $u^*$ is moved so as to become the last node in $\hat{s_2}^{A'}$ belonging to $A'$. This ensures that the last node of the first $|A'|$ positions in both $s_1^{A'}$ and $\hat{s_2}^{A'}$ is the same, $u^*$. Note that it could be that $u^*=u_k=v_k$ (when $A=A'$). By Lemma \ref{lem:chains}(b) and Lemma \ref{lem:unf-order}, the IMLN $N_{u^*}$ obtained by sequentially unfolding at the nodes in $s_1^{A'}$ until $u^*$ is reached is isomorphic to the IMLN obtained by sequentially unfolding at the (same) nodes in $\hat{s_2}^{A'}$ until $u^*$ is reached. Then, the same process can be repeated by considering the new equivalent compatible sequences obtained from $s_1^{A'}$ and $\hat{s_2}^{A'}$ by suppressing the first $|A'|$ positions, starting now with the IMLN $N_{u^*}$.\end{pf} Therefore, given a compatible sequence $(u_1, u_2, \ldots, u_r)$ of all the elements of $R(N)$, and its associated sequence $(N, N_{u_1}, N_{u_2},\ldots, N_{u_r})$, we define the \textit{unfolding} of an IMLN $N$, denoted by $U(N)$, by means of the equation $U(N) = N_{u_r}$. We may refer to such a sequence as a \textit{sequence of unfoldings}. See Fig \ref{fig:unf} for an example of a sequence of unfoldings for an IMLN; in fact, for an internally labelled phylogenetic network. \begin{figure}[!ht] \includegraphics[width=\linewidth]{im1.png} \caption{{\bf The unfolding of an IMLN}.
{Top two figures: A phylogenetic network $N$ on $\{x_1,x_2,x_3,x_4\}$, and the IMLN obtained by considering the labelling function over $R(N)$ given by $\ell(u_i)=\lambda_i$ for $i\in\{1,2,3\}$. Notice that $N$ is an internally labelled phylogenetic network. The three figures below are the sequence of unfoldings $(N_{u_2}, N_{u_3}, N_{u_1})$ associated to the compatible sequence of reticulations $(u_2,u_3,u_1)$. Following the introduced terminology, $N_{u_2}=U(N,u_2)$, $N_{u_3}=U(U(N,u_2),u_3)$ and $N_{u_1}=U(U(U(N,u_2),u_3),u_1)=U(N)$. Note that $u_2,u_3 \in R_{\min}(N)$ and $u_1 \notin R_{\min}(N)$, since there is a path from $u_1$ to $u_2$ in $N$.}} \label{fig:unf} \end{figure} Now, we are interested in the ``reverse'' process to unfolding. Roughly speaking, we are interested in formally defining a way to ``fold'' an IMLT to recover the IMLN from which it comes. Given an IMLN $N$, we can also define a partial order over the set of elementary nodes $E(N)$ by saying that, for any two $u,v\in E(N)$, $u\leq_E v$ if and only if there exist $u', v'\in E(N)$ with $\ell(u) = \ell(u')$ and $\ell(v) = \ell(v')$ and a directed path from $v$ to $u$. We denote by $E_{\max}(N)$ the set of elementary nodes that are maximal under this order. \begin{lem} \label{lem:elem} Let $(N, N_{u_1}, N_{u_2},\ldots, N_{u_r})$ be a sequence of unfoldings of an internally labelled phylogenetic network $N$. For any $N_{u_i}$ in it and for every $u\in E_{\max}(N_{u_i})$, there exists exactly one other $v\in E_{\max}(N_{u_i})$ such that $\ell(u) = \ell(v)$ and the IMLT's $N_{u_i}(u)$ and $N_{u_i}(v)$ are isomorphic. \end{lem} \begin{pf} Let $N_{u_i}$ be one of the IMLN's in the sequence of unfoldings. Let $N'=N_{u_{i-1}}$, with $N'=N$ when $i=1$. By construction, $u_{i}\in R_{\min}(N')$. \\ Since $N_{u_i}=U(N',u_i)$, the IMLT $N'(u_i)$ is duplicated; if $u$ and $v$ denote the two resulting copies of $u_i$ in $N_{u_i}$, we have $\ell(u) = \ell(v)$ and $N_{u_i}(u)=N_{u_i}(v)$. Moreover, $u, v \in E_{\max}(N_{u_i})$; otherwise, if $u$ (or $v$) were not maximal under the order $\leq_E$ in $N_{u_i}$, there would be $w, w'\in E(N_{u_i})$ with $\ell(w)=\ell(w')$ such that there is a path from $w$ to $u$. By Remark \ref{rem:unfolded-paths} this path is preserved in every $N_{u_j}$ with $j<i$. Since the labelling function $\ell$ is injective over reticulation nodes and $N$ has no elementary nodes, this means that the pair $w, w'$ corresponds to a reticulation node in some $N_{u_j}$ with $j<i$; equivalently, this is a reticulation node equal to some $u_j$ with $j<i$. This contradicts the fact that the sequence $(u_1,u_2,\ldots, u_r)$ is compatible. If we consider a maximal element in $N_{u_i}$ different from the two coming from the duplication of $u_i$ in $N'$, the previous argument can be reproduced similarly. Each such pair of maximal elements has been preserved as maximal since the unfolding at the corresponding reticulation was performed, and this proves that the IMLT's rooted at the corresponding copies are also preserved until $N_{u_i}$ is reached. \end{pf} In particular, in the proof of Lemma \ref{lem:elem}, and following the same notation, we show that the node $u_i$ is maximal under the $\leq_E$ order in $N_{u_i}$. Notice also that this could be false if elementary nodes were allowed in the initial IMLN $N$. \begin{prop}\label{lem:elem-order} Let $(N, N_{u_1}, N_{u_2},\ldots, N_{u_k})$ be a sequence of unfoldings of an internally labelled phylogenetic network $N$.
For any $N_{u_i}$ in it, let $w\in E_{\max}(N_{u_i})$. Then, $v\in E(N_{u_i})$ is such that $v\leq_E w$ if and only if $v\leq_R w$ in $R(N)$. \end{prop} \begin{pf} We begin with the ``if'' direction. If $v,w$ are such that $v\leq_R w$ when seen as reticulation nodes in $N$, there exists at least one path from $w$ to $v$. Now, since $w\in E_{\max}(N_{u_i})$, by Lemma \ref{lem:elem}, there exists $w'\in E_{\max}(N_{u_i})$ such that $\ell(w) = \ell(w')$ and $N_{u_i}(w)=N_{u_i}(w')$, \textit{via} an isomorphism $f$. Then, since by hypothesis $v\in E(N_{u_i})$ and, by Remark \ref{rem:unfolded-paths}, the path from $w$ to $v$ in $N$ is preserved in $N_{u_i}$, there exist paths from $w$ to $v$ and from $w'$ to $f(v)$ in $N_{u_i}$, with $\ell(v)=\ell(f(v))$, and therefore $v\leq_E w$ in $N_{u_i}$. In the opposite direction, suppose that $v,w$ are such that $v\leq_E w$. Again by Lemma \ref{lem:elem}, in $N_{u_i}$ there exists $w'$ such that $\ell(w) = \ell(w')$ and $N_{u_i}(w)=N_{u_i}(w')$ \textit{via} an isomorphism $f$. Since $v\leq_E w$, there exists a path from $w$ to $v$ and a path from $w'$ to $f(v)$, and $\ell(v) = \ell(f(v))$. Now, since there are no elementary nodes in $N$, there must exist $j<i$ such that in $N_{u_j}$ (it could be that $N_{u_j}=N$) the nodes $v$ and $w$ are reticulations. By Remark \ref{rem:unfolded-paths}, this implies that there would exist a path from $w$ to $v$ in $N_{u_j}$, and therefore $v\leq_R w$ in $N_{u_j}$, and so in $N$. This concludes the proof. \end{pf} Given $N$ an IMLN, $u\in R_{\min}(N)$ and $U(N, u)$, we would like to consider $N$ to be the result of a folding operation over $U(N, u)$: $N = F(U(N,u), u)$, for some suitable $F$. For any unfolding sequence $(N, N_{u_1}, N_{u_2},\ldots, N_{u_r})$, we say that each of its members is a \emph{(phylogenetic) pseudo-network}; in particular, they are IMLN's. Equivalently, we can define a pseudo-network recursively as follows: let $N$ be an IMLN; it is a pseudo-network if it satisfies the following three conditions: \begin{itemize} \item[(i)] no reticulation node descends from an elementary node; \item[(ii)] for any $u\in E_{\max}(N)$ there exists $v\in E_{\max}(N)$ such that $\ell(u) = \ell(v)$ and $N(u) = N(v)$ as IMLT's; \item[(iii)] for any $u\in E_{\max}(N)$, the IMLN obtained by the process of \begin{enumerate} \item considering the node $v\in E_{\max}(N)$ such that $\ell(v) = \ell(u)$ and $N(u) = N(v)$, and the parent of $v$, say $v^{(1)}$; \item deleting $N(v)$, as well as the edge $(v^{(1)},v)$; \item adding the arc $(v^{(1)},u)$, \end{enumerate} is also a pseudo-network. \end{itemize} The IMLN obtained by the process described in (iii) is denoted by $F(N,u)$, and called the \emph{folded IMLN of $N$ at $u$}. Notice that if $u,v\in E_{\max}(N)$ are such that $\ell(u) = \ell(v)$, then $F(N, u) = F(N, v)$. \begin{lem}\label{lem:folding} Let $N$ be a pseudo-network and $u\in R_{\min}(N)$. Then, $$ F(U(N, u), u) = N. $$ \end{lem} \begin{pf} Let $N'=U(N, u)$. Since $u\in R_{\min}(N)$, $N(u)$ (the tree rooted at $u$) is an IMLT. Let $v_1, v_2$ be the parents of $u$ in $N$. When $N(u)$ is duplicated in the unfolding process, $u$ and a new copy of it, say $v$, are elementary nodes, the roots of $N'(u)$ and $N'(v)$ respectively, with $N'(u)=N'(v)$. Moreover, $(v_1,u)$ and $(v_2,v)$ are arcs in $N'$. Since $u \in E_{\max}(N')$ (because $u\in R_{\min}(N)$), by Lemma \ref{lem:elem}, $v$ is the other node in $E_{\max}(N')$ such that $\ell(u) = \ell(v)$ and $N'(u)=N'(v)$.
By definition of the folding process of $N'$ at $u$, the IMLT $N'(v)$ and the arc $(v_2,v)$ are deleted and a new arc $(v_2,u)$ is created. This results in a reticulation node $u$ with parents $v_1$ and $v_2$ which is the root of $N'(u)$. Since $N(u)=N'(u)$, we conclude $F(N', u) = N$. \end{pf} Given $N$ an IMLN and $(N, N_{u_1}, N_{u_2}, \ldots, N_{u_r})$ a sequence of unfoldings, by Lemma \ref{lem:folding} we have that $N_{u_i} = F(N_{u_{i+1}}, u_{i+1})$ and that $N = F(N_{u_1}, u_1)$. Therefore, we derive the following result. \begin{cor}\label{cor:folding} Let $N$ be an internally labelled phylogenetic network and $(N, N_{u_1}, N_{u_2}, \ldots, N_{u_r})$ any sequence of unfoldings. Then $$ N = F(F(F(\ldots F(U(N), u_r)\ldots), u_2), u_1). $$ \end{cor} Note that, just as equivalent compatible sequences are not unique, there is not a unique way to recover the IMLN $N$ by applying a set of foldings. If $N$ is a pseudo-network, we know that it is the product of a sequence of unfoldings performed over an IMLN, $N'$. We can then rewrite Corollary \ref{cor:folding} by defining a function $F$ from the set of pseudo-networks to the set of IMLN's by $F(N):=N'$. Hence, \begin{cor}\label{cor:fu} Let $N$ be an internally labelled phylogenetic network. Then $$ N = F(U(N)). $$ \end{cor} This result is the analogue of the concept of stable networks in Section 4 of~\cite{huber2016folding}. The key difference here is that we allow elementary nodes. \section{A polynomial for internally multi-labelled phylogenetic networks}\label{sec:polynomial} Given a phylogenetic network $N$ on $X$, one can obtain a rooted tree by removing one of the two incoming arcs of each reticulation node. These (sub)trees could contain elementary nodes, and their leaves might be labelled in $X$ (the leaves from $N$) or in other sets (for instance, when the single outgoing arc of a reticulation is removed). Those trees become unrooted if the direction of the arcs is suppressed (in particular, the root becomes a degree-two node) and are called \emph{embedded spanning trees} if their set of leaves is exactly $X$. Tree-child phylogenetic networks are characterized by their set of embedded spanning trees \cite{francis2018identifiability}, but general phylogenetic networks are not. In \cite{janssen2021comparing}, the Liu polynomial is generalized to phylogenetic networks through their sets of embedded spanning trees. Roughly speaking, the polynomial of the network is the product of the polynomials of the embedded spanning trees (counting trees with multiplicity). Consequently, this extension is a complete invariant for tree-child networks. There are some natural extensions of the Liu polynomial to IMLN's that come to mind. The first one, for internally labelled phylogenetic networks, is to completely unfold such a network and then, from each elementary node $u$ labelled $\lambda_i$, for some $i\in\{1,\ldots, r\}$, with the labels $\lambda_i$ kept distinguishable from the labels $x_i$, grow an arc to a new node $v$, label $v$ by $\lambda_i$, and finally forget the labelling of $u$; a sketch of this transformation is given below. Thus, the unfolded IMLT becomes a multi-labelled tree over the leaves $\{x_1,\ldots,x_n, \lambda_1, \ldots, \lambda_r\}$. See an example of this decomposition in Fig \ref{fig:cesc&new}, computed from the internally labelled phylogenetic network $N$ depicted in Fig \ref{fig:unf}. By means of Corollary 3.5 in \cite{liu2021tree}, this extension of the polynomial is immediately seen to uniquely characterize an internally labelled phylogenetic network.
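As announced above, the following short sketch (ours; it reuses the dictionary encoding of the previous sketches and is only an illustration of the construction, with hypothetical helper names) performs this transformation on an unfolded IMLT: each labelled elementary node grows an arc to a new pendant leaf that inherits its label $\lambda_i$, after which the internal label is forgotten.
\begin{verbatim}
def elementary_to_pendant(children, label):
    # Turn each labelled elementary node of an IMLT into an ordinary tree
    # node with a new pendant leaf carrying its former lambda-label.
    elementary = [v for v, cs in children.items()
                  if len(cs) == 1 and v in label]
    for i, v in enumerate(elementary):
        leaf = ("pendant", i)           # fresh leaf node
        label[leaf] = label.pop(v)      # move lambda_i from v to the leaf
        children[leaf] = []
        children[v].append(leaf)        # v now has two children
\end{verbatim}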
\begin{figure}[!ht] \centering \includegraphics[scale=0.5]{im2.png} \caption{{\bf A multi-labelled tree derived from an internally labelled phylogenetic network}. Let $N$ be the network depicted in Fig \ref{fig:unf}. This figure depicts a decomposition of $N$ resulting in a multi-labelled tree.} \label{fig:cesc&new} \end{figure} We will here deal with a natural extension that reflects the reticulation process in the sheer morphology of the polynomial, rather than in the names of the variables. Let $N$ be an IMLN. Then, consider $$p: V(N)\to\ZZ[x_1,\ldots, x_n,\lambda_1,\ldots,\lambda_r, y]$$ to be defined recursively as follows. Let $u\in V(N)$; then: \begin{itemize} \item if $u$ is a leaf, $p(u) = \varphi(u)$; \item if $u$ is an internal tree node whose two children are $v_1, v_2$, $ p(u) = y + p(v_1) p(v_2)$; \item otherwise, i.e. if $u$ has only one child $v$ and its associated label is $\lambda_i = \ell(u)$, then $p(u) = \lambda_i p(v)$. \end{itemize} Then, let $\rho_N$ be the root of $N$; we define $p(N)$ to be $p(\rho_N)$. Notice that this definition of the polynomial $p$ is given over generic IMLN's. For example, the polynomial associated to the IMLN represented in Fig \ref{fig:unf} is \begin{equation*} \begin{split} p(N) = & y+ y^2 + y^3 + \lambda_1\lambda_2x_2y^3 + \lambda_3x_3x_4y^2 + \lambda_1\lambda_2x_1x_2y + \lambda_1\lambda_2x_1x_2y^2 + \\ & + \lambda_1\lambda_2\lambda_3x_2x_3x_4y^2 + \lambda_1\lambda_2^2\lambda_3x_2^2x_3y^2 + \lambda_1\lambda_2^2\lambda_3^2x_2^2x_3^2x_4y + \\ & + \lambda_1\lambda_2\lambda_3x_1x_2x_3x_4y + \lambda_1^2\lambda_2^2x_1x_2^2y^2 + \lambda_1^2\lambda_2^2\lambda_3x_1x_2^2x_3x_4y + \lambda_1^2\lambda_2^3\lambda_3x_1x_2^3x_3y + \\ & + \lambda_1^2\lambda_2^3\lambda_3^2x_1x_2^3x_3^2x_4. \end{split} \end{equation*} \begin{prop}\label{prop:irr} Let $N$ be an IMLN. Then, for any $u\in V(N)$, $p(u)\in\ZZ[x_1,\ldots, x_n,\lambda_1,\ldots,\lambda_r, y]$ is an irreducible polynomial if and only if $u$ is a tree node. \end{prop} \begin{pf} If $u$ is not a tree node, the polynomial is not irreducible, since then there exists $v\in V(N)$ as the only child of $u$, and $p(u) = \ell(u)p(v)$. It then remains only to see that if $u$ is a tree node, $p(u)$ is irreducible. In this case, either $u$ is a leaf, and then $p(u) = \varphi(u) =x_i$ for some $i\in\{1,\ldots, n\}$ and so irreducible, or $u$ has two children and $p(u) = y + \Lambda p(w_1)p(w_2)$, where $\Lambda$ is a product of some of the $\lambda_1,\ldots,\lambda_r$, and $w_1,w_2$ are the first descendants of $u$ on each side that are tree nodes (they are possibly equal). Now consider the polynomial $p'(u)$ obtained from $p(u)$ by changing every variable $x_1,\ldots, x_n, \lambda_1,\ldots, \lambda_r$ to, say, $x_1$. Then, it can be seen that $p'(u)$ satisfies Eisenstein's irreducibility criterion in $\ZZ[y][x_1]$ (which is a unique factorization domain, UFD) applied to the ideal $\langle y\rangle$, and so $p(u)$ is irreducible when seen as a polynomial in $\ZZ[y][x_1,\ldots,x_n,\lambda_1,\ldots,\lambda_r]$. But since $y$ does not divide $p(u)$, $p(u)$ is also irreducible in $\ZZ[x_1,\ldots,x_n,\lambda_1,\ldots,\lambda_r, y]$. \end{pf} The next proposition will show that the polynomial is conserved throughout a sequence of unfoldings, and will therefore allow us to compute it over any of its members without distinction. In particular, it can be computed on the unfolding of the network.
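Before stating it, we include a short computational sketch of the recursive definition of $p$ (our own illustration, not part of the formal development; it uses the \texttt{sympy} library and the dictionary encoding of the earlier sketches). Since the recursion is over nodes, the same code can be run on a folded IMLN directly, visiting a reticulation once through each of its parents, which anticipates the content of the proposition.
\begin{verbatim}
import sympy

y = sympy.Symbol("y")

def poly(children, label, v):
    # Recursive definition of p: a leaf contributes its label x_i, an
    # internal tree node contributes y + p(v1)p(v2), and a node with a
    # single child contributes lambda_i times the polynomial of its child.
    cs = children.get(v, [])
    if not cs:                                   # leaf
        return sympy.Symbol(label[v])
    if len(cs) == 2:                             # internal tree node
        return y + poly(children, label, cs[0]) * poly(children, label, cs[1])
    return sympy.Symbol(label[v]) * poly(children, label, cs[0])

# Toy folded network: tree nodes a, b above a reticulation u (labelled
# lambda_1) whose child is the leaf x_1.
children = {"rho": ["a", "b"], "a": ["u", "x2"], "b": ["u", "x3"],
            "u": ["x1"], "x1": [], "x2": [], "x3": []}
label = {"u": "lambda_1", "x1": "x_1", "x2": "x_2", "x3": "x_3"}
print(sympy.expand(poly(children, label, "rho")))
# lambda_1**2*x_1**2*x_2*x_3 + lambda_1*x_1*x_2*y + lambda_1*x_1*x_3*y
#   + y**2 + y
\end{verbatim}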
\begin{prop}\label{prop:pol-fold} Let $N$ be an IMLN, and $(N, N_{u_1}, N_{u_2},\ldots, N_{u_r})$ be a sequence of unfoldings. Then, $p(N) = p(N_{u_1})$ and, for any $i\in\{1,\ldots, r-1\}$, $p(N_{u_{i+1}}) = p(N_{u_{i}})$. \end{prop} \begin{pf} Let $N'$ be an IMLN, and $u\in R_{\min}(N')$. If we are able to show that $p(N') = p(U(N', u))$, then the proposition will hold. Let $v^{(1)}, v^{(2)}$ be the parents of $u$. In $U(N', u)$, each of them is the parent of one elementary node $u_x$, $x\in\{1,2\}$, which is the root of a copy of the IMLT $N'(u)$, and by construction $p(u_1) = p(u_2) = p(u) = p(N'(u))$. Now, by the definition of the polynomial, $p(v^{(x)})$ will be the same in $N'$ and in $U(N', u)$. Therefore, $p(N') = p(U(N', u))$. \end{pf} We now introduce two remarks, the first concerning the interpretation of the coefficients of the polynomial, and the second concerning the reconstruction of the unfolding of an IMLN from the polynomial, in the case where the polynomial characterizes the IMLN. \begin{rmk} The interpretation of the coefficients of the polynomial $p(N)$ can be extended from Lemma 2.4 in \cite{liu2020polynomial} by slightly modifying the definition of primary subtrees for the IMLT $T=U(N)$. Let a \emph{primary subtree} $S$ of $T$ be a rooted subtree of $T$ such that $S$ shares the same root node with $T$ and any leaf node in $T$ is either a leaf node in $S$ or a descendant of a leaf node in $S$ which does not come from an elementary node. Then, if we represent $p(N)$ as $$\sum c(\gamma_1, \ldots, \gamma_r, \alpha_1, \ldots, \alpha_n, \beta) \lambda_1^{\gamma_1}\cdots \lambda_r^{\gamma_r} x_1^{\alpha_1} \cdots x_n^{\alpha_n}y^{\beta},$$ each one of its coefficients counts the number of primary subtrees of $U(N)$ such that: \begin{itemize} \item $\gamma_i$ (for $i\in\{1, \ldots, r\}$) is the number of nodes labelled by $\lambda_i$ in these subtrees; \item $\alpha_i$ (for $i\in\{1, \ldots, n\}$) is the number of leaf nodes labelled by $x_i$ in these subtrees which are also leaves in $U(N)$; \item $\beta$ is the number of leaf nodes of these subtrees which are internal nodes in $U(N)$. \end{itemize} See Fig \ref{fig:coefs} for the interpretation of some of the terms of the polynomial $p(N)$ of the IMLN $N$ depicted in Fig \ref{fig:unf}. Notice that these primary subtrees can then be folded into a sort of ``sub-primary networks''. \end{rmk} \begin{figure}[!ht] \centering \includegraphics[scale=0.5]{im3.png} \caption{{\bf Two primary subtrees of $U(N)$}. {Let $N$ be the IMLN depicted in Fig \ref{fig:unf}. The figure depicts two primary subtrees of $U(N)$ corresponding to the terms $\lambda_1\lambda_2x_2y^3$ (left), and $\lambda_1^2\lambda_2^2\lambda_3x_1x_2^2x_3x_4y$ (right), of the polynomial $p(N)$.}} \label{fig:coefs} \end{figure} \begin{rmk} In this remark we shall give a first approximation to the problem of reconstructing the Newick string of an IMLT $U(N)$ from $p(N)$, in the case where the polynomial characterizes $N$. Roughly speaking, we proceed as follows: start by subtracting $y$ from $p(N)$ and then factor $p(N)-y=q_1 \cdot q_2$. Then the Newick string to consider is $(q_1,q_2)$. From now on, whenever it is possible to subtract $y$ from a polynomial $q$, do so. If the factorization involves only two members, $q=q_1\cdot q_2$, then proceed as before and replace $q$ by $(q_1,q_2)$. Otherwise, there could be conflicts in terms of deciding how to group members in a factorization of type $$\prod_{j \in J\subseteq\{1,\ldots,r\}} \lambda_j \prod_k q_k,$$ where the $q_k$ are polynomials.
But there will always be, in the queue of factorizations pending to be grouped, a pair of them sharing a ``minimal'' monomial of type $\lambda_i \cdot q_s$; this allows one to determine that there is an arc from an elementary node labelled by $\lambda_i$ to the subtree determined by the polynomial $q_s$. In terms of the Newick string, it can be replaced by $(\lambda_i(q_s))$. \end{rmk} We are now especially interested in determining under which conditions the polynomial associated to an IMLN uniquely characterizes it. Note that this is not always the case, not even for IMLT's. See for instance the three IMLT's represented in Fig \ref{fig:strongpaths}. The polynomial fails to correctly distinguish between them. Roughly speaking, looking at the polynomials of the elementary nodes we could readily distinguish between the three possibilities, but we cannot do so by only looking at $p(u)$, since $p(u)=y+\lambda_1\lambda_2p(w_1)p(w_2)$. \begin{figure}[!ht] \centering \includegraphics[scale=1.1]{im4.png} \caption{{\bf Non-isomorphic IMLT's}. {Three non-isomorphic IMLT's presenting the same polynomial at $u$.}} \label{fig:strongpaths} \end{figure} \subsection{Strong paths} We shall now present a series of definitions. Let $N$ be an IMLN, and $u,v\in V(N)$. If there exists a path from $u$ to $v$ consisting only of elementary or reticulation nodes, we say that $u$ is a \textit{strong ancestor} of $v$, and that $v$ is a \textit{strong descendant} of $u$. Such a path is called a \textit{strong path}. For example, considering the situation in Fig \ref{fig:strongpaths}, we can see that in all three cases $w_1$ and $w_2$ strongly descend from $u$. \begin{lem}\label{lem:ret-unique} Let $N$ be an internally labelled phylogenetic network, and $v_1,v_2$ two reticulation nodes. If $p(v_1) = p(v_2)$, then $v_1 = v_2$. \end{lem} \begin{pf} Let $w_1$ be the child of $v_1$; by the definition of the polynomial, $p(v_1) / p(w_1) = \lambda_i$ for some $\lambda_i\in\{\lambda_1,\ldots,\lambda_r\}$. Since $p(v_1) = p(v_2)$, it also means that $p(v_2) / p(w_1) = \lambda_i$; but since $N$ is an internally labelled phylogenetic network, so that $\ell$ is injective over reticulation nodes, this implies that $v_2$ is a parent of $w_1$ and that $\ell(v_2) = \lambda_i$. Thus, they are the same node. \end{pf} \begin{lem}\label{lem:ret-desc} Let $N$ be an internally labelled phylogenetic network, and $v$ a reticulation node in it. A node $u$ is a strong ancestor of $v$ if, and only if, one of the two following conditions holds: \begin{itemize} \item $p(v)\mid p(u)$, and then $u$ is a reticulation node, or \item $p(v)\mid (p(u) - y)$, and then $u$ is a tree node. \end{itemize} \end{lem} \begin{pf} By the definition of the polynomial and Lemma \ref{lem:ret-unique}. \end{pf} Now, if we want to compare two IMLN's on the same sets of labels $\{x_1, \ldots, x_n\}$ and $\{\lambda_1, \ldots, \lambda_r\}$, we should take into account the possibility that the two of them are isomorphic up to a permutation of the labels. In order to express this possibility, let $\sigma:\{x_1, \ldots, x_n, \lambda_1, \ldots, \lambda_r\}\to \{x_1, \ldots, x_n, \lambda_1, \ldots, \lambda_r\}$ be a permutation such that $\sigma(X) = X$ (i.e., one that fixes the sets of labels of the leaves and of the elementary or reticulation nodes). Given an IMLN $N$, we denote by $^\sigma N$ the network isomorphic to $N$ that has all its labels permuted according to $\sigma$, and by $^\sigma p(N)$ we mean $p(^\sigma N)$ or, equivalently, the polynomial that has all its variables changed according to $\sigma$.
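Computationally, comparing $p(N_1)$ with $^\sigma p(N_2)$ amounts to a simultaneous substitution of variables. The following sketch (ours, again using \texttt{sympy}; in practice one would have to search over all permutations $\sigma$ with $\sigma(X)=X$) illustrates the test for a given $\sigma$.
\begin{verbatim}
import sympy

def equal_up_to_sigma(p1, p2, sigma):
    # Test p(N_1) = sigma(p(N_2)); sigma is a dict on variable names that
    # fixes {x_1,...,x_n} as a set and permutes the lambda_j's.
    subs = {sympy.Symbol(a): sympy.Symbol(b) for a, b in sigma.items()}
    return sympy.expand(p1 - p2.subs(subs, simultaneous=True)) == 0

# Example: the transposition exchanging lambda_1 and lambda_2.
# equal_up_to_sigma(p1, p2, {"lambda_1": "lambda_2",
#                            "lambda_2": "lambda_1"})
\end{verbatim}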
\begin{defn}\label{def:strong-path} Let $N_1, N_2$ be two IMLN's, and $\sigma$ a permutation of their labels such that $\sigma(X) = X$. We say that $N_1$ and $N_2$ are \textit{equivalent} modulo \textit{strong paths} if the following three conditions are satisfied: \begin{enumerate} \item[(i)] $p(N_1) =\ ^\sigma p(N_2)$; \item[(ii)] there exists a bijection $f$ between the sets of tree nodes of $N_1$ and $N_2$ such that, if $u, v$ are tree nodes and $v$ is a strong descendant of $u$, then $f(v)$ is a strong descendant of $f(u)$; \item[(iii)] for any tree node $u$ in $N_1$, $p(u) =\ ^\sigma p(f(u))$. \end{enumerate} \end{defn} Being equivalent \textit{modulo} strong paths is an equivalence relation. \begin{rmk}\label{rem:strong-paths} The above definition can also easily be stated exclusively in terms of strong paths, which are intrinsic to the IMLN. However, the definition in terms of the polynomial is more tractable and concise. \end{rmk} Notice that all the IMLT's in Fig \ref{fig:strongpaths} are equivalent \textit{modulo} strong paths. Indeed, we present the following theorem: \begin{thm}\label{thm:imlt-inj} Let $N_1, N_2$ be two IMLN's, and $\sigma$ a permutation of their labels such that $\sigma(X) = X$. Then, $p(N_1) =\ ^\sigma p(N_2)$ if, and only if, $N_1$ and $N_2$ are equivalent \emph{modulo} strong paths. \end{thm} \begin{pf} The ``if'' part of the implication follows directly from the first condition of the definition of equivalence \textit{modulo} strong paths. Suppose now that $p(N_1) =\ ^\sigma p(N_2)$, and let us show that $N_1$ and $N_2$ must be equivalent. We first see that there exists a bijection $f$ between the sets of tree nodes of $N_1$ and $N_2$ such that for any tree node $u$ in $N_1$, $p(u) =\ ^\sigma p(f(u))$. We will use the following inductive schema: we shall prove that if $u$ is a tree node in $N_1$ and $f(u)$ is a tree node in $N_2$ such that $p(u) =\ ^\sigma p(f(u))$, and $w_1, w_2$ are the two tree nodes of $N_1$ that strongly descend from $u$, then the two tree nodes $w_1', w_2'$ that strongly descend from $f(u)$ in $N_2$ are such that $p(w_1) =\ ^\sigma p(w_1')$ and $p(w_2) =\ ^\sigma p(w_2')$. Then, we will provide tree nodes $u_1, u_2$ in $N_1$ and $N_2$, respectively, from which all other tree nodes descend and such that $p(u_1) =\ ^\sigma p(u_2)$. Let $u$ be a tree node in $N_1$, and $w_1,w_2$ be the two tree nodes that strongly descend from it. Then, $p(u) = y + \mu_1\cdot\ldots\cdot\mu_{r'}p(w_1)p(w_2)$, for $\mu_1,\ldots,\mu_{r'}\in\{\lambda_1,\ldots,\lambda_r\}$. Then, if $p(u) =\ ^\sigma p(f(u))$, $\mu_1\cdot\ldots\cdot\mu_{r'}p(w_1)p(w_2) =\ ^\sigma\mu_1'\cdot\ldots\cdot\ ^\sigma\mu_{r'}'\ ^\sigma p(w_1')^\sigma p(w_2')$, where $w_1', w_2'$ are the tree nodes that strongly descend from $f(u)$ in $N_2$; but since $p(w_1), p(w_2)$ are both irreducible and different from any $\lambda_i$, it must happen that (without loss of generality) $p(w_1) =\ ^\sigma p(w_1')$ and $p(w_2) =\ ^\sigma p(w_2')$. Thus, set $f(w_1) = w_1'$ and $f(w_2) = w_2'$. We will now show that there is a tree node in both $N_1$ and $N_2$ such that any other tree node descends from it. Suppose that the root of $N_1$, say $\rho_1$, is a tree node; if so, since $p(N_1) =\ ^\sigma p(N_2)$ and by Proposition \ref{prop:irr}, the root of $N_2$, say $\rho_2$, must also be a tree node. Therefore, any other tree node in their respective IMLN's must descend from them, and furthermore $p(\rho_1) =\ ^\sigma p(\rho_2)$. Set $f(\rho_1) = \rho_2$.
Finally, suppose that $\rho_1$ is not a tree node; then $p(\rho_1)$ is not an irreducible polynomial, and therefore neither is $^\sigma p(\rho_2)$. Let $w_1$ be the only tree node strongly descending from $\rho_1$ in $N_1$. It is straightforward to see that, if $w_1'$ is the only tree node strongly descending from $\rho_2$ in $N_2$, then $p(w_1) =\ ^\sigma p(w_1')$. In both cases, any other tree node in the network will descend from them. Therefore, set $f(w_1) = w_1'$. \end{pf} Now, the question arises: under which conditions can we say that two internally labelled phylogenetic networks that are equivalent \textit{modulo} strong paths are actually isomorphic? \subsection{Separability: a sufficient condition} In this part we shall give a sufficient condition for two internally labelled phylogenetic networks to be completely characterized by the polynomial. In order to do so, we will work with the immediate neighbourhood of any tree node. Let $N$ be a phylogenetic network, and let $u$ be a tree node in $N$. Let $w_1, w_2$ be the two (possibly equal) tree nodes that strongly descend from it. Let $v_1,\ldots, v_{r_1},\ldots, v_{r_1+r_2}$ be the reticulation nodes in the strong paths from $u$ to $w_1$ and $w_2$, and suppose that there are $r_1$ such nodes in the path from $u$ to $w_1$ and $r_2$ in the other. See Fig \ref{fig:lem_separ}. Let $U(u) = \{u_1,\ldots, u_k\}$ be the set of all the tree nodes different from $u$ that are strong ancestors of $w_1$ or $w_2$. Note that the node $u_i$ in Fig \ref{fig:lem_separ} (left) is a node in $U(u)$. In what follows, we will allow ourselves to write $U$ if the context is sufficiently clear. We now present the following lemma. \begin{lem}\label{lem:entry} Consider the situation above. Let $v$ be a reticulation node from the collection $v_1,\ldots, v_{r_1+r_2}$. Then, there are two possibilities: \begin{itemize} \item both its parents are nodes from $v_1,\ldots, v_{r_1+r_2}$, or \item there exists at least one tree node $u_i\in U$ such that there is a strong path from $u_i$ to $v$ not containing any other reticulation node $v_1,\ldots, v_{r_1+r_2}$. \end{itemize} Furthermore, the first possibility can only happen for \emph{one} reticulation node in $v_1,\ldots, v_{r_1+r_2}$, and it will hold if, and only if, $w_1=w_2$. \end{lem} \begin{pf} Suppose that $v$ is the first reticulation node (counting by proximity to $u$) that satisfies the first condition (this makes sense, since our networks are binary). In this situation, only one path emerges from $v$ up to the next tree node. But since $N$ is binary, the two paths that emerged from $u$ are now merged into the single path from $v$ to the next tree node, so that $w_1 = w_2$. See Fig \ref{fig:lem_separ}, right. Therefore, since there is now only one path of reticulation nodes, no other node in it can satisfy the first condition. If $v$ does not satisfy the first condition, one of its parents must not be from $v_1,\ldots, v_{r_1+r_2}$. Let $u_i$ be a tree node that is a strong ancestor of such a parent of $v$. The pair $v, u_i$ satisfies the second condition. See Fig \ref{fig:lem_separ}, left. \end{pf} \begin{figure}[!ht] \centering \includegraphics[scale=1.5]{im5.png} \caption{{\bf Strong paths from a tree node}. {A tree node $u$ and its strong descendants $w_1$ and $w_2$ (left) or $w_1$ (right). The curly paths represent strong paths.
The nodes $v$ and $u_i$ are used in the proof of Lemma \ref{lem:entry}}.} \label{fig:lem_separ} \end{figure} We say that a tree node $u_i\in U(u)$ \textit{enters the neighbourhood of $u$ at $v$} if the pair $v,u_i$ satisfies the second condition of Lemma \ref{lem:entry}. If the context is sufficiently clear, we shall only say that it \textit{enters at $v$}. Likewise, we say that $v$ is the \textit{entry of $u_i$ to the neighbourhood of $u$} (or just its \textit{entry}). We can then divide the set $U$ into five sets: let $v^{(x)}$, $x\in\{1,2\}$, be the two children of $u$; then we define \begin{align*} U_1^{(x)} = &\ \{u_i\in U : u_i \textrm{ enters the neighbourhood of } u \textrm{ at only one} \\ & \qquad\textrm{ reticulation node } v \textrm{ that is a strong descendant of } v^{(x)}\}, \\ U_2^{(x)} = &\ \{u_i\in U : u_i \textrm{ enters the neighbourhood of } u \textrm{ at two (possibly equal)} \\ & \qquad\textrm{ reticulation nodes } v_1, v_2 \textrm{ that are strong descendants of } v^{(x)}\}, \\ U_3 = &\ U\setminus(U_1^{(1)}\cup U_1^{(2)} \cup U_2^{(1)} \cup U_2^{(2)}). \end{align*} Notice that, if $w_1 \neq w_2$, then \begin{align*} U_3 =\{u_i\in U : u_i \textrm{ is a strong ancestor of both } w_1 \textrm{ and } w_2\}. \end{align*} The above division $\{U_1^{(1)}, U_1^{(2)}, U_2^{(1)}, U_2^{(2)}, U_3\}$ is a partition of $U$. In Fig \ref{fig:partitionU}, three tree nodes $u_1$, $u_2$ and $u_3$ from the set $U=U(u)$ are represented. Note that $u_1 \in U_1^{(1)}$, $u_2 \in U_2^{(2)}$ and $u_3 \in U_3$. \begin{figure}[!ht] \centering \includegraphics[scale=1.5]{im6.png} \caption{{\bf Division of $U(u)$}. {Three tree nodes illustrating the types of sets in the division of $U(u)$. In this case, $u_1 \in U_1^{(1)}$, $u_2 \in U_2^{(2)}$ and $u_3 \in U_3$.}} \label{fig:partitionU} \end{figure} In general, given all the polynomials evaluated at each tree node of $U$, we cannot deduce the exact configuration of the $v_i$'s. Recall, for instance, for the case where $r_1+r_2=2$, the three situations presented in Fig \ref{fig:strongpaths}. That is, we had no \textit{a priori} information on which $v_i$ were strong ancestors of $w_1$ and which of $w_2$. This fact motivates the following definition. \begin{defn} Let $N$ be a phylogenetic network and $u$ a tree node in it. Let $v^{(x)}$, $x\in\{1,2\}$, be the two children of $u$. We say that $u$ is \textit{separable} if either both $v^{(1)}$ and $v^{(2)}$ are tree nodes, or there exists a tree node $u_1$ different from $u$ that satisfies one of the following conditions: \begin{itemize} \item it is a strong ancestor of $v^{(1)}$ (or $v^{(2)}$) but not of any other strong descendant of $u$, or \item it is a strong ancestor of $v^{(1)}$ (or $v^{(2)}$) and of one of its strong descendants. \end{itemize} \end{defn} \begin{rmk}\label{rem:not-sep} In this case, the negative definition might be more intuitive. Let $u$ be a tree node with $w_1$ and $w_2$ the tree nodes strongly descending from $u$. Then $u$ is \emph{not separable} if neither of its two children $v^{(1)}$ and $v^{(2)}$ is a tree node, and \begin{itemize} \item if $w_1\neq w_2$, all the strong ancestors of $v^{(1)}, v^{(2)}$ other than $u$ are in $U_3(u)$, or \item if $w_1 = w_2$ and $v$ is the first reticulation node that is a strong descendant of both $v^{(1)}$ and $v^{(2)}$, then any strong ancestor of $v^{(1)}$ other than $u$ will be a strong ancestor of a reticulation node in the strong path from $v^{(2)}$ to $v$, and \textit{vice versa}.
\end{itemize} \end{rmk} A phylogenetic network is called \textit{separable} if all its tree nodes are so. \begin{rmk} Notice that separability is a completely topological condition. Thus, we will use it interchangeably for phylogenetic networks and internally labelled phylogenetic networks. \end{rmk} The key point of separability is that, given a separable tree node $u$ and all the polynomials of the tree nodes that are strong ancestors of $w_1$ and $w_2$, we can actually identify the polynomial $p(u_1)$ of the tree node that satisfies the conditions of the definition, and thus we can identify which reticulation nodes descend from $v^{(1)}$ and which from $v^{(2)}$. Indeed: if $w_1\neq w_2$, $p(u_1)$ will be such that $p(w_1)$ divides $p(u_1)-y$ but $p(w_2)$ does not, and such that $p(u_1)-y$ contains the largest number of the polynomials $\lambda_1,\ldots,\lambda_r$ dividing $p(u)-y$. If $w_1 = w_2$, the argument is analogous, using that $p(w_1)^2$ is not a divisor of $p(u_1)-y$. As a result, we are able to deduce that $p(v^{(x)})= \mu_1^{(x)}\ldots\mu_{r_x}^{(x)}p(w_x)$, $x\in\{1,2\}$, for $\mu_1^{(x)},\ldots,\mu_{r_x}^{(x)}$ dividing $p(u)-y$. Thus, we are able to ``separate'' $p(u)$ into the contributions from $p(v^{(1)})$ and $p(v^{(2)})$. Fig \ref{fig:non_separ} depicts two sub-networks which can be part of internally labelled phylogenetic networks (and hence part of the underlying phylogenetic networks) that are not separable. Notice that they are not separable at any of the nodes $u_1,u_2,u_3$. The filled and non-filled triangles pending from $w_1$ and $w_2$ represent non-isomorphic sub-networks (for example, a leaf and a cherry). Note that in both cases we have the same polynomials at $u_i$, namely $p(u_1)= y+ \lambda_1\lambda_2\lambda_3p(w_1)p(w_2)$, $p(u_2)= y+ \lambda_1\lambda_2\lambda_3\lambda_4p(w_1)p(w_2)$ and $p(u_3)=y+ \lambda_1\lambda_2\lambda_4p(w_1)p(w_2)$. Thus, we cannot distinguish between the sub-networks by looking at $p(u_1)$, $p(u_2)$, $p(u_3)$. \begin{figure}[!ht] \centering \includegraphics[scale=1.2]{im7.png} \caption{{\bf Non-separable internally labelled phylogenetic networks}. {None of the nodes $u_1,u_2,u_3$ are separable. The filled and non-filled triangles pending from $w_1$ and $w_2$ represent non-isomorphic sub-networks.}} \label{fig:non_separ} \end{figure} \begin{lem}\label{lem:diff-equiv} Let $N$ be an internally labelled phylogenetic network, and $u_1$ a tree node in it that is one of the deepest tree nodes (i.e., one for which there exists a path of maximal length from the root to it) satisfying the following condition: there exists another tree node $u_2$ such that $p(u_1) = p(u_2)$. Then, $u_1$ and $u_2$ must have the same set of children. \end{lem} \begin{pf} If $u_1$ is a leaf, there is nothing to prove, because all the leaves have different labels: if $p(u_1) = p(u_2)$ and $p(u_1)=\varphi(u_1)$, we must have $u_2=u_1$. In the other case, let $v^{(1)},v^{(2)}$ be the two children of $u_1$; since $p(v^{(1)})$ and $p(v^{(2)})$ both divide $p(u_2)-y$ and are unique (because $u_1$ is one of the deepest nodes satisfying the condition in the statement of the lemma), $u_2$ is a strong ancestor of both of them. Therefore, $v^{(1)}, v^{(2)}$ must be reticulation nodes. 
We write \[p(u_1) = y + \mu_1^{(1)} \cdot \ldots \cdot \mu_{r_1}^{(1)} \mu_1^{(2)} \cdot \ldots \cdot \mu_{r_2}^{(2)} p(w_1) p(w_2),\] where $w_1,w_2$ are the tree nodes that strongly descend from $u_1$, $p(v^{(x)}) = \mu_1^{(x)} \cdot \ldots \cdot \mu_{r_x}^{(x)} p(w_x)$ for $x\in\{1,2\}$, and $\mu_{i}^{(1)}, \mu_{j}^{(2)} \in \{\lambda_1, \ldots, \lambda_r\}$. From $v^{(x)}$ to $w_x$ there is only one strong path, of length $r_x$, and since $u_2$ is a strong ancestor of both $v^{(1)}$ and $v^{(2)}$, there are $r_1+r_2$ polynomials among $\lambda_1,\ldots,\lambda_r$ that divide $p(u_2)-y$. But this is exactly the number of polynomials in $\lambda_1,\ldots,\lambda_r$ that can divide $p(u_2)-y$, since $p(u_1) = p(u_2)$; hence there is no room for further reticulations between $u_2$ and $v^{(1)}, v^{(2)}$, and the children of $u_2$ must be precisely $v^{(1)}$ and $v^{(2)}$. \end{pf} \begin{lem}\label{lem:sep-diff} Let $N$ be an internally labelled separable phylogenetic network, and $u_1, u_2$ two internal nodes in it. Then, $p(u_1) = p(u_2)$ if, and only if, $u_1=u_2$. \end{lem} \begin{pf} The ``if'' part is trivial by the definition of the polynomial. By Lemma \ref{lem:ret-unique}, if either $u_1$ or $u_2$ is a reticulation node, the result is proven. Therefore, assume that $u_1, u_2$ are both tree nodes, and suppose, for the sake of contradiction, that $u_1\neq u_2$. Furthermore, assume that $u_1$ is one of the deepest nodes satisfying $p(u_1)=p(u_2)$. \\ By Lemma \ref{lem:diff-equiv}, their sets of children are the same. Let $v_1,v_2$ be the two common children of $u_1$ and $u_2$. Then $u_1$ and $u_2$ are the only strong ancestors of both $v_1$ and $v_2$. Moreover, $u_2$ is in $U_3(u_1)$. This means that $u_1$ is not separable and, therefore, neither is $N$. \end{pf} \begin{cor} If $N$ is a separable phylogenetic network, then there is no pair of tree nodes with the same set of children. \end{cor} Note that the converse implication in the above Corollary is false: see, for instance, the (internally labelled) phylogenetic subnetworks depicted in Fig \ref{fig:non_separ}. These are not separable, and yet every pair of tree nodes has different sets of children. \subsection{Isomorphism of internally labelled phylogenetic networks} In this part we prove the main theorem of this paper. It roughly says that the polynomial is a complete invariant for the class of internally labelled separable phylogenetic networks up to equivalence \textit{modulo} strong paths. \begin{lem}\label{lem:iso} Let $N_1, N_2$ be two internally labelled phylogenetic networks such that, for $x\in\{1,2\}$ and any $u_1, u_2\in V(N_x)$, $p(u_1) = p(u_2)$ implies that $u_1=u_2$, and let $f:V(N_1)\to V(N_2)$ be a bijection. If there exists a permutation $\sigma$ of their labels with $\sigma(X) = X$ such that $p(u) =\ ^\sigma p (f(u))$ for any $u\in V(N_1)$, then $f$ is an isomorphism of internally labelled phylogenetic networks. \end{lem} \begin{pf} In order to ease the notation, and without loss of generality, let us assume that $\sigma$ is the identity. Since $f$ is a bijection by hypothesis, we must only prove that if $(u,v)\in E(N_1)$, then $(f(u),f(v)) \in E(N_2)$, and that $f$ preserves the labels.\\ Suppose that $u$ is a reticulation node; if $(u,v)\in E(N_1)$, then $p(u) = \lambda_i p(v)$ for some $\lambda_i\in\{\lambda_1,\ldots,\lambda_r\}$. 
Therefore, $p(f(u)) = \lambda_i p(f(v))$ which, since $p(f(v))$ is unique to $f(v)$, implies that $f(v)$ is the only child of $f(u)$ (which is a reticulation node, since $p(f(u))$ is not irreducible).\\ Suppose now that $u$ is a tree node, and let $v_1, v_2$ be its two children. Then, we know that $p(v_x) = p(f(v_x))$ for $x\in\{1,2\}$, and that $p(f(u)) = y + p(f(v_1))p(f(v_2))$. Since each node is uniquely characterized by its polynomial, this means that both $f(v_1)$ and $f(v_2)$ are strong descendants of $f(u)$. By an argument analogous to that in the proof of Lemma \ref{lem:diff-equiv}, we can deduce that $f(v_1)$ and $f(v_2)$ are actually the children of $f(u)$.\\ Now, we prove that $f$ preserves the labels on the leaves and on the reticulations. If $u\in L(N_1)$, then $f(u)\in L(N_2)$. Since $u \in L(N_1)$, by definition, $p(u)=\varphi_1(u)$. Moreover, $p(u)=p(f(u))$ because leaves are tree nodes. Since $f(u) \in L(N_2)$, $p(f(u))=\varphi_2(f(u))$. Then, $\varphi_1(u)= \varphi_2(f(u))$. Now, let $u \in R(N_1)$ (a reticulation of $N_1$). By definition, $p(u)=\ell_1(u) p(v)$, where $v$ is the single child of $u$. We have seen above that $p(f(u))=\ell_1(u)p(f(v))$; but, since $f(u)$ is a reticulation in $N_2$ and $f(v)$ is its single child, by definition, $p(f(u))=\ell_2(f(u))p(f(v))$. Then, $\ell_1(u)=\ell_2(f(u))$. \end{pf} \begin{thm}\label{thm:sep-iso} Let $N_1, N_2$ be two internally labelled separable phylogenetic networks. If they are equivalent \emph{modulo} strong paths, then they are isomorphic. \end{thm} \begin{pf} By Lemma \ref{lem:sep-diff}, if $N_1$ and $N_2$ are separable, then $p(u_1) = p(u_2)$ implies $u_1=u_2$ for any internal nodes in either $N_1$ or $N_2$. Then, if we are able to find a bijection $f$ between the sets of nodes satisfying the premises of Lemma \ref{lem:iso}, we will be able to apply it and obtain the result.\\ Now, $N_1$ and $N_2$ are equivalent \textit{modulo} strong paths, which means that there exists a bijection $f$ between their sets of tree nodes such that, for a fixed permutation $\sigma$ between the sets of labels with $\sigma(X) = X$, we have $p(u) =\ ^\sigma p(f(u))$ for any tree node $u$, and such that, if $u, v$ are tree nodes and $v$ is a strong descendant of $u$, then $f(v)$ is a strong descendant of $f(u)$. We shall show that this $f$ induces our bijection if we extend it to every internal node (i.e., if we define it appropriately on the reticulation nodes of $N_1$). In order to ease the notation, and without loss of generality, let $\sigma$ be the identity. \\ Let $v$ be a reticulation node in $N_1$, and $u$ a tree node that is a strong ancestor of it. Let $v^{(1)}, v^{(2)}$ be the children of $u$, and suppose that $v$ strongly descends from $v^{(1)}$. Let $w_1, w_2$ be the two (possibly equal) tree nodes that strongly descend from $u$.\\ Since $N_1$ is separable, in particular $u$ is separable, and we know that we can write $p(v^{(1)}) = \mu_1^{(1)} \ldots \mu_{r_1}^{(1)} p(w_1)$ and $p(v^{(2)}) = \mu_1^{(2)}\ldots\mu_{r_2}^{(2)} p(w_2)$. Now, by Lemma \ref{lem:entry}, either (1) there exists a tree node $u'$ that enters the neighbourhood of $u$ at $v$, or (2) there does not, and both parents of $v$ are strong descendants of $u$. 
Thus, we distinguish the following cases: \begin{itemize} \item[(1)] There exists a tree node $u'$ that enters the neighbourhood of $u$ at $v$, and \begin{itemize} \item if $v$ is the only reticulation node at which $u'$ enters the neighbourhood of $u$ (that is, $u'\in U_1^{(1)}(u)$), then $p(v) = \mu_{i_1}^{(1)}\ldots \mu_{r_1}^{(1)}p(w_1)$, where $\mu_{i_1}^{(1)},\ldots,\mu_{r_1}^{(1)}$ are the only polynomials in $\lambda_1,\ldots,\lambda_r$ that divide both $p(u)-y$ and $p(u')-y$. \item if $u'$ also enters the neighbourhood of $u$ at some $v'$ and there is no strong path between $v$ and $v'$ (that is, $u'\in U_3(u)$), then $p(v) = \mu_{i_1}^{(1)}\ldots \mu_{r_1}^{(1)}p(w_1)$, where $\mu_{i_1}^{(1)},\ldots,\mu_{r_1}^{(1)}$ are the only polynomials in $\lambda_1,\ldots,\lambda_r$ that divide both $p(u)-y$ and $p(v^{(1)})$. \item if $u'$ also enters the neighbourhood of $u$ at some $v'$ that is a strong ancestor of $v$ (that is, a case where $u'\in U_2^{(1)}(u)$), then $p(v) = \mu_{i_1}^{(1)}\ldots \mu_{r_1}^{(1)}p(w_1)$, where $\mu_{i_1}^{(1)},\ldots,\mu_{r_1}^{(1)}$ are the only polynomials in $\lambda_1,\ldots,\lambda_r$ such that they divide $p(u)-y$ and, for every $j\in \{i_1, \ldots, r_1\}$, $(\mu_{j}^{(1)})^2\mid p(u')-y$. \item if $u'$ also enters the neighbourhood of $u$ at some $v'$ that is a strong descendant of $v$ (that is, a case where $u'\in U_2^{(1)}(u)$), then $p(v) = \mu_{i_1}^{(1)}\ldots \mu_{r_1}^{(1)}p(w_1)$, where $\mu_{i_1}^{(1)},\ldots,\mu_{r_1}^{(1)}$ are the only polynomials in $\lambda_1,\ldots,\lambda_r$ that divide both $p(u)-y$ and $p(u')-y$. \end{itemize} Notice that the above arguments are independent of whether $w_1 = w_2$ or not. \item[(2)] Both parents of $v$ are strong descendants of $u$ (and so $w_1 = w_2$). Let $\mu_{i_1}$ be the label of the reticulation $v$ and let $\mu_{i_1},\ldots, \mu_{r_3}$ be the labels of the reticulations in the strong path from $v$ to $w_1$. Then $p(v) = \mu_{i_1} \ldots \mu_{r_3} p(w_1)$, where the $\mu_{j}$, for $j\in\{i_1,\ldots, r_3\}$, are the only polynomials in $\lambda_1, \ldots, \lambda_r$ such that $(\mu_{j})^2\mid p(u)-y$. \end{itemize} Since $N_2$ is also separable, in particular $f(u)$ is separable, and since $p(f(u)) = p(u)$ (because $N_1$ and $N_2$ are equivalent \textit{modulo} strong paths), at least one of its children cannot be a tree node. Therefore, if $v_*^{(1)}, v_*^{(2)}$ are its children, there must exist a tree node $u_1$ that is either a strong ancestor of $v_*^{(1)}$ but not of any other strong descendant of $f(u)$, or a strong ancestor of, say, $v_*^{(1)}$ and of one of its strong descendants. This node will allow us to characterize $p(v_*^{(1)})$. But since $N_1$ and $N_2$ are equivalent \textit{modulo} strong paths, there exists $f^{-1}(u_1)$ in $N_1$ satisfying the same condition with regard to the pair $u, v^{(1)}$ in $N_1$, and so $p(v^{(1)})=p(v_*^{(1)})$ and $p(v^{(2)}) = p(v_*^{(2)})$. Thus, we set $f(v^{(1)}) = v_*^{(1)}$ and $f(v^{(2)}) = v_*^{(2)}$.\\ Now, for any reticulation node $v_*$ strongly descending from either $v_*^{(1)}$ or $v_*^{(2)}$, any of its strong ancestors that is a tree node is such that there exists a tree node in $N_1$ with the same polynomial (and which is, thus, a strong ancestor of some $v$ strongly descending from $u$). Therefore, we will have $p(v) = p(v_*)$, and we can then set $f(v) = v_*$. \end{pf} Theorem \ref{thm:imlt-inj} and Theorem \ref{thm:sep-iso} together imply the following main result. 
\begin{thm}\label{teo:pol-iso} Let $N_1, N_2$ be two internally labelled separable phylogenetic networks, and $\sigma$ a permutation of their labels such that $\sigma(X) = X$. If $p(N_1) =\ ^\sigma p(N_2)$, then $N_1$ and $N_2$ are isomorphic. \end{thm} \subsection{Orchard networks} In this subsection we prove that the phylogenetic networks in the class of orchard networks \cite{erdos2019class} are separable. These (strictly) include tree-child networks. Before we recall the definition of orchard networks, we need to introduce some definitions. Let $N$ be a phylogenetic network on $X$. Let $\{a, b\} \subseteq X$. The set $\{a, b\}$ is a \emph{cherry} of $N$ if $a$ and $b$ share a parent. Let $p_a$ and $p_b$ be the parents of $a$ and $b$, respectively. If $p_b$ is a reticulation and $(p_a, p_b)$ is an arc in $N$, then $\{a, b\}$ is a \emph{reticulated cherry} of $N$. \\ Let $N$ be a phylogenetic network and let $\{a, b\}$ be a cherry of $N$. Then ``\emph{reduce} $b$'' is the operation of deleting $b$ and suppressing the resulting elementary node. If $p_a=p_b$ is the root of $N$, then delete $b$ and the root. If $\{a, b\}$ is a reticulated cherry of $N$ in which $p_b$ is the reticulation, ``\emph{cut} $\{a, b\}$'' is the operation of deleting $(p_a, p_b)$ and suppressing the two resulting elementary nodes. For both operations, we say that a \emph{cherry-reduction} is performed on $N$. Let $N$ be a phylogenetic network. The sequence $N=N_0, N_1, \ldots, N_k$ of phylogenetic networks is a \emph{cherry-reduction sequence} of $N$ if, for all $i \in \{1,\ldots,k\}$, the phylogenetic network $N_i$ is obtained from $N_{i-1}$ by a (single) cherry-reduction. Then, a phylogenetic network $N$ is \emph{orchard} if there exists a cherry-reduction sequence $N=N_0, N_1, \ldots, N_k$ of $N$ such that $N_k$ consists of a single vertex. \begin{thm}\label{thm:sep-orch} Orchard networks are separable. \end{thm} \begin{pf} Let $N$ be an orchard network and let $N=N_0,N_1,\ldots,N_k$ be a cherry-reduction sequence of $N$. We prove that, for any $i\in\{0,\ldots,k-1\}$, if $N_i$ is not separable, then $N_{i+1}$ is not either. This means that if $N$ is not separable, the last network in every cherry-reduction sequence cannot be a single vertex, which contradicts the fact that $N$ is orchard. \\ If the cherry-reduction performed is the reduction of a leaf in a cherry, there is nothing to prove, because it does not involve reticulation nodes. Suppose then that a cut of a reticulated cherry $\{a, b\}$ is performed in $N_i$. Let $p_a$ and $p_b$ be the parents of $a$ and $b$, respectively, with $p_b$ the reticulation node. Then $p_a$ is a tree node. Moreover, $p_a$ is a separable node in $N_i$, because the only strong descendant of $p_a$ that is a reticulation node is $p_b$. Then, $N_i$ can only fail to be separable due to some other tree node. \\ Notice that the cut of the reticulated cherry $\{a,b\}$ does not change the relation of strong descendance between the remaining nodes; i.e., $u, v$ were such that $v$ strongly descended from $u$ in $N_i$ if, and only if, the corresponding nodes in $N_{i+1}$ satisfy this condition too. More precisely, let $u$ be a non-separable tree node, $v^{(1)}, v^{(2)}$ its children and $w_1, w_2$ the tree nodes that strongly descend from it. By Remark \ref{rem:not-sep} this means that, to begin with, neither $v^{(1)}$ nor $v^{(2)}$ is a tree node and, if $w_1 \neq w_2$, all the strong ancestors of $v^{(1)}, v^{(2)}$ other than $u$ are in $U_3(u)$. Now, $p_a$ can never be in $U_3(u)$ because one of its children is a leaf, $a$. 
Therefore, the cut of the reticulated cherry $\{a, b\}$ does not affect the non-separability of $u$. Suppose now that $w_1 = w_2$. By Remark \ref{rem:not-sep}, if $v$ is the first reticulation node that is a strong descendant of both $v^{(1)}$ and $v^{(2)}$, the reticulation node $p_b$ cannot be in the strong paths from $v^{(1)}$ to $v$ or from $v^{(2)}$ to $v$ (note also that, necessarily, $p_b \neq v$). Then, both strong paths remain untouched by the cut of the reticulated cherry, and so does the set of strong ancestors of $v^{(1)}$ and $v^{(2)}$ that cause the non-separability of $u$. Therefore, any non-separable tree node in $N_i$ continues to be so in $N_{i+1}$.\end{pf} \subsection{Unlabelled version} Throughout this paper we have not made any use of the different labels on the leaves of an IMLN, and so the arguments can be translated, \textit{mutatis mutandis}, to IMLN's whose leaves are not labelled (although internal labels would still be necessary); this is modelled by labelling all leaves with a single variable $x$, giving a polynomial in $\ZZ[x,\lambda_1,\dots,\lambda_r,y]$. Again, for the case of phylogenetic networks, this requires that, given two unlabelled phylogenetic networks, we consider internally labelled phylogenetic networks with the same topology. This leads to the following proposition: \begin{prop}\label{prop:unlabelled} Let $N_1, N_2$ be two internally labelled separable phylogenetic networks whose leaves are all labelled by $x$. Then, $p(N_1) = p(N_2)$ implies that $N_1$ and $N_2$ are isomorphic. \end{prop} \section{Conclusion}\label{s:conc} In this paper a new complete polynomial invariant for a class of (binary) phylogenetic networks, that of separable networks, is introduced. It generalizes results both in \cite{liu2020polynomial}, for phylogenetic trees, and in \cite{janssen2021comparing}, for phylogenetic networks that are characterized by their sets of embedded spanning trees (such as tree-child networks). The introduced polynomial $p$ is a generalization of the Liu polynomial, and it is defined on a more general structure of networks, called IMLN's, where the reticulations are also labelled, with labels other than those on the leaves. We prove that, in the case of separable phylogenetic networks, the internally labelled structures derived from them are completely characterized by the polynomial. This induces a complete polynomial invariant for separable phylogenetic networks. That is, given two separable phylogenetic networks $N_1$ and $N_2$ on $X$, we can fix an internally labelled phylogenetic network from the first, say $N_1^*$, by bijectively labelling the reticulations. Then, if we consider all possible internally labelled phylogenetic networks obtained from $N_2$ by permuting all its variables, those in $X$ and those on the reticulations, we can compare $p(N_1^*)$ with the polynomials of all the networks obtained from $N_2$. Note that, due to Proposition \ref{prop:unlabelled}, we can avoid the permutation of the labels in $X$, reducing the cost of this computation. Establishing a complete polynomial invariant for phylogenetic networks opens the door to several interesting opportunities for exploration, such as new ways to define metrics on networks, fast methods to distinguish networks, and possibly ways to extract important features of a network by examining this polynomial. To this end, it may be helpful to understand whether a particular polynomial is derived from a network or not (for clearly not all irreducible polynomials give networks). 
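As a concrete illustration, the following minimal sketch (our own; the encoding of a network as a dictionary of children lists, the node names, and the use of \texttt{sympy} are assumptions, not part of the construction above) computes $p(N)$ by structural recursion, using the defining rules $p(u)=\varphi(u)$ for a leaf $u$, $p(u)=\ell(u)\,p(v)$ for a reticulation $u$ with child $v$, and $p(u)=y+p(v_1)p(v_2)$ for a tree node $u$ with children $v_1,v_2$.
\begin{verbatim}
# Minimal sketch (ours, not from the paper): compute p(N) by structural
# recursion on the network DAG.  children[u] lists the children of u
# (empty for leaves); symbol[u] is the variable of a leaf or the label
# lambda_i of a reticulation.
import sympy

def network_polynomial(children, symbol):
    y = sympy.Symbol('y')
    cache = {}

    def p(u):
        if u not in cache:
            kids = children[u]
            if not kids:                 # leaf:         p(u) = phi(u)
                cache[u] = symbol[u]
            elif len(kids) == 1:         # reticulation: p(u) = l(u) * p(child)
                cache[u] = symbol[u] * p(kids[0])
            else:                        # tree node:    p(u) = y + p(v1) * p(v2)
                cache[u] = y + p(kids[0]) * p(kids[1])
        return cache[u]

    root = next(u for u in children
                if all(u not in kids for kids in children.values()))
    return sympy.expand(p(root))

# Toy network: root rho -> u1, u2; u1 -> x1, h; u2 -> h, x3; the
# reticulation h (labelled lambda_1) has the single child x2.
x1, x2, x3, lam1 = sympy.symbols('x1 x2 x3 lambda1')
children = {'rho': ['u1', 'u2'], 'u1': ['x1', 'h'], 'u2': ['h', 'x3'],
            'h': ['x2'], 'x1': [], 'x2': [], 'x3': []}
symbol = {'x1': x1, 'x2': x2, 'x3': x3, 'h': lam1}
print(network_polynomial(children, symbol))
# expansion of y + (y + lambda1*x1*x2)*(y + lambda1*x2*x3)
\end{verbatim}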
Furthermore, the computation of $p(N)$ may be performed reticulation-by-reticulation for some network classes, e.g.\ orchard networks~\cite{erdos2019class}. That is, suppose that $N$ is an internally labelled phylogenetic network derived from an orchard network and that $N=N_0, N_1, \ldots, N_k$ is a complete cherry-reduction sequence of $N$ (that is, $N_k$ is a single node). We can then assign polynomials to the leaves of every intermediate IMLN $N_j$, as follows. Start by assigning $p(u)=\varphi(u)$ for every leaf $u$ in $N_0$. Then, let $\{v_1,v_2\}$ be the two leaves involved in the cherry-reduction that transforms $N_j$ into $N_{j+1}$, and let $p(v_i)$ be the polynomial assigned to $v_i$ in $N_j$ for $i\in \{1,2\}$. Then, \begin{itemize} \item if $\{v_1,v_2\}$ is a cherry, assign to the resulting leaf in $N_{j+1}$ the polynomial $y+p(v_1)p(v_2)$; \item if $\{v_1,v_2\}$ is a reticulated cherry (with $v_2$ the child of the reticulation, labelled by $\lambda_i$), assign to the resulting leaf in $N_{j+1}$ coming from the parent of $v_1$ the polynomial $y+\lambda_i p(v_1)p(v_2)$, and to the resulting leaf in $N_{j+1}$ coming from the parent of $v_2$ the polynomial $\lambda_ip(v_2)$. \end{itemize} Finally, $p(N)$ is the polynomial assigned to the single node in $N_k$. It would be interesting to investigate further optimisations, for general phylogenetic networks or for specific subclasses. It would also be interesting to think about ways to reduce the complexity of the polynomial assigned to a network, even at the expense of losing the uniqueness of this assignment. One possibility would be, for instance, to define a polynomial for a phylogenetic network over an IMLT into which the network is transformed, following an approach similar to the one that allows the computation of its extended Newick format \cite{cardona2008extended}. Consider, for example, the following: for every reticulation, split it (also copying its label) into two copies, the first copy keeping one of its parents and its child, and the other copy keeping the other parent and no children. See, in Fig~\ref{fig:newick}, two examples of this decomposition for the internally labelled phylogenetic network $N$ depicted in Fig~\ref{fig:unf}. Clearly, this transformation process is not unique, and different IMLT's can be obtained from the same network; but different networks result in disjoint sets of IMLT's. Notice that this process can be understood as a way to prune irrelevant subtrees of the IMLT $U(N)$, with the goal of keeping enough information to encode the network. Roughly speaking, to recover the network from these IMLT's one only needs to merge every pair of nodes labelled by the same $\lambda_i$. Applying the definition of the polynomial $p$ to these IMLT's, we obtain, for the example depicted in Fig~\ref{fig:newick} (a), the polynomial \begin{equation*} \begin{split} p(N) = & y+ y^2 + y^3 + \lambda_1y^3 + \lambda_1\lambda_3x_4y^2 + \lambda_1\lambda_2\lambda_3x_2x_3y^2 + \lambda_1\lambda_2\lambda_3^2x_2x_3x_4y + \\ & + \lambda_1\lambda_2x_1y + \lambda_1\lambda_2x_1y^2 + \lambda_1\lambda_2\lambda_3x_1x_4y + \lambda_1^2\lambda_2x_1y^2 + \lambda_1^2\lambda_2\lambda_3x_1x_4y + \\ & + \lambda_1^2\lambda_2^2\lambda_3x_1x_2x_3y + \lambda_1^2\lambda_2^2\lambda_3^2x_1x_2x_3x_4, \end{split} \end{equation*} where (some of) the terms are notably simpler than in the original. \begin{figure}[!ht] \centering \includegraphics[scale=0.5]{im8.png} \caption{{\bf Subtrees of $U(N)$}. Let $N$ be the internally labelled phylogenetic network depicted in Fig \ref{fig:unf}. 
The figure depicts two (IMLT) subtrees of $U(N)$.} \label{fig:newick} \end{figure} There are many further potential questions arising that relate to phylogenetic networks more broadly. For instance, do embedded spanning trees characterize general internally labelled phylogenetic networks? That is, if we keep the labels on the elementary nodes (which come from reticulation nodes) of the embedded spanning trees, can we extend the results in~\cite{francis2018identifiability} from tree-child networks to more general networks? Which classes of phylogenetic networks are separable? Do \textit{FU}-stable networks require all the different labels $\lambda_1,\ldots, \lambda_r$, or can these be replaced by a single variable $\lambda$? And, above all, is there a complete characterization in topological terms of the phylogenetic networks that are characterized by the polynomial introduced in this article? With all this, we hope that the results here will stimulate these and many other investigations. \section*{Acknowledgment} JCP and TMC were supported by the Ministerio de Ciencia e Innovación (MCI), the Agencia Estatal de Investigación (AEI) and the European Regional Development Funds (ERDF), through project PGC2018-096956-B-C43 (FEDER/MICINN/AEI). The authors thank Francesc Rosselló for helpful comments and suggestions. \bibliographystyle{alpha}
\section{Introduction}\label{sec: intro} The Hopf algebra $\textsl{NCSym}$ of symmetric functions in noncommuting variables was introduced in \cite{BRRZ:2008}, following the work of Wolf, Rosas, and Sagan \cite{Wol:1936,RosSag:2006}. We refer the reader to these sources for details on its realization as formal sums of monomials invariant under $\mathfrak{S}_n$ (for all $n>0$). For our goals, it suffices to describe $\textsl{NCSym}$ abstractly in terms of generators and relations, which we do in Section \ref{sec: ncsym}. The Hopf algebra $\textsl{NCSym}$ is cocommutative (by definition) and freely generated as an algebra (essentially the main result of Wolf). Following \cite{BRRZ:2008}, we choose the \emph{atomic set partitions} $\bm{\dot\Pi}$ as a free generating set (see Section \ref{sec: ncsym}). The Cartier-Milnor-Moore theorem then guarantees that $\textsl{NCSym}$ is isomorphic to $\mathfrak U(\mathfrak L(\bm{\dot\Pi}))$, the universal enveloping algebra of the free Lie algebra generated by $\bm{\dot\Pi}$. This note grew out of an attempt to realize this isomorphism explicitly. In Section \ref{sec: prelims}, we record some useful combinatorial machinery. Section \ref{sec: ncsym} contains a precise definition of the Hopf algebra $\textsl{NCSym}$ as well as the statements and proofs of our main results. In Section \ref{sec: summary}, we comment on: (i) an important connection between $\textsl{NCSym}$ and supercharacter theory; and (ii) key structural features of our proofs that will be further developed in \cite{LauMas:1}. \subsection*{Acknowledgements} We thank Marcelo Aguiar and Nantel Bergeron for useful conversations and for pointing us to the related work of Patras and Reutenauer. \section{Combinatorial Preliminaries}\label{sec: prelims} We record some useful shorthand for manipulating set partitions and set compositions. Throughout, we let $\mathbb{N}$ and $\mathbb{P}$ denote the set of nonnegative integers and positive integers, respectively. Also, given $n\in\mathbb{P}$, we let $[n]$ denote the subset $\{1,2,\dotsc,n\}$. \subsection{Set partitions}\label{sec: set parts} Fix $X\subseteq \mathbb{P}$ and let $\mathbf{A} = \{A_1,\dotsc, A_r\}$ be a set of subsets of $X$. We say that $\mathbf{A}$ is a \demph{set partition} of $X$, written $\mathbf{A} \vdash X$, if and only if $A_1\cup\dotsb\cup A_r=X$, $A_i\neq\emptyset$ ($\forall i$), and $A_i\cap A_j=\emptyset$ ($\forall i\neq j$). We order the parts in increasing order of their minimum elements. The \demph{weight} $|\mathbf{A}|$ of $\mathbf{A}$ is the cardinality of $X$ and the \demph{length} $\length{\mathbf{A}}$ of $\mathbf{A}$ is its number of parts ($r$). In what follows, we lighten the heavy notation for set partitions, writing, e.g., the set partition $\{\{1,3\},\{2,8\},\{4\}\}$ as $13.28.4$. We write $\bm\Pi(X)$ for the set partitions of $X$ and $\bm\Pi(n)$ when $X=[n]$. Given any $A\subseteq \mathbb{N}$ and $k\in \mathbb{N}$, we write $\shift{A}{k}$ for the set \[ \shift{A}{k}:=\{a+k \mid a\in A\}. \] By extension, for any set partition $\mathbf{A}=\{A_1,A_2,\ldots,A_r\}$ we set $ \shift{\mathbf{A}}{k}:=\{\shift{A_1}{k},\shift{A_2}{k},$ $\ldots,\shift{A_r}{k}\}. $ The operator $\shift{(\hbox{--})}{k}$ has a complement $\std{(\hbox{--})}$ called the \demph{standardization} operator. It maps set partitions $\mathbf{A}$ of any cardinality $n$ subset $X\subseteq \mathbb{P}$ to set partitions of $[n]$, by defining $\std{\mathbf{A}}$ as the pullback of $\mathbf{A}$ along the unique increasing bijection from $[n]$ to $X$. 
For example, $\std{(18.4)} = 13.2$ and $\std{(18.4.67)}=15.2.34$. Given set partitions $\mathbf{B} \vdash [m]$ and $\mathbf{C} \vdash [n]$, we let $\mathbf{B} {\textcolor{mymagenta}{|}} \mathbf{C}$ denote the set partition $\mathbf{B} \cup \shift{\mathbf{C}}{m}$ of $[m+n]$. \begin{definition}\label{def: atomic} A set partition $\mathbf{A}=\{A_1,A_2,\ldots,A_r\}$ of $[n]$ is \demph{atomic} (``connected'' in \cite{HivNovThi:2008}) if there {does not} exist a subset $\mathbf{B}\subseteq \mathbf{A}$ and an integer $m<n$ such that $\mathbf{B} \vdash [m]$. Equivalently, $\mathbf{A}$ is not atomic if there are set partitions $\mathbf{B}$ of $[n']$ and $\mathbf{C}$ of $[n'']$ splitting $\mathbf{A}$ in two: $\mathbf{A} = \mathbf{B} {\textcolor{mymagenta}{|}} \mathbf{C}$. \end{definition} For example, the partition $17.235.4.68$ is atomic, while $12.346.57.8$ is not. The maximal splitting of the latter would be $12 {\textcolor{mymagenta}{|}} 124.35 {\textcolor{mymagenta}{|}} 1$. We denote the atomic set partitions by $\bm{\dot\Pi}$. If $\mathbf{A}$ is a set partition with $r$ parts, and $K\subseteq[r]$, we write $\mathbf{A}_K$ to denote the sub partition $\{A_k \mid k\in K\}\subseteq \mathbf{A}$. For example, if $\mathbf{A}=17.235.4.68$, then $\mathbf{A}_{\{1,3,4\}} = 17.4.68$. \subsection{Set compositions}\label{sec: set comps} Fix $K\subseteq \mathbb{P}$ and let $\gamma = (\gamma_1,\dotsc, \gamma_s)$ be a sequence of subsets of $K$. We say that $\gamma$ is a \demph{set composition} of $K$, written $\gamma \vDash K$, if and only if $\gamma_1\cup\dotsb\cup \gamma_s=K$, $\gamma_i\neq\emptyset$ ($\forall i$), and $\gamma_i\cap \gamma_j=\emptyset$ ($\forall i\neq j$). The \demph{weight} $|\gamma|$ and \demph{length} $\length{\gamma}$ are defined as for set partitions. We use ``$\bm|$'' in place of ``.'' in our shorthand for set compositions, e.g., the set composition $(\{3,8\},\{1,2\},\{4\})$ is abbreviated as $38\cmrg12\cmrg4$. We write $\bm\Gamma(K)$ for the set compositions of $K$ and $\bm\Gamma(r)$ when $K=[r]$. If $\gamma$ is a set composition of $X$ and $K\subseteq X$, we write $\gamma{}_{\!}\rfloor_K$ for the induced set composition of $K$. For example, if $\gamma = 38\bm| 12\bm| 4$ and $K = \{3,4,8\}$, then $\gamma{}_{\!}\rfloor_K = 38\bm| 4$. Similarly, $\gamma{}_{\!}\rfloor_{\{1,3\}} = 3\bm| 1$. Note that $\gamma{}_{\!}\rfloor_{\{1,3\}}$ is not the same as $\gamma_{\{1,3\}}$. Following the notation introduced for set partitions, we let $\gamma_{\{1,3\}}$ denote the subsequence $38\bm| 4$ of $\gamma$. Given two set compositions $\gamma,\rho \vDash K$, we say that $\gamma$ \demph{refines} $\rho$, written $\gamma \succ \rho$, if each block of $\rho$ is the union of a contiguous string of blocks of $\gamma$. For example, $2\cmrg4\cmrg3\cmrg17\bm| 9 \succ 234\cmrg179 \succ 123479$. \subsection{Set compositions as functions on set partitions}\label{sec: functions} Let $\mathbf{A}$ be a set partition of $X$ with $r$ parts and suppose $\gamma=(\gamma_1,\dotsc,\gamma_s)$ is a set composition of $K\subseteq[r]$. We define a new set partition $\gamma[\mathbf{A}]$ as follows: \begin{gather}\label{eq: gamma of A} \gamma[\mathbf{A}] := \std{\mathbf{A}_{\gamma_1}} {\textcolor{mymagenta}{|}} \std{\mathbf{A}_{\gamma_2}} {\textcolor{mymagenta}{|}} \dotsb {\textcolor{mymagenta}{|}} \std{\mathbf{A}_{\gamma_{s}}} \,. \end{gather} See Figure \ref{fig: gamma of A} for several examples. 
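These operations are straightforward to implement; the following minimal sketch (our own illustration, with an assumed representation of a set partition as a list of \texttt{frozenset}s ordered by their minima) computes the shift, standardization and $\gamma[\mathbf{A}]$ operations, and reproduces the top-left entry of Figure \ref{fig: gamma of A}.
\begin{verbatim}
# Minimal sketch (ours): set partitions as lists of frozensets ordered
# by their minima; set compositions as lists of sets of part indices.

def shift(part, k):
    """Add k to every letter of every block."""
    return [frozenset(a + k for a in block) for block in part]

def std(blocks):
    """Standardize disjoint blocks to a set partition of [n]."""
    support = sorted(x for block in blocks for x in block)
    relabel = {x: i + 1 for i, x in enumerate(support)}
    return sorted((frozenset(relabel[x] for x in block) for block in blocks),
                  key=min)

def concat(B, C):
    """B | C: append C to B, shifting its letters by the weight of B."""
    return B + shift(C, sum(len(block) for block in B))

def gamma_of(gamma, A):
    """gamma[A] = std(A_{g1}) | std(A_{g2}) | ... (1-based part indices)."""
    result = []
    for g in gamma:
        result = concat(result, std([A[i - 1] for i in sorted(g)]))
    return result

# gamma = 13|2 applied to A = 13.29.458.7 gives 12.345.67:
A = [frozenset({1, 3}), frozenset({2, 9}), frozenset({4, 5, 8}), frozenset({7})]
print(gamma_of([{1, 3}, {2}], A))
# [frozenset({1, 2}), frozenset({3, 4, 5}), frozenset({6, 7})]
\end{verbatim}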
\begin{figure}[hbt] \[ \begin{array}{r|lll} & 13.29.458.7 & 13.28.456.7 & 15.28.346.7 \\[0.4ex] \hline 13\cmrg2\ & 12.345.67 & 12.345.67 & 14.235.67 \\ 2\cmrg34\ & 12.346.5 & 12.345.6 & 12.345.6 \\ 1\cmrg234\ & 12.38.457.6 & 12.38.456.7 & 12.38.457.6 \\ \end{array} \] \caption{Set partitions $\gamma[\mathbf{A}]$ for several examples of $\gamma$ and $\mathbf{A}$.} \label{fig: gamma of A} \end{figure} \section{Structure of $\textsl{NCSym}$}\label{sec: ncsym} Let $\textsl{NCSym}=\bigoplus_{n\geq0} \textsl{NCSym}_n$ denote the graded $\mathbb{Q}$-vector space whose $n$th graded piece consists of formal sums of set partitions $\mathbf{A} \in \bm\Pi(n)$. We give $\textsl{NCSym}$ the structure of a graded Hopf algebra as follows. The algebra structure is given by \begin{gather}\label{eq: ncsym product} \mathbf{A} \bm\cdot \mathbf{B} = \mathbf{A} {\textcolor{mymagenta}{|}} \mathbf{B} \quad\hbox{and}\quad 1_{\textsl{NCSym}} = \bm\emptyset \,, \end{gather} where $\bm\emptyset$ is the unique set partition of the empty set. The coalgebra structure is given by \begin{gather}\label{eq: ncsym coproduct} \Delta(\mathbf{A}) = \sum_{K\mathrel{\ensuremath{\mathaccent\cdot\cup}} L = [r]} \std{(\mathbf{A}_K)} \otimes \std{(\mathbf{A}_L)} \quad\hbox{and}\quad \varepsilon(\mathbf{A}) = 0 \hbox{\ for all\ }\mathbf{A}\neq\bm\emptyset \,. \end{gather} Here, $\mathbf{A}$ is a set partition with $r$ parts, and $K\mathrel{\ensuremath{\mathaccent\cdot\cup}} L = [r]$ is understood as an ordered disjoint union, i.e., $K\cup L = [r], K\cap L = \emptyset,$ and $K\mathrel{\ensuremath{\mathaccent\cdot\cup}} L \neq L \mathrel{\ensuremath{\mathaccent\cdot\cup}} K$. In terms of the symmetric function interpretation of $\textsl{NCSym}$, the formulas above correspond to working in the basis of \demph{power sum symmetric functions}. The compatibility of product and coproduct is proven in \cite{BRRZ:2008}, the product formula is proven in \cite{BerZab:2009}, and the coproduct formula is proven in \cite{BHRZ:2006}. A combinatorially meaningful formula for the antipode in $\textsl{NCSym}$ has been missing until now (Theorem \ref{thm: ncsym antipode}). Evidently, $\textsl{NCSym}$ is cocommutative and freely generated by $\bm{\dot\Pi}$. The Cartier-Milnor-Moore theorem guarantees algebraically independent primitive elements $p(\mathbf{A})$ of $\textsl{NCSym}$ associated to each $\mathbf{A} \in\bm{\dot\Pi}$. We find them below (Theorem \ref{thm: ncsym primitives}). \subsection{Primitive generators of $\textsl{NCSym}$}\label{sec: ncsym primitives} We aim to prove the following result. \begin{theorem}\label{thm: ncsym primitives} Let $\bm\Gamma'(r)$ denote the set compositions $\gamma$ of $[r]$ with $1\in\gamma_1$. If $\mathbf{A}$ is a set partition with $r$ parts, then \begin{gather}\label{eq: ncsym primitives} p(\mathbf{A}) := \sum_{\gamma\in\bm\Gamma'(r)} (-1)^{\length{\gamma}-1} \gamma[\mathbf{A}] \end{gather} is a nonzero primitive if $\mathbf{A}\in\bm{\dot\Pi}$ and zero otherwise. \end{theorem} \begin{remark} An arbitrary choice was made in \eqref{eq: ncsym primitives}, namely demanding that the distinguished element $1$ belongs to the first block of $\gamma$. Different choices of distinguished elements (and distinguished blocks) give rise to different sets of primitives. Summing over these choices produces a projection operator onto the space of all primitives. This operator can be understood in the context of the work of Patras and Reutenauer on Lie idempotents \cite{PatReu:1999,PatReu:2002a}. 
Fisher \cite{Fis:thesis} uses similar idempotents to work out primitive formulas for several Hopf monoids in species. His formulas, which were discovered independently, also give rise to \eqref{eq: ncsym primitives}. \end{remark} To prove Theorem \ref{thm: ncsym primitives}, we need a lemma about \emph{left quasi-shuffles.} Let $\mathbf{P}=\langle \mathrm{fin}(2^\mathbb{P}) \rangle$ denote the free monoid generated by the finite subsets of $\mathbb{P}$ and let $\mathbb{Q}\mathbf{P}$ denote the corresponding free algebra. We use ${\textcolor{mymagenta}{|}}$ to separate letters in the words $w\in\mathbf{P}$. Given a word $u=u_1{\textcolor{mymagenta}{|}}\dotsb{\textcolor{mymagenta}{|}} u_k$ on $k$ letters and an index $i\leq k$, we write $u_{[i]}$ for the prefix $u_1{\textcolor{mymagenta}{|}}\dotsb{\textcolor{mymagenta}{|}} u_i$ and $u^{[i]}$ for the suffix $u_{i+1}{\textcolor{mymagenta}{|}}\dotsb{\textcolor{mymagenta}{|}} u_k$. We say that words $u=u_1{\textcolor{mymagenta}{|}}\dotsb{\textcolor{mymagenta}{|}} u_k$ and $v=v_1{\textcolor{mymagenta}{|}}\dotsb{\textcolor{mymagenta}{|}} v_l$ in $\mathbf{P}$ are \emph{disjoint} if $\bigl(\bigcup_i u_i\bigr) \cap \bigl(\bigcup_j v_j\bigr) = \emptyset$. (We identify the set compositions $\bm\Gamma(K)$ with the words $w\in\mathbf{P}$ that further satisfy $w_i\cap w_j = \emptyset$ for $i\neq j$ and $\bigcup_j w_j = K$.) The \demph{quasi-shuffle} $u\nshuff v$ of two disjoint words $u,v\in \mathbf{P}$ is defined recursively as follows: \begin{itemize} \item $u \nshuff \emptyset = u$ and $\emptyset \nshuff v = v$; \item if $u=a{\textcolor{mymagenta}{|}} u'$ and $v=b{\textcolor{mymagenta}{|}} v'$, then \[ u \nshuff v = \left\{a{\textcolor{mymagenta}{|}} w : w\in u'\nshuff v \right\} \cup \left\{ab{\textcolor{mymagenta}{|}} w : w\in u'\nshuff v' \right\} \cup \left\{b{\textcolor{mymagenta}{|}} w : w\in u\nshuff v' \right\} . \] \end{itemize} Here $\emptyset$ denotes the unique set composition of the empty set and ``$ab$'' denotes the union $a\cup b$, a single letter in $\mathbf{P}$. The quasi-shuffle governs the formula for the product of two monomial symmetric functions in noncommuting variables. We need a subset of these that we call the \demph{left quasi-shuffles:} \[ u \mathrel{\tilde\nshuff} v = \left\{a{\textcolor{mymagenta}{|}} w : w\in u'\nshuff v \right\} \cup \left\{ab{\textcolor{mymagenta}{|}} w : w\in u'\nshuff v' \right\} \] for nonempty disjoint words $u=a{\textcolor{mymagenta}{|}} u'$ and $v=b{\textcolor{mymagenta}{|}} v'$ in $\mathbf{P}$. (Note that the recursive definition involves $\nshuff$, not $\mathrel{\tilde\nshuff}$.) \begin{example} Consider the set compositions $1\mrg3$ and $24$. We have \[ 1{\textcolor{mymagenta}{|}} 3 \mathrel{\tilde\nshuff} 24 = \bigl\{1{\textcolor{mymagenta}{|}} 3 {\textcolor{mymagenta}{|}} 24, \, 1{\textcolor{mymagenta}{|}} 234, 1{\textcolor{mymagenta}{|}} 24{\textcolor{mymagenta}{|}} 3, \, 124{\textcolor{mymagenta}{|}} 3\bigr\} \,. \] Note that for each left quasi-shuffle $\gamma$ above, $\gamma{}_{\!}\rfloor_{\{1,3\}} = 1\mrg3$ and $\gamma{}_{\!}\rfloor_{\{2,4\}} = 24$. \end{example} \begin{lemma}\label{thm: mcsym primitives} Let $\bm\Gamma'(r)$ denote the set compositions $\gamma$ of $[r]$ with $1\in\gamma_1$. Given any partition $K\mathrel{\ensuremath{\mathaccent\cdot\cup}} L = [r]$ with $1\in K\subsetneq[r]$, we have \[ \sum_{\gamma\in\bm\Gamma'(r)} (-1)^{\length{\gamma}} \gamma{}_{\!}\rfloor_K \otimes \gamma{}_{\!}\rfloor_L = 0 \,, \] as an element of $\mathbb{Q}\mathbf{P}\otimes \mathbb{Q}\mathbf{P}$. 
\end{lemma} \begin{proof} Consider a term $u \otimes v$ in the sum. (The hypothesis $K\neq [r]$ guarantees that $v\neq\emptyset$.) The compositions $\gamma$ that satisfy $\gamma{}_{\!}\rfloor_K = u$ and $\gamma{}_{\!}\rfloor_L = v$ are precisely the left quasi-shuffles $u \mathrel{\tilde\nshuff} v$. We now establish a bijection between those of even length and those of odd length. This will complete the proof. A left quasi-shuffle $w\in u\mathrel{\tilde\nshuff} v$ falls into one of two types according to whether or not the first letter of $v$ appears as a letter in $w$: after beginning with some (possibly empty) initial prefix of $u$, say up to the $i$th letter $u_i$, the rest of the word $w$ looks like either $u_{i+1}{\textcolor{mymagenta}{|}} v_1{\textcolor{mymagenta}{|}} w'$ or $u_{i+1}v_1{\textcolor{mymagenta}{|}} w'$ for $w'$ in $u^{[i+1]}\nshuff v^{[1]}$. Here, again, $u_{i+1}v_1$ denotes $u_{i+1}\cup v_1$. The indicated pairing $w \mapsto \phi(w)$ ($u_{i+1}{\textcolor{mymagenta}{|}} v_1 \mapsto u_{i+1}v_1$) decreases the number of letters by one. Thus $w$ and $\phi(w)$ contribute opposite signs to the coefficient of $u\otimes v$. \end{proof} \begin{proof}[Proof of Theorem \ref{thm: ncsym primitives}] Since $\Delta$ is cocommutative, we need only consider the terms $\std{\gamma[\mathbf{A}]_K} \otimes \std{\gamma[\mathbf{A}]_L}$ from each $\Delta(\gamma[\mathbf{A}])$ satisfying $1\in K$. Notice that ${\gamma[\mathbf{A}]_K} = {\bigl(\gamma{}_{\!}\rfloor_{K'}\bigr)[\mathbf{A}]}$ for some $K'\subseteq[r]$ (with $1\in K'$ if $1\in\gamma_1$). Thus \begin{gather*} \sum_{\gamma\in\bm\Gamma'(r)} (-1)^{\length{\gamma}} \sum_{\substack{K\mathrel{\ensuremath{\mathaccent\cdot\cup}} L = [r] \\ 1\in K}} \std{\gamma[\mathbf{A}]_K} \otimes \std{\gamma[\mathbf{A}]_L} \intertext{% is the same sum as} \sum_{\substack{K\mathrel{\ensuremath{\mathaccent\cdot\cup}} L = [r] \\ 1\in K}} \sum_{\gamma\in\bm\Gamma'(r)} (-1)^{\length{\gamma}} \std{\bigl(\gamma{}_{\!}\rfloor_K\bigr)[\mathbf{A}]} \otimes \std{\bigl(\gamma{}_{\!}\rfloor_L\bigr)[\mathbf{A}]} \,, \intertext{% or even} \Biggl(\sum_{\substack{K\mathrel{\ensuremath{\mathaccent\cdot\cup}} L = [r] \\ 1\in K}} \sum_{\gamma\in\bm\Gamma'(r)} (-1)^{\length{\gamma}} \gamma{}_{\!}\rfloor_K \otimes \gamma{}_{\!}\rfloor_L \Biggr)[\mathbf{A}\otimes \mathbf{A}] \,. \end{gather*} Lemma \ref{thm: mcsym primitives} reduces this expression to ${(0)[\mathbf{A}\otimes\mathbf{A}]}=0$ when $K\neq[r]$. When $K=[r]$, the expression is precisely $p(\mathbf{A})\otimes 1$. Conclude that $p(\mathbf{A})$ is a primitive element when $\mathbf{A}$ is atomic. To show that $p(\mathbf{A})$ is nonzero when $\mathbf{A}$ is atomic, we merely remark that $\gamma[\mathbf{A}]$ has at least as many atoms as $\gamma$ has parts. It is only when $\mathbf{A}\in\bm{\dot\Pi}$ and $\gamma=[r]$ that a single atom is obtained: $\gamma[\mathbf{A}]$ contributes $\mathbf{A}$ to the sum $p(\mathbf{A})$ only when $\gamma=[r]$. Thus $p(\mathbf{A})\neq 0$. We now show that $p(\mathbf{A})=0$ when $\mathbf{A}$ is not atomic. Suppose $\mathbf{A}=\mathbf{A}'{\textcolor{mymagenta}{|}}\mathbf{A}''$ and put $\length{\mathbf{A}} = r, \length{\mathbf{A}'} = r'$. (Note that $r'<r$ by assumption.) Divide the set compositions $\gamma\in \bm\Gamma'(r)$ into two types according to the following dichotomy. If $\gamma_i$ is the first letter, from the left, satisfying $\gamma_i \cap \{r'+1,\dotsc,r\} \neq \emptyset$, then either $\gamma_i\cap\{1,\dotsc,r'\}$ is empty or it is not. 
The pairing $\gamma\mapsto \phi(\gamma)$ ($\gamma_{i-1}{\textcolor{mymagenta}{|}}\gamma_i \mapsto \gamma_{i-1}\gamma_i$) decreases the number of letters by one. However, the difference is not visible at the level of functions on $\mathbf{A}$. That is, $\gamma[\mathbf{A}] = \phi(\gamma)[\mathbf{A}]$. This completes the proof. \end{proof} \begin{corollary}The set $\{p(\mathbf{A}) \mid \mathbf{A}\in\bm{\dot\Pi}\}$ comprises irredundant (algebraically independent) generators of the Lie algebra of primitive elements of $\textsl{NCSym}$. \end{corollary} \begin{proof} First we show algebraic independence of $\{p(\mathbf{A}) \mid \mathbf{A}\in\bm{\dot\Pi}\}$. Let $<$ be a total order on the set $\bm{\dot\Pi}$ satisfying $\mathbf{A} < \mathbf{A}'$ when $|\mathbf{A}|>|\mathbf{A}'|$. If we extend $<$ to a lexicographic ordering of $\bm\Pi$ in the usual manner, then the leading (minimum) term of $p(\mathbf{A})$ is $\mathbf{A}$. (As remarked in the proof above, the only term in $p(\mathbf{A})$ with one atom is $\mathbf{A}$.) Conclude that any polynomial in the $p(\mathbf{A})$s ($\mathbf{A}\in \bm{\dot\Pi}$) has the same leading term as the corresponding polynomial in the $\mathbf{A}$s. Hence the $p(\mathbf{A})$s freely generate $\textsl{NCSym}$. Turning to the Lie algebra of primitive elements in $\textsl{NCSym}$, we recall the construction of Hall polynomials. Given an ordered alphabet $X$ and a word $w=x_1\dotsc x_t$ over $X$, we say that $w$ is a \demph{Lyndon word} if $w$ is lexicographically smaller than all its proper suffixes $x_i\dotsb x_t$ ($i>1$). A classical result states that all Lyndon words $w$ have a unique proper decomposition $w=uv$ with $v$ Lyndon of maximum length. See \cite{Reu:1993}. We define the Hall polynomial $[[w]]$ by forming the Lie bracket $[u,v]$ at successively smaller Lyndon factorizations. For example, if $w=aabb$, then the Lyndon factorization of $w$ is $(a,abb)$; next, $abb$ is further factored as $(ab,b)$ and $ab$ is factored as $(a,b)$. The resulting Hall polynomial is $[[w]] = [a,[[a,b],b]]$. If $w$ is Lyndon, then the leading term of $[[w]]$ is $w$. A consequence is the classical result that the Hall polynomials $\{[[w]] \mid w\hbox{ is Lyndon}\}$ form a basis of the free Lie algebra generated by $X$. Turning to $\textsl{NCSym}$, the first paragraph of the proof shows that we may replace the alphabet $\bm{\dot\Pi}$ with the alphabet $\{p(\mathbf{A}) \mid \mathbf{A} \in \bm{\dot\Pi}\}$. Conclude that the Hall polynomials $[[p(\mathbf{A}')p(\mathbf{A}'')\dotsb p(\mathbf{A}^{(t)})]]$, with $\mathbf{A}'{\textcolor{mymagenta}{|}} \mathbf{A}''{\textcolor{mymagenta}{|}} \dotsb{\textcolor{mymagenta}{|}} \mathbf{A}^{(t)}$ Lyndon in the atoms $\mathbf{A}^{(i)}$, form a basis of the Lie algebra of primitive elements in $\textsl{NCSym}$. \end{proof} \subsection{The antipode of $\textsl{NCSym}$}\label{sec: ncsym antipode} We aim to prove the following result. \begin{theorem}\label{thm: ncsym antipode} Suppose $\mathbf{A}$ is a set partition with $r$ parts and let $\bm\Gamma(r)$ denote the set compositions of $[r]$. The antipode $S$ of $\textsl{NCSym}$ acts on $\mathbf{A}$ by \begin{gather}\label{eq: ncsym antipode} S({\mathbf{A}}) = \sum_{\gamma\in\bm\Gamma(r)} (-1)^{\length{\gamma}} \gamma[\mathbf{A}] \,. \end{gather} \end{theorem} \begin{remarks} {\it 1.} This description of the antipode typically contains many cancellations. For example, it says that $S(12.3) = -(12.3) + (12.3) + (1.23) = 1.23$. There may even be cancellations when $\mathbf{A}$ is atomic. 
We invite the reader to compute $S(14.2.3)$, which {\it a priori} has 13 terms, but in fact has only nine. On the other hand, \eqref{eq: ncsym antipode} is irredundant for many atomic $\mathbf{A}$. \smallskip {\it 2.} In case $\mathbf{A}$ is not atomic, we can do better. Suppose $\mathbf{A}=\mathbf{A}'{\textcolor{mymagenta}{|}}\mathbf{A}''{\textcolor{mymagenta}{|}}\dotsb{\textcolor{mymagenta}{|}}\mathbf{A}^{(t)}$ is a splitting of $\mathbf{A}$ into atomic pieces, and let $\length{\mathbf{A}^{(i)}} = r_i$ with $\sum_i r_i = r$. Finally, let $\bm\Gamma(\overleftarrow r)$ denote all refinements of the set composition \[ \overleftarrow r = \bigl(\{r{-}r_t{+}1,\dotsc,r\},\dotsc,\{r_1{+}1,\dotsc,r_1{+}r_2\},\{1,\dotsc,r_1\}\bigr) . \] Then \begin{gather}\label{eq: alternate antipode} S(\mathbf{A}) = \sum_{\gamma\in\bm\Gamma(\overleftarrow r)} (-1)^{\length{\gamma}} \gamma[\mathbf{A}] \,. \end{gather} This follows immediately from the fact that $S$ is an algebra antimorphism and \eqref{eq: ncsym antipode} holds on atomic set partitions. Using this formula, we may express $S(13.2.4)$ using three terms, $S(13.2.4) = (1.24.3) - (1.23.4) - (1.2.34)$. Using \eqref{eq: ncsym antipode} would have required us to write down 13 terms, 10 of which would cancel. The relationship between \eqref{eq: alternate antipode} and Theorem 14.31 of \cite{AguMah:1} will be explored in \cite{LauMas:1}. \end{remarks} \begin{proof}[Proof of Theorem \ref{thm: ncsym antipode}] Since the antipode is guaranteed to exist in graded connected bialgebras, we need only check that \eqref{eq: ncsym antipode} provides a left convolution inverse of $\mathrm{id}$. Writing $m$ for multiplication in $\textsl{NCSym}$, we have \begin{align*} m(S\otimes\mathrm{id})\Delta(\mathbf{A}) &= m(S\otimes\mathrm{id})\left(\sum_{K\mathrel{\ensuremath{\mathaccent\cdot\cup}} L=[r]} \std{\mathbf{A}_K} \otimes \std{\mathbf{A}_L}\right) = m(S\otimes\mathrm{id})\left(\sum_{K\mathrel{\ensuremath{\mathaccent\cdot\cup}} L=[r]} K[\mathbf{A}]\otimes L[\mathbf{A}]\right) \\ \intertext{(viewing $K$ and $L$ as set compositions with one part)} &= m \sum_{K\mathrel{\ensuremath{\mathaccent\cdot\cup}} L=[r]} \left(\sum_{\gamma\in\bm\Gamma_{|K|}} (-1)^{\length{\gamma}}\gamma[K[\mathbf{A}]]\right)\otimes L[\mathbf{A}] \\ &= \left(\sum_{K\mathrel{\ensuremath{\mathaccent\cdot\cup}} L=[r]} \sum_{\gamma\in\bm\Gamma_{K}} (-1)^{\length{\gamma}}(\gamma\bm| L)\right) \! [\mathbf{A}] \,. \end{align*} We show that this sum is the zero function on $\mathbf{A}$. We have \begin{align*} \sum_{K\mathrel{\ensuremath{\mathaccent\cdot\cup}} L=[r]} \sum_{\gamma\in\bm\Gamma_{K}} (-1)^{\length{\gamma}}(\gamma\bm| L) &= \sum_{\gamma\in\bm\Gamma_{[r]}} (-1)^{\length{\gamma}}\gamma \ \ + \ \sum_{\substack{K\mathrel{\ensuremath{\mathaccent\cdot\cup}} L=[r] \\ K\neq[r]}} \sum_{\gamma\in\bm\Gamma_{K}} (-1)^{\length{\gamma}}(\gamma\bm| L) \\ &= \sum_{\emptyset\subsetneq L\subseteq[r]}\sum_{\substack{\gamma\in\bm\Gamma_{[r]}\\ \gamma=\gamma'\bm| L}} (-1)^{\length{\gamma'\bm| L}}(\gamma'\bm| L) \ \ - \sum_{\substack{K\mathrel{\ensuremath{\mathaccent\cdot\cup}} L=[r] \\ K\neq[r]}} \sum_{\gamma\in\bm\Gamma_{K}} (-1)^{\length{\gamma\bm| L}}(\gamma\bm| L) \\ &= 0 \,, \end{align*} as claimed. We conclude that $m(S\otimes\mathrm{id})\Delta(\mathbf{A}) = 0$ for $|\mathbf{A}|>0$, which completes the proof. \end{proof} \section{Summary Remarks}\label{sec: summary} \subsection{Supercharacter theory}\label{sec: SC} Let $\mathbb{F}_q$ be a finite field with $q$ elements. 
The classification problem for irreducible representations of the upper triangular groups $U_n(\mathbb{F}_q)$ is known to be of wild type. After the work of Andr\'e and Yan \cite{And:1995,Yan:1}, it seems that a good first step toward understanding the character theory of $U_n(\mathbb{F}_q)$ is to study its \emph{supercharacters} and \emph{superclasses}. Superclasses are formed by clumping together certain conjugacy classes of $U_n(\mathbb{F}_q)$. To each superclass is associated a supercharacter, which is the corresponding sum of characters. See \cite{DiaIsa:2008} for an excellent exposition by Diaconis and Isaacs. Following this approach to the problem, and mimicking the classical constructions in the representation ring of the symmetric groups, Thiem \cite{Thi:2010} gave the space of supercharacters $\mathcal{SC}_q$ a Hopf algebra structure, with product and coproduct coming from superinflation and restriction.\footnote{More precisely, the coproduct is not explicitly defined there, nor is the product / coproduct compatibility checked, but these were subsequently verified by Thiem (private communication). The details of his Hopf algebra construction will appear in the report \cite{AIM}.} The result was something that bore a strong resemblance to $\textsl{NCSym}$ for $q=2$. One of the main topics of the AIM workshop \cite{AIM} was to explore this connection and see what could be said for $q$ arbitrary. Quite a lot of progress was made in several directions. Relevant to the present discussion is that $\mathcal{SC}_2$ is indeed isomorphic to $\textsl{NCSym}$ as Hopf algebras. The isomorphism above is straightforward, simply taking superclass functions $\kappa_\mathbf{A}$ to monomial symmetric functions $m_\mathbf{A}$. (See \cite{Thi:2010} and \cite{BRRZ:2008} for notation.) In light of this, it would be very interesting to have analogs of \eqref{eq: ncsym primitives} and \eqref{eq: ncsym antipode} for the monomial basis. Some exciting progress was made in this direction on the last day of the workshop, but some details still need to be checked before any formal statement can be made \cite{Nantel}. \subsection{Transfer of structure}\label{sec: transfer} In Section \ref{sec: ncsym}, we used the idea of set compositions as functions on set partitions to formulate our results. As the proofs of these results indicate, this idea can be mined further. There is a graded connected cocommutative Hopf algebra structure on $\mathbb{Q}\mathbf{P}$ which is freely generated as an algebra by $\mathrm{fin}(2^\mathbb{P})$. Formulas for primitives and the antipode in $\mathbb{Q}\mathbf{P}$ mimic \eqref{eq: ncsym primitives} and \eqref{eq: ncsym antipode}. More is true. In fact, the $\mathbb{Q}\mathbf{P}$ formulas engender \eqref{eq: ncsym primitives} and \eqref{eq: ncsym antipode} via a transfer of structure coming from a \emph{measuring} of Hopf algebras. We leave the details to \cite{LauMas:1}, where further examples of transfer of structure will be worked out. We quote a key theorem from that work. \begin{theorem}\label{thm: transfer} Let $A, B$ be Hopf algebras and let $C$ be a coalgebra. Suppose $\theta \colon B\otimes C\to A$ is a \emph{covering} (a surjective coalgebra map that \emph{measures} $B$ to $A$). Let $\iota \colon A\to B\otimes C$ be any linear section of $\theta$, that is, $\theta\circ \iota = \mathrm{id}$. Then the following hold. \begin{enumerate} \item\label{itm: primitive} If $p\in B$ is primitive, then for every $c\in C$ the element $\theta(p,c)\in A$ is primitive. 
\item\label{itm: antipode} $S_A = \theta\circ (S_B\otimes\mathrm{id})\circ \iota$. \end{enumerate} \end{theorem} Here, $\mathbb{Q}\mathbf{P}$ covers $\textsl{NCSym}$ by taking $C$ to be the free pointed coalgebra on set partitions; the covering $\theta(\gamma\otimes \mathbf{A})$ is the evaluation $\gamma[\mathbf{A}]$ discussed in Section \ref{sec: functions}. In \cite{LauMas:1}, we also discuss coverings by $\textsl{NSym}$, where $\textsl{NSym}$ is the Hopf algebra of noncommutative symmetric functions \cite{GKLLRT:1995} freely generated by symbols $H_n$ that comultiply as \[ \Delta(H_n) = \sum_{i+j=n} H_i \otimes H_j \,. \] Such coverings exist for every graded cocommutative Hopf algebra $A$ and are given by setting $C$ to be the underlying coalgebra of $A$ and $\theta$ to be the unique covering for which $\theta(H_n \otimes a) := \pi_n(a)$, where $\pi_n$ is the projection to the $n^\mathrm{th}$ graded component of $A$. Again, primitive and antipode formulas in $\textsl{NSym}$ are transferred via Theorem \ref{thm: transfer}. Formula \eqref{itm: antipode} in the theorem simply recovers Takeuchi's formula for the antipode, whereas Formula \eqref{itm: primitive} is used to obtain Takeuchi-type formulas for primitives in $A$. In particular, we obtain generators for the space of primitives (analogous to Theorem \ref{thm: ncsym primitives}) as well as a projection onto the space of all primitives. \bibliographystyle{abbrv}
\section{Introduction}\label{sec:intro} This paper aims to study statistics on bounded permutations and $213$-avoiding bounded permutations. Let us first give an overview of notation and terminology. Denote by $S_n$ the set of permutations of $[n]$, where $[n]=\{1,2,\ldots,n\}$. For a permutation $\pi=\pi_1 \pi_2 \cdots \pi_n$ in $S_n$, a pair $(\pi_i,\pi_j)$ is called an inversion if $i<j$ and $\pi_i>\pi_j$. Let $\inv(\pi)$ denote the number of inversions of $\pi$. An element $i$ is called a descent of $\pi$ if $\pi_i >\pi_{i+1}$, and the descent number $\des(\pi)$ is defined to be the number of descents of $\pi$. We say $i$ is an excedance of $\pi$ if $\pi_i > i$, and a weak excedance of $\pi$ if $\pi_i \geq i$; otherwise, $i$ is called a non-weak excedance. Any permutation statistic with the same distribution as $\inv$ is referred to as a Mahonian statistic, while those equidistributed with $\des$ or with the excedance number are called Eulerian statistics. Given a word $w=w_1 w_2 \cdots w_n$ of length $n$, let \begin{align*} \Lmap(w) &= \{i\colon w_j <w_i \text{ for all } j<i \}, \\ \Lmal(w) &= \{w_i \colon w_j <w_i \text{ for all } j<i \}, \\ \Rmip(w) &= \{i\colon w_j >w_i \text{ for all } j>i \},\\ \Rmil(w) &= \{w_i \colon w_j >w_i \text{ for all } j>i \}, \\ \Rmal(w) &= \{w_i\colon w_j <w_i \text{ for all } j>i \} \end{align*} be the sets of left-to-right maximum places, left-to-right maximum letters, right-to-left minimum positions, right-to-left minimum letters and right-to-left maximum letters, respectively. Their corresponding numerical statistics are denoted by $\lmax(w) = \# \Lmal(w)$, $\rmin(w) = \#\Rmil(w)$ and $\rmax(w) = \# \Rmal(w).$ Besides being linear orders, permutations can also be seen as functions. For example, we may consider $\sigma=34152$ as a function $\sigma \colon [5] \to [5]$ with $ \sigma(1)=3, \sigma(2)=4, \sigma(3)=1, \sigma(4)=5$ and $\sigma(5)=2$. Thus, we can write $\sigma=(13)(245)$ and call it the cycle notation of $\sigma$. Assume that $\sigma=\sigma_1 \sigma_2 \cdots \sigma_n$ and that the bijection $i \mapsto \sigma_i$ has $r$ disjoint cycles, whose minimum elements are $c_1, c_2,\ldots, c_r$. Let $\Cyc(\sigma)=\{c_1, c_2,\ldots, c_r\}$; the number of cycles of $\sigma$, $\cyc(\sigma)$, is defined to be the cardinality of $\Cyc(\sigma)$. An occurrence of a classical pattern $p$ in a permutation $\sigma$ is a subsequence of $\sigma$ that is order-isomorphic to $p$. Babson and Steingr\'{i}msson \cite{BS} generalized the notion of permutation patterns to what are now known as vincular patterns, see \cite{Kitaev2011}. Letters in a vincular pattern that are underlined must remain adjacent when the pattern is placed back in the original permutation. For instance, $41253$ contains only one occurrence of the vincular pattern $\underline{31}42$, in its subsequence $4153$, but not in $4253$. Motivated by juggling sequences and bubble sort, Chung, Claesson, Dukes and Graham \cite{ChungClaesson} introduced the maximum drop size statistic on permutations. For more about bounded permutations, see \cite{ChungGraham, Joanna, Hyatt}. We say that $\pi \in S_n$ has a drop at $i$ if $\pi_i <i$, and its drop size is defined to be $i-\pi_i$. The maximum drop size of $\pi$ is \begin{equation*} \mathrm{maxdrop(\pi)}=\max\{i-\pi_i\colon 1 \leq i \leq n\}. \end{equation*} Let $A(n,k)=\{\pi \in S_n \colon \maxdrop(\pi) \leq k\}$. For convenience, we call permutations with bounded drop size bounded permutations. 
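For concreteness, the following small brute-force sketch (our own illustration; the function names and the tuple encoding of one-line notation are ours) computes $\maxdrop$ and enumerates $A(n,k)$ directly from the definitions.
\begin{verbatim}
# Minimal sketch (ours): maxdrop and the sets A(n, k), by brute force.
# A permutation pi of [n] is a tuple with pi[i-1] = pi_i.
from itertools import permutations

def maxdrop(pi):
    return max(i - v for i, v in enumerate(pi, start=1))

def A(n, k):
    """Permutations of [n] with maximum drop size at most k."""
    return [pi for pi in permutations(range(1, n + 1)) if maxdrop(pi) <= k]

# Setting q = t = 1 in the product formula of our first main theorem
# (stated below) gives |A(n, k)| = k! * (k+1)^(n-k) for 0 <= k < n;
# e.g. for n = 4, k = 1:
assert len(A(4, 1)) == 1 * 2 ** 3 == 8
\end{verbatim}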
Petersen \cite{Petersen} introduced a new statistic called the sorting index. Given a permutation $\pi$ in $S_n$, it is known that $\pi$ has a unique decomposition into transpositions \[\pi=(i_1,j_1)(i_2,j_2)\cdots (i_k,j_k),\] where \[j_1<j_2< \cdots <j_k\] and \[i_1<j_1,i_2<j_2,\ldots,i_k<j_k.\] The sorting index is defined by \[\sor(\pi)=\sum_{r=1}^{k}(j_r-i_r).\] Petersen showed that the sorting index is Mahonian. Moreover, by using two different factorizations of $\sum_{\pi \in S_n} \pi$, he derived that \begin{equation*} \sum_{\pi \in S_n} q^{\sor(\pi)}t^{\cyc(\pi)}=\sum_{\pi \in S_n} q^{\inv(\pi)}t^{\rmin(\pi)}=\prod_{i=1}^{n} (t+[i]_q-1), \end{equation*} where $[i]_q=1+q+\cdots+q^{i-1}$. Based on the standard algorithm for generating a random permutation, Wilson \cite{wilson2010} defined a permutation statistic $\DIS$. It turns out that for a permutation $\pi$ in $S_n$, we have $\DIS(\pi^{-1})=\sor(\pi)$. The first of our main results generalizes Petersen's results to bounded permutations; it is stated as follows. \begin{theorem}\label{thm:tha} For $0 \leq k<n$, $(\inv,\lmax)$ and $(\DIS,\cyc)$ have the same joint distribution over $A(n,k)$. Moreover, we have \begin{equation*} \sum_{\sigma \in A(n,k)} q^{\inv(\sigma)}t^{\lmax(\sigma)}=\sum_{\sigma \in A(n,k)} q^{\DIS(\sigma)}t^{\cyc(\sigma)}=(t+[k+1]_q-1)^{n-k} \prod_{i=1}^{k}(t+[i]_q-1). \end{equation*} \end{theorem} Moreover, we will show that a bijection constructed by Foata and Han \cite{Han} can be restricted to give a bijective explanation of the equidistribution above. Hyatt and Remmel \cite{HyattRemmel} initiated the study of pattern-avoiding bounded permutations; they gave the number of $231$-avoiding bounded permutations with $j$ descents. Our second main result concerns the distribution of descents over $213$-avoiding bounded permutations. Let $A_{n,k}(\sigma)$ denote the set of $\sigma$-avoiding permutations in $A(n,k)$. We have the following proposition. \begin{proposition}\label{prop:213ballot} For $0 \leq k \leq n$, the set $A_{n,k}(213)$ is enumerated by the ballot number $C_{n,k}$, where \[C_{n,k}=\frac{n-k+1}{n+1}{{n+k}\choose{k}}.\] \end{proposition} Let $S_n^k(321)$ be the set of $321$-avoiding permutations of length $n$ whose last element is $k+1$. Lin and Kim \cite{LinKim} mentioned that $S_{n+1}^k(321)$ is enumerated by $C_{n,k}$. Bailey \cite{Bailey} introduced non-negative $n,k$-arrangements of $1$'s and $-1$'s. Let $\Gamma_{n,k}$ be the set of sequences with non-negative partial sums that can be formed by $n$ $1$'s and $k$ $-1$'s; namely, given $a=a_1 a_2 \cdots a_{n+k} \in \Gamma_{n,k}$, we have $a_1+\cdots +a_{i} \geq 0$ for $1 \leq i \leq n+k$. Bailey showed that $\Gamma_{n,k}$ is also counted by $C_{n,k}$. Let $\vnw(\pi)$ be the number of non-weak excedances of $\pi$ occurring before its last element. For $a \in \Gamma_{n,k}$, let $\vpk(a)$ be the number of indices $1 \leq i \leq n+k-1$ such that $a_i=1$, $a_{i+1}=-1$, and $a_i$ is not the rightmost $1$. It should be mentioned that $a$ can also be seen as a ballot path from $(0,0)$ to $(n,k)$, while the statistic $\vpk$ can be seen as a variation of the number of NE-turns over ballot paths as studied by Su \cite{Su}. We will show a stronger equidistribution over $A_{n,k}(213)$, $S_{n+1}^k(321)$ and $\Gamma_{n,k}$.
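The sorting index can be computed by running the sorting procedure underlying the decomposition above: repeatedly move the largest misplaced value into place and record the distance it travels. The following sketch (reusing the helpers from the previous listing) also checks the joint equidistribution of Theorem~\ref{thm:tha} by brute force for small $n$ and $k$.

\begin{verbatim}
from itertools import permutations
from collections import Counter

def sor(pi):
    # sorting index: for j = n, n-1, ..., 1 move the value j into
    # position j by one transposition, recording the distance j - i
    p, total = list(pi), 0
    for j in range(len(p), 0, -1):
        i = p.index(j) + 1           # current position of the value j
        if i != j:
            p[i - 1], p[j - 1] = p[j - 1], p[i - 1]
            total += j - i
    return total

def invert(pi):
    inv_pi = [0] * len(pi)
    for i, v in enumerate(pi, start=1):
        inv_pi[v - 1] = i
    return tuple(inv_pi)

def DIS(pi):
    # Wilson's statistic satisfies DIS(pi) = sor(pi^{-1})
    return sor(invert(pi))

def A(n, k):
    # the bounded permutations with maxdrop at most k
    return [p for p in permutations(range(1, n + 1)) if maxdrop(p) <= k]

# brute-force check of the joint equidistribution over A(n,k)
for n in range(1, 6):
    for k in range(n):
        left = Counter((inv(p), lmax(p)) for p in A(n, k))
        right = Counter((DIS(p), cyc(p)) for p in A(n, k))
        assert left == right
\end{verbatim}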
\begin{theorem}\label{thm:equidis} The statistic $\des$ over $A_{n,k}(213)$, the statistic $\vnw$ over $ S^k_{n+1}(321)$ and the statistic $\vpk$ over $ \Gamma_{n,k}$ are equidistributed. \end{theorem} \begin{theorem}\label{thm:generating} Let \[G(p,q,z)=\sum_{\pi \in A_{n,k}(213),\, 0 \leq k \leq n}p^{\des(\pi)} q^{k}z^n, \] then we have \begin{align} G(p,q,z)=\frac{1-pqz-(1-z)q+(zq-pqz-q)\widetilde{F}_0}{1-z-pqz-(1-z)q}, \label{eq:main} \end{align} where \[ \widetilde{F}_0=\frac{1-zq(1+p)-\sqrt{(1+zq(1-p))^2-4zq}}{2pqz}. \] \end{theorem} The outline of this paper is as follows. In Section \ref{sec:TH1}, we prove Theorem~\ref{thm:tha} both algebraically and bijectively. In Section \ref{sec:TH23}, we investigate $A_{n,k}(213)$ as well as the distribution of $\des$ over $A_{n,k}(213)$. \section{A proof of Theorem~\ref{thm:tha}}\label{sec:TH1} In this section, we give two different factorizations of $\sum_{\sigma \in S(n,k)} \sigma$ by adapting Petersen's method, where $$S(n,k)=\{\sigma \in S_n \colon \max_i(\sigma_i-i )\leq k\}.$$ These two factorizations allow us to give a proof of Theorem \ref{thm:tha}. Recall that both the set of transpositions $T=\{(i,j) \colon 1 \leq i<j \leq n \}$ and the set of adjacent transpositions $S=\{(i,i+1)\colon 1\leq i \leq n-1\}$ are generating sets of $S_n$. For convenience, we write $t_{i,j}=(i,j)$ and $s_i=(i,i+1)$. The reader is referred to \cite{Brenti} for a comprehensive introduction. Given a permutation $\pi$ in $S_{n-1}$, in order to generate a permutation in $S_n$, we may insert $n$ in all possible positions of $\pi$. This can be performed analogously to generate a permutation in $S(n,k)$ from $S(n-1,k)$. Clearly, $S(n,k)=S_n$ when $n \leq k$, and $S(n,0)$ consists only of the identity permutation $12 \cdots n$. Hence, we only need to consider the cases with $0 < k < n$. In terms of the group algebra, for $0 < k < n$, we define \begin{equation*} \Psi_{i,k}= \left\{ \begin{array}{ll} 1+s_i + s_i s_{i-1}+ \cdots +s_i s_{i-1} \cdots s_1, & \mbox{ $0<i<k$,}\\[6pt] 1+s_i + s_i s_{i-1}+ \cdots +s_i s_{i-1} \cdots s_{i+1-k}, & \mbox{ $k \leq i \leq n-1$.} \end{array} \right. \end{equation*} Notice that each term in $\Psi_{i,k}$ can be seen as a reduced expression in $S_{i+1}$. \begin{lemma} For $n \geq 2$ and $0< k< n$, we have \begin{equation}\label{eq:PSI} \sum_{w \in S(n,k)}w=\Psi_{1,k}\Psi_{2,k}\cdots\Psi_{n-1,k}. \end{equation} \end{lemma} \begin{proof} We give a proof by induction on $n$. It is clear that $\sum_{w \in S(2,k)} w=\Psi_{1,k}$. We assume that equation (\ref{eq:PSI}) holds for $n=m-1$, namely, \begin{equation}\label{eq:induction_m-1} \sum_{u \in S(m-1,k)} u=\Psi_{1,k}\Psi_{2,k}\cdots\Psi_{m-2,k}. \end{equation} We proceed to prove that it holds for $n=m$. Clearly, we can identify the elements of $S(m-1,k)$ with the set $$\{\sigma \in S(m,k) \colon \sigma(m)=m\}.$$ Assume that $\sigma=\sigma_1\sigma_2\cdots \sigma_{m-1}m \in S(m,k)$; there are two cases to consider. If $k > m-1$, then we have \begin{align*} \sigma \cdot \Psi_{m-1,k}&= \sigma+ \sigma s_{m-1}+ \cdots +\sigma s_{m-1} s_{m-2} \cdots s_1 \\ &=\sigma_1\sigma_2\cdots \sigma_{m-1}m+\sigma_1\sigma_2\cdots m\sigma_{m-1}+ \cdots +m\sigma_1\sigma_2\cdots \sigma_{m-1}. \end{align*} If $k < m$, we have \begin{align*} \sigma \cdot \Psi_{m-1,k}&= \sigma+ \sigma s_{m-1}+ \cdots +\sigma s_{m-1} s_{m-2} \cdots s_{m-k} \\ &=\sigma_1\sigma_2\cdots \sigma_{m-1}m+\sigma_1\sigma_2\cdots m\sigma_{m-1}+ \cdots + \sigma_1\sigma_2\cdots\sigma_{m-k-1} m \sigma_{m-k}\cdots \sigma_{m-1}.
\end{align*} Note that in both cases $m$ appears in exactly those positions allowed by the constraint $\max_i\{\pi_i -i \}\leq k$. Hence, we have \begin{equation}\label{eq:induction_m} \sum_{w \in S(m,k)} w=\sum_{\sigma \in S(m,k), \sigma(m)=m} \sigma \cdot \Psi_{m-1,k}= \sum_{u \in S(m-1,k)} u \cdot \Psi_{m-1,k}. \end{equation} Then, equation (\ref{eq:PSI}) follows from (\ref{eq:induction_m-1}) and (\ref{eq:induction_m}). This completes the proof. \end{proof} Based on the factorization above, we obtain the following theorem. \begin{theorem}\label{tha1} For $n \geq 1$ and $0< k \leq n$, we have \begin{equation}\label{eq:orign} \sum_{\sigma \in A(n,k)} q^{\inv(\sigma)}t^{\lmax(\sigma)}=(t+[k+1]_q-1)^{n-k} \prod_{i=1}^{k}(t+[i]_q-1). \end{equation} \end{theorem} \begin{proof} It is easy to see that $\sigma \in A(n,k)$ if and only if $\sigma^{-1} \in S(n,k)$, and that $\lmax(\sigma)=\rmin(\sigma^{-1})$. It follows that \[\sum_{\sigma \in A(n,k)} q^{\inv(\sigma)}t^{\lmax(\sigma)} = \sum_{\sigma \in S(n,k)} q^{\inv(\sigma)}t^{\rmin(\sigma)}. \] Hence, to prove (\ref{eq:orign}), it suffices to show that \begin{equation}\label{eq:suffice} \sum_{\sigma \in S(n,k)} q^{\inv(\sigma)}t^{n-\rmin(\sigma)}=(1+t[k+1]_q-t)^{n-k} \prod_{i=1}^{k-1}(1+t[i+1]_q-t). \end{equation} Let $\varphi$ be the linear map from $\mathbb{Z}[S(n,k)]$ to $\mathbb{Z}[q,t]$ such that $\varphi(\sigma)=q^{\inv(\sigma)}t^{n-\rmin(\sigma)}$. By the definition of $\Psi_{i,k}$, we see that \begin{equation}\label{eq:varphipsi} \varphi(\Psi_{i,k})= \left\{ \begin{array}{ll} 1+qt+\cdots +q^i t, & \mbox{ $0<i<k$},\\[6pt] 1+qt+\cdots +q^k t, & \mbox{ $k \leq i \leq n-1$}. \end{array} \right. \end{equation} For $u \in S(i,k)$ with $u_i=i$ and $k+1 \leq i \leq n$, we deduce that \begin{align*} \varphi(u \cdot \Psi_{i-1,k}) &= \varphi(u)+ \varphi(u s_{i-1})+\varphi(u s_{i-1} s_{i-2}) + \cdots +\varphi(u s_{i-1} s_{i-2} \cdots s_{i-k})\\[3pt] &=\varphi(u)+qt\varphi(u)+q^2t\varphi(u)+ \cdots+q^kt\varphi(u)\\[3pt] &=\varphi(u) \varphi(\Psi_{i-1,k}). \end{align*} Similarly, we may show that $\varphi(u \cdot \Psi_{i-1,k})=\varphi(u) \varphi(\Psi_{i-1,k})$ for $1 < i \leq k$. Hence, we deduce that \begin{align*} \varphi(\Psi_{1,k}\Psi_{2,k} \cdots \Psi_{n-1,k}) & =\varphi(\sum_{w \in S(n,k),w_n=n} w \cdot\Psi_{n-1,k} ) \\[3pt] &=\sum_{w \in S(n,k),w_n=n} \varphi(w \cdot\Psi_{n-1,k} ) \\[3pt] &=\sum_{w \in S(n,k),w_n=n} \varphi(w) \varphi(\Psi_{n-1,k} )\\[3pt] &=\varphi(\sum_{w \in S(n,k),w_n=n} w) \varphi(\Psi_{n-1,k} )\\[3pt] &=\varphi(\Psi_{1,k}\Psi_{2,k} \cdots \Psi_{n-2,k})\varphi(\Psi_{n-1,k} ). \end{align*} Then, (\ref{eq:suffice}) follows from (\ref{eq:varphipsi}). \end{proof} By setting $t=1$ in (\ref{eq:orign}), we deduce the following corollary, which coincides with Corollary 2.2 in \cite{ChungClaesson}. \begin{corollary} For $n \geq 1$ and $0< k \leq n$, we have \begin{equation*} \sum_{\sigma \in A(n,k)} q^{\inv(\sigma)} =[k+1]_q ^{n-k} [k]_q! . \end{equation*} \end{corollary} Analogously, we proceed to give another factorization of $\sum_{\sigma \in S(n,k)} \sigma$. For $0 < k < n$, we define \begin{equation*} \Phi_{i,k}= \begin{cases} 1+t_{i,i+1} + t_{i-1,i+1} + \cdots + t_{1,i+1}, & \mbox{ $0<i < k$,}\\ 1+t_{i,i+1} + t_{i-1,i+1} + \cdots + t_{i+1-k,i+1}, & \mbox{ $k \leq i \leq n-1$.}\\ \end{cases} \end{equation*} The following lemma shows how each element in $S(n,k)$ can be generated in terms of a minimal-length product of reflections.
\begin{lemma} \label{fac_a2} For $n \geq 2$ and $0< k< n$, we have \begin{equation}\label{eq:PHI} \sum_{\sigma \in S(n,k)} \sigma=\Phi_{1,k}\Phi_{2,k}\cdots\Phi_{n-1,k}. \end{equation} \end{lemma} \begin{proof} We give a proof by induction. Clearly, $\sum_{\sigma \in S(2,k)} \sigma=\Phi_{1,k}$. We assume that equation (\ref{eq:PHI}) holds for $n=m-1$; we need to prove that it holds for $n=m$. Recall that given an element $\sigma=\sigma_1\cdots \sigma_{m-1} $ in $S(m-1,k)$, we may identify $\sigma$ with the permutation $ \sigma_1 \sigma_2\cdots \sigma_{m-1}m.$ We consider two cases. If $k > m-1$, then we have \begin{align*} \sigma \cdot \Phi_{m-1,k}&= \sigma+ \sigma t_{m-1,m}+ \cdots +\sigma t_{1,m} \\ &=\sigma_1\sigma_2\cdots \sigma_{m-1}m+\sigma_1\sigma_2\cdots m\sigma_{m-1}+ \cdots +m\sigma_2\cdots \sigma_{m-1}\sigma_{1}. \end{align*} If $ k< m$, then we have \begin{align*} \sigma \cdot \Phi_{m-1,k}&= \sigma+ \sigma t_{m-1,m}+ \cdots +\sigma t_{m-k,m} \\ &=\sigma_1\sigma_2\cdots \sigma_{m-1}m+\sigma_1\sigma_2\cdots m\sigma_{m-1}+ \cdots + \sigma_1\sigma_2\cdots\sigma_{m-k-1} m \sigma_{m-k+1}\cdots \sigma_{m-1}\sigma_{m-k}. \end{align*} Notice that in the second case the position of $m$ is restricted: it can only exchange positions with $\sigma_{m-k},\sigma_{m-k+1}, \ldots, \sigma_{m-1}$. Thus each element $\pi$ in the above sum satisfies $\max_i\{\pi_i -i \}\leq k$. Hence, we deduce that $$ \sum_{w \in S(m,k)} w=\sum_{\sigma \in S(m,k), \sigma(m)=m} \sigma \cdot \Phi_{m-1,k}=\Phi_{1,k}\Phi_{2,k}\cdots\Phi_{m-1,k}.$$ \end{proof} The factorization above leads to the following theorem. \begin{theorem} \label{tha2} For $n \geq 1$ and $0< k \leq n$, we have \begin{equation}\label{eq:orign2} \sum_{\sigma \in A(n,k)} q^{\DIS(\sigma)}t^{\cyc(\sigma)}=(t+[k+1]_q-1)^{n-k} \prod_{i=1}^{k}(t+[i]_q-1). \end{equation} \end{theorem} \begin{proof} Since $\sor(\sigma^{-1})=\DIS(\sigma)$ and $\cyc(\sigma)=\cyc(\sigma^{-1})$, it suffices to show that \begin{equation}\label{eq:suffice2} \sum_{\sigma \in S(n,k)} q^{\sor(\sigma)}t^{n-\cyc(\sigma)}=(1+t[k+1]_q-t)^{n-k} \prod_{i=1}^{k-1}(1+t[i+1]_q-t). \end{equation} Let $\phi$ be the linear map from $\mathbb{Z}[S(n,k)]$ to $\mathbb{Z}[q,t]$ such that $\phi(\sigma)=q^{\sor(\sigma)}t^{n-\cyc(\sigma)}$. From the construction, it is easily seen that \begin{equation*} \phi(\Phi_{i,k})= \begin{cases} 1+qt+\cdots +q^i t, & \mbox{ $0<i<k$,}\\ 1+qt+\cdots +q^k t, & \mbox{ $k \leq i \leq n-1$.}\\ \end{cases} \end{equation*} Let $\sigma=\sigma_1\sigma_2\cdots\sigma_m$ with $\sigma_m=m$, and let us investigate how the statistics $\sor$ and $\cyc$ behave when we exchange the positions of $\sigma_m$ and $\sigma_i$. It is easily seen that $\sor$ increases by $m-i$. If $i=m$, the statistic $\cyc$ remains the same; if $i<m$, $\cyc$ decreases by $1$. Assume that $u \in S(i,k)$ with $u_i=i$ and $k+1 \leq i \leq n$. We can verify that \begin{align*} \phi(u \cdot \Phi_{i-1,k}) & =\phi(u)+\phi(ut_{i-1,i})+ \cdots + \phi(ut_{i-k,i}) \\[3pt] &=\phi(u)+qt\phi(u)+\cdots+q^kt\phi(u)\\[3pt] &=\phi(u) \phi(\Phi_{i-1,k}). \end{align*} Similarly, we may prove that $ \phi(u \cdot \Phi_{i-1,k})=\phi(u) \phi(\Phi_{i-1,k})$ for $1<i \leq k$. Hence, we deduce that $\phi(\Phi_{1,k}\Phi_{2,k} \cdots \Phi_{n-1,k})=\phi(\Phi_{1,k})\phi(\Phi_{2,k}) \cdots \phi(\Phi_{n-1,k})$.
It follows that \begin{align*} \sum_{\sigma \in S(n,k)} q^{\sor(\sigma)}t^{n-\cyc(\sigma)} & =\phi(\sum_{\sigma \in S(n,k)} \sigma) \\[3pt] &=\phi(\Phi_{1,k}\Phi_{2,k} \cdots \Phi_{n-1,k}) \\[7pt] &=\phi(\Phi_{1,k})\phi(\Phi_{2,k}) \cdots \phi(\Phi_{n-1,k})\\[3pt] &=(1+t[k+1]_q-t)^{n-k} \prod_{i=1}^{k-1}(1+t[i+1]_q-t). \end{align*} This completes the proof. \end{proof} Combining Theorem \ref{tha1} and Theorem \ref{tha2}, we obtain a proof of Theorem \ref{thm:tha}. It should be mentioned that Chen, Gong and Guo \cite{Chen} found that a bijection on $S_n$ defined by Foata and Han \cite{Han} maps the pair of statistics $(\inv,\rmin)$ to $(\sor, \cyc)$. In fact, we claim that this bijection restricts to a bijection on $S(n,k)$ that sends $(\inv,\rmin)$ to $(\sor, \cyc)$, which leads to a bijective proof of the equidistribution of $(\inv,\rmin)$ and $(\sor, \cyc)$ over $S(n,k)$. We present Foata and Han's bijection first. Let $\mathrm{SE}_n$ be the set of all sequences $a=(a_1,a_2,\ldots,a_n )$ such that $1 \leq a_i \leq i$ for $i \in [n]$. The Lehmer code of a permutation $\sigma=\sigma_1\sigma_2\cdots \sigma_n \in S_n$ is defined to be the sequence $a=\mathrm{Leh}(\sigma)=(a_1,a_2,\ldots,a_n )$, where $$ a_i=|\{j\colon 1 \leq j \leq i, \sigma_j \leq \sigma_i\}|. $$ Define the A-code of a permutation $\sigma$ by $\text{A-code}(\sigma)=\mathrm{Leh}(\sigma^{-1})$. For $\sigma \in S_n$ and $i \in [n]$, let $k =k(i)$ be the smallest integer $k \geq 1$ such that $\sigma^{-k}(i) \leq i$. Then define $$ \text{B-code}(\sigma)=(b_1,b_2, \ldots, b_n)~~ \text{with} ~~ b_i:=\sigma ^{-k(i)}(i). $$ Let $\gamma=(\text{B-code})^{-1}\circ \text{A-code}$. Combining the results of Foata and Han \cite{Han} and Chen et al. \cite{Chen}, it is easy to see that \[ (\inv,\Rmil,\Lmap)\sigma=(\sor,\Cyc,\Lmap)\gamma(\sigma). \] We state the following proposition without proof. \begin{proposition}\label{prop:keeplmax} For $\sigma \in S_n$, \begin{itemize} \item [1.] $\Lmal(\sigma)=\Rmip(\sigma^{-1})=\Rmip(\mathrm{Leh}(\sigma^{-1}))=\Rmip(\text{A-code}(\sigma))$. \item [2.] $\Lmal(\sigma)=\Rmip(\text{B-code}(\sigma))$. \end{itemize} \end{proposition} Following Proposition \ref{prop:keeplmax}, we deduce that $\Lmal(\sigma)=\Lmal(\gamma(\sigma))$. Notice that the bijection $\gamma$ also preserves $\Lmap$. It follows that $\gamma$ preserves the statistic $\max_i \{\sigma_i-i\}$, which implies that $\gamma$ is a bijection on $S(n,k)$. \begin{proposition} The map $\gamma=(\text{B-code})^{-1}\circ \text{A-code}$ preserves the statistic $r:=\max_i \{\sigma_i-i\}$, that is, for any permutation $\sigma \in S_n$, we have \[r (\sigma)=r(\gamma(\sigma)).\] \end{proposition} \begin{theorem} The map $\gamma$ can be restricted to a bijection on $S(n,k)$ such that for $\sigma \in S(n,k)$ \[(\inv,\Rmil)\sigma=(\sor,\Cyc)\gamma(\sigma).\] \end{theorem} \begin{example} Given $\sigma=571492638$, then $\text{A-code}(\sigma)=123215285$ and $\text{B-code}^{-1}(123215285)=573291486$. Hence, we have $\gamma(571492638)=573291486$. It is easy to check that \begin{align*} \Lmal(571492638)&=\Lmal(573291486)=\{5,7,9\},\\[3pt] \Lmap(571492638)&=\Lmap(573291486)=\{1,2,5\},\\[3pt] r(571492638)&=r(573291486)=5. \end{align*} \end{example}
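The A-code and B-code are immediate to implement, and the example above can be verified directly. Below is a short sketch, reusing the function \texttt{invert} from the earlier listing.

\begin{verbatim}
def Leh(w):
    # Lehmer code: a_i = #{ j <= i : w_j <= w_i }
    return tuple(sum(w[j] <= w[i] for j in range(i + 1))
                 for i in range(len(w)))

def A_code(sigma):
    return Leh(invert(sigma))

def B_code(sigma):
    inv_s = invert(sigma)
    b = []
    for i in range(1, len(sigma) + 1):
        j = inv_s[i - 1]          # sigma^{-1}(i)
        while j > i:              # iterate sigma^{-k}(i) until <= i
            j = inv_s[j - 1]
        b.append(j)
    return tuple(b)

sigma = (5, 7, 1, 4, 9, 2, 6, 3, 8)    # 571492638
tau   = (5, 7, 3, 2, 9, 1, 4, 8, 6)    # 573291486
# gamma(sigma) = tau, since B-code(tau) equals A-code(sigma)
assert A_code(sigma) == B_code(tau) == (1, 2, 3, 2, 1, 5, 2, 8, 5)
\end{verbatim}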
\section{$A_{n,k}(213)$ and the distribution of $\des$ over $A_{n,k}(213)$}\label{sec:TH23} In this section, we show that $A_{n,k}(213)$ is enumerated by the ballot numbers. Further, we provide bijections between any two of the structures $A_{n,k}(213)$, $S_{n+1}^{k}(321)$ and $\Gamma_{n,k}$. Based on this, we deduce the distribution of $\des$ over $A_{n,k}(213)$, which can be seen as a refinement of both the Narayana numbers and the ballot numbers. To give a proof of Proposition \ref{prop:213ballot}, we first recall some properties of $C_{n,k}$. The first values of $C_{n,k}$ are given in Table~\ref{tab:ballot}. \begin{table}[htp] \begin{center} \begin{tabular}{ c| c c c c c c c c } \hline {n}$\backslash${k} & 0 & 1 & 2 & 3 & 4 & 5 &6 &7\\ \hline 0 & 1 & & & & & & & \\ 1 & 1 & 1 & & & & & & \\ 2 & 1 & 2 & 2 & & & & & \\ 3 & 1 & 3 & 5 & 5 & & & & \\ 4 & 1 & 4 & 9 & 14 & 14 & & & \\ 5 & 1 & 5 & 14 & 28 & 42 & 42 & & \\ 6 & 1 & 6 & 20 & 48 & 90 & 132 & 132 & \\ 7 & 1 & 7 & 27 & 75 & 165 & 297 & 429 &429 \\ \hline \end{tabular} \end{center} \vspace{0.5cm} \caption{The first values of the ballot numbers $C_{n,k}$.}\label{tab:ballot} \end{table} It can be checked that $C_{n,k}$ satisfies the following recurrences: \begin{eqnarray} C_{n,k} &=& C_{n-1,k}+C_{n,k-1}, \label{eq:re1}\\[4pt] C_{n,k}&=& C_{n-1,k}+C_{n-1,k-1}+\cdots + C_{n-1,1}+C_{n-1,0}. \label{eq:re2} \end{eqnarray} \begin{lemma}\label{lem:lastmaxdrop} For $\pi \in S_n(213)$, we have $\maxdrop(\pi)=n-\pi_n$. \end{lemma} \begin{proof} Assume that $\pi_n=i$. Since $\pi$ is $213$-avoiding, $1,2, \ldots, i$ form an increasing subsequence of $\pi$. It follows that $\pi^{-1}(j)\leq n-i+j$ for $1 \leq j \leq i-1$, namely, $\pi^{-1}(j)-j \leq n-i$. Moreover, it is easy to check that $ \pi^{-1}(j)-j < n-i$ for $ i<j \leq n$, as desired. \end{proof} Now we are ready to give a proof of Proposition \ref{prop:213ballot}. \begin{proof}[Proof of Proposition~\ref{prop:213ballot}] Let $A^i_{n}(213)=\{ \pi \in S_{n}(213) \colon \pi_n=n-i\}$. By Lemma \ref{lem:lastmaxdrop}, we have \[A_{n,k}(213)= \bigcup_{i=0}^k A^i_{n}(213). \] We aim to show that $\# A^i_{n}(213) = \# A_{n-1,i}(213)$. Based on Lemma \ref{lem:lastmaxdrop}, this can be verified through a simple bijection from $A^i_{n}(213)$ to $A_{n-1,i}(213)$: delete the last element and standardise the resulting word. Hence, $\#A_{n,k}(213)= \sum_{i=0}^k \# A_{n-1,i}(213)$. In view of equation (\ref{eq:re2}), we deduce that $\#A_{n,k}(213)= C_{n,k}$. \end{proof} \begin{corollary} The sets $A_{n,k}(132)$, $A_{n,k}(2\underline{13})$ and $A_{n,k}(\underline{13}2)$ are all enumerated by $C_{n,k}$. \end{corollary} \begin{proof} The reverse-complement map $\pi \mapsto \pi^{rc}$ sends $213$-avoiding permutations to $132$-avoiding ones and satisfies $\maxdrop(\pi^{rc})=\max_i(\pi_i-i)$; on the other hand, the inverse map preserves $213$-avoidance (as $213^{-1}=213$) and satisfies $\maxdrop(\pi^{-1})=\max_i(\pi_i-i)$. Composing the two maps yields $\#A_{n,k}(213)=\#A_{n,k}(132)$. Notice that $\pi$ is $213$-avoiding if and only if $\pi$ is $2\underline{13}$-avoiding, see Lemma 2.8 in \cite{Fu}. Hence, $\#A_{n,k}(213)=\#A_{n,k}(2\underline{13})$. Similarly, we have $\#A_{n,k}(132)=\#A_{n,k}(\underline{13}2)$, as desired. \end{proof}
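Both recurrences and Proposition~\ref{prop:213ballot} are easy to confirm by brute force for small parameters; the sketch below reuses the helper \texttt{A} defined in the introduction.

\begin{verbatim}
from math import comb

def C(n, k):
    # ballot number C_{n,k} = (n-k+1)/(n+1) * binom(n+k, k);
    # the formula conveniently returns 0 when k = n + 1
    return (n - k + 1) * comb(n + k, k) // (n + 1)

def avoids_213(p):
    # no indices a < b < c with p_b < p_a < p_c
    m = len(p)
    return not any(p[b] < p[a] < p[c]
                   for a in range(m)
                   for b in range(a + 1, m)
                   for c in range(b + 1, m))

def A_nk_213(n, k):
    return [p for p in A(n, k) if avoids_213(p)]

for n in range(1, 6):
    for k in range(1, n + 1):
        assert C(n, k) == C(n - 1, k) + C(n, k - 1)   # first recurrence
        assert len(A_nk_213(n, k)) == C(n, k)         # the proposition
\end{verbatim}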
As mentioned by Lin and Kim \cite{LinKim}, $S_{n+1}^k(321)$ is enumerated by $C_{n,k}$. We will deduce a recurrence for $\#S_{n}^k(321)$, which proves this fact again. \begin{proposition}\label{prop:321} For $0 \leq k \leq n$, we have $\#S_{n+1}^k(321)=C_{n,k}$. \end{proposition} \begin{proof} We divide $S_n^k(321)$ into two parts, namely, \[S_n^k(321)=\{\pi \in S_n^k(321) \colon k \geq \pi^{-1}(k) \} \cup \{\pi \in S_n^k(321) \colon k < \pi^{-1}(k) \}.\] It suffices to show that \begin{equation}\label{eq:321a} \#\{\pi \in S_n^k(321) \colon k \geq \pi^{-1}(k) \}=C_{n-1,k-1}, \end{equation} and \begin{equation}\label{eq:321b} \#\{\pi \in S_n^k(321) \colon k < \pi^{-1}(k) \}=C_{n-2,k}. \end{equation} To show (\ref{eq:321a}), it is enough to construct a bijection from $\{\pi \in S_n^k(321) \colon k \geq \pi^{-1}(k) \}$ to $S_{n}^{k-1}(321)$ by exchanging $k$ and $k+1$ in a permutation. To show (\ref{eq:321b}), we need to construct a bijection from $\{\pi \in S_n^k(321) \colon k < \pi^{-1}(k) \}$ to $S_{n-1}^k(321)$. Given $\pi \in S_n^k(321)$ with $k < \pi^{-1}(k)$, it is easy to see that $\pi_{n-1}=k$ or $\pi_{n-1}=n$. If $\pi_{n-1}=n$, then delete $n$. If $\pi_{n-1}=k$, then delete $k$, change $k+2$ to $k$, and change $i$ to $i-1$ for $k+3 \leq i \leq n$. It is easy to check that this map is a bijection; we omit the details. In view of (\ref{eq:re1}), we complete the proof. \end{proof} In the following, we construct bijections from $A_{n,k}(213)$ to $\Gamma_{n,k}$ and from $S^k_{n+1}(321)$ to $\Gamma_{n,k}$ to give a proof of Theorem \ref{thm:equidis}. \begin{proof}[Proof of Theorem \ref{thm:equidis}] Firstly, we construct a bijection $\alpha$ from $A_{n,k}(213)$ to $\Gamma_{n,k}$. Given $\pi=\pi_1 \pi_2 \cdots \pi_n \in A_{n,k}(213)$, let the right-to-left maximum letters of $\pi$ be $\pi_{i_1}, \pi_{i_2}, \ldots,\pi_{i_r}$ with $i_1 <i_2 < \cdots <i_r$ and $\pi_{i_1} >\pi_{i_2}> \cdots > \pi_{i_r}$. Create a non-negative $n,k$-arrangement $a=\alpha(\pi)$ by the following steps (a short computational sketch is given after the example below): \begin{itemize} \item First, write $i_1$ $1$'s. \item Second, for $j$ from $1$ to $r-1$, adjoin $(\pi_{i_{j}}-\pi_{i_{j+1}})$ $-1$'s followed by $(i_{j+1}-i_{j})$ $1$'s. \item Last, write $(\pi_n-n+k)$ $-1$'s. \end{itemize} It is easy to check that $a$ has $i_r$ $1$'s and $(\pi_{i_1}-n+k)$ $-1$'s. Since $i_r=n$ and $\pi_{i_1}=n$, it follows that $a$ is an $n,k$-arrangement. Notice that, for $1 \leq j \leq r-1$, the elements larger than $\pi_{i_{j+1}}$ lie at position $i_j$ or to its left. Hence, $i_j \geq n- \pi_{i_{j+1}}$, which implies that $a$ is non-negative. To show that $\alpha$ is a bijection, we construct its inverse. Let $a \in \Gamma_{n,k}$ with $\vpk(a)=r-1$ and $r \geq 1$. Assume that $a$ consists of $p_i$ $1$'s followed by $n_i$ $-1$'s, alternately for $1 \leq i \leq r$; then the corresponding $\pi$ in $A_{n,k}(213)$ can be constructed as follows. The positions of the right-to-left maxima of $\pi$ are $n-p_r- p_{r-1}-\cdots -p_2$, $n-p_r-p_{r-1}-\cdots-p_3$, $\ldots$, $n-p_r$, $n$. The values of the right-to-left maxima of $\pi$ are $n-k+n_{r}+n_{r-1}+\cdots +n_1$, $n-k+n_{r}+n_{r-1}+\cdots +n_2$, $\ldots$, $n-k+n_r$. Then we pick out the largest $p_r-1$ remaining elements that are smaller than $n-k+n_r$ and fill them into the positions between $n-p_r$ and $n$ in increasing order. Similarly, we fill the positions between $n-p_r-\cdots-p_{j+1}-p_j$ and $n-p_r-\cdots-p_{j+1}$ in the same way for $1<j<r$. From the construction of $\pi$, it is easy to check that $\pi$ is $213$-avoiding. By Lemma \ref{lem:lastmaxdrop}, we obtain that $\maxdrop(\pi) = n - \pi_n = n-(n-k+n_r)=k-n_r\leq k$. Hence $\pi$ is a permutation in $A_{n,k}(213)$ and $\alpha$ is a bijection between $A_{n,k}(213)$ and $\Gamma_{n,k}$. Moreover, it is easy to see from the construction of the bijection $\alpha$ that $\des(\pi)=\rmax(\pi)-1=\vpk(a)$. Now, we proceed to give a bijection $\beta$ from $S_{n+1}^k(321)$ to $\Gamma_{n,k}$. Notice that a permutation is $321$-avoiding if and only if both the subsequence formed by its weak excedance values and the one formed by the remaining non-weak excedance values are increasing.
Given $\pi=\pi_1 \pi_2 \cdots \pi_{n+1} \in S_{n+1}^k(321)$, let the non-weak excedance values of $\pi$ that occur before position $n+1$ be $\pi_{i_1}, \pi_{i_2}, \ldots,\pi_{i_r}$ with $i_1 <i_2 < \cdots <i_r$ and $\pi_{i_1} <\pi_{i_2}< \cdots < \pi_{i_r}$. Create a non-negative $n,k$-arrangement $a=\beta(\pi)$ by the following steps: \begin{itemize} \item First, write $(i_1-1)$ $1$'s and $\pi_{i_1}$ $-1$'s. \item Second, for $j$ from $1$ to $r-1$, adjoin $(i_{j+1}-i_{j})$ $1$'s followed by $(\pi_{i_{j+1}}-\pi_{i_{j}})$ $-1$'s. \item Last, write $(n+1-i_{r})$ $1$'s and $(k-\pi_{i_r})$ $-1$'s. \end{itemize} Since $i_j-1 \geq \pi_{i_j}$ and $n \geq k$, it is easy to check that $a$ is a non-negative $n,k$-arrangement. To show that $\beta$ is a bijection, we construct its inverse. Let $a \in \Gamma_{n,k}$ with $\vpk(a)=r-1$, consisting of $p_i$ $1$'s followed by $n_i$ $-1$'s, alternately for $1 \leq i \leq r$. We construct the corresponding $\pi$ in $S^k_{n+1}(321)$ as follows. The elements $k-n_{r}- \cdots -n_3-n_2 $, $k-n_{r}- \cdots -n_3 $, $\ldots$, $k-n_{r}$ are placed at the positions $p_1+1$, $p_1+p_2+1$, $\ldots$, $p_1+p_2+\cdots +p_{r-1}+1$. The element $k+1$ is placed at position $n+1$. The remaining elements are placed in the remaining places in increasing order. By the definition of $\beta$, we may easily deduce that $\vpk(a)=\vnw(\pi)$. The proof is completed. \end{proof} \begin{example} Let $n=8$, $k=7$ and $\pi=83475612$; the right-to-left maxima of $\pi$ are $\pi_1=8, \pi_4=7, \pi_6=6, \pi_8=2$. Then, we have \[a=\alpha(\pi)=1,-1,1,1,1,-1,1,1,-1,-1,-1,-1,1,1,-1.\] Conversely, for $a=1,-1,1,1,1,-1,1,1,-1,-1,-1,-1,1,1,-1$, we have $n=8$, $k=7$, $r=\vpk(a)+1=4$, $p_1=1$, $p_2=3$, $p_3=2$, $p_4=2$, $n_1=1$, $n_2=1$, $n_3=4$ and $n_4=1$. Then $8,7,6,2$ are placed at positions $1,4,6,8$ and $\pi=\alpha^{-1}(a)=83475612$. Let $n=8$, $k=7$ and $\sigma=314527698$. The non-weak excedance values of $\sigma$ that occur before position $9$ are $\sigma_2=1, \sigma_5=2, \sigma_7=6$. Then, we have \[b=\beta(\sigma)=1,-1,1,1,1,-1,1,1,-1,-1,-1,-1,1,1,-1.\] Conversely, for $b=1,-1,1,1,1,-1,1,1,-1,-1,-1,-1,1,1,-1$, we have $n=8$, $k=7$, $r=\vpk(b)+1=4$, $p_1=1$, $p_2=3$, $p_3=2$, $p_4=2$, $n_1=1$, $n_2=1$, $n_3=4$ and $n_4=1$. Then $1,2,6$ are placed at positions $2,5,7$ and $\sigma=\beta^{-1}(b)=314527698$. \end{example} For completeness, we present the bijection from $A_{n,k}(213)$ to $S_{n+1}^k(321)$ as follows. Given $\pi=\pi_1 \pi_2 \cdots \pi_n \in A_{n,k}(213)$, let the right-to-left maximum letters of $\pi$ be $\pi_{i_1}, \pi_{i_2}, \ldots,\pi_{i_r}$ with $i_1 <i_2 < \cdots <i_r$ and $\pi_{i_1} >\pi_{i_2}> \cdots > \pi_{i_r}$. Construct a permutation $\sigma \in S^k_{n+1}(321)$ by the following steps: \begin{itemize} \item The positions of the non-weak excedances of $\sigma$ before the last element are $i_1+1$, $i_2+1$, $\ldots$, $i_{r-1}+1$. \item The values of the non-weak excedances of $\sigma$ before the last element are $n-\pi_{i_2}$, $n-\pi_{i_3}$, $\ldots$, $n-\pi_{i_r}$. \item Place $k+1$ in the $(n+1)$-th place and then put the remaining elements into the remaining places in increasing order. \end{itemize} It is easy to check that this is a bijection; its inverse is omitted. As an example, $83475612$ is mapped to $314527698$. In the following, we will give the distribution of $\des$ over $A_{n,k}(213)$. By Theorem \ref{thm:equidis}, it suffices to deduce the distribution of $\vpk$ over $\Gamma_{n,k}$.
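The map $\alpha$ and the statistic $\vpk$ admit direct implementations. The following sketch (reusing \texttt{des} from the introduction) checks the worked example above.

\begin{verbatim}
def alpha(pi, k):
    # the bijection A_{n,k}(213) -> Gamma_{n,k} described above
    n = len(pi)
    pos, best = [], 0
    for i in range(n - 1, -1, -1):    # right-to-left maxima
        if pi[i] > best:
            best = pi[i]
            pos.append(i + 1)
    pos.reverse()                     # positions i_1 < ... < i_r
    a = [1] * pos[0]
    for j in range(len(pos) - 1):
        a += [-1] * (pi[pos[j] - 1] - pi[pos[j + 1] - 1])
        a += [1] * (pos[j + 1] - pos[j])
    a += [-1] * (pi[-1] - n + k)
    return a

def vpk(a):
    last_one = max(i for i, x in enumerate(a) if x == 1)
    return sum(1 for i in range(len(a) - 1)
               if a[i] == 1 and a[i + 1] == -1 and i != last_one)

pi = (8, 3, 4, 7, 5, 6, 1, 2)         # 83475612 with n = 8, k = 7
a = alpha(pi, 7)
assert a == [1, -1, 1, 1, 1, -1, 1, 1, -1, -1, -1, -1, 1, 1, -1]
assert vpk(a) == des(pi) == 3
\end{verbatim}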
\begin{proof}[Proof of Theorem \ref{thm:generating}] Let \[F(p,q,z)=\sum_{a \in \Gamma_{n,k},\, 0 \leq k \leq n } p^{\vpk(a)} q^{n-k}z^n,\] \[F^u(p,q,z)=\sum_{a \in \Gamma^u_{n,k},\, 0 \leq k \leq n } p^{\vpk(a)} q^{n-k}z^n,\] \[F^d(p,q,z)=\sum_{a \in \Gamma^d_{n,k},\, 0 \leq k \leq n } p^{\vpk(a)} q^{n-k}z^n,\] where $\Gamma^u_{n,k}$ ($\Gamma^d_{n,k}$) is the set of arrangements $a \in \Gamma_{n,k}$ with $a_{n+k}=1$ ($a_{n+k}=-1$). Write \begin{equation*} F(p,q,z)=\sum_{k \geq 0} F_k(p,z)q^k = \sum_{n \geq 0} f_n(p,q)z^n, \end{equation*} \begin{equation*} F^u(p,q,z)=\sum_{k \geq 0} F^u_k(p,z)q^k = \sum_{n \geq 0} f_n^u(p,q)z^n, \end{equation*} \begin{equation*} F^d(p,q,z)=\sum_{k \geq 0} F^d_k(p,z)q^k = \sum_{n \geq 0} f^d_n(p,q)z^n. \end{equation*} Then we have $f^u_1(p,q)=q$. By considering the two cases for the last element of a non-negative $n,k$-arrangement, being $1$ or $-1$, we deduce the following two recurrences. For $n \geq 1$, we have \begin{equation}\label{eq:furecurrence} f^u_{n+1}(p,q) =q f^u_{n}(p,q) + pq f^d_{n}(p,q). \end{equation} For $n \geq 0$, we have \begin{equation}\label{eq:fdrecurrence} f^d_{n+1}(p,q) =q^{-1} f^u_{n+1}(p,q) + \{q^{ \geq 0}\}(q^{-1} f^d_{n+1}(p,q)), \end{equation} where $\{q^{ \geq 0}\}$ is the linear operator extracting all terms in the power series representation containing non-negative powers of $q$. Summing (\ref{eq:furecurrence}) over $n \geq 1$ and summing (\ref{eq:fdrecurrence}) over $n \geq 0$, we deduce that \begin{align*} \sum_{n \geq 1} f^u_{n+1}(p,q) z^{n+1}&=\sum_{n \geq 1} (q f^u_{n}(p,q) + pq f^d_{n}(p,q)) z^{n+1}, \\[4pt] \sum_{n \geq 0} f^d_{n+1}(p,q) z^{n+1}&=\sum_{n \geq 0} (q^{-1} f^u_{n+1}(p,q) + \{q^{ \geq 0}\}(q^{-1} f^d_{n+1}(p,q))) z^{n+1}. \end{align*} We deduce that \begin{align} &F^u(p,q,z)-qz=qz F^u(p,q,z)+pqzF^d(p,q,z) , \label{eq:genre1} \\[5pt] &F^d(p,q,z)=q^{-1}F^u(p,q,z)+q^{-1}F^d(p,q,z)-q^{-1}F_0(p,z), \label{eq:genre2} \end{align} where \[F_0(p,z)=\sum_{a \in \Gamma_{n,n},\, n \geq 1} p^{\vpk(a)} z^n.\] Recall that the Narayana number $N_{n,m}$ is the number of Dyck paths of semilength $n$ having $m$ peaks, and that \begin{equation*} \sum_{1 \leq m \leq n} N_{n,m} x^n y^m= \frac{1-x(1+y)-\sqrt{(1+x(1-y))^2-4x}}{2x}, \end{equation*} see \cite{Callan} for reference. Hence \begin{align}\label{eq:F0} F_0(p,z)&=\frac{\sum_{1 \leq m \leq n} N_{n,m} z^n p^m}{p} \notag\\[4pt] &=\frac{1-z(1+p)-\sqrt{(1+z(1-p))^2-4z}}{2zp}. \end{align} By solving (\ref{eq:genre1}) and (\ref{eq:genre2}), we derive that \begin{align*} F^u(p,q,z) & = \frac{(1-q^{-1})qz-pzF_0(p,z)}{1-qz-pz-(1-qz)q^{-1}}, \\[6pt] F^d(p,q,z) & = \frac{z-q^{-1}(1-qz)F_0(p,z)}{1-qz-pz-(1-qz)q^{-1}}. \end{align*} Then \begin{align*} F(p,q,z) & =1 + F^u(p,q,z)+ F^d(p,q,z) \\[5pt] & =\frac{1-pz-(1-qz)q^{-1}+(z-pz-q^{-1})F_0(p,z)}{1-qz-pz-(1-qz)q^{-1}}. \end{align*} Hence, by changing $q$ to $q^{-1}$ and $z$ to $zq$ above, we obtain (\ref{eq:main}). \end{proof}
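The closed form (\ref{eq:main}) can also be checked against a brute-force enumeration of the first few coefficients. The sketch below uses \texttt{sympy} together with the helpers \texttt{des} and \texttt{A\_nk\_213} defined earlier.

\begin{verbatim}
import sympy as sp

p, q, z = sp.symbols('p q z')
F0t = (1 - z*q*(1 + p) - sp.sqrt((1 + z*q*(1 - p))**2 - 4*z*q)) \
      / (2*p*q*z)
G = (1 - p*q*z - (1 - z)*q + (z*q - p*q*z - q)*F0t) \
    / (1 - z - p*q*z - (1 - z)*q)

Gser = sp.expand(sp.series(G, z, 0, 5).removeO())
for n in range(1, 5):
    brute = sum(p**des(pi) * q**k
                for k in range(n + 1)
                for pi in A_nk_213(n, k))
    assert sp.expand(Gser.coeff(z, n) - brute) == 0
\end{verbatim}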
\section*{Acknowledgement} The first author was supported by the National Natural Science Foundation of China (No.~11701420) and the Natural Science Foundation Project of Tianjin Municipal Education Committee (No.~2017KJ243, No.~2018KJ193). The second author was supported by the National Natural Science Foundation of China (No.~11801378, No.~12071440) and the Zhejiang Provincial Natural Science Foundation of China (No.~LQ17A010004).
\section{Introduction} \label{sec:Introduction} The basis of modern galaxy formation theory was laid down by \cite{WhiteRees:1978}, who proposed that galaxies form and evolve inside dark haloes. The field of galaxy formation has progressed enormously since then, but this basic premise still holds. A natural corollary is that the properties of galaxies should be intimately related to the properties of their host haloes. This fundamental galaxy-halo connection is at the heart of several of the most popular models currently used to interpret galaxy clustering measurements. One of these popular techniques is the ``halo occupation distribution'' (HOD), which describes the abundance of galaxies inside a given halo as a parametric function of the host halo mass. The HOD dates back to the late 1990s and early 2000s (e.g., \citealt{Jing:1998a,Benson:2000,Peacock:2000,Seljak:2000,Scoccimarro:2001,Berlind:2002,Berlind:2003,Cooray:2002,Yang:2003}), and even today it is routinely employed to interpret observations of the correlation function of galaxies, infer the typical halo masses of observed galaxies, build mock catalogues, and even constrain cosmological parameters (e.g. \citealt{AEMULUS3}). Another popular method is the so-called subhalo abundance matching (SHAM, e.g. \citealt{Vale:2006,Conroy:2006}), where, essentially, the most massive/luminous galaxies are assumed to be hosted by the most massive subhaloes. SHAM variants have been shown to be remarkably accurate in reproducing the clustering of galaxies in observations \citep{Reddick:2013} and in hydrodynamical simulations \citep{ChavesMontero:2016}. These models have recently evolved into more sophisticated empirical models which attempt to interpret a wide range of galaxy properties \citep{Moster:2018,Behroozi:2019}. In the context of increasingly accurate galaxy surveys and clustering measurements, one of the main limitations of these models is the amount of ``galaxy assembly bias'' they predict. Galaxy assembly bias is the excess (or deficit) of large-scale clustering of a galaxy sample caused by the details of how the galaxy-halo connection depends on halo assembly history and on properties other than mass. This concept must not be confused with the differences in the clustering of galaxies with the same halo mass but a different secondary halo property, which is technically halo assembly bias traced by galaxies and not galaxy assembly bias (see \citealt{Croton:2007} for more details). Galaxy assembly bias is the consequence of two effects: halo assembly bias and occupancy variation. Halo assembly bias (e.g. \citealt{Gao:2005}) is the difference in halo clustering among haloes of the same mass but a different secondary property (e.g. formation time, concentration, spin, etc.). Occupancy variation \citep[see][]{Zehavi:2018,Artale:2018} is the dependence of the galaxy population on halo properties other than mass. Note that neither of these effects on its own would cause galaxy assembly bias.
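To make the HOD approach concrete, a widely used parametrisation (similar in form to that of Zheng et al. 2005; the parameter values below are purely illustrative) specifies the mean number of central and satellite galaxies as a function of halo mass:

\begin{verbatim}
import numpy as np
from scipy.special import erf

def mean_ncen(logM, logMmin=12.0, sigma_logM=0.25):
    # mean central occupation: a smooth step in log halo mass
    return 0.5 * (1.0 + erf((logM - logMmin) / sigma_logM))

def mean_nsat(logM, logM0=12.2, logM1=13.3, alpha=1.0):
    # mean satellite occupation: a truncated power law
    M, M0, M1 = 10.0**logM, 10.0**logM0, 10.0**logM1
    return (np.clip(M - M0, 0.0, None) / M1)**alpha
\end{verbatim}

Since these occupations depend on halo mass only, a mock built this way carries no galaxy assembly bias by construction.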
The degree of galaxy assembly bias predicted by realistic galaxy formation models has been studied by various authors. \cite{C19} found that the level of galaxy assembly bias increases with number density and decreases with redshift for both stellar mass- and SFR-selected samples. These authors also found that the amplitude is always higher for stellar mass-selected samples (with galaxy assembly bias signals of up to $\sim 15\%$ and $\sim 3\%$ for the stellar mass- and SFR-selected samples, respectively) and that it can become negative in the most extreme cases (e.g. $\sim 10\%$ negative for a sample with $n = 0.001\, h^{3}{\rm Mpc}^{-3}$ at $z=3$). \cite{ChavesMontero:2016} detected a galaxy assembly bias of 15\% in the EAGLE hydrodynamical simulation at $z=0$ for various stellar mass-selected samples. More recently, \cite{Montero-Dorta:2020} showed that galaxies in the Illustris-TNG simulation cluster differently depending on the properties of their host haloes. Since in standard HODs the galaxy population of a halo depends only on its halo mass, their predictions carry no galaxy assembly bias. On the contrary, the abundance of SHAM galaxies does depend on the halo assembly (e.g. recently-formed haloes host more subhaloes), and SHAM predicts that about 10\% of the galaxy clustering is caused by galaxy assembly bias. Note, however, that in general the amount of assembly bias is expected to be connected with environmental processes (e.g. \citealt{Dalal:2008,Ramakrishnan:2019,Mansfield:2020}), which might or might not be captured accurately in current hydrodynamical simulations and galaxy formation models. This suggests that if a given observed galaxy population has a different assembly bias to that in the model, the respective inferences about cosmology or galaxy formation will be biased. This problem has motivated several attempts to incorporate assembly bias in HODs \citep{Paranjape:2015}; they have had, however, limited success. One of the most common ways to add assembly bias to empirical techniques is the decorated HOD approach \citep{Hearin:2016}. In this approach, the halo occupation is split into two parts (per halo mass bin) depending on a secondary halo property that carries halo assembly bias (e.g. concentration). The galaxy occupation of these sub-populations is then varied to imprint assembly bias on the mock sample, while keeping the same mean galaxy occupation (see the sketch below). The main issue with this method is the selection of the secondary property. The most commonly used property is halo concentration (e.g. \citealt{Wang:2019,Zentner:2019,Vakili:2019}), which, although it carries some halo assembly bias (e.g. \citealt{Gao:2005}), is responsible for only a small part of the total galaxy assembly bias signal; thus, even an unrealistically strong relation between halo occupation and concentration is not enough by itself to reproduce the full galaxy assembly bias signal of a galaxy sample (e.g. \citealt{Croton:2007, Hadzhiyska:2020,Xu:2020}). Only a few other works have tried other halo properties, like environment (e.g. \citealt{McEwen:2016,Xu:2020}). To our knowledge, the only attempt at modelling assembly bias in SHAM is the work of \cite{Lehmann:2017}, also using concentration as a way to add galaxy assembly bias. The model of Lehmann et al. also allowed a change in the satellite fraction when varying the level of assembly bias, which changes not only the assembly bias of the sample but also its total bias.
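Schematically, the decorated-HOD split can be illustrated as follows; this is a minimal sketch of the idea only, not the implementation of any of the cited works, and the amplitude parameter \texttt{A\_cen} and the median-concentration split are our assumptions.

\begin{verbatim}
import numpy as np

def decorated_ncen(logM, conc, median_conc, A_cen=0.5):
    # perturb the central occupation by +/- delta for haloes above/
    # below the median concentration at fixed mass; the mean over
    # the two sub-populations is preserved by construction
    base = mean_ncen(logM)                        # sketch above
    delta = A_cen * np.minimum(base, 1.0 - base)  # keep 0 <= <Ncen> <= 1
    return np.where(conc > median_conc, base + delta, base - delta)
\end{verbatim}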
In the first part of this paper, we aim to systematically quantify how similar or different the predictions for the galaxy assembly bias signal are across different state-of-the-art galaxy formation models. For that, we employ a semi-analytical model, a hydrodynamical simulation, and a SHAM mock. Specifically, we will use the Illustris TNG300 simulation \citep{TNGa}, the SAGE semi-analytical model \citep{Croton:2016}, and SHAM mocks using $V_{\rm peak}$ as the main subhalo property. All these models were run over the same simulated cosmic volume, which facilitates their comparison. Here we will find that the galaxy assembly bias signal is not universal, and that different models predict very different amplitudes, redshift evolution, and dependences on the number density of the sample. Motivated by this finding, in the second part of this paper we propose a flexible formalism to include galaxy assembly bias in SHAM. In short, this method adds a tuneable degree of correlation between the large-scale bias of individual subhaloes and the scatter in stellar mass at fixed $V_{\rm peak}$ (see the sketch below). We will demonstrate that this approach is flexible enough to mimic the galaxy assembly bias as measured in SAGE as well as in the TNG300 catalogues, at all redshifts and number densities. This improves over previous works (e.g. \citealt{Lehmann:2017}) by forcing a constant abundance of satellite galaxies. This means that the galaxy assembly bias introduced by our method should not contain other sources of bias, making it easier to interpret the results of our work. The model presented here can be easily extended by using any secondary halo property (e.g. environment, halo age, concentration) instead of the ``object-by-object'' bias \citep{Paranjape:2018}, which is the property we use to give assembly bias to our galaxies. We anticipate that being able to create SHAM samples with any amount of assembly bias will ultimately result in a more accurate interpretation of observational data.
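One possible realisation of such a tuneable correlation is sketched below, purely to fix ideas; the actual method is described in \S~\ref{sec:SHAM_AB}, and the names and Gaussian mixing scheme here are illustrative assumptions.

\begin{verbatim}
import numpy as np

def correlated_scatter(bias, rho, sigma=0.125, rng=None):
    # draw the log-stellar-mass scatter (at fixed Vpeak) with a
    # correlation coefficient rho with each subhalo's standardised
    # individual large-scale bias
    rng = rng or np.random.default_rng()
    b = (bias - bias.mean()) / bias.std()
    noise = rng.standard_normal(bias.size)
    return sigma * (rho * b + np.sqrt(1.0 - rho**2) * noise)
\end{verbatim}

With $\rho=0$ this reduces to the standard uncorrelated scatter, while $\rho\neq0$ preferentially scatters high-bias (or low-bias) subhaloes above the mean relation, imprinting assembly bias at fixed $V_{\rm peak}$.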
The outline of the paper is the following: in \S~\ref{sec:models} we describe the three different methods we use to model galaxies. In \S~\ref{sec:GAB} we quantify the magnitude and redshift evolution of the galaxy assembly bias signal in these models; we also explore the causes of the galaxy assembly bias in our samples. In \S~\ref{sec:SHAM_AB} we present a new extension to the SHAM algorithm that enables flexible modelling of the galaxy assembly bias. We close in \S~\ref{sec:Conclusions} with our conclusions and a summary of our main results. \section{Simulations and empirical models} \label{sec:models} In this section we first describe the three different galaxy formation models we will analyse: a state-of-the-art hydrodynamical simulation, a semi-analytic galaxy formation model, and an empirical model. We then describe the galaxy catalogues we will use throughout this paper. \subsection{The TNG300} \label{sec:TNG300} The hydrodynamical simulation we will consider is ``The Next Generation'' Illustris simulation suite (IllustrisTNG, \citealt{TNGa, TNGb, TNGc, TNGd, TNGe}). Illustris-TNG is a suite of magneto-hydrodynamic cosmological simulations, successor of the original Illustris simulation \citep{Illustrisa,Illustrisb,Illustrisc,Illustrisd}. The simulations were run using AREPO \citep{AREPO}, adopting cosmological parameters consistent with recent analyses \citep{Planck2015}: specifically, $\Omega_{\rm m} = 0.3089$, $\Omega_{\rm b} = 0.0486$, $\sigma_8 = 0.8159$, $n_s = 0.9667$ and $h = 0.6774$. These simulations feature a series of improvements upon their predecessor, the Illustris simulation, including: i) an updated kinetic AGN feedback model for the low accretion state \citep{Weinberger:2017}; ii) an improved parameterisation of galactic winds \citep{Pillepich:2018}; and iii) the inclusion of magnetic fields based on ideal magneto-hydrodynamics \citep{Pakmor:2011,Pakmor:2013,Pakmor:2014}. In this paper, we will use the Illustris-TNG300 (TNG300 hereafter), the largest high-resolution hydrodynamical simulation currently available. The simulated volume is a periodic box of $205\, h^{-1}{\rm Mpc}$ ($302.5\,{\rm Mpc}\approx300\,{\rm Mpc}$) on a side. The number of dark matter particles and gas cells is $2500^3$ each, implying a mass resolution of $7.44\times10^6\, h^{-1}{\rm M_{ \odot}}$ for baryons and $3.98\times 10^7\, h^{-1}{\rm M_{ \odot}}$ for dark matter. We will analyse the $z=0$, $z=0.5$, and $z=1$ outputs, publicly available at the TNG project webpage\footnote{\url{https://www.tng-project.org/}}. Note that we will consider catalogues of galaxies with stellar masses above $\sim 5\times10^{9}\, h^{-1}{\rm M_{ \odot}}$, which means galaxies resolved with more than $1000$ resolution elements. We define the stellar mass of a galaxy as the mass of all star particles bound to the respective main subhalo; particles bound to substructures of a subhalo are not included. The SFR is defined as the sum of the individual instantaneous star formation rates of all gas cells in the subhalo. In addition to the TNG300, we will also employ its dark-matter-only counterpart, TNG300-Dark, as the basis for our SAM and SHAM models. This gravity-only simulation was carried out with the same initial white-noise field and with the same number of dark matter particles ($2500^3$) as the TNG300, which implies a particle mass of $4.73\times10^7\, h^{-1}{\rm M_{ \odot}}$. In some parts of our analysis, we will also use the TNG300-2-Dark and TNG300-3-Dark, lower-resolution versions of the TNG300-Dark run with the same initial conditions but with only $1250^3$ and $625^3$ particles of mass $3.78\times 10^8\, h^{-1}{\rm M_{\odot}}$ and $3.03\times 10^9\, h^{-1}{\rm M_{\odot}}$, respectively. Finally, we will use the respective subhalo merger trees to compute additional subhalo properties such as the peak maximum circular velocity (referred to as $V_{\rm peak}$). \begin{figure} \includegraphics[width=0.45\textwidth]{Fig1a.pdf} \includegraphics[width=0.45\textwidth]{Fig1b.pdf} \caption{The cumulative stellar mass function (top panel) and the cumulative SFR function (bottom panel) predicted by the TNG300 hydrodynamical simulation (solid lines) and the SAGE semi-analytical galaxy formation model (dashed lines). Different colours indicate different redshifts, as labelled. Dotted horizontal lines mark the number densities of the samples used in this work; the galaxies included in a sample are those lying to the right of the intersection of the solid or dashed lines with the horizontal dotted lines. For comparison, for the cumulative stellar mass function we show the observational data from Baldry et al. (2008).} \label{Fig:CMF} \end{figure} \subsection{SAGE} \label{sec:SAGE} The second model we consider is a semi-analytic galaxy formation model (SAM). Specifically, we consider the ``Semi-Analytic Galaxy Evolution'' code \cite[SAGE,][]{Croton:2016}, a SAM based on the model presented in \cite{Croton:2006} and the L-Galaxies code \citep{Henriques:2015}. This model includes a variety of physical processes -- gas cooling, star formation, chemical enrichment, etc. -- and was the first galaxy formation model to include feedback from AGNs as a means of suppressing star formation in massive galaxies (along with \citealt{Bower:2006}). One of the main characteristics of this SAM is that it does not use orphan subhaloes, i.e. subhaloes that can no longer be identified in the simulation for numerical reasons, but that are expected to still exist and host a galaxy. This is so that the model can be easily run on any dark matter simulation, as long as the merger trees are provided in an appropriate format.
We run SAGE on the merger trees of the TNG300-3-Dark simulation. This simulation has a slightly lower mass resolution than the one originally employed to calibrate the free parameters of SAGE (the Millennium Simulation, \citealt{Springel:2005}). We checked the main predictions of SAGE, finding good agreement with the observed stellar mass function. We also tested running SAGE on the TNG300-2-Dark and the TNG300-1-Dark, which have a much higher mass resolution than the Millennium Simulation, finding worse agreement with the observed stellar mass function, especially at low masses. We therefore expect SAGE to provide numerically robust predictions for the number densities studied here. We use the default calibration of the model. While this could introduce some differences compared to a SAGE calibrated specifically for the cosmology of the TNG suite, we found that a new calibration would introduce only minor differences in the main predictions of the SAM (not shown here). \subsection{Subhalo abundance matching} \label{sec:SHAM} The third model we consider is the so-called ``subhalo abundance matching''. SHAM is an empirical method to populate the subhaloes of an $N$-body simulation with galaxies. In its most basic version, SHAM assumes a one-to-one mapping between the mass of a subhalo and its stellar mass or luminosity. More recent implementations of SHAM add scatter to this mapping and include satellite galaxies by using subhalo properties before infall or their maximum values over the subhalo's full history. These modifications are critical to obtain results in even approximate agreement with the observed clustering. One of the main advantages of SHAM is its predictive power combined with computational efficiency. Most implementations use a single free parameter, the scatter between the adopted subhalo property and the stellar mass, in contrast to HOD models, which use between 5 and 10 free parameters (if assembly bias, velocity bias and other effects are included). Additionally, SHAM predicts galaxy clustering in rough agreement with hydrodynamical simulations, and reproduces some, but not all, of their galaxy assembly bias signal \citep{ChavesMontero:2016}. In this paper, we use the TNG300-Dark to create our SHAM mocks, with $V_{\rm peak}$ as the subhalo property. We adopt a scatter of $0.125$ dex between $V_{\rm peak}$ and stellar mass, which is set by measuring this value directly in the outputs of the TNG300. We chose $V_{\rm peak}$ as the main property of our SHAM because (a) it is widely used in the literature, and (b) we find it performs better for both central and satellite galaxies than other properties such as $M_{\rm peak}$ \citep[see also the discussion in][]{Campbell:2018}. Finally, we assign a stellar mass to each subhalo as that of the galaxy in the TNG300 at the same rank in a list sorted by stellar mass. We also tested using a different stellar mass function for our SHAMs (that of the SAGE SAM), finding sub-per-cent differences in the correlation function when selecting galaxies at a fixed number density. This means that our conclusions should be independent of the adopted mass function.
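In schematic form, the matching step of our SHAM can be sketched as follows; this is a minimal illustration only, where the array names are ours and the number of subhaloes is assumed to equal the number of reference galaxies.

\begin{verbatim}
import numpy as np

def sham_stellar_masses(vpeak, mstar_ref, sigma=0.125, rng=None):
    # scatter log10(Vpeak), rank-order the subhaloes, and assign the
    # reference stellar masses (e.g. those of the TNG300 galaxies)
    # rank by rank
    rng = rng or np.random.default_rng()
    scattered = np.log10(vpeak) + sigma * rng.standard_normal(vpeak.size)
    order = np.argsort(-scattered)            # largest Vpeak first
    mstar = np.empty(vpeak.size)
    mstar[order] = np.sort(mstar_ref)[::-1]   # most massive first
    return mstar
\end{verbatim}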
\begin{figure} \includegraphics[width=0.45\textwidth]{Fig2.pdf} \caption{The correlation functions of galaxies at $z=0$ in the TNG300 simulation (black solid lines), the SAGE semi-analytical model (red solid lines), and the SHAM mocks (blue solid lines). Top, middle, and bottom panels show the predictions for three samples selected by stellar mass with number densities of $\rm n=0.01,\ 0.00316,\ \&\ 0.001\ h^{3}{\rm Mpc}^{-3}$, roughly corresponding to stellar mass cuts of $6\times10^9$, $2\times10^{10}$, and $5\times10^{10}\, h^{-1}{\rm M_{ \odot}}$, respectively.} \label{Fig:xi_SHAM} \end{figure} \subsection{Galaxy catalogues} In the following sections, we will measure and compare the assembly bias signal predicted by the three models described above. For each model we consider three galaxy samples selected according to either star formation rate or stellar mass, with number densities of $n=0.01\, h^{3}{\rm Mpc}^{-3}$, $n=0.00316\, h^{3}{\rm Mpc}^{-3}$, and $n=0.001\, h^{3}{\rm Mpc}^{-3}$. We build catalogues at $z=0$, $z=0.5$, and $z=1$. Given the volume of the TNG300 simulation, these catalogues contain $8.61\times10^{3}$, $2.72\times10^{4}$, and $8.61\times10^{4}$ objects, from the sparsest to the densest (see the check below). In Fig.~\ref{Fig:CMF} we show the cumulative stellar mass function (top) and the cumulative star-formation rate (SFR) function (bottom) at our three redshifts. For comparison, for the cumulative stellar mass function we show the observational data from \cite{Baldry:2008}. Solid and dashed lines indicate the results from the TNG300 and SAGE models, respectively (note that, by construction, the stellar mass function in SHAM is identical to that in the TNG300). Both models are in reasonable agreement, except for the abundance of the most massive/star-forming galaxies (which is sensitive to exactly how the stellar mass is computed, as \cite{TNGd} showed). For SAGE, when comparing with observations we find good agreement at all masses. For the cumulative SFR function, the differences between the models are consistent with those reported by \cite{C13,C15}, who showed that, in general, different galaxy formation models tend not to agree in their predictions for SFRs. Nevertheless, all these discrepancies help in our aim of exploring the variety of predictions from current galaxy formation models. Horizontal dotted lines indicate the number densities of the three catalogues we will use in this work. By choosing a fixed number density instead of a cut in stellar mass or SFR, we facilitate the comparison with other galaxy formation models/mocks that do not share the same stellar mass and/or SFR distribution. We summarise the stellar mass and SFR cuts for the different redshifts and number densities in Table~\ref{Table:n_cuts}.
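The quoted sample sizes follow directly from the box volume; a two-line check:

\begin{verbatim}
V = 205.0**3                      # TNG300 volume in (Mpc/h)^3
for n in (0.001, 0.00316, 0.01):
    print(n, round(n * V))        # -> 8615, 27224 and 86151 objects
\end{verbatim}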
\begin{table} \caption{The stellar mass and SFR cuts for the TNG300 and SAGE at the different number densities and redshift outputs used in this work. Stellar masses are quoted as $\log(M_{\rm stell})$ with $M_{\rm stell}$ in $h^{-1}{\rm M_{\odot}}$, SFRs are in ${\rm M_{\odot}\,yr^{-1}}$, and number densities are in $h^{3}{\rm Mpc}^{-3}$.} \label{Table:n_cuts} \begin{tabular}{c c c c} \hline & $n=0.001$ & $n=0.00316$ & $n=0.01$ \\ [0.5ex] \hline TNG300 $\rm log(M_{stell})$ & & & \\ z = 0 & 10.81 & 10.47 & 9.92 \\ z = 0.5 & 10.78 & 10.44 & 9.86 \\ z = 1 & 10.69 & 10.37 & 9.76 \\ \hline TNG300 $\rm SFR$ & & & \\ z = 0 & 3.03 & 1.49 & 0.47 \\ z = 0.5 & 7.59 & 3.81 & 1.32 \\ z = 1 & 13.57 & 6.87 & 2.35 \\ \hline SAGE $\rm log(M_{stell})$ & & & \\ z = 0 & 10.78 & 10.50 & 10.06 \\ z = 0.5 & 10.72 & 10.45 & 10.01 \\ z = 1 & 10.60 & 10.28 & 9.75 \\ \hline SAGE $\rm SFR$ & & & \\ z = 0 & 6.24 & 2.74 & 0.93 \\ z = 0.5 & 10.64 & 10.37 & 9.93 \\ z = 1 & 19.63 & 9.91 & 3.72 \\ \hline \end{tabular} \end{table} The predicted $z=0$ clustering of our catalogues is shown in Fig.~\ref{Fig:xi_SHAM}. Each panel presents the results for a different number density for each galaxy formation model, as indicated by the legend. There is an overall good agreement between the TNG300 and the SHAM models, with small but systematic differences. The SAGE model tends to underpredict the clustering compared to these two models. Since all models employ an identical simulated volume, we expect the differences to originate from the galaxy modelling and its assumptions, rather than from statistical fluctuations. For instance, SHAM might overestimate the clustering on small scales compared to the other models because using $V_{\rm peak}$ is equivalent to assuming that the stellar mass of the objects never decreases (i.e. that there is no stellar stripping). Also, the scatter of SHAM was chosen to mimic that of the TNG300, so it is expected to have a clustering similar to this model. On the other hand, the tendency of SAGE to underpredict the clustering, especially on small scales, could be due to different assumptions for satellite disruption. Also, by looking at the halo occupation distribution of these models (not shown here), we noticed that SAGE tends to populate less massive (i.e. lower bias) haloes compared to the TNG300 and SHAM, resulting in the difference on large scales. Therefore, the differences in the clustering are likely to be caused by differences in the {\it physical} assumptions -- e.g. the star formation prescription, tidal disruption, or quenching -- that affect the galaxies in each model. Hence, by investigating the galaxy assembly bias signal in these catalogues, we will estimate to what degree its magnitude is a generic prediction of galaxy formation or, instead, what the plausible range of values is. We turn to this question in the next section. \section{The galaxy assembly bias in the galaxy models} \label{sec:GAB} The concept of ``assembly bias'' was first introduced by \cite{Sheth:2004} and \cite{Gao:2005}, and it refers to the dependence of the large-scale clustering of haloes on their formation time. This effect was generalised by \cite{Gao:2007}, who showed that the large-scale halo bias also depends on secondary properties other than the formation time (such as concentration, spin, or number of substructures) \citep[see also][]{Wechsler:2006,Faltenbacher:2010}, and by \cite{Angulo:2009} to higher-order bias parameters. More recent works have extended this list of secondary properties even further (e.g. \citealt{Mao:2018}). The existence of ``halo assembly bias'' in simulated haloes is nowadays widely accepted. Since the evolution of galaxies and haloes is linked, it is expected that an effect analogous to halo assembly bias exists for galaxies.
In fact, this effect was detected by \cite{Croton:2007} in SAMs and is commonly known as ``galaxy assembly bias''. Specifically, Croton et al. showed that, for a fixed cut in stellar mass, the large-scale clustering of galaxies in SAMs was $10\%$ to $20\%$ higher than that of a sample where the galaxy population was only a function of its host halo mass. Likewise, \cite{ChavesMontero:2016} measured a similar amplitude for the ``galaxy assembly bias'' in stellar mass-selected samples of galaxies in the hydrodynamical simulation EAGLE \citep{Schaye:2015}. The same authors, however, reported that SHAM galaxies had a significantly lower amount of assembly bias. Finally, \cite{C19} found that, for stellar mass and SFR-selected samples, galaxy assembly bias in SAMs decreases at lower number densities and higher redshifts, even becoming negative. Observationally, the situation is even less clear, with multiple claims of detection (e.g., \citealt{Berlind:2006,Yang:2006,Cooper:2010,Wang:2013b,Lacerna:2014a,Lacerna:2014b,Hearin:2015,Miyatake:2016,Saito:2016,Obuljen:2020}) and of non-detection, or of detections attributed to systematics (e.g. \citealt{Campbell:2015b,Zu:2016b,Zu:2017,Busch:2017,Sin:2017,Tinker:2017a,Lacerna:2017}). In other cases, more data are required to reveal the nature of the reported signal (e.g., \citealt{Montero-Dorta:2017b,Niemiec:2018}). The lack of a clear theoretical expectation has certainly been a difficulty, as it provides neither a clear target nor an optimal observational strategy. An efficient observational strategy to measure assembly bias is key, since the halo masses inferred from observations, commonly used to estimate the assembly bias signal, are normally biased and highly inaccurate. In the following section, we will quantify the amplitude of assembly bias as a function of redshift, selection criteria, and number density for catalogues constructed with our three galaxy models: the TNG300 hydrodynamical simulation, the SAGE semi-analytical model, and the SHAM mocks. \subsection{The galaxy assembly bias evolution} \label{sec:GAB_ev} \begin{figure*} \includegraphics[width=0.8\textwidth]{Fig3.pdf} \caption{The ratio between the correlation functions of the TNG300 simulation (black solid lines), the SAGE semi-analytical model (blue solid lines) and the SHAM mocks (red solid lines) and those of their respective shuffled realisations (i.e. the square of the galaxy assembly bias signal). The shuffled correlation functions are obtained by averaging over 20 different realisations. The top, middle and bottom rows show the predictions for $\rm z=0,\ z=0.5,\ \&\ z=1$. The left, middle and right columns show the predictions for number densities of $\rm n=0.01,\ 0.00316,\ \&\ 0.001$ $h^{3}{\rm Mpc^{-3}}$ for stellar mass-selected galaxies. The dashed horizontal lines show the galaxy assembly bias signal predicted by measuring the individual bias of all the galaxies of each sample, as explained in Section~\ref{sec:GAB_or}.} \label{Fig:shuffle} \end{figure*} \begin{figure*} \includegraphics[width=0.8\textwidth]{Fig4.pdf} \caption{Similar to Fig.~\ref{Fig:shuffle}, but for SFR-selected galaxies. We show only the predictions of the TNG300 simulation (black solid line) and SAGE (red solid line), since the standard SHAM implementation does not predict SFR.
Note that the SAGE curves do not all appear to go to unity on small scales because their typical 1-halo term is located at very small scales.} \label{Fig:shuffle_SFR} \end{figure*} \label{sec:GAB_or} \begin{figure*} \includegraphics[width=0.8\textwidth]{Fig5.pdf} \caption{The galaxy assembly bias signal of stellar mass-selected galaxies (top panels) and SFR-selected galaxies (bottom panels) for $z=0$ (left panels), $z=0.5$ (middle panels) and $z=1$ (right panels). The signal is computed as the square root of the ratio of the correlation function to that of its shuffled counterpart, as shown in Figs.~\ref{Fig:shuffle} and \ref{Fig:shuffle_SFR}, for scales $3 < r/(h^{-1}{\rm Mpc}) < 16$. The shaded region represents the standard deviation of the ratio at different scales.} \label{Fig:bias_ev} \end{figure*} To measure the galaxy assembly bias, $b_{\rm assembly}$, we compute the ratio of the two-point correlation function of galaxies, $\xi(r)$, to that measured after randomly shuffling the galaxy population among haloes in mass bins of 0.1 dex, following the procedure presented in \cite{Croton:2007}. To reduce stochastic noise, we display the results after averaging 20 different shuffled catalogues. There are a few technical details worth highlighting regarding the shuffling procedure. First, we include haloes in the shuffling even if they do not contain any galaxy. Second, satellite galaxies keep their relative distance to the central galaxy, and the central galaxy is placed at the original position of the central galaxy that used to populate the target halo. Finally, if the target halo contained no galaxy, we use the position of the centre of the potential of that halo. In Fig.~\ref{Fig:shuffle} we show the ratio between the correlation function of the stellar-mass selected catalogues and that of their respective shuffled versions. The top, middle and bottom rows show the predictions for $z=0$, $0.5$, and $1$, respectively. The left, middle and right columns show the results for different number densities, as indicated by the legend. We recall that the magnitude of assembly bias is equal to the square root of the differences on large scales shown here. Overall, we can see that the models predict different amounts of assembly bias, different redshift dependences, and different dependences on number density. For instance, TNG300 shows a roughly constant assembly bias signal of $\sim15\%$ in this range of galaxy number density and redshift. On the other hand, SAGE roughly agrees with the TNG300 at $z=0$ for all number densities, but it predicts significantly less assembly bias at higher redshifts. SHAM, instead, predicts significantly less assembly bias than SAGE or the TNG300 at most redshifts and number densities. The differences between models are even larger for SFR-selected galaxies, as shown in Fig.~\ref{Fig:shuffle_SFR}. Note that we only display results for SAGE and the TNG300 since, in its basic form, SHAM does not predict star formation rates. In this figure we can appreciate that, unlike for the stellar mass selection, SAGE and the TNG300 do not agree on the magnitude of the assembly bias in almost any case. Specifically, the assembly bias signal is much higher for the TNG300 than for SAGE, and it displays a different redshift evolution: the signal slightly decreases with redshift for SAGE, and it significantly increases for the TNG300. We note that the evolution of the signal for SAGE is similar to that found by \cite{C19} using the \cite{Guo:2013a} SAM.
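For concreteness, a minimal Python sketch of the shuffling procedure described above is given here; the function and array names are ours and purely illustrative, and periodic boundary conditions as well as the anchoring on the previous central's exact position are omitted for brevity:
\begin{verbatim}
import numpy as np

def shuffle_galaxies(halo_logm, halo_pos, gal_halo, gal_pos,
                     dex=0.1, rng=None):
    """Croton-style shuffling: exchange the full galaxy content of
    haloes within 0.1 dex bins of halo mass. Haloes hosting no
    galaxy take part in the exchange as well."""
    rng = np.random.default_rng() if rng is None else rng
    new_pos = gal_pos.copy()
    bins = np.floor((halo_logm - halo_logm.min()) / dex).astype(int)
    for b in np.unique(bins):
        haloes = np.where(bins == b)[0]
        targets = rng.permutation(haloes)
        for src, dst in zip(haloes, targets):
            members = np.where(gal_halo == src)[0]
            # the whole group is moved rigidly onto the target halo,
            # so satellites keep their offsets from their central;
            # here the halo (potential-centre) position is the anchor
            new_pos[members] = halo_pos[dst] + (gal_pos[members]
                                                - halo_pos[src])
    return new_pos
\end{verbatim}
The assembly bias signal is then the square root of the ratio between $\xi(r)$ of the original catalogue and the mean $\xi(r)$ of many such shuffled realisations.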
We summarise these results in Fig.~\ref{Fig:bias_ev}, which shows the evolution of the assembly bias as a function of number density for $z=0$, $z=0.5$ and $z=1$. We compute the assembly bias as the square root of the ratio of the correlation function to that of its shuffled form, averaged over separations $3 < r/(h^{-1}{\rm Mpc}) < 16$. The shaded region indicates one standard deviation of the ratio in each case. We would like to emphasise that the amplitude and redshift evolution of the galaxy assembly bias in a given model are a result of the physical processes implemented. For instance, if in a given model galaxies are quenched very rapidly after infall, then an SFR-selected galaxy sample will preferentially select young haloes and will inherit a halo assembly bias. If, instead, quenching is very slow, an SFR selection will simply return a larger variety of formation times, washing out dependencies on halo formation time. These physical processes, and galaxy formation in general, are still very uncertain, and many degrees of freedom exist in the (sub-grid) physics implemented, its parametric form, as well as in the calibration of the models. This implies that, for the foreseeable future, galaxy assembly bias will not be a prediction of galaxy formation models; it should rather be considered as an additional parameter to be constrained by models attempting to do inference from the observed distribution of galaxies. Before turning to the problem of incorporating a model for assembly bias in SHAM, in the next section we will further investigate and quantify this effect. \subsection{The object-by-object bias} \label{sec:objectbias} \begin{figure} \includegraphics[width=0.45\textwidth]{Fig6.pdf} \caption{The galaxies in a $205\, h^{-1}{\rm Mpc}\times\ 205\, h^{-1}{\rm Mpc}\times\ 10\, h^{-1}{\rm Mpc}$ slice of the TNG300 simulation. The galaxies are colour coded by their individual large-scale bias, as described in \S~\ref{sec:objectbias}. For clarity, only $10\%$ of the objects are displayed. } \label{Fig:CW} \end{figure} To further investigate the galaxy assembly bias in our catalogues, we have computed the large-scale bias of {\it each} galaxy in our samples. We estimate this quantity ``object-by-object'', following \cite{Paranjape:2018} \citep[see also][]{Paranjape:2020}, as: \begin{equation} \label{eq:bias} b_g^i = \left \langle \frac{V}{P(|{\bf k}|) N(k)} \, \exp(i {\bf k}\cdot {\bf x}^i) \delta^*({\bf k}) \right\rangle_{k}, \end{equation} \noindent where $V$ is the volume of our simulated box, ${\bf x}^i$ is the location of a given galaxy, $\delta^*$ is the complex conjugate of the dark matter density field in Fourier space and $P(k)$ its power spectrum. Operationally, we measure $\delta({\bf k})$ from a diluted catalogue of dark matter particles in the TNG300-3 using an NGP assignment scheme on a $256^3$ grid. We have tested that using the TNG300-2 simulation (with $2^3$ times fewer particles than the original TNG300) gives almost identical results. We perform the average over modes in the range $0.008 < k/(h\,{\rm Mpc^{-1}}) < 0.316$. Note that, ideally, we would like to use only scales in the limit $k\rightarrow0$ (e.g. $k/(h\,{\rm Mpc^{-1}}) < 0.1$), but given the limited volume of our simulations we need to use these intermediate scales. Still, we checked that computing the bias using $k/(h\,{\rm Mpc^{-1}}) < 0.1$ yields consistent, but noisier, results.
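A minimal numpy sketch of this estimator is given below. It is our own illustrative version, not the code of \cite{Paranjape:2018}: it assumes a density contrast already gridded (e.g. with NGP), uses a simple per-mode estimate of $P(k)$ instead of a shell-averaged one, and loops naively over galaxies:
\begin{verbatim}
import numpy as np

def object_by_object_bias(delta, box, gal_pos,
                          kmin=0.008, kmax=0.316):
    """Individual large-scale bias: cross-correlate each galaxy
    position with the matter field, averaging over Fourier modes
    with kmin < |k| < kmax (positions in the same units as box)."""
    ng = delta.shape[0]
    dk = np.fft.fftn(delta) / delta.size        # delta(k)
    kf = 2.0 * np.pi / box                      # fundamental mode
    k1 = np.fft.fftfreq(ng, d=1.0 / ng) * kf
    kx, ky, kz = np.meshgrid(k1, k1, k1, indexing='ij')
    kmod = np.sqrt(kx**2 + ky**2 + kz**2)
    sel = (kmod > kmin) & (kmod < kmax)
    kvec = np.stack([kx[sel], ky[sel], kz[sel]], axis=1)
    dks = dk[sel]
    pk = np.abs(dks)**2 * box**3        # per-mode P(k); a production
                                        # code would shell-average this
    bias = np.empty(len(gal_pos))
    for i, x in enumerate(gal_pos):     # O(N_gal * N_modes)
        phase = np.exp(1j * (kvec @ x))
        bias[i] = np.mean((phase * np.conj(dks)).real * box**3 / pk)
    return bias
\end{verbatim}
By construction, the mean of the returned values over any galaxy sample reproduces the large-scale bias of that sample.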
We also tested computing the ``object-by-object'' bias only for the haloes and then assigning it to their substructures, finding identical results. Intuitively, this estimator corresponds to the cross-correlation between a given point in space and the dark matter density field. Alternatively, it can be regarded as the value of the large-scale overdensity field at the position of the object after a top-hat filter in Fourier space. We highlight that the average of the individual biases of the galaxies in a sample is mathematically equivalent to the large-scale bias of that sample. Fig.~\ref{Fig:CW} shows the distribution of galaxies in a $10\, h^{-1}{\rm Mpc}$ deep slice of the TNG300 catalogue, colour-coded by their individual bias. As expected, galaxies located in denser regions have higher biases than those in less dense regions. Note that galaxies near dense regions, even if they are hosted by low-mass haloes, will still be highly biased. \begin{figure} \includegraphics[width=0.45\textwidth]{Fig7.pdf} \caption{The histogram of the individual biases of galaxies at a number density of $n=0.01\, h^{3}{\rm Mpc^{-3}}$, selected by stellar mass at $z=0$. Different colours denote results from our three different galaxy models. In each case, we show the original catalogue and its shuffled version, where the galaxy content of a halo is only a function of its mass. Vertical lines mark the average bias in each of the samples. \label{Fig:bias_distro}} \end{figure} In the top panel of Fig.~\ref{Fig:bias_distro} we show the distribution of individual large-scale biases, $b_g$ (c.f. Eq.~\ref{eq:bias}), for stellar-mass selected galaxies in our three models at $z=0$ and for a number density of $n=0.01\, h^{3}{\rm Mpc}^{-3}$. Solid and dashed lines show the predictions of the original and shuffled samples, respectively. Haloes of the same mass are usually thought to all have the same large-scale bias, which is in fact not true \citep[e.g.][]{Paranjape:2018}. The same holds for galaxies: as we can see in Fig.~\ref{Fig:bias_distro}, they can have very different bias values. The distribution of large-scale biases is very broad: some galaxies have a bias of $\sim10$ whereas others have one of $\sim-2.5$. This diversity is mostly a consequence of galaxies living in very different environments -- they can be located in extremely dense regions or in empty voids -- even if hosted by haloes of the same mass. The shuffled version of the catalogue displays a remarkably similar bias distribution. This is because haloes of the same mass can also be found in a wide variety of environments, and because halo mass is the primary factor determining the bias of a galaxy sample. Upon closer inspection, we see that there are systematic differences between the original and shuffled catalogues. This can be better appreciated in the bottom panel, which shows the difference between these two histograms. There we can see that the shuffled sample contains more low-bias galaxies and fewer high-bias galaxies than the original catalogue. In other words, at a fixed halo mass, the TNG300 simulation preferentially locates galaxies in haloes living in regions of high large-scale density. In the case of SHAM, shown as blue curves in Fig.~\ref{Fig:bias_distro}, we see a very similar distribution of individual biases. In particular, the mean (indicated by vertical lines) is almost identical to that in the TNG sample, which is consistent with their clustering agreeing very well (c.f. Fig.~\ref{Fig:xi_SHAM}).
In addition, we can see how SHAM also preferentially places galaxies in haloes with higher large-scale bias compared to the full halo population. However, this preferential selection is not as strong as in the case of the TNG catalogues. On the other hand, for the case of SAGE we see that, even though the average bias is different from that of the TNG300 and SHAM (c.f. Fig.~\ref{Fig:xi_SHAM}), the way in which it preferentially selects low- and high-bias haloes is more similar to that of the TNG than to that of the SHAM mocks. The small difference in the bias distribution implies that the average bias will be slightly different between the original and shuffled catalogues. This difference is precisely the assembly bias signal! For instance, SAGE and TNG display similar preferences for high/low-bias haloes compared to their respective shuffled versions; thus, we expect their assembly bias to also be similar. This is in fact what we obtained in Fig.~\ref{Fig:shuffle}. In contrast, we expect SHAM to display a smaller amount of assembly bias, which is also what we found in Section~\ref{sec:GAB_ev}. To see this quantitatively, we have computed the difference of the mean large-scale bias between the original and shuffled catalogues for all our samples. Horizontal dashed lines in Figs.~\ref{Fig:shuffle} and \ref{Fig:shuffle_SFR} mark these average values. As we can see, these values in fact coincide remarkably well with the ones estimated from the correlation functions. While the match is not perfect in all cases (mostly because of the noisy correlation function measurements), we confirm that the difference in the bias distribution is indeed equivalent to the galaxy assembly bias signal. We can, therefore, think of galaxy assembly bias as the consequence of a given model slightly preferring or avoiding regions with different large-scale biases. Different models will have different amounts of ``preferential selection'' that can vary as a function of redshift, selection criteria, etc. Of course, none of the aforementioned models makes an explicit connection between galaxy properties and the large-scale bias. Instead, the underlying physical cause of this preference can be a mixture of many processes and assumptions in a given galaxy formation model, which correlate with specific details of the halo assembly history, which in turn is correlated with the large-scale overdensity. In any case, although the connection between large-scale density and galaxy properties is, in some sense, artificial, this is by definition the galaxy assembly bias. Correlations between galaxy properties and local halo properties, more fundamental from a physics perspective, can at most only partially capture the effect of assembly bias, and are likely to depend sensitively on the underlying galaxy formation physics. A general working model would need to consider possible correlations between the occupation number and {\it all} halo/subhalo properties. All this suggests an interesting opportunity: using the individual large-scale bias as a second parameter in empirical models. This would open a series of possibilities to search for the origin of the galaxy assembly bias, to model observations more precisely, as well as to create mocks with a tunable degree of assembly bias. This should be much more flexible and accurate than other methods that use secondary properties of the haloes, such as the concentration in the decorated HODs \citep{Hearin:2016}, and it would truly cover the full possible range of assembly bias.
In the remainder of this paper, we will focus on this idea and propose a new version of the subhalo abundance matching that features a tuneable degree of galaxy assembly bias. \section{Modelling assembly bias in SHAM} \label{sec:SHAM_AB} \begin{figure*} \includegraphics[width=1.05\textwidth]{Fig8.pdf} \caption{The predicted stellar mass of the SHAM as a function of $V_{\rm peak}$ for galaxies in the dark-matter-only TNG300 at $z=0$. The galaxies are colour coded by their bias (with $f_k$ = $f_k^{\rm cen}$ = $f_k^{\rm sat}$), assigned using the implementation described in Section~\ref{sec:objectbias}. From left to right and top to bottom, the panels show a perfect anticorrelation between stellar mass and bias ($f_k = -1$), a moderate anticorrelation ($f_k = -0.5$), no correlation ($f_k=0$, equivalent to a standard SHAM), a moderate correlation ($f_k = 0.5$) and a complete correlation ($f_k = 1$). The bottom right panel shows the ratio between the correlation function of these samples and that of the shuffled run of the standard SHAM.} \label{Fig:vpeak_mstell_bias} \end{figure*} In the previous sections, we showed that there is no unique prediction for assembly bias among galaxy formation models, and that this appears very clearly in the correlation between stellar mass and the large-scale bias of each galaxy. Motivated by this, in this section we propose and test a method to incorporate a tuneable amount of bias in empirical models, such that it can mimic the galaxy assembly bias signal of any model or observational sample. Our method employs the bias of individual objects as an ancillary parameter to enhance or suppress correlations with the large-scale density field, thus providing assembly bias as a degree of freedom. Specifically, our idea works in SHAM by re-assigning the stellar mass of galaxies in narrow bins of $V_{\rm peak}$ depending on their large-scale bias. This is done for centrals and satellites independently. The stellar mass values of the sample are conserved (we only shuffle them within each bin), meaning that the number of galaxies above a threshold (e.g. the stellar mass cut of our sample) is preserved, as well as the scatter of the sample. Since this is done separately for centrals and satellites, the satellite fraction of the sample is also preserved. The method uses two correlation parameters, $f_k^{\rm cen}$ and $f_k^{\rm sat}$ -- one for central and one for satellite galaxies -- that control the strength of the correlation between the scatter in the $M_*-V_{\rm peak}$ relation and the large-scale bias of each subhalo. Unlike other methods that use secondary properties in SHAM to assign colours or SFR (e.g. \citealt{Hearin:2013b}), our model changes the intrinsic stellar mass-$V_{\rm peak}$ relation, creating an assembly bias signal without the need to employ a secondary galaxy property. In principle, our mocks could be used along with these methods to assign colours or SFR more realistically. Technically speaking, we are not necessarily adding assembly bias to the sample, but just bias. This is because we are using an environmental property to select the haloes, and not a secondary halo property such as halo concentration, age or spin. Since there is a correlation between several secondary halo properties and environment, we expect that part of this bias can be classified as halo assembly bias, but this is not a requirement for the model.
Nevertheless, we will show that this bias has the same behaviour as the galaxy assembly bias from galaxy formation models, and can therefore be used to mimic it. Now we describe our algorithm. Let us first consider a SHAM sample built using $V_{\rm peak}$ as the primary property. Then, for a given bin in $V_{\rm peak}$ and values of $f_k = \{f_k^{\rm cen}, f_k^{\rm sat}\}$, our method proceeds as follows (a minimal code sketch is given below): \begin{itemize} \item If $f_k > 0$, we sort the galaxies in increasing order of their large-scale bias; if $f_k < 0$, we sort them in decreasing order. If $f_k = 0$, we simply reassign the stellar masses randomly. \item We assign to each galaxy a value $g_k$ equal to its rank in the sorted sample divided by the number of galaxies. For example, if $f_k > 0$ and there are $N$ galaxies, the least biased galaxy has $g_k = 0$ and the most biased one has $g_k = (N-1)/N$. \item We define $f_k' = 1-|f_k|$ and draw $g_k'$ as a random value between $\max(g_k-f_k', 0)$ and $\min(g_k+f_k', 1)$. For example, for $f_k = 0.9$ ($f_k' = 0.1$), a galaxy with a bias equal to the median of the sample (i.e. $g_k = 0.5$) can have $g_k'$ between 0.4 and 0.6. \item We reassign the stellar masses of the galaxies as a function of $g_k'$, keeping the same values as in the original sample. This means that the galaxy with the largest (smallest) $g_k'$ receives the highest (lowest) stellar mass of the sample. The available stellar mass values do not change, preserving the original distribution of stellar masses in each bin of $V_{\rm peak}$. \end{itemize} We repeat this procedure separately for satellite and central galaxies, and for all $V_{\rm peak}$ bins. As a result, if $f_k = 0$, there will be no dependence between bias and stellar mass other than that originally predicted by SHAM. Instead, if $f_k = 1\ (-1)$, there will be a perfect (anti-)correlation between the large-scale bias and the stellar mass at a constant $V_{\rm peak}$. For intermediate values of $f_k$, the sample will display different degrees of correlation with the large-scale bias (at a fixed $V_{\rm peak}$), and thus different degrees of assembly bias. An example of the performance of this method is shown in Fig.~\ref{Fig:vpeak_mstell_bias}. This figure shows the relation between stellar mass and $V_{\rm peak}$ present in our SHAM catalogues at $z=0$. We colour-code each galaxy by its large-scale bias. The horizontal dotted line shows the stellar mass cut corresponding to our densest sample. In each panel, we show the results after adopting different values of $f_k$, as indicated by the legend. For this particular example, we assume $f_k = f_k^{\rm sat} = f_k^{\rm cen}$, i.e. the correlation between the scatter and the large-scale bias is identical for central and satellite galaxies. We can appreciate that, at a fixed $V_{\rm peak}$, negative values of $f_k$ result in a secondary anticorrelation between stellar mass and large-scale bias. On the contrary, positive $f_k$ values imply that, at a fixed $V_{\rm peak}$, high-mass galaxies will be preferentially located in high-bias regions. Although not done here, we note that a very similar algorithm could be developed to implement different degrees of assembly bias in HOD models, and in predictions for the SFR in SHAM, where the scatter in the predicted star formation rate (e.g. based on the mass accretion rate) could be correlated with the large-scale bias in a similar way as we do here for the scatter in stellar mass.
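As a concrete illustration, a minimal Python sketch of the reassignment within a single $V_{\rm peak}$ bin could read as follows (function and variable names are ours; in practice the routine is applied independently to centrals and satellites in every narrow bin):
\begin{verbatim}
import numpy as np

def reassign_mstar(mstar, bias, f_k, rng=None):
    """Rank-based reassignment of stellar masses in one Vpeak bin.
    f_k in [-1, 1] sets the correlation between the Mstar scatter
    and the individual large-scale bias; f_k = 0 is a standard SHAM."""
    rng = np.random.default_rng() if rng is None else rng
    n = len(mstar)
    if f_k == 0:
        return rng.permutation(mstar)
    order = np.argsort(bias if f_k > 0 else -bias)
    g = np.empty(n)
    g[order] = np.arange(n) / n                # normalised rank g_k
    fp = 1.0 - abs(f_k)                        # f'_k
    gp = rng.uniform(np.maximum(g - fp, 0.0),  # g'_k drawn in the
                     np.minimum(g + fp, 1.0))  # window around g_k
    # the galaxy with the largest g'_k gets the largest mass; the
    # multiset of masses (hence the scatter) is unchanged
    new = np.empty_like(mstar)
    new[np.argsort(gp)] = np.sort(mstar)
    return new
\end{verbatim}
Because only the pairing between masses and subhaloes changes, the number density above any stellar mass threshold, the scatter in each bin, and the satellite fraction are preserved by construction, while $f_k$ controls the rank correlation between mass and bias.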
Since we compare catalogues above a stellar-mass threshold, the previous correlations imply that our samples will display different distributions of biases, clustering amplitudes, and degrees of assembly bias. We can see this in Fig.~\ref{Fig:fk_hist}, which shows the distribution of biases for the SHAM samples with varying values of $f_k$. The respective clustering, relative to the shuffled version, is shown in the bottom right panel of Fig.~\ref{Fig:vpeak_mstell_bias}. Consistent with our previous discussion, we see that the higher the value of the assembly-bias-correlation parameter $f_k$, the more preferentially high-bias haloes are selected, which results in an increase of up to $90\%$ in the correlation function (compared to the shuffled version). In contrast, lower $f_k$ values preferentially select low-bias haloes, which implies a negative amount of assembly bias, reducing the correlation function amplitude by up to $40\%$. In turn, $f_k=0$ shows a distribution identical to that of the original catalogue, and thus the assembly bias stays at the $15\%$ level, in agreement with the standard SHAM analysis of \cite{ChavesMontero:2016}. \subsection{Our model in practice} \begin{figure} \includegraphics[width=0.45\textwidth]{Fig9.pdf} \caption{Similar to Fig.~\ref{Fig:bias_distro}, but for SHAMs with different levels of correlation between the stellar mass and the bias per object, following the procedure explained in \S~\ref{sec:SHAM_AB}. The levels of additional bias are denoted by $f_k$, with $f_k = -1$ a perfect anticorrelation between stellar mass and bias, $f_k = -0.5$ a moderate anticorrelation, $f_k=0$ no correlation (equivalent to a standard SHAM), $f_k = 0.5$ a moderate correlation and $f_k = 1$ a complete correlation.} \label{Fig:fk_hist} \end{figure} \begin{figure} \includegraphics[width=0.45\textwidth]{Fig10.pdf} \caption{(Top) The ratio between the correlation function of the TNG300 and its shuffled run (black solid line), and between SHAM and its shuffled run (blue solid line), for a number density of $n=0.01\, h^{3}{\rm Mpc}^{-3}$ selected by stellar mass at $z=0$. The green dashed line shows the prediction of the SHAM with assembly bias, as explained in Section~\ref{sec:SHAM_AB}. (Middle) Same as the top panel, but for the cross-correlation between the central galaxies of the sample and the full galaxy sample. (Bottom) Same as the middle panel, but for satellites instead of central galaxies.} \label{Fig:TNG_SHAMe} \end{figure} \begin{figure} \includegraphics[width=0.45\textwidth]{Fig11.pdf} \caption{Same as Fig.~\ref{Fig:TNG_SHAMe}, but for galaxies from the SAGE semi-analytical model (solid red lines) instead of the TNG300 hydrodynamic simulation.} \label{Fig:SAGE_SHAMe} \end{figure} We now apply our model and show that, with it, SHAM can mimic the magnitude of assembly bias measured in either the TNG300 or the SAGE-SAM catalogues. For this, we first fit the values of $f_k^{\rm cen}$ and $f_k^{\rm sat}$ in SHAM that provide the best match to the difference between the original and shuffled bias distributions in either SAGE or the TNG. Then, we construct a new catalogue with these values and measure its clustering properties. We chose to use the same SHAM to fit the galaxy assembly bias signal from both the TNG300 and the SAGE-SAM.
This poses an extra challenge, since the SAGE-SAM has a different scatter in its $V_{\rm peak}$-stellar mass relation than the TNG300. Using the same SHAM facilitates the comparison between models and helps us demonstrate the flexibility of our method. In Figs.~\ref{Fig:TNG_SHAMe} and \ref{Fig:SAGE_SHAMe} we compare the assembly bias signal between the TNG300, SAGE, and our original and new (enhanced) SHAM catalogues. We display the case of a stellar mass-selected sample with a number density of $n=0.01\, h^{3}{\rm Mpc}^{-3}$ at $z=0$. Middle and bottom panels show the assembly bias only for central and satellite galaxies, respectively. The values needed to mimic the TNG300 clustering are $f_k = 0.24$ and $0.075$, and for SAGE $f_k = 0.01$ and $0.19$, for centrals and satellites respectively. We notice that $f_k^{\rm sat}<f_k^{\rm cen}$ for the TNG300, but the other way around for SAGE ($f_k^{\rm sat}>f_k^{\rm cen}$). We checked that these relations also hold for the other number densities and redshifts considered in this work. Overall, we see that our model reproduces very well the amount of assembly bias present in SAGE or the TNG300, both for central and satellite galaxies. It is particularly striking that, although the values of $f_k$ were set to reproduce the large-scale assembly bias, they also reproduce the scale dependence of assembly bias on intermediate scales, i.e. $1 < r/(h^{-1}{\rm Mpc}) < 5$. The agreement is particularly remarkable for the case of the TNG galaxies. For both central and satellite galaxies, the amplitude and scale dependence of the galaxy assembly bias coincide to within a few per cent. The agreement is somewhat poorer for central galaxies in SAGE, especially on intermediate scales. We note that these scales in the correlation function of central galaxies receive an important contribution from ``splashback'' galaxies, and are thus sensitive to the way SAGE treats them (e.g. \citealt{Zehavi:2019}). In any case, they contribute in a minor way to the full correlation function, whose assembly bias is also reproduced to within a few per cent by our model. Also, as mentioned in \S~\ref{sec:SHAM}, the scatter of the SHAM is set to mimic the scatter of the TNG300, so we expect a better agreement with that model. Although not shown in the main body of this paper, we highlight that we find similarly good agreement for all the samples considered here; we show the respective results in Appendix A. We remind the reader that having the same assembly bias does not guarantee having the same correlation function. This is because the SHAM has other limitations besides assembly bias (see \citealt{Smith:2016, Campbell:2018}). In future work, we plan to combine the improvements shown here with other extensions to the SHAM to accurately reproduce the galaxy clustering of more sophisticated galaxy formation models. \subsection{Galaxy assembly bias in the bispectrum} Since our model manipulates internal correlations of the catalogue, one might wonder whether other statistical properties of the sample are preserved. To explore this question, we have computed the bispectrum of our original and shuffled catalogues. This quantity is defined as: \begin{equation} B({\bf k_1},{\bf k_2},{\bf k_3}) = \langle \delta({\bf k_1}) \delta({\bf k_2}) \delta({\bf k_3})\rangle \delta_D({\bf k_1} + {\bf k_2} + {\bf k_3}) \end{equation} \noindent where $\delta_D$ is the Dirac delta.
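To make the measurement explicit, the sketch below implements a standard FFT-based (Scoccimarro-type) estimator of this quantity for a gridded density contrast. It is our own illustrative version, not the {\sc bskit} implementation used for the actual measurements:
\begin{verbatim}
import numpy as np

def band(field_k, kmod, kc, width):
    """Real-space field keeping only modes with |k| in a shell."""
    mask = np.abs(kmod - kc) < 0.5 * width
    return np.fft.ifftn(np.where(mask, field_k, 0)).real

def bispectrum(delta, box, k1, k2, k3, width):
    """B(k1,k2,k3) = V^2 <D1 D2 D3> / <W1 W2 W3>, where D_i are
    shell-filtered density fields and W_i the same filters applied
    to a unit field (this counts the closed k-triangles)."""
    ng = delta.shape[0]
    dk = np.fft.fftn(delta)
    kf = 2.0 * np.pi / box
    k1d = np.fft.fftfreq(ng, d=1.0 / ng) * kf
    kx, ky, kz = np.meshgrid(k1d, k1d, k1d, indexing='ij')
    kmod = np.sqrt(kx**2 + ky**2 + kz**2)
    ones = np.full_like(dk, ng**3)      # transform of a unit field
    D = [band(dk, kmod, kc, width) for kc in (k1, k2, k3)]
    W = [band(ones, kmod, kc, width) for kc in (k1, k2, k3)]
    return box**6 * (D[0]*D[1]*D[2]).mean() / (W[0]*W[1]*W[2]).mean()
\end{verbatim}
The ratios discussed in the following are then simply this estimator evaluated on the original catalogue divided by its value on the shuffled counterpart, for the chosen triangle configurations.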
We have considered equilateral ($|\vec{k}_1|=|\vec{k}_2|=|\vec{k}_3|$) and squeezed triangular configurations ($|\vec{k}_1|=0.01\, h\,{\rm Mpc}^{-1}$, and $|\vec{k}_2|=|\vec{k}_3|$). In particular, the squeezed configuration tests whether the correlations on small scales respond adequately to fluctuations on larger scales. To measure these bispectra we use the publicly available {\sc bskit} code\footnote{https://github.com/sjforeman/bskit} \citep{Foreman:2019}. We show our results in Fig.~\ref{Fig:bispec}, where we display the ratio of the bispectrum measured in our original catalogues to that of their shuffled counterparts. As in the case of the power spectrum, we can see that SHAM underestimates the amount of galaxy assembly bias in the bispectrum, for both triangular configurations displayed. In contrast, our model with additional assembly bias agrees remarkably well with that measured for the TNG300 galaxies. Although this agreement is primarily a consequence of the agreement in the two-point statistics, we highlight that the correlation parameters were fitted making no reference whatsoever to these three-point statistics. In other words, our values of $f_k$ were set to reproduce the effect on the mean large-scale bias of the sample; the bispectrum, however, is also sensitive to higher-order cumulants of the distribution. Thus, there could in principle be many different values of the assembly bias in the bispectrum for a given effect in the correlation function. Finally, to our knowledge, this is the first measurement of the effect of galaxy assembly bias on a three-point statistic. \begin{figure} \includegraphics[width=0.45\textwidth]{Fig12.pdf} \includegraphics[width=0.45\textwidth]{Fig13.pdf} \caption{ \label{Fig:bispec} The ratio of the galaxy bispectrum to that of its shuffled version. We consider a catalogue of stellar-mass selected galaxies at $z=0$ with a number density of $0.01\, h^{3}{\rm Mpc^{-3}}$. The top panel shows an equilateral configuration ($|k_1|=|k_2|=|k_3|=k$), whereas the bottom panel shows a squeezed configuration with $k_1 = 0.01\, h\,{\rm Mpc}^{-1}$ and $|k_2|=|k_3|=k$. In each panel, black, blue and green lines denote results for the TNG300 simulation, the SHAM mocks, and the version of SHAM with a tuneable degree of assembly bias presented in this paper.} \end{figure} All these cases illustrate the flexibility and accuracy of our technique. While here we were limited by the volume of the TNG300 simulation, larger simulations would allow for more detailed investigations and possibly further improvements. \section{Summary and Conclusions} \label{sec:Conclusions} In this paper, we studied the behaviour of the galaxy assembly bias of various samples with different number densities, redshifts, and selection criteria. We use the classic definition of galaxy assembly bias introduced by \cite{Croton:2007}: the difference in the large-scale clustering of galaxies at a fixed halo mass due to correlations with the assembly history and other properties of the host haloes, which can be measured by comparing the galaxy clustering of a sample with that of its shuffled counterpart. We consider three different galaxy models: the TNG300 simulation, a state-of-the-art magneto-hydrodynamic simulation in a box of $205\, h^{-1}{\rm Mpc}$ on a side; SAGE, a state-of-the-art semi-analytical model of galaxy formation; and a subhalo abundance matching built with $V_{\rm peak}$. These three models were run over the same simulated cosmic volume, which enables a precise comparison.
Below we summarise the main results of this work: \begin{itemize} \item Quantifying the redshift evolution and the dependence on number density for galaxies selected by stellar mass and SFR, we find that the models feature different amplitudes of the galaxy assembly bias. The differences are particularly evident for star-forming samples (Figs.~\ref{Fig:shuffle} \& \ref{Fig:shuffle_SFR}). \item We found that the evolution with redshift and number density of the galaxy assembly bias is similar for SAGE and SHAM, in agreement with previous results from the literature (e.g. \citealt{C19}), but different from that of the TNG300. Based on this, we argued that, while the presence of galaxy assembly bias is part of current galaxy formation theory, its amplitude and behaviour are strongly model-dependent (Fig.~\ref{Fig:bias_ev}). \item By looking at the individual large-scale bias of the galaxies, we showed that galaxy assembly bias is equivalent to the degree to which different selection criteria and modelled physics preferentially select locations with different large-scale bias. While not surprising, this perspective can help focus our efforts on understanding the origin of galaxy assembly bias and on the creation of mocks with galaxy assembly bias (Fig.~\ref{Fig:bias_distro}). \item We developed a method to model assembly bias in SHAM. The method works by re-ordering the galaxies while keeping their $V_{\rm peak}-M_{{\rm stell}}$ relation and the satellite fraction constant. We find that, by maximising or minimising the correlation with the bias, we are able to modify the large-scale clustering by a factor of 3 (meaning $\sim 70\%$ differences in the bias of the sample) (Fig.~\ref{Fig:vpeak_mstell_bias}). \item We used our SHAMs extended with assembly bias to reproduce the level of galaxy assembly bias in the TNG300 and SAGE catalogues. These mocks can be used to create catalogues with a fixed assembly bias signal (e.g. in case we want the same level of assembly bias as a hydrodynamic simulation or a SAM), or the signal can be left free when interpreting the observed galaxy clustering (Figs.~\ref{Fig:TNG_SHAMe} \& \ref{Fig:SAGE_SHAMe}). \end{itemize} This new SHAM model presents an upgrade over previous improvements to the standard model: it mimics the assembly bias signal in a flexible way, is based on an environmental halo property (rather than concentration) and, thanks to keeping the satellite fraction constant, only adds galaxy assembly bias while leaving the rest of the clustering unchanged. We anticipate our extended SHAM catalogues to have various applications. First, they could help in designing new observational tests to measure galaxy assembly bias in galaxy surveys. This can be done by creating mocks with different amplitudes of assembly bias and fitting the total galaxy clustering from observations (similar to the approach of \citealt{Salcedo:2020}). This will constrain the level of assembly bias necessary to properly reproduce the correlation function, indicating the amount of assembly bias in the observed Universe. In the same spirit, they can be used to put constraints on the maximum possible level of assembly bias and to study its origin, by looking at how the galaxy assembly bias signal evolves with redshift and number density. Furthermore, they can help to explore the degeneracy between cosmological parameters and galaxy formation physics (e.g. following the procedure shown in \citealt{C20a}).
This could be particularly relevant in current searches for signatures of modified gravity, where additional correlations between galaxy properties and large-scale densities exist. The model presented here is simple enough that it can be implemented without the need for extensive and complex calculations, not only with the ``object-by-object'' bias but with any secondary halo property. In a recent work \citep{C20c}, we develop a new extension of the traditional SHAM model that includes a novel treatment of orphan galaxies, tidal disruption of satellite galaxies, star formation rate predictions, and a flexible level of galaxy assembly bias using the model presented in this paper.
\section{Introduction} Nuclear collision experiments, performed at ion accelerators, are a very powerful tool to study nuclear properties at low and intermediate energies. In order to interpret the accumulated experimental data, appropriate theoretical methods are necessary, enabling the simultaneous description of the available elastic, rearrangement and breakup reactions. Despite its importance, the theoretical description of quantum-mechanical collisions turns out to be one of the most complex and slowly advancing problems in theoretical physics. While during the last decade accurate solutions of the nuclear bound-state problem became available, the full solution of the scattering problem (containing elastic, rearrangement and breakup channels) remains limited to the three-body case. The main difficulty is related to the fact that, unlike the bound state wave functions, scattering wave functions are not localized. In configuration space one is obliged to solve multidimensional differential equations with extremely complex boundary conditions; by formulating the quantum-mechanical scattering problem in momentum space one has to deal with non-trivial singularities in the kernel of multivariable integral equations. A rigorous mathematical formulation of the quantum mechanical three-body problem in the framework of non-relativistic dynamics was introduced by Faddeev in the early sixties~\cite{Fad_60}, in the context of the three-nucleon system with short range interactions. In momentum space these equations may be slightly modified by formulating them in terms of three-particle transition operators, which are smoother functions than the system wave functions. Such a modification was proposed by Alt, Grassberger, and Sandhas~\cite{alt:67a} (AGS). Solutions of the AGS equations with short range interactions were readily obtained in the early seventies. As large computers became available, progress followed, leading by the end of the eighties to fully converged solutions of these equations for neutron-deuteron ($n$-$d$) elastic scattering and breakup using realistic short range nucleon-nucleon ($N$-$N$) interactions. Nevertheless, the inclusion of the long range Coulomb force in momentum space calculations of proton-deuteron ($p$-$d$) elastic scattering and breakup with the same numerical reliability as calculations with short range interactions alone only became possible in the last decade. Significant progress has been achieved~\cite{deltuva:05a,deltuva:05d} by developing the screening and renormalization procedure for the Coulomb interaction in momentum space using a smooth but at the same time sufficiently rapid screening. This technique made it possible to extend the calculations to systems of three particles with arbitrary masses above the breakup threshold~\cite{deltuva:06b,deltuva:07d}. However, it took some time to formulate the appropriate boundary conditions in configuration space for the three-body problem~\cite{Merkuriev_71,Merkuriev_74,MGL_76} and even longer to reformulate the original Faddeev equations to allow the incorporation of long-range Coulomb-like interactions~\cite{Merkuriev_80,Merkuriev_81}. A rigorous solution of the three-body problem with short range interactions was achieved soon after these theoretical developments, both below and above the breakup threshold. On the other hand, the numerical solution of the three-body problem including charged particles above the three-particle breakup threshold has been achieved only recently.
First it was done by using approximate Merkuriev boundary conditions in configuration space~\cite{kievsky:97}. Nevertheless, this approach proved to be a rather complex task numerically and has remained unexplored beyond the $p$-$d$ elastic scattering case; in particular, it has not yet been applied to $p$-$d$ breakup. Finally, very recently a configuration-space method based on complex scaling has been developed and applied to $p$-$d$ scattering~\cite{lazauskas:11a}. This method allows one to treat the scattering problem using very simple boundary conditions, equivalent to the ones employed to solve the bound-state problem. \bigskip The aim of this lecture is to present these two recently developed techniques, namely the momentum-space method based on screening and renormalization as well as the configuration-space complex scaling method. This lecture is structured as follows: the first part serves to introduce the theoretical formalisms for momentum-space and configuration-space calculations; in the second part we present some selected calculations with the aim to test the performance and validity of the two presented methods. \section{Momentum-space description of three-particle scattering} \label{sec:p} We describe the scattering process in a system of three particles interacting via pairwise short-range potentials $v_\alpha$, $\alpha=1,2,3$; we use the odd-man-out notation, that is, $v_1$ is the potential between particles 2 and 3. In the framework of nonrelativistic quantum mechanics the center-of-mass (c.m.) and the internal motion can be separated by introducing the Jacobi momenta \begin{eqnarray}\label{eq:Jacobi} \vec{p}_\alpha & = &\frac{m_{\gamma} \vec{k}_\beta - m_{\beta} \vec{k}_\gamma } {m_{\beta} + m_{\gamma} }, \\ \vec{q}_\alpha & = & \frac{m_{\alpha} (\vec{k}_\beta + \vec{k}_\gamma) - (m_{\beta} + m_{\gamma}) \vec{k}_\alpha } {m_{\alpha} + m_{\beta} + m_{\gamma} }, \end{eqnarray} with ($\alpha \beta \gamma $) being cyclic permutations of (123); $\vec{k}_\alpha$ and $m_{\alpha}$ are the individual particle momenta and masses, respectively. The c.m. motion is free and in the following we consider only the internal motion; the corresponding kinetic energy operator is $H_0$ while the full Hamiltonian is \begin{equation} \label{eq:H} H = H_0 + \sum_{\alpha=1}^3 v_\alpha . \end{equation} \subsection{Alt, Grassberger, and Sandhas equations} We consider the scattering of particle $\alpha$ from the pair $\alpha$ that is bound with energy $\epsilon_\alpha$. The initial channel state $|b_{\alpha}\vec{q}_\alpha\rangle$ is the product of the bound state wave function $|b_\alpha \rangle$ for the pair $\alpha$ and a plane wave with the relative particle-pair $\alpha$ momentum $\vec{q}_\alpha$; the dependence on the discrete quantum numbers is suppressed in our notation. $|b_{\alpha}\vec{q}_\alpha\rangle$ is the eigenstate of the corresponding channel Hamiltonian $H_\alpha = H_0 + v_\alpha$ with the energy eigenvalue $E= \epsilon_\alpha + q^2_\alpha/2M_\alpha$, where $M_\alpha$ is the particle-pair $\alpha$ reduced mass. The final channel state is the particle-pair state in the same or a different configuration $|b_{\beta}\vec{q}_\beta\rangle$ in the case of elastic and rearrangement scattering or, in the case of breakup, the state of three free particles $|\vec{p}_{\gamma}\vec{q}_\gamma\rangle$ with the same energy $E= p_\gamma^2/2\mu_\gamma + q_\gamma^2/2M_\gamma$ and pair $\gamma$ reduced mass $\mu_\gamma$; any set of Jacobi momenta can be used equally well for the breakup state.
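In code, these definitions are a one-line affair; the following illustrative Python helper (our own notation, with 0-based particle indices) returns the Jacobi pair for a chosen spectator $\alpha$:
\begin{verbatim}
import numpy as np

def jacobi_momenta(k, m, alpha):
    """Jacobi momenta (p_alpha, q_alpha) defined above, from the
    individual momenta k[0..2] (length-3 arrays) and masses m[0..2];
    (alpha, beta, gamma) is the cyclic permutation of (0, 1, 2)."""
    a = alpha
    b, c = (alpha + 1) % 3, (alpha + 2) % 3
    p = (m[c] * k[b] - m[b] * k[c]) / (m[b] + m[c])
    q = (m[a] * (k[b] + k[c])
         - (m[b] + m[c]) * k[a]) / (m[a] + m[b] + m[c])
    return p, q
\end{verbatim}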
The stationary scattering states~\cite{schmid:74a,gloeckle:83a} corresponding to the above channel states are eigenstates of the full Hamiltonian; they are obtained from the channel states using the full resolvent $G = (E+i0-H)^{-1}$, i.e., \begin{eqnarray} \label{eq:psi_a} |b_\alpha \vec{q}_\alpha \rangle^{(+)} & = & i0 G |b_\alpha \vec{q}_\alpha \rangle, \\ \label{eq:psi_0} |\vec{p}_\alpha\vec{q}_\alpha \rangle^{(+)} & = & i0 G |\vec{p}_\alpha\vec{q}_\alpha \rangle. \end{eqnarray} The full resolvent $G$ may be decomposed into the channel resolvents $G_\beta = (E+i0-H_\beta)^{-1}$ and/or the free resolvent $G_0 = (E+i0-H_0)^{-1}$ as \begin{equation} G = G_\beta + G_\beta \bar{v}_\beta G , \end{equation} with $\beta=0,1,2,3$ and $ \bar{v}_\beta = \sum_{\gamma=1}^3 \bar{\delta}_{\beta \gamma} v_\gamma$, where $\bar{\delta}_{\beta \gamma} = 1-{\delta}_{\beta \gamma}$. Furthermore, the channel resolvents \begin{equation} G_\beta = G_0 + G_0 T_\beta G_0 , \end{equation} can be related to the corresponding two-particle transition operators \begin{equation} T_\beta = v_\beta + v_\beta G_0 T_\beta , \end{equation} embedded into the three-particle Hilbert space. Using these definitions, Eqs.~(\ref{eq:psi_a}) and (\ref{eq:psi_0}) can be written as triads of Lippmann-Schwinger equations \begin{eqnarray} \label{eq:psi_LS} |b_\alpha \vec{q}_\alpha \rangle^{(+)} & = {} & \delta_{\beta \alpha} |b_\alpha \vec{q}_\alpha \rangle + G_\beta \bar{v}_\beta |b_\alpha \vec{q}_\alpha \rangle^{(+)} , \\ |\vec{p}_\alpha\vec{q}_\alpha \rangle^{(+)} & = {} & (1+ G_0 T_\beta ) |\vec{p}_\alpha\vec{q}_\alpha \rangle + G_\beta \bar{v}_\beta |\vec{p}_\alpha\vec{q}_\alpha \rangle^{(+)} , \end{eqnarray} with $\alpha$ being fixed and $\beta =1,2,3$; they are necessary and sufficient to define the states $|b_\alpha \vec{q}_\alpha \rangle^{(+)}$ and $|\vec{p}_\alpha\vec{q}_\alpha \rangle^{(+)}$ uniquely. However, in scattering problems it may be more convenient to work with the multichannel transition operators $U_{\beta \alpha}$, defined such that their on-shell elements yield the scattering amplitudes, i.e., \begin{equation} \label{eq:U-V} U_{\beta \alpha} |b_\alpha \vec{q}_\alpha \rangle = \bar{v}_\beta |b_\alpha \vec{q}_\alpha \rangle^{(+)}. \end{equation} Our calculations are based on the AGS version~\cite{alt:67a} of three-particle scattering theory. In accordance with Eq.~(\ref{eq:U-V}), it defines the multichannel transition operators $U_{\beta \alpha}$ by the decomposition of the full resolvent $G$ into channel and/or free resolvents as \begin{equation} \label{eq:G-U} G = \delta_{\beta \alpha} G_\alpha + G_\beta U_{\beta \alpha} G_\alpha . \end{equation} The multichannel transition operators $U_{\beta \alpha}$ with fixed $\alpha$ and $\beta = 1,2,3$ are solutions of the three coupled integral equations \begin{equation} \label{eq:AGSnsym_a} U_{\beta \alpha} = \bar{\delta}_{\beta \alpha} G_0^{-1} + \sum_{\gamma=1}^3 \bar{\delta}_{\beta \gamma} T_{\gamma} G_0 U_{\gamma \alpha}. \end{equation} The transition matrix $U_{0 \alpha}$ to final states with three free particles can be obtained from the solutions of Eq.~(\ref{eq:AGSnsym_a}) by quadrature, i.e., \begin{equation} \label{eq:AGSnsym_b} U_{0 \alpha} = G_0^{-1} + \sum_{\gamma=1}^3 T_{\gamma} G_0 U_{\gamma \alpha}. \end{equation} The on-shell matrix elements $\langle b_{\beta} \vec{q}'_\beta |U_{\beta \alpha} |b_\alpha \vec{q}_\alpha \rangle$ are the amplitudes (up to a factor) for elastic ($\beta = \alpha$) and rearrangement ($\beta \neq \alpha$) scattering.
For example, the differential cross section for the $\alpha + (\beta\gamma) \to \beta + (\gamma\alpha)$ reaction in the c.m. system is given by \begin{equation} \label{eq:dcsab} \frac{d \sigma_{\alpha \to \beta}}{d \Omega_\beta} = (2\pi)^4 M_\alpha M_\beta \frac{q'_\beta}{q_\alpha} | \langle b_{\beta} \vec{q}'_\beta |U_{\beta \alpha} |b_\alpha \vec{q}_\alpha \rangle|^2. \end{equation} The cross section for breakup is determined by the on-shell matrix elements $\langle \vec{p}'_{\gamma} \vec{q}'_\gamma |U_{0 \alpha} |b_\alpha \vec{q}_\alpha \rangle$. Thus, in the AGS framework all elastic, rearrangement, and breakup reactions are calculated on the same footing. Finally, we note that the AGS equations can be extended to include also three-body forces, as done in Ref.~\cite{deltuva:09e}. \subsection{Inclusion of the Coulomb interaction} The Coulomb potential $w_C$, due to its long range, does not satisfy the mathematical properties required for the formulation of standard scattering theory as given in the previous subsection for short-range interactions $v_\alpha$. However, in nature the Coulomb potential is always screened at large distances. The comparison of data from typical nuclear physics experiments with theoretical predictions including the full Coulomb interaction is meaningful only if the full and screened Coulomb potentials become physically indistinguishable. This was proved in Refs.~\cite{taylor:74a,semon:75a}, where the screening and renormalization method for the scattering of two charged particles was proposed. We base our treatment of the Coulomb interaction on that idea. Although we use a momentum-space framework, we first choose the screened Coulomb potential in the configuration-space representation as \begin{equation} \label{eq:wr} w_R(r) = w_C(r)\; e^{-(r/R)^n} , \end{equation} and then transform it to momentum space. Here $R$ is the screening radius and $n$ controls the smoothness of the screening. Standard scattering theory is formally applicable to the screened Coulomb potential $w_R$, i.e., the Lippmann-Schwinger equation yields the two-particle transition matrix \begin{equation} \label{eq:tr} t_R = w_R + w_R g_0 t_R , \end{equation} where $g_0$ is the two-particle free resolvent. It was proven in Ref.~\cite{taylor:74a} that in the limit of infinite screening radius $R$ the on-shell screened Coulomb transition matrix (screened Coulomb scattering amplitude) $\langle \mathbf{p}'| t_R | \mathbf{p} \rangle$ with $p'=p$, renormalized by an infinitely oscillating phase factor $z_R^{-1}(p) = e^{2i\phi_R(p)}$, approaches the full Coulomb amplitude $\langle \mathbf{p}'| t_C | \mathbf{p} \rangle$, in general, in the sense of distributions. This convergence in the sense of distributions is sufficient for the description of physical observables in a real experiment, where the incoming beam is not a plane wave but a wave packet; the cross section is therefore determined not directly by the scattering amplitude but by the outgoing wave packet, i.e., by the scattering amplitude averaged over the initial-state physical wave packet. In practical calculations~\cite{alt:02a,deltuva:05a} this averaging is carried out implicitly, replacing the renormalized screened Coulomb amplitude in the $R \to \infty$ limit by the full one, i.e., \begin{equation} \label{eq:taylor2} \lim_{R \to \infty} z_R^{-1}(p) \langle \mathbf{p}'| t_R | \mathbf{p} \rangle \to \langle \mathbf{p}'| t_C | \mathbf{p} \rangle.
\end{equation} Since $z_R^{-1}(p)$ is only a phase factor, the above relations indeed demonstrate that the physical observables become insensitive to the screening, provided it takes place at sufficiently large distances $R$, and, in the $R \to \infty$ limit, coincide with the corresponding quantities referring to the full Coulomb potential. Furthermore, renormalization by $z_{R}^{-\frac12}(p)$ in the $R \to \infty$ limit also relates the screened and full Coulomb wave functions~\cite{gorshkov:61}, i.e., \begin{equation} \label{eq:gorshkov} \lim_{R \to \infty} (1 + g_0 t_R) |\mathbf{p} \rangle z_R^{-\frac12}(p) = |\psi_C^{(+)}(\mathbf{p}) \rangle. \end{equation} The screening and renormalization method based on the above relations can be extended to more complicated systems, albeit with some limitations. We consider the system of three particles with charges $z_\alpha$ of equal sign interacting via pairwise strong short-range and screened Coulomb potentials $v_\alpha + w_{\alpha R}$ with $\alpha$ being 1, 2, or 3. The corresponding two-particle transition matrices are calculated with the full channel interaction \begin{equation} \label{eq:TR} T^{(R)}_\alpha = (v_\alpha + w_{\alpha R}) + (v_\alpha + w_{\alpha R}) G_0 T^{(R)}_\alpha, \end{equation} and the multichannel transition operators $U^{(R)}_{\beta \alpha}$ for elastic and rearrangement scattering are solutions of the AGS equation \begin{equation} U^{(R)}_{\beta \alpha} = \bar{\delta}_{\beta \alpha} G_0^{-1} + \sum_{\gamma=1}^3 \bar{\delta}_{\beta \gamma} T^{(R)}_\gamma G_0 U^{(R)}_{\gamma \alpha} ; \label{eq:Uba} \end{equation} all operators depend parametrically on the Coulomb screening radius $R$. In order to isolate the screened Coulomb contributions to the transition amplitude that diverge in the infinite $R$ limit, we introduce an auxiliary screened Coulomb potential $W^{\mathrm{c\!\:\!.m\!\:\!.}}_{\alpha R}$ between the particle $\alpha$ and the center of mass (c.m.) of the remaining pair. The same screening function has to be used for both Coulomb potentials $w_{\alpha R}$ and $W^{\mathrm{c\!\:\!.m\!\:\!.}}_{\alpha R}$. The corresponding transition matrix \begin{equation} \label{eq:Tcm} T^{\mathrm{c\!\:\!.m\!\:\!.}}_{\alpha R} = W^{\mathrm{c\!\:\!.m\!\:\!.}}_{\alpha R} + W^{\mathrm{c\!\:\!.m\!\:\!.}}_{\alpha R} G^{(R)}_{\alpha} T^{\mathrm{c\!\:\!.m\!\:\!.}}_{\alpha R} , \end{equation} with $ G^{(R)}_{\alpha} = (E+i0-H_0-v_\alpha - w_{\alpha R})^{-1}$, is a two-body-like operator and therefore its on-shell and half-shell behavior in the limit $R \to \infty$ is given by Eqs.~(\ref{eq:taylor2}) and (\ref{eq:gorshkov}). As derived in Ref.~\cite{deltuva:05a}, the three-particle transition operators may be decomposed as \begin{eqnarray} U^{(R)}_{\beta \alpha} &=& \delta_{\beta\alpha} T^{\mathrm{c\!\:\!.m\!\:\!.}}_{\alpha R} + [1 + T^{\mathrm{c\!\:\!.m\!\:\!.}}_{\beta R} G^{(R)}_{\beta}] \tilde{U}^{(R)}_{\beta\alpha} [1 + G^{(R)}_{\alpha} T^{\mathrm{c\!\:\!.m\!\:\!.}}_{\alpha R}] \quad \label{eq:U-T} \\ &=& \delta_{\beta\alpha} T^{\mathrm{c\!\:\!.m\!\:\!.}}_{\alpha R} + (U^{(R)}_{\beta \alpha} - \delta_{\beta\alpha} T^{\mathrm{c\!\:\!.m\!\:\!.}}_{\alpha R}), \label{eq:U-T2} \end{eqnarray} where the auxiliary operator $\tilde{U}^{(R)}_{\beta\alpha}$ is of short range when calculated between on-shell screened Coulomb states.
Thus, the three-particle transition operator $U^{(R)}_{\beta \alpha}$ has a long-range part $\delta_{\beta\alpha} T^{\mathrm{c\!\:\!.m\!\:\!.}}_{\alpha R}$, whereas the remainder $U^{(R)}_{\beta \alpha} - \delta_{\beta\alpha} T^{\mathrm{c\!\:\!.m\!\:\!.}}_{\alpha R}$ is a short-range operator that is externally distorted due to the screened Coulomb waves generated by $[1 + G^{(R)}_{\alpha} T^{\mathrm{c\!\:\!.m\!\:\!.}}_{\alpha R}]$. On shell, neither part has a proper limit as $R \to \infty$, but the limit exists after renormalization by an appropriate phase factor, yielding the transition amplitude for the full Coulomb interaction \begin{eqnarray} \nonumber && \langle b_\beta \mathbf{q}'_\beta | U^{(C)}_{\beta \alpha} |b_\alpha \mathbf{q}_\alpha\rangle = \delta_{\beta \alpha} \langle b_\alpha \mathbf{q}'_\beta |T^{\mathrm{c\!\:\!.m\!\:\!.}}_{\alpha C} |b_\alpha \mathbf{q}_\alpha \rangle \\ & & + \lim_{R \to \infty} [ Z^{-\frac{1}{2}}_{\beta R}(q'_\beta) \langle b_\beta \mathbf{q}'_\beta | ( U^{(R)}_{\beta \alpha} - \delta_{\beta\alpha} T^{\mathrm{c\!\:\!.m\!\:\!.}}_{\alpha R}) |b_\alpha \mathbf{q}_\alpha \rangle Z^{-\frac{1}{2}}_{\alpha R}(q_\alpha) ]. \quad \label{eq:UC2} \end{eqnarray} The first term on the right-hand side of Eq.~(\ref{eq:UC2}) is known analytically~\cite{taylor:74a}; it corresponds to the particle-pair $\alpha$ full Coulomb transition amplitude that results from the implicit renormalization of $T^{\mathrm{c\!\:\!.m\!\:\!.}}_{\alpha R}$ according to Eq.~(\ref{eq:taylor2}). The $R \to \infty$ limit for the remaining part $( U^{(R)}_{\beta \alpha} - \delta_{\beta\alpha} T^{\mathrm{c\!\:\!.m\!\:\!.}}_{\alpha R})$ of the multichannel transition matrix is performed numerically; due to the short-range nature of this term the convergence with increasing screening radius $R$ is fast and the limit is reached with sufficient accuracy at finite $R$; furthermore, it can be calculated using the partial-wave expansion. We emphasize that Eq.~(\ref{eq:UC2}) is by no means an approximation, since it is based on the obviously exact identity (\ref{eq:U-T2}), where the $R \to \infty$ limit of each term exists and is calculated separately. The renormalization factor for $R \to \infty$ is a diverging phase factor \begin{equation} Z_{\alpha R}(q_\alpha) = e^{-2i \Phi_{\alpha R}(q_\alpha)}, \end{equation} where $\Phi_{\alpha R}(q_\alpha)$, though independent of the particle-pair relative angular momentum $l_\alpha$ in the infinite $R$ limit, may be realized by \begin{equation} \label{eq:phiRl} \Phi_{\alpha R}(q_\alpha) = \sigma_{l_\alpha}^{\alpha}(q_\alpha) - \eta_{l_\alpha R}^{\alpha}(q_\alpha), \end{equation} with the diverging screened Coulomb phase shift $\eta_{l_\alpha R}^{\alpha}(q_\alpha)$ corresponding to standard boundary conditions and the proper Coulomb phase shift $\sigma_{l_\alpha}^{\alpha}(q_\alpha)$ referring to the logarithmically distorted proper Coulomb boundary conditions. For the screened Coulomb potential of Eq.~(\ref{eq:wr}) the infinite $R$ limit of $\Phi_{\alpha R}(q_\alpha)$ is known analytically, \begin{equation} \label{eq:phiRlln} \Phi_{\alpha R}(q_\alpha)=\mathcal{K}_{\alpha}(q_\alpha)[\ln{(2q_\alpha R)} - C/n], \end{equation} where $C \approx 0.5772156649$ is the Euler-Mascheroni constant and $\mathcal{K}_{\alpha}(q_\alpha) = \alpha_{e.m.}z_\alpha \sum_\gamma \bar{\delta}_{\gamma\alpha} z_\gamma M_\alpha/q_\alpha$ is the Coulomb parameter, with $\alpha_{e.m.} \approx 1/137$ being the fine-structure constant.
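For orientation, the screened potential and the analytic renormalization phase are simple enough to tabulate directly. The following Python sketch (our own illustrative code, natural units $\hbar = c = 1$, with the charge product and reduced mass as inputs) evaluates Eqs.~(\ref{eq:wr}) and (\ref{eq:phiRlln}):
\begin{verbatim}
import numpy as np

EULER_GAMMA = 0.5772156649      # the constant C above
ALPHA_EM = 1.0 / 137.036        # fine-structure constant

def w_screened(r, z1z2, R, n):
    """Screened Coulomb potential w_R(r) = w_C(r) exp(-(r/R)^n)."""
    return ALPHA_EM * z1z2 / r * np.exp(-(r / R) ** n)

def renorm_phase(q, z1z2, mu, R, n):
    """Phi_R(q) = K(q) [ln(2 q R) - C/n], with the Coulomb
    parameter K(q) = alpha_em * z1z2 * mu / q."""
    kappa = ALPHA_EM * z1z2 * mu / q
    return kappa * (np.log(2.0 * q * R) - EULER_GAMMA / n)

def renorm_factor(q, z1z2, mu, R, n):
    """Diverging phase factor Z_R(q) = exp(-2i Phi_R(q)) used to
    renormalize on-shell screened amplitudes before R -> infinity."""
    return np.exp(-2j * renorm_phase(q, z1z2, mu, R, n))
\end{verbatim}
Multiplying the numerically computed short-range amplitudes by the appropriate $Z_R^{-1/2}$ factors and increasing $R$ until the result stabilises implements the limit of Eq.~(\ref{eq:UC2}) in practice.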
The form of the renormalization phase $\Phi_{\alpha R}(q_\alpha)$ to be used in the actual calculations with finite screening radii $R$ is not unique, but the converged results are independent of the chosen form of $\Phi_{\alpha R}(q_\alpha)$. For breakup reactions we follow a similar strategy. However, the proper three-body Coulomb wave function and its relation to the three-body screened Coulomb wave function are, in general, unknown. This prevents the application of the screening and renormalization method to reactions involving three free charged particles (nucleons or nuclei) in the final state. However, in a system of two charged particles and a neutral one with $z_\rho = 0$, the final-state Coulomb distortion becomes again a two-body problem with the screened Coulomb transition matrix \begin{equation} T_{\rho R} = w_{\rho R} + w_{\rho R} G_0 T_{\rho R}. \end{equation} This makes the channel $\rho$, corresponding to the correlated pair of charged particles, the most convenient choice for the description of the final breakup state. As shown in Ref.~\cite{deltuva:05d}, the AGS breakup operator \begin{equation}\label{eq:U0a} U^{(R)}_{0\alpha} = {} G_0^{-1} + \sum_{\gamma=1}^3 T^{(R)}_{\gamma} G_0 U^{(R)}_{\gamma \alpha} , \end{equation} can be decomposed as \begin{equation}\label{eq:U0t} U^{(R)}_{0\alpha} = {} (1 + T_{\rho R} G_{0}) \tilde{U}^{(R)}_{0\alpha} (1 + G^{(R)}_{\alpha} T^{\mathrm{c\!\:\!.m\!\:\!.}}_{\alpha R}), \end{equation} where the reduced operator $\tilde{U}^{(R)}_{0\alpha}$, calculated between screened Coulomb distorted initial and final states, is of finite range. In the full breakup operator $U^{(R)}_{0 \alpha}$ the external distortions show up in the screened Coulomb waves generated by $(1 + G^{(R)}_{\alpha} T^{\mathrm{c\!\:\!.m\!\:\!.}}_{\alpha R})$ in the initial state and by $(1 + T_{\rho R} G_{0})$ in the final state; neither wave function has a proper limit as $R \to \infty$. Therefore the full breakup transition amplitude in the case of the unscreened Coulomb potential is obtained via the renormalization of the on-shell breakup transition matrix $U^{(R)}_{0 \alpha}$ in the infinite $R$ limit \begin{equation} \langle \mathbf{p}'_\rho \mathbf{q}'_\rho | U^{(C)}_{0 \alpha} |b_\alpha \mathbf{q}_\alpha \rangle = \lim_{R \to \infty} [ z^{-\frac{1}{2}}_{\rho R}(p'_\rho) \langle \mathbf{p}'_\rho \mathbf{q}'_\rho | U^{(R)}_{0 \alpha} |b_\alpha \mathbf{q}_\alpha \rangle Z_{\alpha R}^{-\frac{1}{2}}(q_\alpha )], \label{eq:UC1a} \end{equation} where $\mathbf{p}'_\rho$ is the relative momentum between the charged particles in the final state, $\mathbf{q}'_\rho$ the corresponding particle-pair relative momentum, and \begin{equation} \label{eq:phiRp} z_{\rho R}(p'_\rho) = e^{-2i\kappa_\rho(p'_\rho)[\ln{(2p'_\rho R)} - C/n]} , \end{equation} the final-state renormalization factor, with $\kappa_\rho(p'_\rho)$ the Coulomb parameter of the pair $\rho$. The limit in Eq.~(\ref{eq:UC1a}) has to be performed numerically but, due to the short-range nature of the breakup operator, the convergence with increasing screening radius $R$ is fast and the limit is reached with sufficient accuracy at finite $R$. Thus, to include the Coulomb interaction via the screening and renormalization method, one only needs to solve standard scattering theory equations.
\subsection{Practical realization} We calculate the short-range parts of the elastic, rearrangement, and breakup scattering amplitudes (\ref{eq:UC2}) and (\ref{eq:UC1a}) by solving the standard scattering equations (\ref{eq:Uba}), (\ref{eq:Tcm}), and (\ref{eq:U0a}) with a finite Coulomb screening radius $R$. We work in the momentum-space partial-wave basis~\cite{deltuva:phd}, i.e., we use the three sets $|p_\alpha q_\alpha \nu_\alpha \rangle \equiv |p_\alpha q_\alpha (l_\alpha \{ [L_\alpha(s_\beta s_\gamma)S_\alpha] I_\alpha s_\alpha \} K_\alpha) { J M} \rangle$ with $(\alpha,\beta,\gamma)$ being cyclic permutations of (1,2,3). Here $s_\alpha$ is the spin of particle $\alpha$, $L_\alpha$ and $l_\alpha$ are the orbital angular momenta associated with $p_\alpha$ and $q_\alpha$, respectively, whereas $S_\alpha$, $I_\alpha$, and $K_\alpha$ are intermediate angular momenta that are coupled to a total angular momentum $J$ with projection $M$. All discrete quantum numbers are abbreviated by $\nu_\alpha$. The integration over the momentum variables is discretized using Gaussian quadrature rules, thereby converting the system of integral equations for each $J$ and parity $\Pi = (-)^{L_\alpha +l_\alpha}$ into a very large system of linear algebraic equations. Due to their huge dimension, these linear systems cannot be solved directly. Instead we expand the AGS transition operators (\ref{eq:Uba}) into the corresponding Neumann series \begin{equation} \label{eq:neumann} U^{(R)}_{\beta \alpha} = \bar{\delta}_{ \beta \alpha } G_0^{-1} + \sum_{\gamma=1}^3 \bar{\delta}_{ \beta \gamma } T^{(R)}_\gamma \bar{\delta}_{ \gamma \alpha } + \sum_{\gamma=1}^3 \bar{\delta}_{ \beta \gamma } T^{(R)}_\gamma G_0 \sum_{\sigma=1}^3 \bar{\delta}_{\gamma \sigma } T^{(R)}_\sigma \bar{\delta}_{\sigma \alpha} + \cdots , \end{equation} which is summed up by the iterative Pad\'e method~\cite{chmielewski:03a}; this yields an accurate solution of Eq.~(\ref{eq:Uba}) even when the Neumann series (\ref{eq:neumann}) diverges. Each two-particle transition operator $ T^{(R)}_{\gamma}$ is evaluated in its proper basis $|p_\gamma q_\gamma \nu_\gamma \rangle$; thus, transformations between all three bases are needed. The calculation of the involved overlap functions $ \langle p_\beta q_\beta \nu_\beta |p_\alpha q_\alpha \nu_\alpha \rangle$ follows closely the calculation of the three-nucleon permutation operators discussed in Refs.~\cite{deltuva:phd,gloeckle:83a}. A special treatment~\cite{chmielewski:03a,deltuva:phd} is needed for the integrable singularities arising from the pair bound-state poles in $ T^{(R)}_{\gamma}$ and from $G_0$. Furthermore, we have to make sure that $R$ is large enough to achieve (after renormalization) the $R$-independence of the results up to the desired accuracy. However, those $R$ values are larger than the range of the nuclear interaction, resulting in a slower convergence of the partial-wave expansion. As we found in Ref.~\cite{deltuva:05a}, the practical success of the screening and renormalization method depends very much on the choice of the screening function, in our case on the power $n$ in Eq.~(\ref{eq:wr}). We want to ensure that the screened Coulomb potential $w_R$ approximates well the true Coulomb one $w_C$ for distances $r<R$ and simultaneously vanishes rapidly for $r>R$, providing a comparatively fast convergence of the partial-wave expansion.
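Before turning to the choice of the screening function, the role of the Pad\'e resummation can be illustrated with a one-dimensional caricature (ours, not the production code). The toy problem "solves" $U = b + kU$, whose Neumann series $\sum_m b k^m$ diverges for $|k|>1$ even though the exact solution $U=b/(1-k)$ exists; the Shanks transformation, the lowest-order member of the Pad\'e/epsilon family, recovers it from the divergent partial sums:
\begin{verbatim}
import numpy as np

# Toy analogue of Eq. (neumann): partial sums of the Neumann series of
# U = b + k*U.  For |k| > 1 the series diverges, its resummation does not.
def shanks(S):
    """Shanks transformation, exact for geometric series:
    T_n = (S_{n+1} S_{n-1} - S_n^2) / (S_{n+1} + S_{n-1} - 2 S_n)."""
    S = np.asarray(S, dtype=float)
    return (S[2:] * S[:-2] - S[1:-1] ** 2) / (S[2:] + S[:-2] - 2.0 * S[1:-1])

b, k = 1.0, 1.5                                    # divergent case, |k| > 1
partial = np.cumsum([b * k ** m for m in range(8)])
print("last partial sum:", partial[-1])            # grows without bound
print("Shanks resummed :", shanks(partial)[-1])    # -2.0
print("exact b/(1-k)   :", b / (1.0 - k))          # -2.0
\end{verbatim}
As for the screening itself, assuming Eq.~(\ref{eq:wr}) has the form $w_R(r) = w(r)\,e^{-(r/R)^n}$ (our reading of the surrounding text; the precise expression is given earlier in the paper), the two requirements just stated, Coulomb accuracy for $r<R$ and a rapid falloff for $r>R$, can be inspected numerically:
\begin{verbatim}
import math

# Assumed screening form: w_R(r) = w(r) * exp(-(r/R)^n), so that the ratio
# w_R/w_C = exp(-(r/R)^n) directly measures the screening distortion.
def screening(r_over_R, n):
    return math.exp(-r_over_R ** n)

for n in (1, 4, 8):
    print(f"n={n}: w_R/w_C = {screening(0.5, n):.4f} at r=R/2, "
          f"{screening(2.0, n):.1e} at r=2R")
# n = 1 already distorts the potential well inside R and falls off slowly;
# 3 <= n <= 8 keeps w_R close to w_C for r < R while vanishing rapidly,
# yet smoothly (unlike a sharp cutoff), beyond R.
\end{verbatim}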
As shown in Ref.~\cite{deltuva:05a}, simple exponential screening $(n=1)$ fails on both counts, whereas the sharp cutoff $(n \to \infty)$ yields a slow, oscillating convergence with the screening radius $R$. However, we found that values of $3 \le n \le 8$ provide a sufficiently smooth and rapid screening around $r=R$. The screening functions for different $n$ values are compared in Ref.~\cite{deltuva:05a}, together with results demonstrating the superiority of this choice: for $3 \le n \le 8$ the convergence with the screening radius $R$, at which the short-range part of the amplitudes is calculated, is fast enough that the convergence of the partial-wave expansion, though slower than for the nuclear interaction alone, can be achieved, and there is no need to work in a plane-wave basis. Here we use $n=4$ and show in Figs.~\ref{fig:Rad} and \ref{fig:Radb} a few examples of the $R$-convergence of the $\alpha$-deuteron scattering observables calculated in a three-body model $(\alpha,p,n)$; the nuclear interaction is taken from Ref.~\cite{deltuva:06b}. The convergence with $R$ is impressively fast for both $\alpha$-deuteron elastic scattering and breakup. In addition we note that the Coulomb effect is very large and clearly improves the description of the experimental data, especially for the differential cross section in the $\alpha$-deuteron breakup reaction. This is due to the shift of the $\alpha p$ $P$-wave resonance position when the $\alpha p$ Coulomb repulsion is included, which leads to corresponding changes in the structure of the observables. \begin{figure}[t] \sidecaption[t] \includegraphics[scale=.55]{Rd48c.eps} \caption{ Differential cross section and deuteron vector analyzing power $iT_{11}$ of the $\alpha d$ elastic scattering at 4.81~MeV deuteron lab energy as functions of the c.m. scattering angle. Convergence with the screening radius $R$ used to calculate the short-range part of the amplitudes is studied: $R= 5$~fm (dotted curves), $R= 10$~fm (dash-dotted curves), and $R= 15$~fm (solid curves). Results without Coulomb are given by dashed curves. The experimental data are from Refs.~\cite{bruno:80,gruebler:70a}. } \label{fig:Rad} \end{figure} \begin{figure}[t] \sidecaption[t] \includegraphics[scale=.55]{Ra15.eps} \caption{ Fivefold differential cross section of the $\alpha d$ breakup reaction at 15~MeV $\alpha$ lab energy for several combinations of $\alpha$ and proton scattering angles as function of the final-state energy variable $S$ with $dS = (dE_\alpha^2 + dE_p^2)^{1/2}$. Convergence with the screening radius $R$ is studied: $R= 10$~fm (dotted curves), $R= 15$~fm (dash-dotted curves), and $R= 20$~fm (solid curves). Results without Coulomb are given by dashed curves. The experimental data are from Ref.~\cite{koersner:77}. } \label{fig:Radb} \end{figure} In addition to the internal reliability criterion of the screening and renormalization method --- the convergence with $R$ --- we note that our results for proton-deuteron elastic scattering~\cite{deltuva:05b} agree well over a broad energy range with those of Ref.~\cite{kievsky:01a}, obtained from the variational configuration-space solution of the three-nucleon Schr\"odinger equation with unscreened Coulomb potential and with the proper Coulomb boundary conditions imposed explicitly. \section{Configuration space}\label{sec:r} In contrast to the momentum-space representation, the Coulomb interaction has a trivial expression in configuration space and thus may seem easier to handle.
However, the major obstacle for the configuration-space treatment of the scattering problem is the complexity of the asymptotic structure of the wave function, which becomes considerably more involved once three-particle breakup is possible. Although for short-range interactions the analytical behavior of the breakup asymptote of the configuration-space wave function is well established, this is not the case once long-range interactions (like Coulomb) are present. Therefore a method which enables the scattering problem to be solved without explicit use of the asymptotic form of the wave function is of great importance. The complex scaling method has been proposed~\cite{Nuttal_csm,CSM_71} and successfully applied to calculate resonance positions~\cite{Moiseyev} using bound-state boundary conditions. As has been demonstrated recently, this method can also be extended to the scattering problem~\cite{CSM_Curdy_04,Elander_CSM}. We demonstrate here that it may also be successfully applied to solve three-particle scattering problems which include the long-range Coulomb interaction together with short-range optical potentials. \subsection{Faddeev-Merkuriev equations} As in the momentum-space formalism described above, Jacobi coordinates are used in configuration space to separate the center of mass of the three-particle system. One has three equivalent sets of three-particle Jacobi coordinates \begin{eqnarray} \mathbf{x}_{\alpha } &=&\sqrt{\frac{2m_{\beta }m_{\gamma }}{(m_{\beta }+m_{\gamma })m}}(\mathbf{r}_{\gamma }-\mathbf{r}_{\beta }) , \\ \mathbf{y}_{\alpha } &=&\sqrt{\frac{2m_{\alpha }(m_{\beta }+m_{\gamma })}{% (m_{\alpha }+m_{\beta }+m_{\gamma })m}}(\mathbf{r}_{\alpha }-\frac{m_{\beta }% \mathbf{r}_{\beta }+m_{\gamma }\mathbf{r}_{\gamma }}{m_{\beta }+m_{\gamma }}) , \nonumber \end{eqnarray}% where $\mathbf{r}_{\alpha }$ and $m_{\alpha }$ are the individual particle position vectors and masses, respectively. The choice of the mass scale $m$ is arbitrary. The three-particle problem is formulated here using the Faddeev-Merkuriev (FM) equations~\cite{Merkuriev_80}: \begin{eqnarray} (E-H_{0}-\sum_{\kappa =1}^{3}w_{\kappa }^{l})\psi _{\alpha }=(v_{\alpha }+w_{\alpha }^{s})(\psi _{\alpha }+\psi _{\beta }+\psi _{\gamma })\nonumber , \\ (E-H_{0}-\sum_{\kappa =1}^{3}w_{\kappa }^{l})\psi _{\beta }=(v_{\beta }+w_{\beta }^{s})(\psi _{\alpha }+\psi _{\beta }+\psi _{\gamma }) , \\ (E-H_{0}-\sum_{\kappa =1}^{3}w_{\kappa }^{l})\psi _{\gamma }=(v_{\gamma }+w_{\gamma }^{s})(\psi _{\alpha }+\psi _{\beta }+\psi _{\gamma }) , \nonumber% \end{eqnarray}% where the Coulomb interaction is split into two parts (short and long range), $% w_{\alpha }=w_{\alpha }^{s}+w_{\alpha }^{l}$, by means of some arbitrary cut-off function $\chi _{\alpha }(x_{\alpha },y_{\alpha })$: \begin{equation} w_{\alpha }^{s}(x_{\alpha },y_{\alpha })=w_{\alpha }(x_{\alpha })\chi _{\alpha }(x_{\alpha },y_{\alpha }) ,\qquad w_{\alpha }^{l}(x_{\alpha },y_{\alpha })=w_{\alpha }(x_{\alpha })[1-\chi _{\alpha }(x_{\alpha },y_{\alpha })] . \end{equation}% The cut-off function is designed to shift the full Coulomb interaction into the $w_{\alpha }^{s}$ term when $x_{\alpha }$ is small, whereas the $w_{\alpha }^{l}$ term acquires the full Coulomb interaction when $x_{\alpha }$ becomes large and $y_{\alpha }<x_{\alpha }$.
The practical choice of the function $\chi _{\alpha }(x_{\alpha },y_{\alpha })$ has been proposed in~\cite{Merkuriev_80}: \begin{equation} \chi _{\alpha }(x_{\alpha },y_{\alpha })=\frac{2}{1+\exp{\left(\frac{[x_{\alpha }/x_0]^\mu}{1+y_{\alpha }/y_0}\right)}} , \end{equation}% with free parameters $x_{0},y_{0}$ of a size comparable with the charge radii of the respective binary systems; the value of the parameter $\mu$ must be larger than 1 and is usually set to $\mu\approx2$.
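A short numerical look at this cut-off (with illustrative parameter values, not ones fitted to any physical system) confirms the limiting behavior used in the splitting above:
\begin{verbatim}
import math

# Merkuriev cut-off chi(x, y) = 2 / (1 + exp((x/x0)^mu / (1 + y/y0))).
# The parameter values below are illustrative only.
def chi(x, y, x0=1.0, y0=10.0, mu=2.0):
    return 2.0 / (1.0 + math.exp((x / x0) ** mu / (1.0 + y / y0)))

print(chi(0.1, 5.0))    # ~ 1: small x, so w_s carries the full Coulomb force
print(chi(20.0, 5.0))   # ~ 0: large x with y < x, so w_l takes over
print(chi(3.0, 50.0))   # in between: large y softens the cut-off in x
\end{verbatim}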
In such a way the so-called Faddeev amplitude $\psi _{\alpha }$ acquires the full asymptotic behavior of the binary $\alpha -(\beta \gamma )$ channels, i.e., \begin{eqnarray} \psi _{\alpha }(\mathbf{x}_{\alpha },\mathbf{y}_{\alpha }\rightarrow \infty )=\delta _{\kappa ,\alpha }\varphi _{\alpha }^{i_{\kappa }}(\mathbf{x}_{\alpha })\phi _{\alpha }^{i_{\kappa },in}(\mathbf{y}_{\alpha })&+&\sum_{j_{\alpha }}f_{j_{\alpha } i_{\kappa }}(\mathbf{x}_{\alpha },\mathbf{y}_{\alpha })\varphi _{\alpha }^{j_{\alpha }}(\mathbf{x}_{\alpha })\phi _{\alpha }^{j_{\alpha },out}(\mathbf{y}_{\alpha })\nonumber \\ &+&A_{i_{\kappa }}(\mathbf{x}_{\alpha },\mathbf{y}% _{\alpha })\Phi _{i_{\kappa }}^{out}(\mathbf{\rho }) , \end{eqnarray} where the hyperradius is $\rho =\sqrt{x_{\alpha }^{2}+y_{\alpha }^{2}}$. The expression $\varphi _{\alpha }^{i_{\alpha }}(\mathbf{x}_{\alpha })\phi _{\alpha }^{i_{\kappa },in}(\mathbf{y}_{\alpha })$ represents the incoming wave for particle $\alpha $ impinging on the pair $(\beta \gamma )$ in the bound state $i_{\alpha }$, with $\varphi _{\alpha }^{i_{\alpha }}(\mathbf{x}_{\alpha })$ being the normalized wave function of the bound state $i_{\alpha }$, i.e., an eigenstate of the two-body Hamiltonian $H_{0}+w_{\alpha }+v_{\alpha }+W_{\alpha }^{c.m.}$. The $\phi _{\alpha }^{j_{\alpha },out}(\mathbf{y}_{\alpha })$ and $\Phi _{i_{\kappa }}^{out}(\mathbf{\rho }_{\alpha })$ represent outgoing waves for the binary and three-particle breakup channels, respectively. Asymptotically, one has the following behavior: \begin{eqnarray} \varphi _{\alpha }^{i_{\alpha }}(x_{\alpha } &\rightarrow &\infty )\propto \exp (-k_{i_{\alpha }}x_{\alpha }) , \nonumber \\ \phi _{\alpha }^{i_{\alpha },out}(y_{\alpha } &\rightarrow &\infty )\propto \exp (iq_{i_{\alpha }}y_{\alpha }) , \\ \Phi _{i_{\alpha }}^{out}(\rho &\rightarrow &\infty )\propto \exp (iK\rho ) , \label{eq:assf} \end{eqnarray} with $k_{i_{\alpha }}=\sqrt{-\varepsilon _{_{i_{\alpha }}}m}$ the momentum associated with the two-body bound state $i_{\alpha }$ of negative binding energy $\varepsilon _{_{i_{\alpha }}}$, and $q_{i_{\alpha }}=\sqrt{(E-\varepsilon _{_{i_{\alpha }}})m}$ the relative scattering momentum of the $\alpha -(\beta \gamma )$ binary channel, whereas $K=\sqrt{mE}$ is the three-particle breakup momentum (three-particle breakup is possible only for positive energy $E$). When considering the scattering of particle $\alpha $ on the bound state $i_{\alpha }$ of the pair $(\beta \gamma )$, it is convenient to separate the incoming wave $\psi _{\alpha }^{i_{\alpha },in}=\varphi _{\alpha }^{i_{\alpha }}(\mathbf{x}_{\alpha })\phi _{\alpha }^{i_{\alpha },in}(\mathbf{y}_{\alpha })$ explicitly, by introducing: \begin{eqnarray} \psi _{\alpha }^{i_{\alpha },out} &=&\psi _{\alpha }^{i_{\alpha }}-\varphi _{\alpha }^{i_{\alpha }}(\mathbf{x}_{\alpha })\phi _{\alpha }^{i_{\alpha },in}(\mathbf{y}_{\alpha }) , \\ \psi _{\beta }^{i_{\alpha },out} &=&\psi _{\beta }^{i_{\alpha }} ,\qquad \beta \neq \alpha . \nonumber \end{eqnarray} Then the Faddeev-Merkuriev equations can be rewritten in the so-called driven form: \begin{eqnarray} (E-H_{0}-\sum_{\kappa =1}^{3}w_{\kappa }^{l})\psi _{\alpha }^{out}&=&(v_{\alpha }+w_{\alpha }^{s})(\psi _{\alpha }^{out}+\psi _{\beta }^{out}+\psi _{\gamma }^{out})+\left[ \sum_{\kappa =1}^{3}w_{\kappa }^{l}-w_{\alpha }-W_{\alpha }^{c.m.}\right] \psi _{\alpha }^{in} , \nonumber \\ (E-H_{0}-\sum_{\kappa =1}^{3}w_{\kappa }^{l})\psi _{\beta }^{out}&=&(v_{\beta }+w_{\beta }^{s})(\psi _{\alpha }^{out}+\psi _{\beta }^{out}+\psi _{\gamma }^{out}+\psi _{\alpha }^{in}) , \label{eq:drive_FM} \\ (E-H_{0}-\sum_{\kappa =1}^{3}w_{\kappa }^{l})\psi _{\gamma }^{out}&=&(v_{\gamma }+w_{\gamma }^{s})(\psi _{\alpha }^{out}+\psi _{\beta }^{out}+\psi _{\gamma }^{out}+\psi _{\alpha }^{in}) . \nonumber \end{eqnarray} In these expressions the index of the incoming state $i_{\alpha }$ has been omitted in all Faddeev components $\psi _{\alpha }^{in}$ and $\psi _{\alpha }^{out}$. \subsection{Complex scaling} The next step is to perform the complex scaling operation, i.e., to scale all distances $x$ and $y$ by a constant complex factor $e^{i\theta }$, such that both $Re(e^{i\theta })$ and $Im(e^{i\theta })$ are positive (the angle $\theta $ must be chosen in the first quadrant to satisfy this condition). The complex scaling operation implies, in particular, the analytical continuation of the interaction potentials, $v_{\alpha }(x_{\alpha }e^{i\theta })$ and $w_{\alpha }(x_{\alpha }e^{i\theta })$; therefore the complex scaling method may be used only if these potentials are analytic. It is easy to see that the solutions of the complex-scaled equations coincide with the ones obtained without complex scaling to which the complex scaling operation is applied: $\left[ \psi (x_{\alpha },y_{\alpha })\right] ^{CS}=\psi (x_{\alpha }e^{i\theta },y_{\alpha }e^{i\theta })$. Indeed, one easily verifies that all the outgoing waves of Eq.~(\ref{eq:assf}) become exponentially bounded after the complex scaling operation: \begin{eqnarray} \left[ \varphi _{\alpha }^{i_{\alpha }}(x_{\alpha }\rightarrow \infty )% \right] ^{CS} &\propto &\exp (-k_{i_{\alpha }}x_{\alpha }\cos \theta ) , \nonumber\\ \left[ \phi _{\alpha }^{i_{\alpha },out}(y_{\alpha }\rightarrow \infty )% \right] ^{CS} &\propto &\exp (-q_{i_{\alpha }}y_{\alpha }\sin \theta ) , \\ \left[ \Phi _{i_{\alpha }}^{out}(\rho \rightarrow \infty )\right] ^{CS} &\propto &\exp (-K\rho \sin \theta ) . \nonumber \end{eqnarray} The incoming wave, on the contrary, diverges in $y_{\alpha }$ after the complex scaling: \begin{equation} \left[ \phi _{\alpha }^{i_{\alpha },in}(y_{\alpha }\rightarrow \infty )% \right] ^{CS}\propto \exp (+q_{i_{\alpha }}y_{\alpha }\sin \theta ) . \end{equation}
However, these terms appear only on the right-hand sides of the driven Faddeev-Merkuriev equations~(\ref{eq:drive_FM}), pre-multiplied by the potential terms, and under certain conditions they vanish outside some finite (resolution) domain $x_{\alpha }\in \lbrack 0,x^{\max }]$ and $y_{\alpha }\in \lbrack 0,y^{\max }]$. Let us consider the long-range behavior of the term $\left[ (v_{\beta }+w_{\beta }^{s})\psi _{\alpha }^{in}\right] ^{CS}$. Since the interaction terms $v_{\beta }$ and $w_{\beta }^{s}$ are of short range, the only region where this term might not vanish is along the $y_{\beta }$ axis in the $(x_{\beta },y_{\beta })$ plane, i.e., for $x_{\beta }\ll y_{\beta }$. On the other hand, $x_{\alpha }(\mathbf{x}_{\beta }\mathbf{,y}_{\beta })\approx \sqrt{m_{\gamma }/(m_{\gamma }+m_{\beta })}\sqrt{M/(m_{\gamma }+m_{\alpha })}y_{\beta }$ and $y_{\alpha }(\mathbf{x}_{\beta }\mathbf{,y}_{\beta })\approx \sqrt{m_{\beta }/(m_{\gamma }+m_{\beta })}\sqrt{m_{\alpha }/(m_{\gamma }+m_{\alpha })}y_{\beta }$ under the condition $x_{\beta }\ll y_{\beta }$. Then one has: \begin{equation} \small{ \left[ (v_{\beta }+w_{\beta }^{s})\psi _{\alpha }^{i_{\alpha },in}\right] ^{CS}_{x_{\beta }\ll y_{\beta }}\propto \exp\left(-k_{i_{\alpha }}\sqrt{\frac{% m_{\gamma }M}{(m_{\gamma }+m_{\beta })(m_{\gamma }+m_{\alpha })}}y_{\beta }\cos \theta +q_{i_{\alpha }}\sqrt{\frac{m_{\alpha }m_{\beta }}{% (m_{\gamma }+m_{\beta })(m_{\gamma }+m_{\alpha })}}y_{\beta }\sin \theta \right)} . \end{equation} This term becomes confined to a finite domain in the $(x_{\beta },y_{\beta })$ plane if the condition \begin{equation} \tan \theta <\sqrt{\frac{m_{\gamma }M}{m_{\alpha }m_{\beta }}}\frac{% k_{i_{\alpha }}}{q_{i_{\alpha }}}=\sqrt{\frac{m_{\gamma }M}{m_{\alpha }m_{\beta }}}\sqrt{\frac{\left\vert B_{_{i_{\alpha }}}\right\vert }{% E+\left\vert B_{_{i_{\alpha }}}\right\vert }} \label{max_theta} \end{equation} is satisfied. This implies that for rather large scattering energies $E$, well above the breakup threshold, one is obliged to use rather small values of the complex scaling parameter $\theta $. The term $\left[\sum_{\kappa =1}^{3}w_{\kappa }^{l}-w_{\alpha }-W_{\alpha }^{c.m.}\right] \psi _{\alpha }^{i_{\alpha },in}$ is, in principle, not exponentially bounded after the complex scaling. It represents the higher-order corrections to the residual Coulomb interaction between particle $\alpha $ and the bound pair $(\beta \gamma )$. These corrections are weak, $o(1/y^{2})$, and might be neglected by suppressing this term close to the border of the resolution domain. An alternative possibility is to use incoming wave functions that account not only for the bare $\alpha -(\beta \gamma )$ Coulomb interaction but also for the higher-order polarization corrections.
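To get a feeling for how restrictive this bound is, Eq.~(\ref{max_theta}) can be evaluated directly; the sketch below uses illustrative masses (in MeV) and a deuteron-like binding energy:
\begin{verbatim}
import math

# Maximum complex-scaling angle from Eq. (max_theta):
# tan(theta) < sqrt(m_gamma*M/(m_alpha*m_beta)) * sqrt(|B|/(E+|B|)).
# Particle alpha is the spectator; (beta, gamma) form the bound pair.
def theta_max_deg(m_alpha, m_beta, m_gamma, B, E):
    M = m_alpha + m_beta + m_gamma
    t = math.sqrt(m_gamma * M / (m_alpha * m_beta)) \
        * math.sqrt(abs(B) / (E + abs(B)))
    return math.degrees(math.atan(t))

# deuteron-like (n, p) pair with |B| = 2.22 MeV and a 12C-like spectator
for E in (1.0, 10.0, 30.0):   # c.m. energies above the breakup threshold
    print(E, theta_max_deg(11175.0, 939.0, 938.0, -2.22, E))
# the allowed scaling angle shrinks quickly as the scattering energy grows
\end{verbatim}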
\bigskip Extraction of the scattering observables is realized by employing Green's theorem. One can show that the strong-interaction amplitude for the $\alpha -(\beta \gamma )$ collision is: \begin{equation} f_{j_{\alpha } i_{\kappa }}(\mathbf{x}_{\alpha },\mathbf{y}_{\alpha })=-\frac{m}{q_{j_{\alpha}}}\int \int \left[ (\psi _{\alpha }^{j_{\alpha },in})^*\right] ^{CS}(\overline{v}% _{\alpha }+\overline{w}_{\alpha }-W_{\alpha }^{c.m.})^{CS}\left[ \Psi _{i_{\kappa }}\right] ^{CS}e^{6i\theta }d^{3}\mathbf{x}_{i}d^{3}\mathbf{y}% _{i} \label{3b_amp_nc} , \end{equation}% with $\left[ \Psi _{i_{\kappa }}\right] ^{CS}=\left[ \psi _{\alpha }^{i_{\kappa },out}+\psi _{\beta }^{i_{\kappa },out}+\psi _{\gamma }^{i_{\kappa },out}+\psi _{\alpha }^{i_{\kappa },in}\right] ^{CS}$ being the total wave function of the three-body system. In the last expression the term containing the product of two incoming waves converges most slowly; an even stronger constraint than Eq.~(\ref{max_theta}) must be imposed on the complex scaling angle to make this term integrable on the finite domain. Nevertheless, this term contains only a product of two-body wave functions and can be evaluated without complex scaling, prior to the three-body solution. The appropriate form of the integral~(\ref{3b_amp_nc}) then becomes: \begin{eqnarray} f_{j_{\alpha } i_{\kappa }}(\mathbf{x}_{\alpha },\mathbf{y}_{\alpha }) &=&-\frac{m}{q_{j_{\alpha}}}\int \int \left[ (\psi _{\alpha }^{j_{\alpha },in})^*\right] ^{CS}(% \overline{v}_{\alpha }+\overline{w}_{\alpha }-W_{\alpha }^{c.m.})^{CS}\left[ \Psi _{i_{\kappa }}-\psi _{\alpha }^{j_{\alpha },in}\right] ^{CS}e^{6i\theta }d^{3}\mathbf{x}_{i}d^{3}\mathbf{y}_{i} \nonumber\\ &&-\frac{m}{q_{j_{\alpha}}}\int \int (\psi _{\alpha }^{j_{\alpha },in})^*(\overline{v}_{\alpha }+% \overline{w}_{\alpha }-W_{\alpha }^{c.m.})\psi _{\alpha }^{j_{\alpha },in}d^{3}\mathbf{x}_{i}d^{3}\mathbf{y}_{i} . \end{eqnarray} \bigskip \bigskip \section{Application to three-body nuclear reactions} The two methods presented in Sections~\ref{sec:p} and~\ref{sec:r} were first applied to proton-deuteron elastic scattering and breakup~\cite{deltuva:05a,deltuva:05d,deltuva:09e,lazauskas:11a}. The three-nucleon system is the only nuclear three-particle system that may be considered realistic in the sense that the interactions are given by high-precision potentials valid over a broad energy range. Nevertheless, in the same way one considers the nucleon a single particle by neglecting its inner quark structure, in a further approximation one can consider a cluster of nucleons (a composite nucleus) to be a single particle that interacts with other nucleons or nuclei via effective potentials whose parameters are determined from two-body data. A classical example is the $\alpha$ particle, a tightly bound four-nucleon cluster. As shown in Figs.~\ref{fig:Rad} and \ref{fig:Radb} and in Ref.~\cite{deltuva:06b}, the description of the $(\alpha,p,n)$ three-particle system with real potentials is quite successful at low energies but becomes less reliable with increasing energy, where the inner structure of the $\alpha$ particle can no longer be neglected. At higher energies the nucleon-nucleus or nucleus-nucleus interactions are modeled by optical potentials (OP) that provide quite an accurate description of the considered two-body system in a given narrow energy range; these potentials are complex in order to account for the inelastic excitations not explicitly included in the model space.
The methods based on the Faddeev/AGS equations can also be applied in this case; however, the potentials within the pairs that are bound in the initial or final channel must remain real. The comparison of the two methods based on the AGS and FM equations will be performed in Section~\ref{sec:compare} for such an interaction model with OP. In the past, the description of three-body-like nuclear reactions relied on a number of approximate methods. Well-known examples are the distorted-wave Born approximation (DWBA), various adiabatic approaches \cite{johnson:70a}, and the continuum-discretized coupled-channels (CDCC) method~\cite{austern:87}. Compared to them, the present methods based on exact Faddeev or AGS equations, being more technically and numerically involved, have some disadvantages. Namely, their application in the present technical realization is so far limited to systems made of two nucleons and one heavier cluster. The reason is that the interaction between two heavier clusters involves very many angular momentum states, and the partial-wave convergence cannot be achieved. The comparison between traditional nuclear reaction approaches and the momentum-space Faddeev/AGS methods for various neutron + proton + nucleus systems is summarized in Section~\ref{sec:cdcc}. On the other hand, the Faddeev and AGS methods may be more flexible with respect to the dynamic input and thereby allow one to test novel aspects of the nuclear interaction not accessible with the traditional approaches. A few examples will be presented in Section \ref{sec:nonloc}. \subsection{Numerical comparison of AGS and FM methods} \label{sec:compare} As an example we consider the $n+p+{}^{12}$C system. For the $n$-$p$ interaction we use the realistic AV18 model~\cite{wiringa:95a}, which accurately reproduces the available two-nucleon scattering data and the deuteron binding energy. To study not only $d+{}^{12}$C but also $p+{}^{13}$C scattering and transfer reactions, we use a $n$-${}^{12}$C potential that is real in the ${}^2P_{1/2}$ partial wave and supports the ground state of ${}^{13}$C with 4.946 MeV binding energy; the parameters are taken from Ref.~\cite{nunes:11b}. In all other partial waves we use the $n$-${}^{12}$C optical potential from Ref.~\cite{CH89}, taken at half the deuteron energy in the $d+{}^{12}$C channel. The $p$-${}^{12}$C optical potential is also taken from Ref.~\cite{CH89}, however, at the proton energy in the $p+{}^{13}$C channel. We admit that, depending on the reaction of interest, other choices of energies for the OP may be more appropriate; the aim of the present study, however, is the comparison of the methods and not the description of the experimental data, although the latter are also included in the plots. We consider $d+{}^{12}$C scattering at 30 MeV deuteron lab energy and $p+{}^{13}$C scattering at 30.6 MeV proton lab energy; they correspond to the same energy in the c.m. system. First we perform calculations neglecting the $p$-${}^{12}$C Coulomb repulsion. One observes a perfect agreement between the AGS and FM methods: the calculated $S$-matrix elements in each three-particle channel considered (calculations have been performed for total three-particle angular momentum states up to $J=13$) agree within three digits. Scattering observables converge quite slowly with $J$, as different angular momentum state contributions cancel each other at large angles.
Nevertheless, the results of the two methods are practically indistinguishable, as demonstrated in Fig.~\ref{fig:dC-noC} for $d+{}^{12}$C elastic scattering and transfer to $p+{}^{13}$C. Next we perform the full calculation including the $p$-${}^{12}$C Coulomb repulsion; we note that inside the nucleus the Coulomb potential is taken to be that of a uniformly charged sphere~\cite{deltuva:06b}. Once again we obtain good agreement between the AGS and FM methods. However, this time small variations, up to the order of 1\%, are observed when analyzing separate $S$-matrix elements, mostly in high angular momentum states. This leads to small differences in some scattering observables, e.g., the differential cross sections for $d+{}^{12}$C elastic scattering (at large angles where the differential cross section is very small) and for the deuteron stripping reaction $d+{}^{12}$C $ \to p+{}^{13}$C shown in Fig.~\ref{fig:dC}. The $p+{}^{13}$C elastic scattering observables presented in Fig.~\ref{fig:pC} converge faster with $J$. As a consequence, the results of the two calculations are indistinguishable for the $p+{}^{13}$C elastic cross section, and only tiny differences can be seen in the proton analyzing power at large angles. In any case, the differences between the AGS and FM methods are well below both the accuracy of the data and the existing discrepancies between theoretical predictions and experimental data. \begin{figure} \sidecaption[t] \includegraphics[scale=.56]{dC30noC.eps} \caption{ Comparison of momentum- (solid curves) and configuration-space (dashed-dotted curves) results for deuteron-${}^{12}$C scattering at 30 MeV deuteron lab energy. Differential cross sections for elastic scattering and stripping are shown, neglecting the Coulomb interaction.} \label{fig:dC-noC} \end{figure} \begin{figure} \sidecaption[t] \includegraphics[scale=.56]{dC30.eps} \caption{ Comparison of momentum- (solid curves) and configuration-space (dashed-dotted curves) results for deuteron-${}^{12}$C scattering at 30 MeV deuteron lab energy. Differential cross sections for elastic scattering and stripping are shown, the former as a ratio to the Rutherford cross section $d\sigma_R/d\Omega$. The experimental data are from Refs.~\cite{perrin:77,dC30p}.} \label{fig:dC} \end{figure} \begin{figure} \sidecaption[t] \includegraphics[scale=.56]{pC30.eps} \caption{ Comparison of momentum- (solid curves) and configuration-space (dashed-dotted curves) results for proton-${}^{13}$C elastic scattering at 30.6 MeV proton lab energy. The differential cross section divided by the Rutherford cross section and the proton analyzing power are shown. The experimental data are from Ref.~\cite{pC30}.} \label{fig:pC} \end{figure} \subsection{Comparison with traditional nuclear reaction approaches} \label{sec:cdcc} The method based on the momentum-space AGS equations has already been used to test the accuracy of traditional nuclear reaction approaches; the limits of their validity in energy and kinematic range have been established. The distorted-wave impulse approximation for the breakup of the one-neutron halo nucleus ${}^{11}$Be on a proton target was tested in Ref.~\cite{crespo:08a}, while the adiabatic-wave approximation was tested for the deuteron stripping and pickup reactions ${}^{11}$Be$(p,d){}^{10}$Be, ${}^{12}$C$(d,p){}^{13}$C, and ${}^{48}$Ca$(d,p){}^{49}$Ca in Ref.~\cite{nunes:11b}. However, one of the most sophisticated traditional approaches is the CDCC method~\cite{austern:87}.
A detailed comparison between CDCC and AGS results is performed in Ref.~\cite{deltuva:07d}. The agreement is good for deuteron-${}^{12}$C and deuteron-${}^{58}$Ni elastic scattering and breakup. In these cases the nucleon-nucleus interactions were given by optical potentials; thus, there was no transfer reaction. A different situation takes place in proton-${}^{11}$Be scattering, where the ${}^{11}$Be nucleus is assumed to be the bound state of a ${}^{10}$Be core plus a neutron. In this case, where the transfer channel $d + {}^{10}$Be is open, the CDCC approach lacks accuracy, as shown in Ref.~\cite{deltuva:07d}. The semi-inclusive differential cross section for the breakup reaction $p + {}^{11}$Be $\to p + n + {}^{10}$Be was also calculated using two CDCC versions, in which the full scattering wave function was expanded into the eigenstates of either the $n + {}^{10}$Be (CDCC-BU) or the $p+n$ (CDCC-TR) pair. Neither of them agrees well with the AGS results over the whole angular regime, as shown in Fig.~\ref{fig:cdcc}. It turns out that, depending on the ${}^{10}$Be scattering angle, the semi-inclusive breakup cross section is dominated by different mechanisms: at small angles it is the proton-neutron quasifree scattering, whereas at intermediate and large angles it is the neutron-${}^{10}$Be $D$-wave resonance. However, a proper treatment of the proton-neutron interaction in CDCC-BU and of the neutron-${}^{10}$Be interaction in CDCC-TR is very hard to achieve, since the wave function expansion uses eigenstates of a different pair. No such problem exists in the AGS method, which uses all three sets of basis states simultaneously, so that each pair is treated in its proper basis. \begin{figure} \includegraphics[scale=.45]{ags-cdcc.eps} \caption{Semi-inclusive differential cross section for the breakup reaction $p + {}^{11}$Be $\to p + n + {}^{10}$Be at a lab energy of 38.4 MeV/nucleon. Results obtained with the AGS and CDCC methods are compared.} \label{fig:cdcc} \end{figure} \subsection{Beyond standard dynamic models} \label{sec:nonloc} The standard nucleon-nucleus optical potentials employed in three-body calculations have central and, possibly, spin-orbit parts that are local. This local approximation yields a tremendous simplification in the practical realization of DWBA, CDCC, and other traditional approaches that are based on configuration-space representations, where the use of nonlocal optical potentials was never attempted. However, nonlocal optical potentials pose no serious technical difficulties in the momentum-space representation. Thus, they can be included quite easily in the AGS framework employed by us. There are very few nonlocal parametrizations of optical potentials available. We take the one from Refs.~\cite{giannini,giannini2}, defined in configuration space as \begin{equation} \label{eq:vnl} v_{\gamma}(\vec{r}',\vec{r}) = H_c(x)[V_c(y) + iW_c(y)] + 2\vec{S_\gamma}\cdot \vec{L_\gamma} H_s(x) V_s(y) , \end{equation} with $x = |\vec{r}'-\vec{r}|$ and $y=|\vec{r}'+\vec{r}|/2$. The central part has a real volume and an imaginary surface part, whereas the spin-orbit part is real; all of them are expressed in the standard way by Woods-Saxon functions. Some of their strength parameters were readjusted in Ref.~\cite{deltuva:09b} to improve the description of the experimental nucleon-nucleus scattering data. The range of the nonlocality is determined by the functions $H_i(x) = (\pi \beta_i^2)^{-3/2} \exp{(-x^2/\beta_i^2)}$ with the parameters $\beta_i$ being of the order of 1 fm.
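A pointwise evaluation of the central part of Eq.~(\ref{eq:vnl}) is straightforward; in the following sketch the depths, geometry, and nonlocality range are illustrative placeholders rather than the fitted parameters of Refs.~\cite{giannini,giannini2,deltuva:09b}, and, for brevity, the imaginary part is taken as a volume Woods-Saxon term instead of the surface form:
\begin{verbatim}
import numpy as np

# Central part of the nonlocal potential (eq:vnl), simplified:
# v(r', r) = H_c(x) [V_c(y) + i W_c(y)],  x = |r'-r|,  y = |r'+r|/2,
# with Gaussian nonlocality H(x) and Woods-Saxon form factors.
def H(x, beta=0.85):                     # beta ~ 1 fm, nonlocality range
    return (np.pi * beta**2) ** -1.5 * np.exp(-(x / beta) ** 2)

def woods_saxon(y, V0, R0=3.0, a=0.65):  # illustrative geometry (fm)
    return -V0 / (1.0 + np.exp((y - R0) / a))

def v_central(rp, r, V0=50.0, W0=10.0):  # illustrative depths (MeV)
    x = np.linalg.norm(rp - r)
    y = np.linalg.norm(rp + r) / 2.0
    return H(x) * (woods_saxon(y, V0) + 1j * woods_saxon(y, W0))

r = np.array([2.0, 0.0, 0.0])
for d in (0.0, 0.5, 1.0, 2.0):           # the coupling dies off over ~beta
    print(d, v_central(r + np.array([d, 0.0, 0.0]), r))
\end{verbatim}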
A detailed study of nonlocal optical potentials in three-body reactions involving stable as well as weakly bound nuclei, ranging from ${}^{10}$Be to ${}^{40}$Ca, is carried out in Ref.~\cite{deltuva:09b}. In order to isolate the nonlocality effect, we also performed calculations with a local optical potential that provides an approximately equivalent description of the nucleon-nucleus scattering at the considered energy. The nonlocality effect turns out to be very small in elastic proton scattering from the bound neutron-nucleus system and of moderate size in deuteron-nucleus scattering. However, the effect of the nonlocal proton-nucleus optical potential becomes significant in the deuteron stripping and pickup reactions $(d,p)$ and $(p,d)$; in most cases it considerably improves the agreement with the experimental data. Examples of $(d,p)$ reactions leading to ground and excited states of the stable nucleus ${}^{17}$O and the one-neutron halo nucleus ${}^{15}$C are presented in Figs.~\ref{fig:Odp} and \ref{fig:Cdp}. We note that in these transfer reactions the proton-nucleus potential is taken at the proton lab energy in the proton channel, while the neutron-nucleus potential has to be real in order to support the respective bound states. \begin{figure} \sidecaption[t] \includegraphics[scale=.56]{dOp36nl.eps} \caption{ Differential cross section for the $(d,p)$ reaction on ${}^{16}$O at 36 MeV deuteron lab energy leading to the ${}^{17}$O nucleus in the ground state $5/2^+$ (top) and the first excited state $1/2^+$ (bottom). Predictions of nonlocal (solid curve) and local (dashed curve) optical potentials (OP) are compared with the experimental data from Ref.~\cite{dO25-63}.} \label{fig:Odp} \end{figure} \begin{figure} \sidecaption[t] \includegraphics[scale=.56]{dCp14.eps} \caption{ Differential cross section for the $(d,p)$ reaction on ${}^{14}$C at 14 MeV deuteron lab energy leading to the one-neutron halo nucleus ${}^{15}$C in the ground state $1/2^+$ (top) and the first excited state $5/2^+$ (bottom). Curves as in Fig.~\ref{fig:Odp}; the experimental data are from Ref.~\cite{d14C14p}.} \label{fig:Cdp} \end{figure} Another extension beyond the standard dynamic models is the AGS method with energy-dependent optical potentials. Although such calculations do not correspond to a rigorous Hamiltonian theory, they may shed some light on the shortcomings of traditional nuclear interaction models. A detailed discussion of the calculations with energy-dependent optical potentials is given in Ref.~\cite{deltuva:09a}. \section{Summary} We have presented the results of three-body Faddeev-type calculations for systems of three particles, two of which are charged, interacting through short-range nuclear plus long-range Coulomb potentials. Realistic applications of three-body theory to three-cluster nuclear reactions --- such as the scattering of deuterons on a nuclear target or a one-neutron halo nucleus impinging on a proton target --- became possible to address only in recent years, when a reliable and practical momentum-space treatment of the Coulomb interaction was developed. After the extensive and very complete study of $p$-$d$ elastic scattering and breakup, the natural extension of these calculations was the application to complex reactions such as $d$-${}^{4}$He, $p$-${}^{17}$O, ${}^{11}$Be-$p$, $d$-${}^{58}$Ni, and many others, using a realistic interaction such as AV18 between the nucleons and optical potentials, chosen at the appropriate energy, for the nucleon-nucleus interactions.
The advantage of three-body calculations vis-\`{a}-vis traditional approximate reaction methods is that elastic, transfer, and breakup channels are treated on the same footing once the interaction Hamiltonian has been chosen. Another advantage of the three-body Faddeev-AGS approach is the possibility to include nonlocal optical potentials instead of the local ones commonly used in standard nuclear reaction methods; as demonstrated, this leads to an improvement in the description of transfer reactions in a very consistent way across different energies and mass numbers of the core nucleus. Although most three-body calculations have been performed in momentum space over a broad range of nuclei from ${}^{4}$He to ${}^{58}$Ni and have encompassed studies of cross sections and polarizations for elastic, transfer, charge-exchange, and breakup reactions, coordinate-space calculations above the breakup threshold are coming of age using the complex scaling method. We have demonstrated here that the two approaches agree to within a few percent for all the reactions we have calculated. This is a very promising development that may shed new light on the study of nuclear reactions, given that the reduction of the many-body problem to an effective three-body one may be better implemented and understood by the community in coordinate space rather than in momentum space. On the other hand, compared to DWBA, adiabatic approaches, or CDCC, the Faddeev-type three-body methods are computationally more demanding and require greater technical expertise, rendering them less attractive for data analysis. Nevertheless, in the benchmark calculations that have been performed, comparing the Faddeev-AGS results with those obtained using CDCC or adiabatic approaches, some discrepancies were found in transfer and breakup cross sections, depending on the specific kinematic conditions. Therefore the Faddeev-AGS approach is indispensable for calibrating and validating approximate nuclear reaction methods wherever a comparison is possible. \begin{acknowledgement} The work of A.D. and A.C.F. was partially supported by the FCT grant PTDC/FIS/65736/2006. The work of R.L. was granted access to the HPC resources of IDRIS under the allocation 2009-i2009056006 made by GENCI (Grand Equipement National de Calcul Intensif). We thank the staff members of the IDRIS for their constant help. \end{acknowledgement}
\section{Introduction} This paper presents an ongoing study on multimodal human-robot interaction (HRI) with a reinforcement learning (RL) dialogue agent. To develop an effective HRI system for social robots that can naturally interact with human users, the robots need to accurately identify the user's affective states and respond to the users with personalized behaviors accordingly. Moreover, robots can also improve the user's engagement in long-term interaction~\cite{leite2013social} through a personalized interaction policy. In reality, however, emotion recognition is challenging in HRI since humans convey information and feelings via different sources of social cues such as facial expression, body language, and speech. Thus, studies of multimodal emotion recognition have attracted considerable interest in recent years. The fusion strategies of previous years can be summarized into three main categories: feature-level fusion, decision-level fusion, and model-level fusion~\cite{wu2014survey}. \citet{schuller2012avec} demonstrated a baseline model in the Audio/Visual Emotion Challenge (AVEC) 2012, which concatenated the audio and visual features using feature-level fusion and then used support vector regression to predict continuous affective values. On the other hand, decision-level fusion can process different types of inputs with diverse classifiers, and the final estimation fuses the outputs of all classifiers. For example, the posterior probabilities from the predictions of the audio and video classifiers can be combined to obtain the final estimate~\cite{schuller2011avec}. Deep learning models, such as the Long Short-Term Memory Recurrent Neural Network (LSTM-RNN), have been used to achieve model-level fusion~\cite{chen2016multi}. More recently, researchers have leveraged the transformer to fuse different modalities of inputs~\cite{tsai2019multimodal, xie2021robust}; the transformer was proposed by~\citet{vaswani2017attention} for solving sequence-to-sequence (Seq2Seq) problems using only the attention mechanism, without any recurrent structure such as an RNN. Once the robots have the ability to recognize the user's affective states, the HRI system can utilize this information as input and determine the behaviors of the robots. The utilization of emotional models in HRI can create more natural and engaging HRI experiences, as evidenced by Ficocelli et al.~\cite{ficocelli2015promoting}. The developed emotional model can also be used for empathetic appraisal in social robots that interact with children in long-term studies~\cite{leite2014empathic}. One recent study also presents a social robot with emotional interaction that can elicit particular emotions using non-verbal emotional behaviors~\cite{shao2020user}. Thus, we believe that the affective states of the human user can be an effective indicator for the HRI system to generate more engaging and natural behaviors for the interaction. Moreover, in order to develop a more natural HRI framework, the robots need to learn human preferences and skills. Recent studies utilize RL to let robots learn social skills through interaction~\cite{kim2017intrinsic, qureshi2018intrinsically}. Furthermore, \citet{ritschel2018socially} proposed an RL framework that can adapt robot behavior using social signals. Robots' capacity to learn multimodal cues adaptively and associate them with the context is recognized as a vital factor in HRI.
\citet{cui2020empathic} proposed a novel data-driven framework, EMPATHIC, which aims to improve the policy of an agent in learning tasks by using implicit human facial reaction feedback. However, there is still a need for studies on RL agents for multimodal HRI. Thus, in this study, we present a multimodal HRI system with an RL framework that can accept multiple modalities of inputs to shape the reward signal and adapt the robot behaviors to generate more positive feedback for personalized interaction with the user. In the rest of the paper, we first present the dialogue RL framework developed in our system, because speech will be the primary social cue for our robot platform. Then, we describe the overall architecture of our multimodal HRI system. Finally, we propose the evaluation plan for our future study. \section{Related Studies} Recent studies in multimodal HRI are providing enhanced abilities for emotional interaction~\cite{liu2016multimodal, hong2020multimodal, li2019expressing}, where emotional abilities include understanding users' affective states and expressing emotional behaviors accordingly. In the study proposed by~\citet{liu2016multimodal}, multiple modalities, including vocal, facial, and gesture features, were combined to detect emotions using a Bayesian classifier. The Nao humanoid robot was used as the platform to convey emotional behaviors by mimicking the user's behaviors. \citet{hong2020multimodal} proposed a multimodal emotional HRI architecture that can interpret human emotional states via multimodal social cues and promote natural bidirectional communication with users. Gestures and vocal features were fused to detect the user's affect, and the robot utilized the user's affect to express corresponding emotional behaviors. Another study by~\citet{li2019expressing} also used emotions for a spoken dialogue system in multimodal HRI, aimed at conducting natural conversations. That study proposed an emotion recognition model combining valence from prosody and sentiment analysis so that the robot can express reactive emotions adequately. However, previous studies either failed to consider a dedicated dialogue system for the HRI system~\cite{liu2016multimodal,hong2020multimodal} or did not use different multimodal reactive emotion expressions for the robot~\cite{li2019expressing}. \section{Methodology} This section first formulates the natural language processing (NLP) problem we will solve with the RL agent based on psychological rewards. Then, we introduce our overall robot system, which can personalize the behaviors of our robot for more natural HRI scenarios. \subsection{Problem Formulation} In the Seq2Seq problem of NLP, the goal is to train a language model (LM) that can create a mapping between the input sequence $(x_0, \dots , x_n) \in \mathcal{X}$ and the output sequence $(y_0, \dots , y_n) \in \mathcal{Y}$, where $\mathcal{X}$ and $\mathcal{Y}$ are the input and output spaces, respectively. Given a vocabulary $\Sigma$, with the tokens of both $\mathcal{X}$ and $\mathcal{Y}$ drawn from $\Sigma$, a LM can generate sequences of tokens $(y_0, \dots , y_n) \in \mathcal{Y}$ with a probability distribution using the chain rule~\cite{bengio2003neural}: \begin{equation} p(y_0, \dots , y_n) = \prod_{0 \leq i \leq n} p(y_i \mid y_0, \dots, y_{i-1}). \end{equation} Given the current sequence of tokens, the LM generates the probability distribution of the next token.
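As an illustration of Eq.~(1), the log probability of a sequence under an autoregressive LM is simply the sum of the per-token conditional log probabilities. The sketch below uses the publicly available HuggingFace GPT-2 checkpoint as a stand-in (not necessarily the exact model used in our system):
\begin{verbatim}
import torch
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tok = GPT2Tokenizer.from_pretrained("gpt2")
lm  = GPT2LMHeadModel.from_pretrained("gpt2").eval()

ids = tok("I am happy to see you", return_tensors="pt").input_ids  # [1, T]
with torch.no_grad():
    logits = lm(ids).logits                    # [1, T, |vocab|]
# position t predicts token t+1, hence the one-step shift below
logp = torch.log_softmax(logits[:, :-1], dim=-1)
token_logp = logp.gather(-1, ids[:, 1:, None]).squeeze(-1)
print("log p(sequence) =", token_logp.sum().item())
\end{verbatim}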
We use the probability distribution assigned by the LM as the initialization of the policy, $\pi_{\theta} = p$ with learnable parameters $\theta$, and then train $\pi_{\theta}$ via RL. A reward function $r: \mathcal{X} \times \mathcal{Y} \to \mathbb{R}$ can be defined, and RL is used to optimize the expected reward: \begin{equation} J(\theta) = \mathbb{E}_{x,y}[r(x,y)]. \end{equation} We formulate the reward $r$ as a distance-based function based on Russell's circumplex psychological model of affect~\cite{russell1980circumplex}. Positive emotions, such as \textit{happy} and \textit{excited}, have positive rewards given by the values of arousal (A) and valence (V) of these emotions. On the contrary, negative feelings are assigned negative reward values. Thus, the reward can be expressed as follows: \begin{equation} r(x,y) = \pm\sqrt{A^2 + V^2}. \end{equation} The objective of our RL task is to generate more positive responses. To achieve this, we first need to train a reward model to estimate the reward based on the embedding vectors generated by the LM. Following the setting of a previous study~\cite{xie2021empathetic}, a linear classifier is added on top of the LM to estimate the emotion and predict the levels of the arousal and valence values. The reward model is optimized using the loss: \begin{equation} \label{eqn:loss} \mathcal{L}_r = \mathbb{E}_{x,y}[r(x,y)]. \end{equation} \begin{figure}[hbpt] \centering \includegraphics[width=1\columnwidth]{PPO.png} \caption{The workflow of the proximal policy optimization (PPO) with psychological rewards.} \label{ppo} \end{figure} Following \citet{ziegler2019fine}, when fine-tuning the policy $\pi_{\theta}$ to optimize the reward via the proximal policy optimization (PPO) algorithm~\cite{schulman2017proximal}, a Kullback-Leibler (KL) divergence penalty is added to prevent $\pi_{\theta}$ from drifting too fast from $p$. Thus, a modified reward function is given by: \begin{equation} \label{eqn:reward} R(x,y) = r(x,y) - \beta\log\frac{\pi_{\theta}(y|x)}{p(y|x)}, \end{equation} \noindent where $\beta$ can be chosen either fixed or adaptive using the KL values~\cite{schulman2017proximal}. As can be seen in Figure~\ref{ppo}, the main objective for the policy iteration considers both the psychological rewards and the KL values. The overall training process is given in Algorithm~\ref{alg:alg1}. The RL agent optimizes the LM for each sentence with annotated emotions; by leveraging the reward function~(\ref{eqn:reward}), the LM will generate more positive responses while not deviating too much from the original model. \RestyleAlgo{ruled} \begin{algorithm} \caption{PPO with psychological reward} \label{alg:alg1} Algorithm parameters: either fixed or dynamic $\beta$\; Initialize $\pi = p$\; Initialize $r$ to $p$ (reward model added on top of LM)\; \ForEach{episode}{% Run language model policy $\pi$ to generate sentence embedding vectors\; Optimize the reward model on the embedding vectors using loss (\ref{eqn:loss})\; Update the language model parameters $\theta$ via PPO with reward function (\ref{eqn:reward})\; $\theta \gets \theta'$\; } \KwRet{$\pi^*$}\; \end{algorithm}
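For concreteness, a minimal sketch of this reward shaping (ours, purely illustrative) is given below: emotion categories are placed at rough, uncalibrated coordinates on the circumplex, the sign of the reward is taken from the valence (our assumption), and the KL term of Eq.~(\ref{eqn:reward}) is added given the log probabilities of the response under the tuned policy and the frozen original LM:
\begin{verbatim}
import math

# Illustrative reward shaping for the two equations above; the
# (arousal, valence) coordinates are rough placements, not calibrated.
CIRCUMPLEX = {
    "happy":   (0.5,  0.8),
    "excited": (0.8,  0.6),
    "calm":    (-0.5, 0.4),
    "sad":     (-0.4, -0.6),
    "angry":   (0.7, -0.7),
}

def emotion_reward(label):
    # r = +/- sqrt(A^2 + V^2), sign taken from the valence (assumption)
    a, v = CIRCUMPLEX[label]
    return math.copysign(math.sqrt(a * a + v * v), v)

def shaped_reward(label, logp_policy, logp_ref, beta=0.1):
    # R(x,y) = r(x,y) - beta * log(pi_theta(y|x) / p(y|x))
    return emotion_reward(label) - beta * (logp_policy - logp_ref)

print(shaped_reward("happy", logp_policy=-12.3, logp_ref=-12.8))  # ~ 0.89
print(shaped_reward("angry", logp_policy=-9.1,  logp_ref=-10.0))  # ~ -1.08
\end{verbatim}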
\subsection{Pre-trained Language Model} The Generative Pre-trained Transformer (GPT)~\cite{radford2018improving} is employed as the pre-trained LM in this study. GPT is a transformer-based LM; the transformer is built purely on the attention mechanism proposed by~\citet{vaswani2017attention}. GPT consists of multiple transformer layers with self-attention operations and is pre-trained on a large corpus. To give the LM the ability to recognize emotions in dialogue, GPT is also fine-tuned on the MELD dialogue dataset~\cite{poria2018meld}, which targets the task of emotion recognition in conversation. Each sentence of each dialogue sample in the dataset is annotated with an emotion category. The emotion labels available in the MELD dataset are \textit{Anger}, \textit{Disgust}, \textit{Sadness}, \textit{Joy}, \textit{Neutral}, \textit{Surprise}, and \textit{Fear}. The expert annotations of this dataset provide the model with the ability to assess the emotion of each user response. The reward model, $r$ in the RL optimization process, is added as an extra layer on top of the GPT model, as also mentioned in Algorithm~\ref{alg:alg1}. \subsection{Human-Robot Interaction Design} \begin{figure}[hbpt] \centering \includegraphics[width=1\columnwidth]{rl_sys.png} \caption{The proposed multimodal HRI framework with an RL agent. Our robotic system (Pepper) will observe and learn human behaviors and multimodal responses, while updating a behavioral policy personalized for each user.} \label{sys} \end{figure} A multimodal HRI system will be designed to evaluate the RL framework. As can be seen in Figure~\ref{sys}, several modalities of observations, such as body gestures, facial expressions, and speech, will be considered as social cues and inputs to the system. The extracted features will be human skeleton joint poses for body gestures, facial landmarks for facial expressions, and sentence embeddings for conversational speech. The robotic platform will be the Pepper robot manufactured by the SoftBank Robotics group~\cite{softbank}. These input modalities are then fused into part of the reward signal for the RL agent. The overall reward signal will also incorporate human preferences, such as the user's self-evaluations, as intrinsic rewards. We will adopt a self-assessment scale for the measurement of the user's emotions, e.g., the self-assessment manikin~\cite{bradley1994measuring}, a non-verbal assessment tool that can measure affective reactions to different stimuli. The personalized robot behaviors are learned by creating a mapping between the interaction context and the affective states. The interaction context includes the textual, vocal, and gesture information from the users. The personalized robot behaviors will also be optimized through the RL reward signals. \section{Evaluation Plan} We plan to conduct a user study to evaluate the performance of our HRI system in generating personalized behaviors and eliciting positive emotions from the users. A two-stage study will be conducted. In the first stage, we will give the model general knowledge for recognizing human emotions via multimodal inputs and generating feedback accordingly. To achieve this, we will use a large-scale dataset to train the model. Moreover, we will test the ability of the LM to generate positive responses after optimization by the reward signals. In the second stage, we will conduct the user study to evaluate our HRI system. The agent will be personalized by utilizing the user's multimodal information and preferences. Our hypothesis is that multimodal social cues can give the robot a better ability to understand human emotional states and enhance interactive behaviors.
To evaluate the user study, we propose to use the Negative Attitudes towards Robots Scale (NARS)~\cite{nomura2006measurement} and an Engagement questionnaire~\cite{sidner2004look}. We hypothesize that the proposed measures will differ significantly between the test group and the control group. A two-sided independent-samples t-test will be performed to compare the mean values of every extracted feature between the two groups. The experiment will be held using the Pepper humanoid robot. The user can prompt different basic topics to express their emotions during the interaction, and the empathetic dialogue agent in our system will respond accordingly. The built-in dialogue module of the Pepper robot will be used as the baseline for the control group; it includes only simple pre-defined chit-chat and rule-based content. After the experiment, the NARS and Engagement questionnaires will be used to score both models. \section{Conclusion} In this study, we present a multimodal HRI framework with an RL agent. The NLP problem optimized by RL is formulated and solved with the PPO algorithm. Psychological measures will be investigated as the reward signals for optimizing the RL agent. GPT is used as the pre-trained language model for the dialogue system. The proposed multimodal HRI framework will be evaluated with a two-stage user study. Different input modalities will be fused to recognize the user's affective states, which will also be treated as intrinsic reward signals for the RL agent. By fusing the users' multimodal behaviors/responses and preferences as rewards, the robotic behaviors can be personalized through learning human skills and preferred feedback. We are optimistic that this work will extend the current research on social robots to provide more natural and personalized interaction capabilities, especially using multimodal HRI interaction policies and optimization through RL. In future work, we propose to use this framework to analyze and detect emotional cues from multimodal interaction data and provide personalized intervention for adolescents with Autism Spectrum Disorder (ASD). \section{Acknowledgment} This research is supported by the National Science Foundation (NSF) under the grant \#1846658.
\section{Introduction} \label{sec:intro} The recent detection of gravitational waves (GWs) has consecrated gravitational wave interferometers (GWIs) as a vital experimental tool for astronomy, cosmology, and fundamental physics~\cite{LIGOScientific:2018mvr,Abbott:2020niy}. The GWs detected thus far are a product of cataclysmic transient events, such as binary black hole mergers. These signals are strong, with a gravitational strain of the order of \(h\sim10^{-21}\), but very short, lasting from a fraction of a second to several seconds. Much weaker signals can be detected if they are coherent over a longer time, such as the continuous GWs (CWs) emitted by rapidly spinning neutron stars~\cite{Riles:2017evm} or ultra-compact Galactic binaries~\cite{Nelemans:2001hp}. In the former case, recent searches for this type of signal have been performed in~\cite{Pisarski:2019vxw,Dergachev:2020fli,Steltner:2020hfd}; having detected no CWs, they set the upper limit \(h\sim10^{-25}\) on the maximum strain for this type of signal at frequencies of about \(f\sim10^2\)~Hz. Another important source of CWs is the scattering of ultra-light bosons off black holes via a mechanism known as superradiance~\cite{Brito_2020}. Bosons with masses \(m\ll1\)~eV are predicted in theories beyond the Standard Model of particle physics and are an excellent candidate for the cosmological dark matter, dubbed ultra-light dark matter (ULDM)~\cite{Preskill:1982cy,Abbott:1982af,Dine:1982ah,Turner:1983he,Nelson:2011sf,Ferreira:2020fam}. Spin-2 ULDM is especially interesting because it arises as a modification of gravity itself, even though it appears in the guise of an additional particle, the dark matter~\cite{Marzola:2017,Aoki:2017cnz}. Searches for CWs produced by superradiance have been carried out only for spin-0 bosons~\cite{Palomba:2019vxe,Ng:2020ruv}, whereas no limits on the properties of spin-2 ultra-light bosons with GWI data exist yet. In this work we show that, if ULDM has spin two, it interacts with GWIs in a way that, owing to its quasi-monochromaticity and persistence, closely resembles CWs. The spin-2 ULDM-CW signal can be detected by existing Earth-based facilities such as advanced LIGO~\cite{TheLIGOScientific:2014jea} / advanced Virgo~\cite{TheVirgo:2014hva} (HLV) in their entire accessible frequency range, approximately corresponding to masses \(4\times10^{-14}~\mathrm{eV}\lesssim m \lesssim4\times10^{-11}~\mathrm{eV}\). Furthermore, planned facilities such as LISA~\cite{Baker:2019nia}, DECIGO~\cite{Seto:2001qf}, and the BBO~\cite{Harry:2006fi} will extend this range down to \(m\sim\mathrm{few}\times10^{-19}~\mathrm{eV}\). The spin-2 ULDM-CW signal is produced by the coherent oscillations of the ULDM field, which is universally coupled to Standard Model fields, and is unrelated to superradiance; this is similar to dark photon dark matter, where the ULDM carries additional interactions~\cite{Pierce:2018xmy,Miller:2020vsl}\footnote{Other types of direct interactions between ULDM and matter have also been considered, see, e.g., \cite{Arvanitaki:2014faa,Morisaki:2018htj,Grote:2019uvn,Michimura:2020vxn}} (notice however that in the spin-2 case the interaction cannot be tuned away). Moreover, regardless of the spin, if the ULDM field interacts only gravitationally, the signal is undetectable by GWIs~\cite{Aoki:2016kwl}.
Our findings demonstrate that, in the case of a null result, GWIs can place some of the most stringent bounds on the spin-2 Yukawa fifth force strength \(\al\) in their accessible frequency ranges. This paper is structured as follows. In Section~\ref{sec:maths} we compute the strength and shape of the expected signal from spin-2 ULDM for the frequency ranges of interest for GWIs. In Section~\ref{sec:res} we present our results, and in Section~\ref{sec:out} we put them into context and give an outlook for future work. We work with units in which \(c=k_B=\hslash=1\), and use latin indices \((i,j,\ldots)\in[1,3]\) for spatial tensor components. \section{The shape and strength of the signal} \label{sec:maths} The behaviour of the spin-2 ULDM in sufficiently small regions inside the local dark matter halo is described by the oscillating tensor field~\cite{Marzola:2017} \begin{align} \Mij(t) &= \frac{\sqrt{2\rhoDM}}{m}\cos{(mt+\Upsilon)}\epij(\vr) \,, \label{eq:Mij} \end{align} where \(\rhoDM\) is the observed local dark matter energy density, for which we assume \(\rhoDM=0.3\)~GeV/cm\(^3\)~\cite{Piffl:2014mfa,Evans:2018bqy,2015ApJ...814...13M}, and \(\Upsilon\) is a random phase. The five polarisations of the spin-2 field are encoded in the \(\epij(\vr)\) tensor, which has unit norm and zero trace, is symmetric, and is direction-dependent via the unit vector \(\vr\)~\cite{Maggiore:1900zz}. The solution \Eq{eq:Mij} assumes a single frequency \(2\pi f=m\) and a coherent polarisation structure. The latter is justified for scales shorter than the characteristic scale of the inhomogeneities of the ULDM field, which is given by the de~Broglie wavelength \(\ldB \deq 2\pi/mv = 1/fv\), where \(v\sim 10^{-3}\) is the effective velocity of the ULDM. Thus, owing to the fact that \(\ldB\) is much larger than the physical size of the GWIs and the distance between the HLV sites, we can safely neglect gradients (see~\cite{Armaleo:2020yml} for further discussion). The coherence of the oscillation frequency is instead guaranteed up to a coherence time that is given by\footnote{Notice that the definition of the coherence time differs by a factor of~4 from the one commonly used in the ULDM literature. We adopt the definition used in the GW literature here.} \(\tcoh \deq 4\pi/mv^2 = 2/fv^2\). Given that a typical GWI observation run will last for much longer than \(\tcoh\), a more precise description of the ULDM field would be a superposition of plane waves, see~\cite{Pierce:2018xmy,Miller:2020vsl}; we neglect this for our order of magnitude estimates\footnote{This solution is also valid provided that the energy, or frequency, scale of the system is well below the ultra-violet cutoff of the effective field theory, that is \(f \sim m/2\pi \ll (M_\text{P}m^2)^{1/3}\)~\cite{Akrami:2015qga}; this is easily verified for all the values of the spin-2 mass \(m\) we consider in this work.}. In the ULDM reference frame \((\vp,\vq,\vr)\) the polarisations of the spin-2 field can be described as \(\epij(\vr) \deq \sum_\kappa \vep_\kappa {\cal Y}^\kappa_{ij}(\vr)\)~\cite{Maggiore:1900zz,Armaleo:2019gil}, where the summation runs over the five amplitudes \(\left\{ \vepCross, \vepPlus, \vepL,\vepR, \vepS \right\}\) that obey \(\sum_\kappa \vep_\kappa^2=1\)---the overall amplitude is fixed by the requirement that \(\Mij\) makes up all of the dark matter. 
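For concreteness, the following short Python script (an illustrative order-of-magnitude calculation of ours, not part of any analysis pipeline) evaluates \(f\), \(\ldB\), and \(\tcoh\) for masses in the HLV band; it confirms that \(\ldB\) vastly exceeds the detector sizes and inter-site distances, and that \(\tcoh\) is of order a day at \(f\sim20\)~Hz, much shorter than an observing run.
\begin{verbatim}
# Order-of-magnitude scales of the ULDM field (illustrative script).
EV_TO_HZ = 2.417989e14   # 1 eV / (2*pi*hbar): converts m [eV] to f [Hz]
C = 2.998e8              # speed of light in m/s
v = 1e-3                 # effective halo velocity, in units of c

def f_of_m(m_ev):        # oscillation frequency, 2*pi*f = m
    return m_ev * EV_TO_HZ

def lambda_dB(m_ev):     # de Broglie wavelength 1/(f v), in metres
    return C / (f_of_m(m_ev) * v)

def t_coh(m_ev):         # coherence time 2/(f v^2), in seconds
    return 2.0 / (f_of_m(m_ev) * v**2)

for m in (4e-14, 1e-13, 4e-12):   # masses in the HLV band, in eV
    print(f"m = {m:.0e} eV: f = {f_of_m(m):7.1f} Hz, "
          f"lambda_dB = {lambda_dB(m):.2e} m, t_coh = {t_coh(m):.2e} s")
\end{verbatim}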
The five polarisation matrices are given by \begin{align} {\cal Y}^\times_{ij} &\deq \frac{1}{\sqrt2} \left(p_i q_j + q_i p_j\right) \,, & {\cal Y}^+_{ij} &\deq \frac{1}{\sqrt2} \left(p_i p_j - q_i q_j\right) \,, & \nn\\ {\cal Y}^L_{ij} &\deq \frac{1}{\sqrt2} \left(q_i r_j + r_i q_j\right) \,, & {\cal Y}^R_{ij} &\deq \frac{1}{\sqrt2} \left(p_i r_j + r_i p_j\right) \,, & \nn\\ {\cal Y}^S_{ij} &\deq \frac{1}{\sqrt6} \left(3 r_i r_j - \delta_{ij}\right) \,. &&& \nn \end{align} Notice that, unlike for CWs, there is no propagation along the \(\vr\) direction, which in our case serves merely as reference for the decomposition in tensor, vector, and scalar helicities according to their behaviour under a rotation about \(\vr\) (see also Appendix~\ref{app:not}). Spin-2 ULDM couples to Standard Model fields \(\Psi\) as~\cite{Marzola:2017} \begin{align}\label{eq:int} S_\text{int}[g,\Mij,\Psi] & \deq -\frac{\al}{2\mpl} \int\!\dd^4x\,\sqrt{-g} \Mij T_\Psi^{ij} \,, \end{align} where \(T_\Psi^{ij}\) is the stress tensor of the fields \(\Psi\) and \(\mpl\) is the reduced Planck mass. At leading (linear) order in \(\al\) the interaction \Eq{eq:int} can be absorbed into a redefinition of the metric \(g_{ij}\to g_{ij} + \al M_{ij}/\mpl\)~\cite{Armaleo:2020yml}. Therefore, the effect of spin-2 ULDM on the detector can be equivalently described by the gravitational effect of an oscillating metric perturbation \(\hij\) given by \begin{align} \hij(t) &= \frac{\al}{\mpl}\Mij(t) = \frac{\al\sqrt{2\rhoDM}}{m\mpl}\cos{(mt+\Upsilon)}\epij(\vr) \,. \end{align} The parameter \(\al\) is idiosyncratic for spin-2 ULDM because it is required by the self-consistency of the model, such as in bigravity~\cite{Babichev:2016bxi}. This parameter defines the inverse ULDM self-interaction strength: in the limit \(\al\rar0\) the ULDM field becomes infinitely strongly coupled, and there is no ULDM at all. Furthermore, spin-2 ULDM is ineluctably and universally coupled to standard matter fields, so that ULDM will appear as a Yukawa-like fifth force modification of the gravitational potential \(\Phi\) in the weak field regime, whose strength is quantified by \(\al\): \(\Phi\rar\Phi\left[1+\al^2\exp(-mr)\right]\). The strength of this fifth force for different values of the mass \(m\) (or, equivalently, frequency) is constrained by several experiments and tests of gravity~\cite{Murata:2014nra,Sereno:2006mw}: we call this maximal coupling \(\al=\al_Y\). In the reference frame of the detector, \((\vx,\vy,\vz)\), the response function \(D^{ij}\) is given by the differential change in the length of the detector arms directed along the unit vectors \(\vn\) and \(\vm\) as \(D^{ij} = (n^i n^j - m^i m^j)/2\)~\cite{Maggiore:1900zz}. The signal is the combination of the variation of the metric perturbation and the response function: \begin{align} h(t) &\deq D^{ij} h_{ij}(t) = \frac{\al\sqrt{\rhoDM}}{\sqrt2 m\mpl}\cos{(mt+\Upsilon)}\Delta\vep \deq h_s\sin{(mt)} + h_c\cos{(mt)}\,, \label{eq:signal} \end{align} where we defined \(\Delta\vep \deq \,\epij (n^i n^j - m^i m^j)\), and introduced the sine \(h_{s}\) and cosine \(h_{c}\) amplitudes. This is the central equation of the paper. The theoretical spin-2 ULDM-CW signal \Eq{eq:signal} presents two key features. First, the signal is inversely proportional to the spin-2 boson mass \(m\). 
This inverse linear scaling is also found in dark photon dark matter, where the spin-1 ULDM field carries additional charges, such as baryon number \(B\) or baryon minus lepton number \(B-L\), through which the ULDM directly interacts with the mirrors of the detector~\cite{Pierce:2018xmy,Miller:2020vsl}. The inverse linear dependence should be compared with the generic inverse \emph{quadratic} dependence obtained by pure gravitational interaction~\cite{Aoki:2016kwl}. In other words, in the absence of non-gravitational interactions, the signal strength decays much more rapidly with increasing mass (or frequency). This makes it practically impossible to detect such a signal with future GWIs, let alone existing ones. Second, the spin-2 ULDM-CW signal has a unique geometric structure that sets it apart from other CWs. Explicitly we have \begin{align} \Delta\vep &= \sqrt{2} \vepCross\left[\left(\vp\cdot\vn\right) \left(\vq\cdot\vn\right) - \left(\vp\cdot\vm\right) \left(\vq\cdot\vm\right)\right] + \frac{\vepPlus}{\sqrt{2}}\left[\left(\vp\cdot\vn\right)^2 - \left(\vq\cdot\vn\right)^2 -\left(\vp\cdot\vm\right)^2 + \left(\vq\cdot\vm\right)^2\right] \nn\\ &~~+ \sqrt{2} \vepL\left[\left(\vq\cdot\vn\right) \left(\vr\cdot\vn\right) - \left(\vq\cdot\vm\right) \left(\vr\cdot\vm\right)\right] + \sqrt{2}\vepR\left[\left(\vp\cdot\vn\right) \left(\vr\cdot\vn\right) - \left(\vp\cdot\vm\right) \left(\vr\cdot\vm\right)\right] \nn\\ &~~+ \sqrt{\frac{3}{2}}\,\vepS\left[\left(\vr\cdot\vn\right)^2 - \left(\vr\cdot\vm\right)^2\right] \label{eq:signal_rpq} \\ &=\frac{\cos2\phi}{\sqrt{2}} \left[\vepPlus\left(\cos^2\theta+1\right) + \vepR\,\sin2\theta + \sqrt3\,\vepS\,\sin^2\theta\right] -\sqrt{2}\sin2\phi\left(\vepCross\,\cos\theta + \vepL\,\sin\theta\right) \,, \label{eq:signal_xy} \end{align} where in obtaining the last expression we have set \(\vn=\vx\) and \(\vm=\vy\), which we can always do for a single L-shaped detector, and we have defined the ULDM reference frame in terms of the detector's frame as \(\vr = (\sin\theta\cos\phi,\sin\theta\sin\phi,\cos\theta)\), \(\vp = (\cos\theta\cos\phi,\cos\theta\sin\phi,-\sin\theta)\), \(\vq = (-\sin\phi,\cos\phi,0)\); the origins of the two frames are connected by the vector \(r\vr\). Before moving on to our results, a comment is in order here. The detector is moving with respect to the ULDM, this motion being the result of three contributions: (1) the Earth is rotating about its axis with an equatorial velocity of approximately \(v\sim10^{-6}\) (this only applies to Earth-bound detectors); (2) the Earth is moving along its orbit around the Sun with speed \(v\sim10^{-4}\); (3) the Solar System is moving through the dark matter halo at a speed of \(v\sim10^{-3}\), causing what is known as the dark matter wind. Therefore, in principle we should Lorentz-boost the ULDM frame to the reference frame of the detector. However, owing to the smallness of the velocities in question, the effect of the boost on \(r\vr\) amounts to less than a percent correction to the theoretical signal and can be safely neglected. The relative acceleration of the two frames also induces a Doppler frequency shift \(\Delta f_\text{Doppler}\) that affects the spin-2 ULDM-CW signal, and that needs to be accounted for when designing a data analysis pipeline~\cite{Miller:2020vsl,Frasca:2005ey,DAntonio:2018sff}. 
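The root mean square of \(\Delta\vep\) over isotropic detector orientations and polarisations uniform on the unit 4-sphere, which we will use in the next section, is \(\sqrt{\langle\Delta\vep^2\rangle}=\sqrt{2/5}\); this can be checked directly from \Eq{eq:signal_xy} with a short Monte Carlo (an illustrative script of ours, with the averaging measure as our stated assumption):
\begin{verbatim}
# Monte-Carlo check that <Delta eps^2> = 2/5 for Eq. (eq:signal_xy),
# averaging over isotropic (theta, phi) and polarisation amplitudes
# uniform on the unit 4-sphere (illustrative script).
import numpy as np

rng = np.random.default_rng(0)
N = 1_000_000
eps = rng.normal(size=(N, 5))
eps /= np.linalg.norm(eps, axis=1, keepdims=True)
ex, ep, eL, eR, eS = eps.T                 # (x, +, L, R, S)

u = rng.uniform(-1.0, 1.0, N)              # u = cos(theta), isotropic
s = np.sqrt(1.0 - u**2)                    # sin(theta)
phi = rng.uniform(0.0, 2*np.pi, N)

d_eps = (np.cos(2*phi)/np.sqrt(2)
         * (ep*(u**2 + 1) + 2*eR*u*s + np.sqrt(3)*eS*s**2)
         - np.sqrt(2)*np.sin(2*phi)*(ex*u + eL*s))

print(np.mean(d_eps**2))                   # -> 0.400..., i.e. 2/5
\end{verbatim}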
All-sky searches for CWs with Earth-bound GWIs resort to semi-coherent methods because it is not computationally feasible to analyse the data from the entire observation campaign in a fully coherent way\footnote{In the case of space-based detectors such as the upcoming LISA interferometer, owing to the sparse sampling frequency of around 1~Hz, compared to the HLV sampling of about \(10^4\)~Hz, this is not an issue.}~\cite{Brady:1998nj,Krishnan:2004sv,Antonucci:2008jp,PhysRevD.90.042002}. In semi-coherent methods the whole data set is broken into shorter time chunks of length \(\tchunk\), each of which is then analysed coherently but separately. One of the advantages of this approach is that, by choosing \(\tchunk<\tdop\deq1/\Delta f_\text{Doppler}\), the Doppler frequency shift can be neglected\footnote{To be more precise, within each chunk the instantaneous Doppler shift that would contribute to \(\dot{f}\) can be neglected, i.e., the frequency is held constant. Nevertheless, in CW searches, in order to identify viable source candidates for the follow-up steps in the hierarchical semi-coherent analysis, the predicted Doppler shift for each chunk and each location in the sky needs to be corrected for. Since there is no ``sky location'' in ULDM searches, this is not a concern.}. Moreover, one should ensure that \(\tchunk<\tcoh\) in order to have a stable ULDM configuration within a given chunk. The sensitivity for a coherent analysis over the whole observation campaign time \(\tobs\) scales as \(\tobs^{-1/2}\). In semi-coherent methods, assuming that all \(N\) chunks last the same time \(\tchunk\) and all together they cover the whole observation run such that \(\tobs=N\tchunk\), the sensitivity scales instead as \(N^{-1/4}\tchunk^{-1/2} = \tobs^{-1/4}\tchunk^{-1/4}\). Thanks to the coherence of the signal, even within the limitation of the semi-coherent methods, the actual strain sensitivity attained by the HLV collaboration for CW searches is smaller than the design sensitivity \(h_0\) for transient events by more than a factor of \(10^{3}\)~\cite{Pisarski:2019vxw,Dergachev:2020fli,Steltner:2020hfd}. The semi-coherent techniques have been adapted and optimised, taking into account the coherence time and the geometry of the signal, for dark photon dark matter searches~\cite{Miller:2020vsl}. They can therefore be tailored for spin-2 ULDM-CW searches by replacing the average over the different polarisations of ULDM waves (which for the spin 1 dark photon case amounts to a factor \(\sqrt{2}/3\)~\cite{Pierce:2018xmy,Miller:2020vsl}) with \(\sqrt{\langle\Delta\varepsilon^2\rangle} = \sqrt{2/5}\). We define the effective theoretical strain amplitude \(h\) for the spin-2 ULDM-CW signal as the root mean square average, taken over all the polarisation angles and the random phase \(\Upsilon\), of the sine and cosine amplitudes of \Eq{eq:signal}: \begin{align}\label{eq:signalaver} h &\deq \langle h_s^2+h_c^2\rangle^{1/2} = \frac{\al\sqrt{\rhoDM}}{\sqrt{5}m\mpl} \,. \end{align} \section{Results} \label{sec:res} In order to estimate the values of \(\al\) accessible with GWIs, we compare the expected theoretical signal \(h\) of \Eq{eq:signalaver} with the design sensitivities of a number of current and planned GWIs (Fig.~\ref{fig:signal}). We find that the HLV detectors can nominally detect spin-2 ULDM for \(\al\gtrsim10^{-4}\) depending on the frequency (Fig.~\ref{fig:signal}). 
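This estimate simply inverts \Eq{eq:signalaver} at the design strain, \(\al \simeq \sqrt{5}\,m\,\mpl\,h_0/\sqrt{\rhoDM}\); the snippet below (our own back-of-the-envelope arithmetic, with a representative \(h_0\) as an assumed input rather than an official sensitivity curve) reproduces the order of magnitude:
\begin{verbatim}
# Smallest detectable coupling from inverting Eq. (eq:signalaver):
# alpha ~ sqrt(5) * m * M_pl * h0 / sqrt(rho_DM).  Illustrative numbers.
import math

HBARC = 1.9733e-7                          # hbar*c in eV*m
RHO_DM = 0.3e9 / (1e-2 / HBARC)**3         # 0.3 GeV/cm^3 in eV^4
M_PL = 2.435e27                            # reduced Planck mass in eV

def alpha_reach(m_ev, h0):
    return math.sqrt(5) * m_ev * M_PL * h0 / math.sqrt(RHO_DM)

# m = 4e-13 eV corresponds to f ~ 100 Hz; h0 = 1e-22 is a
# representative design strain, assumed for illustration only.
print(alpha_reach(4e-13, 1e-22))           # ~1.4e-4
\end{verbatim}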
We expect that a dedicated semi-coherent search for the spin-2 ULDM-CW signal will improve the range of detectable \(\al\) by a few orders of magnitude, potentially down to \(\al\sim10^{-7}\) or less for frequencies of tens of Hz, corresponding to masses around the \(10^{-13}\)~eV mark; this is shown in Fig.~\ref{fig:signal} as the dotted line ``HLV opt''---the details on how we obtained this curve can be found in Appendix~\ref{app:opt}. In this frequency range, from \(f\sim10\)~Hz (\(m\sim4\times10^{-14}\)~eV) to \(f\sim10^3\)~Hz (\(m\sim4\times10^{-12}\)~eV) and beyond, the planned experiments Einstein Telescope (ET)~\cite{Hild:2010id} and Cosmic Explorer (CE)~\cite{Evans:2016mbw} should reach sensitivities of order \(h_0\sim10^{-22}\text{---}10^{-23}\), further improving the chances to detect spin-2 ULDM. \begin{figure}[htbp] \centering \includegraphics[width=1.0\textwidth]{sensitivitiesVSalpha} \caption{Design sensitivity \(h=h_0\) for several current and planned GWIs, as a function of frequency (solid lines). The dotted line ``HLV opt'' is the optimised sensitivity obtained with a semi-coherent method tailored for spin-2 ULDM-CW searches (Appendix \ref{app:opt}). Overlaid as dashed lines are the signal strains \(h\) of \Eq{eq:signalaver} for different values of the parameter \(10^{-4}\leq\al\leq10^{-10}\). The dot-dashed black line is the spin-2 ULDM-CW strain corresponding to the maximal values of \(\al\) allowed by fifth force constraints, \(h=h(\al_Y)\) with \(\al_Y\) obtained from~\cite{Murata:2014nra,Sereno:2006mw}; the region above this line is excluded.}\label{fig:signal} \end{figure} Future facilities will also be able to probe much lower values of the ULDM mass. In the intermediate frequency range \(0.1~\mathrm{Hz}\lesssim f \lesssim1\)~Hz, corresponding to \(4\times10^{-16}~\mathrm{eV}\lesssim m \lesssim4\times10^{-15}\)~eV, the BBO and DECIGO detectors are expected to attain sensitivities of order \(h_0\sim10^{-23}\text{---}10^{-24}\)~\cite{Seto:2001qf,Harry:2006fi}. This means these GWIs could detect a spin-2 ULDM-CW signal down to \(\al\sim10^{-8}\) at those frequencies. In the low frequency range the planned space-based interferometer LISA will reach a sensitivity of \(h_0\sim10^{-21}\) for \(f\sim10^{-2}\)~Hz (\(m\sim4\times10^{-17}\)~eV), which means that it could detect spin-2 ULDM with \(\al\sim10^{-7}\) and below. These limits would be much improved with a dedicated pipeline for these interferometers, as is the case for HLV. We collect all the sensitivities as compiled in~\cite{Schmitz:2020syl} and compare them to the theoretical signal in Fig.~\ref{fig:signal}---notice that strictly speaking these sensitivities are valid only for the standard tensor modes of GWs, namely the \(\vepCross\) and \(\vepPlus\) in our notation, but the differences are small and not relevant for our order of magnitude estimates~\cite{Zhang:2019oet}. \section{Conclusion and outlook} \label{sec:out} GWIs are a unique tool to understand the nature of gravity. In this work we have shown that GWIs have the potential to test the properties of gravity \emph{and} dark matter by detecting or constraining spin-2 ULDM. In particular, we expect that the existing HLV facilities could detect spin-2 ULDM for values of the coupling parameter \(\al\) as small as \(\al\sim10^{-7}\) for frequencies of \(f\sim\mathrm{few}~\times100\)~Hz (that is, a Yukawa range \(\lambda\deq1/2\pi f\sim10^4\)~m). 
A null result would place the most stringent limits on the strength of the Yukawa-like fifth force modification of the inverse-square law of gravitational interaction, quantified by \(\al\), provided that the fifth force is carried by the dark matter. Looking forward, future GWIs in the same frequency range can push this limit even further by up to two orders of magnitude, whereas planned facilities such as DECIGO and the BBO (\(f\sim0.1\)~Hz), and the milli-Hertz space-based LISA interferometer are expected to attain \(\al\lesssim10^{-7}\text{---}10^{-8}\) in their respective frequency ranges; these limits can be significantly improved with a dedicated pipeline for the spin-2 ULDM-CW signal. Our results complement our previous studies on the bounds on the spin-2 ULDM coupling \(\al\) coming from PTAs~\cite{Armaleo:2020yml} and individual pulsar timing data~\cite{Armaleo:2019gil}, which cover the frequency range \(10^{-9}~\mathrm{Hz}\lesssim f \lesssim10^{-3}\)~Hz, and for which comparable limits on \(\al\) were obtained. Our findings should be compared with existing limits on spin-2 ULDM coming from superradiance. By measuring the spin and mass of known black holes and other astrophysical objects, the mass ranges \(6.4\times10^{-22}~\mathrm{eV}\lesssim m \lesssim7.7\times10^{-21}~\mathrm{eV}\), \(1.8\times10^{-20}~\mathrm{eV}\lesssim m \lesssim1.8\times10^{-16}~\mathrm{eV}\) and \(2.2\times10^{-14}~\mathrm{eV}\lesssim m \lesssim2.8\times10^{-11}~\mathrm{eV}\) are excluded, since otherwise these black holes and celestial objects would not be observed~\cite{Stott:2020gjj}. These bounds are valid provided that any additional interactions that the bosons might possess are small enough not to interfere with the onset and development of superradiance. In particular, these limits are valid if \(10^{-30}\,\mathrm{eV}/m\ll\al\ll1\)~\cite{Brito:2020lup}, which is verified for most of the parameter space we are considering. Therefore, the limits we can obtain from GWIs will at the same time independently exclude some of the parameter space that is probed by superradiance, as well as test regions not accessible by it. Spin-2 ULDM can also be detected thanks to the CW signal that superradiance produces, which is physically unrelated to and distinct from the signal we have described in this work~\cite{Brito:2020lup}; no such searches have been carried out yet for spin-2 ULDM. In order to fully take advantage of GWI data to test spin-2 ULDM, a dedicated pipeline should be developed. As we have shown in Sec.~\ref{sec:maths} the signal \Eq{eq:signal} has a peculiar geometric structure that is explicitly given in \Eq{eq:signal_rpq}. Moreover, the ULDM signal is expected to be coherent for a time \(\tcoh = 2/fv^2\). An optimised analysis molded onto the shape of this signal can not only improve the sensitivity of GWIs to spin-2 ULDM-CW, but also discriminate between ULDM and other sources of CWs at different frequencies, such as fast-spinning Galactic neutron stars (at high frequencies) or ultra-compact Galactic binaries (in the milli-Hertz band), CWs coming from superradiance, and other variants of ULDM, furthering our grasp of dark matter and gravity. \acknowledgments The Authors would like to thank C.~Palomba for valuable discussion on the semi-coherent methods used in GW full-sky searches, R.~Brito and V.~Cardoso for an update on the status of spin-2 superradiance, and N.~Tamanini, A. Klein and R. Sturani for useful correspondence on CW searches with LISA. 
FU is supported by the European Regional Development Fund (ESIF/ERDF) and the Czech Ministry of Education, Youth and Sports (MEYS) through Project CoGraDS - \verb|CZ.02.1.01/0.0/0.0/15_003/0000437|. The work of DLN and JMA has been supported by CONICET, ANPCyT and UBA.
\section{Introduction} Semantic change -- that is, change in the meaning of individual words \cite{campbell_1998} -- is a continuous, inevitable process stemming from numerous causes and influenced by various factors. Words are continuously changing, with new senses emerging all the time. \cite{campbell_1998} presents no fewer than 11 types of semantic change, which are generally classified into two broad categories: narrowing and widening. Most linguists have found structural and psychological factors to be the main causes of semantic change, but the evolution of technology and cultural and social changes are not to be overlooked. Measuring semantic divergence across languages can be useful in theoretical and historical linguistics -- being central to models of language and cultural evolution -- but also in downstream applications relying on cognates, such as machine translation. \textbf{Cognates} are words in sister languages (languages descending from a common ancestor) with a common proto-word. For example, the Romanian word \emph{victorie} and the Italian word \emph{vittoria} are cognates, as they both descend from the Latin word \emph{victoria} (meaning \emph{victory}) -- see Figure \ref{fig:cognates}. In most cases, cognates have preserved similar meanings across languages, but there are also exceptions. These are called deceptive cognates or, more commonly, false friends. Here we use the definition of cognates that refers to words with similar appearance and some common etymology, and use ``true cognates'' to refer to cognates which also have a common meaning, and ``deceptive cognates'' or ``false friends'' to refer to cognate pairs which do not have the same meaning (anymore). The most common way cognates have diverged is by changing their meaning. For many cognate pairs, however, the changes can be more subtle, relating to the feeling attached to a word, or its connotations. This can make false friends even more delicate to distinguish from true cognates. \begin{figure}[ht] \center \includegraphics[width=200pt]{resources/example_paper_new.pdf} \caption{\label{fig:cognates}Example of cognates and their common ancestor} \end{figure} Cognate word pairs can help students when learning a second language and contribute to the expansion of their vocabularies. False friends, however, from the more obvious differences in meaning to the more subtle, have the opposite effect, and can be confusing for language learners and make the correct use of language more difficult. Cognate sets have also been used in a number of applications in natural language processing, including for example machine translation \cite{kondrak2003cognates}. These applications rely on properly distinguishing between true cognates and false friends. \subsection{Related work} Cross-lingual semantic word similarity consists in identifying words that refer to similar semantic concepts and convey similar meanings across languages \cite{vulic_and_moens_2}. Some of the most popular approaches rely on probabilistic models \cite{vulic_and_moens} and cross-lingual word embeddings \cite{soegard_et_al}. A comprehensive list of cognates and false friends for every language pair is difficult to find or manually build -- this is why applications have to rely on automatically identifying them. There have been a number of previous studies attempting to automatically extract pairs of true cognates and false friends from corpora or from dictionaries. 
Most methods are based either on orthographic and phonetic similarity, or require large parallel corpora or dictionaries \cite{inkpen2005automatic,st2017identifying,nakov2009unsupervised,chen2016false}. We propose a corpus-based approach that is capable of covering the vast majority of the vocabulary for a large number of languages, while at the same time requiring minimal human effort in terms of manually evaluating word-pair similarity or building lexicons, as it needs only large monolingual corpora. In this paper, we make use of cross-lingual word embeddings in order to distinguish between true cognates and false friends. There have been few previous studies using word embeddings for the detection of false friends or cognate words, usually using simple methods on only one or two pairs of languages \cite{castro2018high,torres2011using}. \begin{figure*}[!ht] \centering \begin{subfigure}{0.20\textwidth} \includegraphics[width=\linewidth]{resources/cognates/histograms3/fr_es_scores3.png} \caption{Es-Fr} \end{subfigure} \begin{subfigure}{0.20\textwidth} \includegraphics[width=\linewidth]{resources/cognates/histograms3/es_it_scores3.png} \caption{Es-It} \end{subfigure} \begin{subfigure}{0.20\textwidth} \includegraphics[width=\linewidth]{resources/cognates/histograms3/ro_es_scores3.png} \caption{Es-Ro} \end{subfigure} \begin{subfigure}{0.20\textwidth} \includegraphics[width=\linewidth]{resources/cognates/histograms3/es_pt_scores3.png} \caption{Es-Pt} \end{subfigure} \begin{subfigure}{0.20\textwidth} \includegraphics[width=\linewidth]{resources/cognates/histograms3/fr_it_scores3.png} \caption{Fr-It} \end{subfigure} \begin{subfigure}{0.20\textwidth} \includegraphics[width=\linewidth]{resources/cognates/histograms3/ro_fr_scores3.png} \caption{Fr-Ro} \end{subfigure} \begin{subfigure}{0.20\textwidth} \includegraphics[width=\linewidth]{resources/cognates/histograms3/fr_pt_scores3.png} \caption{Fr-Pt} \end{subfigure} \begin{subfigure}{0.23\textwidth} \includegraphics[width=\linewidth]{resources/cognates/histograms3/ro_it_scores3.png} \caption{It-Ro} \end{subfigure} \begin{subfigure}{0.20\textwidth} \includegraphics[width=\linewidth]{resources/cognates/histograms3/it_pt_scores3.png} \caption{It-Pt} \end{subfigure} \begin{subfigure}{0.20\textwidth} \includegraphics[width=\linewidth]{resources/cognates/histograms3/ro_pt_scores3.png} \caption{Ro-Pt} \end{subfigure} \begin{subfigure}{0.20\textwidth} \includegraphics[width=\linewidth]{resources/cognates/histograms3/fr_en_scores3.png} \caption{En-Fr} \end{subfigure} \begin{subfigure}{0.20\textwidth} \includegraphics[width=\linewidth]{resources/cognates/histograms3/it_en_scores3.png} \caption{En-It} \end{subfigure} \begin{subfigure}{0.20\textwidth} \includegraphics[width=\linewidth]{resources/cognates/histograms3/es_en_scores3.png} \caption{En-Es} \end{subfigure} \begin{subfigure}{0.23\textwidth} \includegraphics[width=\linewidth]{resources/cognates/histograms3/ro_en_scores3.png} \caption{En-Ro} \end{subfigure} \begin{subfigure}{0.20\textwidth} \includegraphics[width=\linewidth]{resources/cognates/histograms3/pt_en_scores3.png} \caption{En-Pt} \end{subfigure} \begin{subfigure}{0.20\textwidth} \includegraphics[width=\linewidth]{resources/cognates/histograms3/en_la_scores3.png} \caption{En-La} \end{subfigure} \begin{subfigure}{0.20\textwidth} \includegraphics[width=\linewidth]{resources/cognates/histograms3/fr_la_scores3.png} \caption{Fr-La} \end{subfigure} \begin{subfigure}{0.20\textwidth} 
\includegraphics[width=\linewidth]{resources/cognates/histograms3/es_la_scores3.png} \caption{Es-La} \end{subfigure} \begin{subfigure}{0.20\textwidth} \includegraphics[width=\linewidth]{resources/cognates/histograms3/it_la_scores3.png} \caption{It-La} \end{subfigure} \begin{subfigure}{0.20\textwidth} \includegraphics[width=\linewidth]{resources/cognates/histograms3/pt_la_scores3.png} \caption{Pt-La} \end{subfigure} \begin{subfigure}{0.20\textwidth} \includegraphics[width=\linewidth]{resources/cognates/histograms3/ro_la_scores3.png} \caption{Ro-La} \end{subfigure} \caption{\label{fig:exp1}Distributions of cross-language similarity scores between cognates.} \end{figure*} \subsection{Contributions} The contributions of our paper are twofold: firstly, we propose a method for quantifying the semantic divergence of languages; secondly, we provide a framework for detecting and correcting false friends, based on the observation that these are usually deceptive cognate pairs: pairs of words that once had a common meaning, but whose meaning has since diverged. We propose a method for measuring the semantic divergence of sister languages based on cross-lingual word embeddings. We report empirical results on five Romance languages: Romanian, French, Italian, Spanish and Portuguese. For a deeper insight into the matter, we also compute and investigate the semantic similarity between modern Romance languages and Latin. We finally introduce English into the mix, to analyze the behavior of a more remote language, where words deriving from Latin are mostly borrowings. Further, we make use of cross-lingual word embeddings in order to distinguish between true cognates and false friends. Our chosen method of leveraging word embeddings extends naturally to another application related to this task which, to our knowledge, has not been explored so far in research: false friend correction. We propose a straightforward method for solving this task of automatically suggesting a replacement when a false friend is incorrectly used in a translation. Especially for language learners, solving this problem could result in a very useful tool to help them use language correctly. \section{The Method} \subsection{Cross-lingual Word Embeddings} Word embeddings are vectorial representations of words in a continuous space, built by training a model to predict the occurrence of a given word in a text corpus given its context. Based on the distributional hypothesis stating that similar words occur in similar contexts, these vectorial representations can be seen as semantic representations of words and can be used to compute semantic similarity between word pairs (representations of words with similar meanings are expected to be close together in the embedding space). To compute the semantic divergence of cognates across sister languages, as well as to identify pairs of false cognates (pairs of cognates with high semantic distance), which are by definition pairs of words in two different languages, we need to obtain a multilingual semantic space that is shared between the cognates. Having the representations of both cognates in the same semantic space, we can then compute the semantic distance between them using their vectorial representations in this space. 
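As a minimal illustration of this shared-space computation (a sketch with placeholder file names, assuming FastText vectors that have already been rotated into a common space by the alignment procedure described in the next subsection), the similarity of a cognate pair reduces to a single cosine:
\begin{verbatim}
# Cosine similarity of one cognate pair in a shared embedding space.
# File names are placeholders for pre-aligned FastText vectors.
from gensim.models import KeyedVectors

es = KeyedVectors.load_word2vec_format('wiki.es.aligned.vec')
it = KeyedVectors.load_word2vec_format('wiki.it.aligned.vec')

sim = es.cosine_similarities(es['victoria'], [it['vittoria']])[0]
print(sim)   # high for true cognates, lower for false friends
\end{verbatim}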
We use word embeddings computed using the FastText algorithm, pre-trained on Wikipedia for the six languages in question. The vectors have dimension 300, and were obtained using the skip-gram model described in \cite{bojanowski_et_al} with default parameters. The algorithm for measuring the semantic distance between cognates in a pair of languages $(lang_1, lang_2)$ consists of the following steps: \begin{enumerate} \item Obtain word embeddings for each of the two languages. \item Obtain a shared embedding space, common to the two languages. This is accomplished using an alignment algorithm, which finds a linear transformation between the two spaces that minimizes, based on a small bilingual dictionary, the distances between a set of seed word pairs known to have the same meaning. For our purposes, we use the publicly available multilingual alignment matrices that were published in \cite{smith2017offline}. \item Compute semantic distances for each pair of cognate words in the two languages, using a vectorial distance (we chose cosine distance) on their corresponding vectors in the shared embedding space. \end{enumerate} \subsection{Cross-language Semantic Divergence} We propose a definition of semantic divergence between two languages based on the semantic distances of their cognate word pairs in these embedding spaces. The semantic distance between two languages can then be computed as the average of the semantic distances over all pairs of cognates in that language pair. We use the list of cognate sets in Romance languages proposed by \cite{ciobanu_and_dinu_lrec}. It contains 3,218 complete cognate sets in Romanian, French, Italian, Spanish and Portuguese, along with their Latin common ancestors. The cognate sets are obtained from electronic dictionaries which provide information about the etymology of the words. Two words are considered cognates if they have the same etymon (i.e., if they descend from the same word). The algorithm described above for computing semantic distance for cognate pairs rests on the assumption that the (shared) embedding spaces are comparable, so that the averaged cosine similarities, as well as the overall distributions of scores that we obtain for each pair of languages, can be compared in a meaningful way. For this to be true, at least two conditions need to hold: \begin{enumerate} \item The embedding spaces for each language need to be similarly representative of the language, or trained on similar texts -- this assumption holds sufficiently in our case, since all embeddings (for all languages) are trained on Wikipedia, which at least contains a similar selection of texts for each language, and at most can be considered comparable corpora. \item The similarity scores in a certain (shared) embedding space need to be sampled from a similar distribution. To confirm this assumption, we ran a brief experiment looking at the distributions of a random sample of similarity scores across all embedding spaces, and found that the distributions for each language pair are similar (in mean and variance). This result was not obvious but also not surprising, since: \begin{itemize} \item The way we create shared embedding spaces is by aligning the embedding space of any language to the English embedding space (which is a common reference for all shared embedding spaces). 
\item The nature of the alignment operation (consisting only of rotations and reflections) guarantees monolingual invariance, as described in \cite{artetxe2016learning,smith2017offline}. \end{itemize} \end{enumerate} \subsubsection{The Romance Languages} We compute the cosine similarity between cognates for each pair of modern languages, and between modern languages and Latin as well. We compute an overall score of similarity for a pair of languages as the average similarity over the entire dataset of cognates. The results are reported in Table \ref{table:exp1}. \begin{table}[!ht] \begin{center} \begin{tabular}{l | l l l l l} \hline & Fr & It & Pt & Ro & La\\ \hline Es & 0.67 & 0.69 & 0.70 & 0.58 & 0.41\\ Fr & & 0.66 & 0.64 & 0.56 & 0.40 \\ It & & & 0.66 & 0.57 & 0.41 \\ Pt & & & & 0.57 & 0.41\\ Ro & & & & & 0.40 \\ \hline \end{tabular} \end{center} \caption{\label{table:exp1}Average cross-language similarity between cognates (Romance languages).} \end{table} We observe that the highest similarity is obtained between Spanish and Portuguese (0.70), while the lowest values are obtained for Latin. Among the modern languages, Romanian has, overall, the lowest degrees of similarity to the other Romance languages. A possible explanation for this result is the fact that Romanian developed far from the Romance kernel, being surrounded by Slavic languages. In Table \ref{table:exp3} we report, for each pair of languages, the most similar (above the main diagonal) and the most dissimilar (below the main diagonal) cognate pair for Romance languages. \begin{table*}[!ht] \begin{small} \begin{center} \begin{tabular}{l | l l l l l} \hline & Es & Fr & It & Ro & Pt\\ \hline Es & -- & ocho/huit(0.89) & diez/dieci(0.86) & ocho/opt(0.82) & ocho/oito(0.89) \\ Fr & caisse/casar(0.05) & -- & dix/dieci(0.86) & décembre/decembrie(0.83) & huit/oito(0.88) \\ It & prezzo/prez(0.06) & punto/ponte(0.09) & -- & convincere/convinge(0.75) & convincere/convencer(0.88) \\ Ro & miere/mel(0.09) & face/facteur(0.10) & as/asso(0.11) & -- & opt/oito(0.83) \\ Pt & prez/preço(0.05) & pena/paner(0.09) & preda/prea(0.08) & linho/in(0.05) & -- \\ \hline \end{tabular} \end{center} \caption{\label{table:exp3}Most similar and most dissimilar cognates} \end{small} \end{table*} The problem that we address in this experiment involves a certain \textit{vagueness of reported values} (also noted by \cite{eger_et_al} in the problem of semantic language classification), as there is no gold standard against which to compare our results. To overcome this drawback, we use the degrees of similarity that we obtained to produce a language clustering (using the UPGMA hierarchical clustering algorithm), and observe that it is similar to the generally accepted tree of languages, and to the clustering tree built on intelligibility degrees by \cite{ciobanu_and_dinu_intelligibility}. The obtained dendrogram is rendered in Figure \ref{fig:dendrogram}. \begin{figure}[ht] \center \includegraphics[width=0.8\linewidth]{resources/comp/dendrogram.png} \caption{\label{fig:dendrogram}Dendrogram of the language clusters} \end{figure} \subsubsection{The Romance Languages vs English} Further, we introduce English into the mix as well. We run this experiment on a subset of the used dataset, comprising the words that have a cognate in English as well\footnote{Here we \textit{stretch} the definition of \textit{cognates}, as the term generally refers to sister languages. 
In this case English is not a sister of the Romance languages, and the words with Latin ancestors that entered English are mostly borrowings.}. The subset has 305 complete cognate sets. The results are reported in Table \ref{table:exp2}, and the distribution of similarity scores for each pair of languages is rendered in Figure \ref{fig:exp1}. We notice that English has 0.40 similarity with Latin, the lowest value (along with French and Romanian), but close to the other languages. Out of the modern Romance languages, Romanian is the most distant from English, with 0.53 similarity. Another interesting observation relates to the distributions of scores for each language pair, shown in the histograms in Figure \ref{fig:exp1}. While similarity scores between cognates among Romance languages usually follow a normal distribution (or another unimodal, more skewed distribution), the distributions of scores for Romance languages with English seem to follow a bimodal distribution, pointing to a different semantic evolution for words in English that share a common etymology with a word in a Romance language. One possible explanation is that the set of cognates between English and Romance languages (which are pairs of languages that are more distantly related) consists of two distinct groups: for example, one group of words that were borrowed directly from the Romance language to English (which should have more meaning in common), and words that had a more complicated etymological trail between languages (and for which meaning might have diverged more, leading to lower similarity scores). \begin{table}[!ht] \begin{center} \begin{tabular}{l | l l l l l l} \hline & Fr & It & Pt & Ro & En & La\\ \hline Es & 0.64 & 0.67 & 0.68 & 0.57 & 0.61 & 0.42 \\ Fr & & 0.64 & 0.61 & 0.55 & 0.60 & 0.40 \\ It & & & 0.65 & 0.57 & 0.60 & 0.41 \\ Pt & & & & 0.56 & 0.59 & 0.42 \\ Ro & & & & & 0.53 & 0.40 \\ En & & & & & & 0.40 \\ \hline \end{tabular} \end{center} \caption{\label{table:exp2}Average cross-language similarity between cognates} \end{table} \subsection{Detection and Correction of False Friends} In a second series of experiments, we propose a method for identifying and correcting false friends. Using the same principles as in the previous experiment, we can use embedding spaces and semantic distances between cognates in order to detect pairs of false friends, which are simply defined as pairs of cognates which do not share the same meaning, or which are not semantically similar \textit{enough}. This definition is of course ambiguous: there are different degrees of similarity, and as a consequence different potential degrees of \textit{falseness} in a false friend. Based on this observation, we define the notions of \textit{hard false friend} and \textit{soft false friend}. A \textit{hard false friend} is a pair of cognates whose meanings have diverged enough that they no longer have the same meaning, and should not be used interchangeably (as translations of one another). In this category fall most known examples of false friends, such as the French-English cognate pair \textit{attendre} / \textit{attend}: in French, \textit{attendre} has a completely different meaning, which is \textit{to wait}. A different and more subtle type of false friends can result from more minor semantic shifts between the cognates. In such pairs, the meaning of the cognate words may remain roughly the same, but with a difference in nuance or connotation. 
Such an example is the Romanian-Italian cognate pair \textit{amic} / \textit{amico}. Here, both cognates mean \textit{friend}, but in Italian the connotation is that of a closer friend, whereas the Romanian \textit{amic} denotes a more distant friend, or even an acquaintance. A more suitable Romanian translation for \textit{amico} would be \textit{prieten}, while a better translation in Italian for \textit{amic} could be \textit{conoscente}. Though their meaning is roughly the same, translating one word for the other would be an inaccurate use of the language. These cases are especially difficult to handle for beginner language learners (especially since the cognate pair may appear as a valid translation in multilingual dictionaries), and using them in the wrong contexts is an easy trap to fall into. Given these considerations, an automatic method for finding the appropriate term to translate a cognate instead of using the false friend would be a useful tool to aid in translation or in language learning. As a potential solution to this problem, we propose a method that can be used to identify pairs of false friends, to distinguish between the two categories of false friends defined above (\textit{hard false friends} and \textit{soft false friends}), and to provide suggestions for correcting the erroneous usage of a false friend in translation. \textbf{False friends} can be identified as pairs of cognates with high semantic distance. More specifically, we consider a pair of cognates to be a false friend pair if, in the shared semantic space, there exists a word in the second language which is semantically closer to the original word than its cognate in that language (in other words, the cognate is not the optimal translation). The difference between the semantic distance separating the two cognates and the semantic distance from the original word to this closest word will be used as a measure of the \textit{falseness} of the false friend. The word that is found to be closest to the original word will be the suggested ``correction''. The algorithm can be described as follows: \begin{algorithm} \caption{Detection and correction of false friends} \label{Detection and correction of false friends} \begin{algorithmic}[1] \State Given the cognate pair $(c_1, c_2)$ where $c_1$ is a word in $lang_1$ and $c_2$ is a word in $lang_2$: \State Find the word $w_2$ in $lang_2$ such that for any $w_i$ in $lang_2$, $distance(c_1, w_2) \leq distance(c_1, w_i)$ \If{$w_2 \neq c_2$} \State $(c_1, c_2)$ is a pair of false friends \State Degree of falseness $= distance(c_1, c_2) - distance(c_1, w_2)$ \\ \Return $w_2$ as potential correction \EndIf \end{algorithmic} \end{algorithm} We select a few results of the algorithm to show in Table \ref{table:ff}, containing examples of extracted false friends for the language pair French-Spanish, along with the suggested correction and the computed degree of falseness. Depending on the application, the measure of \textit{falseness} could be used by choosing a threshold to single out pairs of false friends that are \textit{harder} or \textit{softer}, with a customizable degree of sensitivity to the difference in meaning. 
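A minimal sketch of this procedure in Python (assuming, as our illustrative setup, gensim \texttt{KeyedVectors} already rotated into the shared space; the names and helper structure are our own, not a released implementation):
\begin{verbatim}
# Sketch of the detection/correction algorithm above.
# emb1/emb2: gensim KeyedVectors already mapped to the shared space.
import numpy as np

def cos_sim(u, v):
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

def check_false_friend(c1, c2, emb1, emb2):
    """Return (is_false_friend, suggested_translation, falseness)."""
    # nearest word in language 2 to the source word c1
    w2, _ = emb2.similar_by_vector(emb1[c1], topn=1)[0]
    if w2 == c2:
        return False, c2, 0.0
    # falseness = distance(c1, c2) - distance(c1, w2), where
    # cosine distance = 1 - cosine similarity
    falseness = cos_sim(emb1[c1], emb2[w2]) - cos_sim(emb1[c1], emb2[c2])
    return True, w2, falseness
\end{verbatim}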
\begin{table}[!ht] \begin{center} \begin{tabular}{l l l l} \hline FR cognate & ES cognate & Correction & Falseness \\ \hline prix & prez & premio & 0.67 \\ long & luengo & largo & 0.57 \\ face & faz & cara & 0.41 \\ change & caer & cambia & 0.41 \\ concevoir & concebir & dise{\~n}ar & 0.18 \\ majeur & mayor & importante & 0.14 \\ \hline \end{tabular} \end{center} \caption{\label{table:ff}Extracted false friends for French-Spanish} \end{table} \subsubsection{Evaluation} In this section we describe our overall results on identifying false friends for language pairs among English and five Romance languages: French, Italian, Spanish, Portuguese and Romanian. \begin{table}[!ht] \begin{center} \begin{tabular}{l | l l l} \hline & Accuracy & Precision & Recall\\ \hline Our method & 81.12 & 86.68 & 75.59 \\ Castro et al.~\cite{castro2018high} & 77.28 & & \\ WN Baseline & 69.57 & 85.82 & 54.50 \\ \hline \end{tabular} \end{center} \caption{\label{table:ffdetect}Performance for Spanish-Portuguese using the curated false friends test set} \end{table} We evaluate our method in two separate stages. First, we measure the accuracy of false friend detection on a manually curated list of false friends and true cognates in Spanish and Portuguese, used in a previous study \cite{castro2018high}, and introduced in \cite{torres2011using}. This resource is composed of 710 Spanish-Portuguese word pairs: 338 true cognates and 372 false friends. We also compare our results to the ones reported in this study, which uses a method similar to ours (a simple classifier that takes embedding similarities as features to identify false friends) and shows improvements over results in previous research. The results are shown in Table \ref{table:ffdetect}. For the second part of the experiment, we use the list of cognate sets in English and Romance languages proposed by \cite{ciobanu_and_dinu_lrec} (the same that we used in our semantic divergence experiments), and try to automatically decide which of these are false friends. Since manually built false friend lists are not available for every language pair that we experiment on, for the language pairs in this second experiment we build our gold standard using a multilingual dictionary (WordNet) in order to infer false friend and true cognate relationships. We assume two cognates in different languages are true cognates if they occur together in any WordNet synset, and false friends otherwise. 
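This gold standard can be reproduced with NLTK's Open Multilingual Wordnet (a sketch under our assumptions: the \texttt{wordnet} and \texttt{omw-1.4} data are installed, and languages are given as ISO-639-3 codes; Romanian is not covered, as noted later in this section):
\begin{verbatim}
# WordNet-based gold standard: two cognates are "true cognates" iff
# they share at least one synset in the Open Multilingual Wordnet.
from nltk.corpus import wordnet as wn

def gold_true_cognates(w1, lang1, w2, lang2):
    s1 = set(wn.synsets(w1, lang=lang1))
    s2 = set(wn.synsets(w2, lang=lang2))
    return bool(s1 & s2)

print(gold_true_cognates('victoria', 'spa', 'vittoria', 'ita'))
\end{verbatim}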
\begin{table}[!ht] \begin{center} \small \begin{tabular}{l | l l l} \hline & Accuracy & Precision & Recall\\ \hline EN-ES & 76.58 & 63.88 & 88.46 \\ ES-IT & 75.80 & 41.66 & 54.05 \\ ES-PT & 82.10 & 40.0 & 42.85 \\ EN-FR & 77.09 & 57.89 & 94.28 \\ FR-IT & 74.16 & 32.81 & 65.62 \\ FR-ES & 73.03 & 33.89 & 69.96 \\ EN-IT & 73.07 & 33.76 & 83.87 \\ IT-PT & 76.14 & 29.16 & 43.75 \\ EN-PT & 77.25 & 59.81 & 86.48 \\ \hline \end{tabular} \end{center} \caption{\label{table:exp4}Performance for all language pairs using WordNet as gold standard.} \end{table} We measure accuracy, precision, and recall, where: \begin{itemize} \item a \textit{true positive} is a cognate pair whose words are not synonyms in WordNet and which is identified as a false friend pair by the algorithm, \item a \textit{true negative} is a pair which is identified as true cognates and is found in the same WordNet synset, \item a \textit{false positive} is a word pair which is identified as a false friend pair by the algorithm but also appears as a synonym pair in WordNet, \item and a \textit{false negative} is a pair of cognate words that are not synonyms in WordNet, but are also not identified as false friends by the algorithm. \end{itemize} We should also note that with the WordNet-based method we can evaluate results for only slightly over half of the cognate pairs, since not all of them are found in WordNet. This also makes our corpus-based method more useful than a dictionary-based method, since it is able to cover most of the vocabulary of a language (given a large monolingual corpus to train embeddings on). To be able to compare results to the ones evaluated on the manually built test set, we use the WordNet-based method as a baseline in the first experiment. Results for the second evaluation experiment are reported in Table \ref{table:exp4}. In this evaluation experiment we were able to measure performance for language pairs among all languages in our cognate set except for Romanian (which is not available in WordNet). \section{Conclusions} In this paper we proposed a method for computing the semantic divergence of cognates across languages. We relied on word embeddings and extended the pairwise metric to compute the semantic divergence across languages. Our results showed that Spanish and Portuguese are the closest languages, while Romanian is the most dissimilar from Latin, possibly because it developed far from the Romance kernel. Furthermore, clustering the Romance languages based on the introduced semantic divergence measure results in a hierarchy that is consistent with the generally accepted tree of languages. When further including English in our experiments, we noticed that, even though most Latin words that entered English are probably borrowings (as opposed to inherited words), its similarity to Latin is close to that of the modern Romance languages. Our results shed some light on a new aspect of language similarity, from the point of view of cross-lingual semantic change. We also proposed a method for detecting and possibly correcting false friends, and introduced a measure for quantifying the \textit{falseness} of a false friend, distinguishing between two categories: hard false friends and soft false friends. These analyses and algorithms for dealing with false friends can provide useful tools for language learning or for (human or machine) translation. In this paper we provided a simple method for detecting and suggesting corrections for false friends independently of context. 
There are, however, false friend pairs that are context-dependent -- the cognates can be used interchangeably in some contexts, but not in others. In the future, the method using word embeddings could be extended to provide false friend correction suggestions in a given context (possibly by using the word embedding model to predict the appropriate word in that context). \section*{Acknowledgements} Research supported by BRD --- Groupe Societe Generale Data Science Research Fellowships.
\section{Introduction} \label{sec:intro} The \emph{average sensitivity} $\mathbb{AS}(f)$ of a Boolean function $f:\mathbb{Z}_2^n \to \{+1,-1\}$ is $n$ times the probability that $f(x) \ne f(y)$, where $x$ is chosen uniformly at random and $y$ is chosen uniformly from $x$'s neighbors at Hamming distance $1$. Similarly, the \emph{noise sensitivity} $\mathbb{NS}_\varepsilon(f)$ is the probability that $f(x) \ne f(y)$ where $x$ is uniformly random and $y$ is formed by flipping each bit of $x$ independently with probability $\varepsilon$. Sensitivity is a basic structural characteristic of Boolean functions, with applications to computational complexity, pseudorandomness, machine learning, and the theory of social choice~\cite{LinialMN:1993,KhotKMO:2007,ODonnell:2004,HarshaKM:2014,DiakonikolasSTW:2014,Kalai:2010,odonnell-book}. Many cases of interest focus on threshold functions $f(x)$ defined from a smooth function $p: \mathbb{R}^n \rightarrow \mathbb{R}$ and a threshold value $\theta \in \mathbb{R}$, \[ f(x) = \begin{cases} +1 & \text{if $p(x) \geq \theta$} \, , \\ -1 & \text{if $p(x) < \theta$} \, . \end{cases} \] where we define a Boolean function by restricting $p$, and $f$, to the hypercube $\{\pm 1\}^n \subset \mathbb{R}^n$. In particular, polynomial threshold functions $f(x) = \mathop{\mathrm{sgn}}(p(x))$, where $p(x)$ is a polynomial of degree $d$, have played a dominant role in this setting. Gotsman and Linial~\cite{gotsman-linial} conjectured that the sensitivity of such functions is maximized when $p$ is a symmetric polynomial whose roots slice the hypercube at $d$ Hamming weights near $n/2$, \[ p(x_1,\ldots,x_n) = q\!\left( \sum_i x_i \right) \quad \text{where} \quad q(x) = \prod_{h = \lfloor (n-d+1)/2 \rfloor}^{\lfloor (n+d-1)/2 \rfloor} \left( x - h - \frac{1}{2} \right) \, . \] This implies that \[ \mathbb{AS}(f) = O(d \sqrt{n}) \quad \text{and} \quad \mathbb{NS}_\varepsilon(f) = O(d \sqrt{\varepsilon}) \, , \] where the constant in the $O$ depends neither on $d$ nor $n$. This is known in the case $d=1$, i.e., where $f(x)$ is a halfspace~\cite{peres}. However, for $d > 1$ it has remained open for some time. The first nontrivial bounds for threshold functions of degree $d > 1$ were obtained quite recently~\cite{DiakonikolasSTW:2014,HarshaKM:2014}, showing \[ \mathbb{AS}(f) = 2^{O(d)} n^{1-\alpha} \log n \quad \text{and} \quad \mathbb{NS}_\varepsilon(f) = 2^{O(d)} \varepsilon^\alpha \log(1/\varepsilon) \, , \] where $\alpha = O(1/d)$. These bounds work by dividing polynomials into two classes: ``juntas'' where a few variables are highly influential, and ``regular'' polynomials where no variable has large influence. The regular case is handled using anticoncentration bounds and the invariance principle of~\cite{MOO05}, showing that the distribution of values of $p(x)$ is close to what it would be if $x$ were drawn from the Gaussian distribution as opposed to the uniform distribution on the hypercube. Using different reasoning~\cite{kane-correct-exponent}, it was recently shown that \[ \mathbb{AS}(f) = \sqrt{n} \,(\log n)^{O(d \log d)} \,2^{O(d^2 \log d)} \quad \text{and} \quad \mathbb{NS}_\varepsilon(f) = \sqrt{\varepsilon} \,(\log (1/\varepsilon))^{O(d \log d)} \,2^{O(d^2 \log d)} \, . \] While this dependence on $d$ is somewhat regrettable, these results show that the Gotsman-Linial conjecture holds, up to polylogarithmic factors, for each fixed $d$. 
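To make these definitions concrete, the following Monte Carlo sketch (our illustrative script, using a random degree-$2$ polynomial threshold function; not taken from the literature) estimates $\mathbb{AS}(f)$ and $\mathbb{NS}_\varepsilon(f)$ directly from the two sampling experiments described above:
\begin{verbatim}
# Monte-Carlo estimates of AS(f) and NS_eps(f) for a random degree-2
# polynomial threshold function f = sgn(x^T A x) on {+1,-1}^n.
import numpy as np

rng = np.random.default_rng(1)
n, N, eps = 30, 200_000, 0.01
A = rng.normal(size=(n, n))

def f(X):                      # sign of the quadratic form, row-wise
    return np.sign(np.einsum('bi,ij,bj->b', X, A, X))

X = rng.choice([-1.0, 1.0], size=(N, n))

# average sensitivity: flip one uniformly random coordinate
Y = X.copy()
Y[np.arange(N), rng.integers(n, size=N)] *= -1
AS = n * np.mean(f(X) != f(Y))

# noise sensitivity: flip each coordinate independently with prob. eps
Z = X * np.where(rng.random((N, n)) < eps, -1.0, 1.0)
NS = np.mean(f(X) != f(Z))
print(AS, NS)
\end{verbatim}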
Even when our ultimate questions pertain to sensitivity on the hypercube, working with functions defined on $\mathbb{R}^n$ can permit techniques from analysis to be brought to bear on the problem. This has motivated interest in continuous notions of noise sensitivity, most notably \emph{Gaussian sensitivity}, which is obtained by placing a Gaussian measure on $\mathbb{R}^n$ and applying Gaussian noise. In this setting, a simple and elegant argument~\cite{kane-surface-area} shows that the Gaussian analog of $\mathbb{NS}_\varepsilon(f)$ is indeed $O(d \sqrt{\varepsilon})$. In this article, we introduce a notion of \emph{spherical sensitivity} for functions defined on the unit $n$-sphere. By analyzing how spherical harmonics are carried to Boolean harmonics, i.e., Fourier basis functions over the hypercube $\mathbb{Z}_2^n$, we give a transfer theorem bounding the Boolean sensitivity in terms of the spherical sensitivity. Our results hold in expectation, when the function, or equivalently the hypercube, is randomly rotated in $\mathbb{R}^n$. In essence, we show that the distribution of angles induced by Boolean noise on the hypercube can be modeled by the effect of Brownian motion on the sphere, or equivalently diffusion driven by the spherical heat equation. As an application, by bounding the spherical sensitivity of polynomial threshold functions, we establish the Gotsman-Linial conjecture on average, in the following sense: for any polynomial $p$ of degree $d$, if we apply a random rotation $R$ and then restrict to the hypercube, the expected average sensitivity and noise sensitivity of the resulting Boolean threshold function $f(x) = \mathop{\mathrm{sgn}}(Rp(x))$ are $\Exp_R \mathbb{AS}(f) = O(d \sqrt{n})$ and $\Exp_R \mathbb{NS}_\varepsilon(f) = O(\varepsilon \sqrt{n})$, respectively. \section{Spherical harmonics and the heat equation} Here and in later sections we repeat information from classic texts~\cite{szego,vilenkin} and two excellent reviews~\cite{frye-efthimiou,gallier}. Recall that the Laplace operator on $\mathbb{R}^n$ is defined as \[ \Delta_{\mathbb{R}^n} = \sum_{i=1}^n \frac{\partial^2}{\partial x_i^2} \, . \] The Laplace-Beltrami operator on $S_{n-1}$ consists of the contribution to $\Delta_{\mathbb{R}^n}$ arising from the dependence of a function on angular variables rather than the distance from the origin. It can be defined by writing $\Delta_{\mathbb{R}^n}$ in polar coordinates, \[ \Delta_{\mathbb{R}^n} = \frac{\partial^2}{\partial r^2} + \frac{n-1}{r} \frac{\partial}{\partial r} + \frac{1}{r^2} \Delta_{S_{n-1}} \, . \] A function $h:\mathbb{R}^n \to \mathbb{R}$ is \emph{harmonic} if $\Delta_{\mathbb{R}^n} h = 0$. In particular, for each $\ell \ge 0$, there is a linear subspace of harmonic homogeneous polynomials of degree $\ell$ of dimension (where we assume $n \ge 3$) \begin{equation} \label{eq:dim} d_\ell = {n + \ell - 1 \choose \ell} - {n + \ell - 3 \choose \ell - 2} = \frac{n+2\ell-2}{n-2} {n + \ell -3 \choose \ell} \, . \end{equation} Restricting these polynomials to $S_{n-1}$ gives the so-called \emph{spherical harmonics}. We denote a basis for these as $\{ Y_{\ell,j} \mid 1 \le j \le d_\ell \}$. They are eigenfunctions of the Laplace-Beltrami operator: \begin{equation} \label{eq:eigen-spherical} \Delta_{S_{n-1}} Y_{\ell,j} = -\ell (n+\ell-2) Y_{\ell,j} \, . 
\end{equation} Any function $g$ in $L_2(S_{n-1})$ can be expanded in terms of spherical harmonics, \begin{equation} \label{eq:hatf-spherical} g = \sum_{\ell \ge 0} g_\ell = \sum_{\ell \ge 0} \sum_{j=1}^{d_\ell} \widehat{g}(\ell,j) \,Y_{\ell,j} \, . \end{equation} This is analogous to the expansion into Fourier series over $[0,1) \subset \mathbb{R}$, where small $\ell$ corresponds to smooth, low-frequency variations, and larger $\ell$ corresponds to higher frequencies. \section{Noise sensitivity and the heat equation on the cube and the sphere} The standard notion of noise sensitivity for a Boolean function $f:\mathbb{Z}_2^n \to \{\pm 1\}$ is as follows~\cite{odonnell-book}. Define a linear operator $K_\varepsilon$ on the space of probability distributions over $\mathbb{Z}_2^n$ that independently flips each bit with probability $\varepsilon$. That is, if $d(x,y)$ denotes the Hamming distance, \[ K_\varepsilon(x,y) = \varepsilon^{d(x,y)} (1-\varepsilon)^{n-d(x,y)} \, . \] Then if $\delta_x$ is the Kronecker delta function where $\delta_x(z) = 1$ if $z=x$ and $0$ otherwise, \[ \mathbb{NS}_\varepsilon(f) = \Pr_{x,y}[f(y) \ne f(x)] \, , \] where $x$ is uniform in $\{\pm 1\}^n$ and $y$ is chosen from the distribution $K_\varepsilon \delta_x$. Equivalently, if we define the inner product on the cube as \[ \langle f, g \rangle = \Exp_{x \in \mathbb{Z}_2^n} f(x)^* g(x) \, , \] then \[ \mathbb{NS}_\varepsilon(f) = \frac{1}{2} \left( 1 - \langle f, K_\varepsilon f \rangle \right) \, . \] We write $f$ in the Fourier basis, expanding in terms of the characters $\chi_k$, \[ f(x) = \sum_{k \in \mathbb{Z}_2^n} \widehat{f}(k) \,\chi_k(x) \quad \text{where} \quad \chi_k(x) = (-1)^{k \cdot x} \quad \text{and} \quad \widehat{f}(k) = \langle f, \chi_k \rangle \, . \] Then we can use the fact that $\chi_k$ is an eigenvector of $K_\varepsilon$, \begin{equation} \label{eq:phi-eigen} K_\varepsilon \chi_k = (1-2\varepsilon)^{|k|} \chi_k \, , \end{equation} where $|k|$ denotes the Hamming weight of the frequency vector $k$. Then we obtain \begin{align} \mathbb{NS}_\varepsilon(f) &= \frac{1}{2} \left( 1 - \sum_{k \in \mathbb{Z}_2^n} \abs{\widehat{f}(k)}^2 (1-2\varepsilon)^{|k|} \right) \nonumber \\ &= \frac{1}{2} \sum_{k \in \mathbb{Z}_2^n} \abs{\widehat{f}(k)}^2 \left( 1-(1-2\varepsilon)^{|k|} \right) \, . \label{eq:ns} \end{align} Here we used the fact that $\langle f, f \rangle = \sum_k \abs{\widehat{f}(k)}^2 = 1$ since $\abs{f(x)}^2=1$ for all $x$. However, we can also take~\eqref{eq:ns} as the definition of $\mathbb{NS}_\varepsilon(f)$, in which case it can be applied to any function $f:\mathbb{Z}_2^n \to \mathbb{C}$. We can also write the noise sensitivity in terms of a continuous-time (but discrete-space) heat equation on the hypercube. Let $A$ be the adjacency matrix of the hypercube, and let \[ L = A - n \mathds{1} \] be the graph Laplacian. If we apply the heat equation \[ \frac{\partial f}{\partial t} = Lf \] for time $\varepsilon'$ with initial condition $f(0)=f$, we have \[ f(\varepsilon') = \mathrm{e}^{\varepsilon' L} f \, . \] The characters $\chi_k$ are eigenfunctions of the Laplacian, \[ L \chi_k = -2|k| \chi_k \quad \text{and} \quad \mathrm{e}^{\varepsilon' L} \chi_k = \mathrm{e}^{-2|k|\varepsilon'} \chi_k \, .
\] Matching these eigenvalues with~\eqref{eq:phi-eigen}, we see that if \begin{equation} \label{eq:eps-prime} \varepsilon' = \frac{1}{2} \log \frac{1}{1-2\varepsilon} = \varepsilon + O(\varepsilon^2) \, , \end{equation} then \[ K_\varepsilon = \mathrm{e}^{\varepsilon' L} \, , \] and \begin{align} \mathbb{NS}_\varepsilon(f) &= \frac{1}{2} \left( 1-\langle f, f(\varepsilon') \rangle \right) \nonumber \\ &= \frac{1}{2} \left( 1-\langle f, \mathrm{e}^{\varepsilon' L} f \rangle \right) \nonumber \\ &= \frac{1}{2} \left( 1 - \sum_{k \in \mathbb{Z}_2^n} \abs{\widehat{f}(k)}^2 \,\mathrm{e}^{-2 \varepsilon' |k|} \right) \nonumber \\ &= \frac{1}{2} \sum_{k \in \mathbb{Z}_2^n} \abs{\widehat{f}(k)}^2 \left( 1 - \mathrm{e}^{-2 \varepsilon' |k|} \right) \, . \label{eq:cube-heat} \end{align} In analogy with this heat-equation picture of the Boolean noise sensitivity, we define the \emph{spherical sensitivity} $\SS_t(g)$ of a function $g:S_{n-1} \to \{ \pm 1 \}$ as follows. First define the inner product of two functions $f, g:S_{n-1} \to \mathbb{C}$ as \begin{equation} \label{eq:spherical-inner} \langle f, g \rangle_{S} = \Exp_{x \in S_{n-1}} f(x)^* \,g(x) = \frac{1}{\Omega_{n-1}} \int_{S_{n-1}} \mathrm{d}x \,f(x)^* g(x) \, , \end{equation} where $\Omega_{n-1}$ denotes the surface area of $S_{n-1}$, \begin{equation} \label{eq:omega} \Omega_{n-1} = \frac{2 \pi^{n/2}}{\Gamma(n/2)} \, . \end{equation} The heat equation on $S_{n-1}$ is \[ \frac{\partial g}{\partial t} = \Delta_{S_{n-1}} g \, , \] and applying it for time $t$ with the initial condition $g(0)=g$ gives \[ g(t) = \mathrm{e}^{\Delta_{S_{n-1}} t} g \, . \] Then we define $\SS_t(g)$ as \begin{align} \SS_t(g) &= \frac{1}{2} \left( 1-\langle g, g(t) \rangle_{S} \right) \nonumber \\ &= \frac{1}{2} \left( 1-\langle g, \mathrm{e}^{t \Delta_{S_{n-1}}} g \rangle_{S} \right) \nonumber \\ &= \frac{1}{2} \sum_{\ell,j} \abs{\widehat{g}(\ell,j)}^2 \left( 1-\mathrm{e}^{-t \ell (n+\ell-2)} \right) \, , \label{eq:ss} \end{align} where we used~\eqref{eq:eigen-spherical} and the expansion~\eqref{eq:hatf-spherical}. Here we assumed that $g$ takes values in $\{ \pm 1 \}$. In that case, $\SS_t(g)$ is the probability that $g(x) \ne g(y)$ if $x$ is uniformly random and $y$ is the position of a particle that starts at $x$ and undergoes Brownian motion for time $t$. However, as with~\eqref{eq:ns}, we will take~\eqref{eq:ss} as the definition of $\SS_t(g)$, thus extending the notion of sensitivity to arbitrary functions $g: S_{n-1} \to \mathbb{C}$. Comparing~\eqref{eq:cube-heat} and~\eqref{eq:ss}, we see that flipping bits with probability $\varepsilon$ is roughly analogous to running the heat equation on the sphere for time $t = O(\varepsilon/n)$. We will tighten this analogy in Theorem~\ref{thm:transfer} below. \section{Zonal harmonics and Gegenbauer polynomials} \label{sec:zonal} In this section and the next, we continue our review of spherical harmonics and their associated orthogonal polynomials~\cite{szego,vilenkin,frye-efthimiou,gallier}. We include somewhat more machinery than is strictly necessary to prove our main result. However, in many cases this machinery gives us a more explicit picture of what is going on, and may be useful in proving more detailed results. Let $\eta = (1,0,\ldots,0)$ be the north pole. If $f(Rx) = f(x)$ for all rotation matrices that fix $\eta$, then $f(x)$ depends only on $z = \eta \cdot x$. Such functions are called \emph{zonal}.
The inner product of two such functions can be written as a weighted inner product over the interval $-1 \le z \le 1$, \begin{equation} \label{eq:weighted-inner} \langle f, g \rangle_{S} = \frac{\Omega_{n-2}}{\Omega_{n-1}} \int_{-1}^1 \mathrm{d}z \,w_\alpha(z) \,f(z)^* g(z) \, , \end{equation} where we wantonly abuse notation by identifying $f(z)$ with $f(x)$, and the weight \begin{equation} \label{eq:weight} w_\alpha(z) = (1-z^2)^{\alpha-\frac{1}{2}} \end{equation} and the constant $\Omega_{n-2}$ account for the volume of the annulus between height $z$ and $z+\mathrm{d}z$. Up to normalization, there is a unique zonal harmonic polynomial of each degree $\ell$. A nice orthogonal family of such polynomials, called the \emph{zonal spherical harmonics} or the \emph{ultraspherical} or \emph{Gegenbauer} polynomials, is as follows. For historical reasons, we parametrize them with a half-integer $\alpha$ rather than the integer dimension $n$: \[ \alpha = \frac{n}{2} - 1 \, , \] in which case the dimension~\eqref{eq:dim} becomes \begin{equation} \label{eq:dim-alpha} d_\ell = \frac{\alpha+\ell}{\alpha} {2\alpha + \ell - 1 \choose \ell} \, . \end{equation} For each value of $\alpha$, we have a family of polynomials $\{ \gamma_\ell \mid \ell \in \mathbb{N} \}$ of degree $\ell$, \begin{align} \label{eq:gegen} \gamma_\ell(z) = \frac{1}{\sqrt{N^{(\ell)}}} \,G^{(\ell)}(z) \quad \text{where} \quad G^{(\ell)}(z) &= \sum_{k=0}^{\lfloor \ell/2 \rfloor} (-1)^k \frac{(\ell-k+\alpha-1)!}{(\alpha-1)! \,k! \,(\ell-2k)!} (2z)^{\ell-2k} \\ \text{and} \quad N^{(\ell)} & = \left\langle G^{(\ell)}, G^{(\ell)} \right\rangle_{S} \nonumber \\ &= \frac{\Omega_{n-2}}{\Omega_{n-1}} \times \frac{\pi \,2^{1-2\alpha} \,\Gamma(\ell+2\alpha)}{(\ell+\alpha) \,\Gamma(\ell+1) \,\Gamma(\alpha)^2} \nonumber \\ &= \frac{\alpha}{\alpha+\ell} {2\alpha + \ell - 1 \choose \ell} \, . \end{align} Note that $G^{(\ell)}$ is the $\ell$th Legendre polynomial when $\alpha = 1/2$ (i.e., when $n=3$). Then for each $\alpha$, the $\gamma_\ell$ are orthonormal with respect to the inner product~\eqref{eq:weighted-inner}: \[ \left\langle \gamma_\ell, \gamma_m \right\rangle_{S} = \delta_{\ell,m} \, . \] Thus, given a zonal function $f(z)$, we can write \[ f(z) = \sum_{\ell \ge 0} \widehat{f}^{(\ell)} \gamma_\ell(z) \quad \text{where} \quad \widehat{f}^{(\ell)} = \left\langle f, \gamma_\ell \right\rangle_{S} \, . \] This transform is unitary, so inner products are preserved: \[ \langle f, g \rangle_{S} = \sum_{\ell \ge 0} \widehat{f}^{(\ell)*} \,\widehat{g}^{(\ell)} \, . \] The Gegenbauer polynomials have deep roots in the representation theory of Lie groups. Let $\mathsf{SO}_n$ be the group of orthogonal rotations of $\mathbb{R}^n$; then the harmonic polynomials of degree $\ell$ form an irreducible representation $\rho_\ell$ of $\mathsf{SO}_n$ with dimension $d_\ell$. We can think of $L_2(S_{n-1})$ as the subspace of $L_2(\mathsf{SO}_n)$ consisting of functions that are right-invariant under the subgroup $\mathsf{SO}_{n-1}$ that fixes the north pole $\eta$: that is, functions $f(R)$ that only depend on $R\eta$. The Gegenbauer polynomials span the subspace of $L_2(\mathsf{SO}_n)$ consisting of functions which are left- and right-invariant under $\mathsf{SO}_{n-1}$; that is, functions that are zonal, depending only on $z = \eta \cdot R\eta$, or equivalently on the latitude of $R\eta$.
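As a quick numerical sanity check on these normalizations (an illustration of ours, not part of the original text; the choice $n=6$ and the use of SciPy, whose \texttt{eval\_gegenbauer} computes the standard Gegenbauer polynomials matching $G^{(\ell)}$ here, are assumptions of the sketch), one can verify the value of $N^{(\ell)}$ and the orthogonality of distinct degrees by numerical integration:
\begin{verbatim}
import numpy as np
from scipy.special import eval_gegenbauer, gamma, binom
from scipy.integrate import quad

n = 6                        # ambient dimension, so the sphere is S_{n-1}
alpha = n/2 - 1

def inner(l, m):
    # <G^(l), G^(m)>_S via the weighted integral, with the prefactor
    # Omega_{n-2}/Omega_{n-1} = Gamma(n/2) / (sqrt(pi) Gamma((n-1)/2))
    c = gamma(n/2) / (np.sqrt(np.pi) * gamma((n-1)/2))
    f = lambda z: ((1 - z*z)**(alpha - 0.5)
                   * eval_gegenbauer(l, alpha, z)
                   * eval_gegenbauer(m, alpha, z))
    return c * quad(f, -1, 1)[0]

for l in range(5):
    N = (alpha/(alpha + l)) * binom(2*alpha + l - 1, l)
    print(l, inner(l, l)/N, inner(l, l + 1))   # ~1.0 and ~0.0 in each row
\end{verbatim}
One can likewise check that $\gamma_\ell(1) = G^{(\ell)}(1)/\sqrt{N^{(\ell)}}$ agrees with $\sqrt{d_\ell}$ from \eqref{eq:dim-alpha}, anticipating \eqref{eq:gegen-max} below.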
The fact that $\gamma_\ell$ is the unique zonal polynomial of degree $\ell$ corresponds to the fact that $\mathsf{SO}_n$ and $\mathsf{SO}_{n-1}$ form a Gel'fand pair, i.e., this subspace is one-dimensional. \section{Schur's lemma and evaluation maps} \label{sec:schur} Of course, $\eta$ is an arbitrary choice for the north pole. For any $w \in S_{n-1}$, there is a unique (again, up to normalization) polynomial of degree $\ell$ which is zonal around $w$, i.e., which is fixed under the copy of $\mathsf{SO}_{n-1}$ that preserves $w$. Given a function $f$ and $R \in \mathsf{SO}_n$, define $Rf$ as the function \[ Rf(x) = f(R^{-1} x) \, . \] Now let $R \in \mathsf{SO}_n$ be any rotation such that $R\eta = w$. Then if we write $\gamma_\ell^{(\eta)}(x) = \gamma_\ell(\eta \cdot x)$, we can define $\gamma_\ell^{(w)} = R \gamma_\ell^{(\eta)}$, so that \[ \gamma_\ell^{(w)}(x) = \gamma_\ell^{(\eta)}(R^{-1} x) = \gamma_\ell(w \cdot x) \, . \] The inner products of these functions are again given by a Gegenbauer polynomial: for any $w, y \in S_{n-1}$, \begin{equation} \label{eq:gegen-inner} \left\langle \gamma_\ell^{(w)}, \gamma_\ell^{(y)} \right\rangle_{S} = \frac{1}{\Omega_{n-1}} \int_{S_{n-1}} \mathrm{d}x \,\gamma_\ell(w \cdot x) \,\gamma_\ell(y \cdot x) = \frac{1}{\sqrt{d_\ell}} \,\gamma_\ell(w \cdot y) \, . \end{equation} Taking $w=y$ tells us how large $\gamma_\ell$ gets at the poles: \begin{equation} \label{eq:gegen-max} \gamma_\ell(1) = \abs{\gamma_\ell(-1)} = \sqrt{d_\ell} \, . \end{equation} In addition, if we fix $w$ and take the expectation over $y$ of the inner product squared, we get \begin{equation} \Exp_y \babs{ \left\langle \gamma_\ell^{(w)}, \gamma_\ell^{(y)} \right\rangle_{S} }^2 = \frac{1}{d_\ell} \left\langle \gamma_\ell, \gamma_\ell \right\rangle_{S} = \frac{1}{d_\ell} \, . \end{equation} Since averaging over $y$ is the same as averaging over $R$ according to the Haar measure on $\mathsf{SO}_n$, this is equivalent to \begin{equation} \Exp_R \babs{ \left\langle \gamma_\ell, R \gamma_\ell \right\rangle_{S} }^2 = \frac{1}{d_\ell} \, . \end{equation} This holds more generally for any spherical harmonic $f_\ell$ of degree $\ell$, \begin{equation} \label{eq:schur} \Exp_R \babs{ \left\langle f_\ell, R f_\ell \right\rangle_{S} }^2 = \frac{\snorm{f_\ell}^4}{d_\ell} \, , \end{equation} where $\snorm{f_\ell}^2 = \langle f_\ell, f_\ell \rangle_{S} = \Exp_{x \in S_{n-1}} |f_\ell(x)|^2$. This is a form of Schur's lemma: for any vector $v$ belonging to an irreducible representation $\rho$ of a group $G$, we have $\Exp_g \abs{ \langle v, gv \rangle }^2 = |v|^4/d_\rho$. More generally, let $M$ be a linear operator on $L_2(S_{n-1})$. Conjugating it with a random rotation yields an operator which commutes with all rotations. By Schur's lemma any such operator is block diagonal, where each block is a scalar matrix operating on the degree-$\ell$ spherical harmonics. Thus \[ \Exp_R R^{-1} M R = \bigoplus_{\ell \ge 0} M_\ell \quad \text{where} \quad M_\ell = \frac{\tr \left( M \mathds{1}_\ell \right)}{d_\ell} \,\mathds{1}_\ell \, , \] where $\mathds{1}_\ell$ is the projection operator onto the space of degree-$\ell$ harmonics. Thus if $f = \sum_{\ell \ge 0} f_\ell$ where each $f_\ell$ is a spherical harmonic of degree $\ell$, \begin{align} \Exp_R \,\langle Rf, M \cdot Rf \rangle_{S} &= \left\langle f, \left( \Exp_R R^{-1} M R \right) f \right\rangle_{S} \nonumber \\ &= \sum_{\ell \ge 0} \langle f_\ell, M_\ell f_\ell \rangle_{S} \nonumber \\ &= \sum_{\ell \ge 0} \,\Exp_R \,\langle Rf_\ell, M \cdot Rf_\ell \rangle_{S} \, .
\label{eq:schur-separable} \end{align} In particular, if $f = \sum_\ell f_\ell$ then the expected outer product of $Rf$ with itself is \begin{equation} \label{eq:schur-outer} \Exp_R \ket{Rf} \bra{Rf} = \sum_{\ell \ge 0} \frac{\snorm{f_\ell}^2}{d_\ell} \,\mathds{1}_\ell \, . \end{equation} Thus if $g = \sum_\ell g_\ell$ and $h = \sum_\ell h_\ell$, \begin{equation} \label{eq:schur-outer-inner} \Exp_R \,\langle g, Rf \rangle_{S} \langle Rf, h \rangle_{S} = \sum_{\ell \ge 0} \frac{\snorm{f_\ell}^2}{d_\ell} \langle g_\ell, h_\ell \rangle_{S} \, . \end{equation} Another consequence of the irreducibility of $\rho_\ell$ is that any linear operator from $\rho_\ell$ to $\mathbb{C}$ can be written as $\langle \phi, \cdot \rangle_{S}$ where $\phi = \sum_{y \in S_{n-1}} c_y \gamma_\ell^{(y)}$ for some finite set of nonzero coefficients $c_y$. In particular, for any $y \in S_{n-1}$ and any $\ell \ge 0$ there is an \emph{evaluation map} $\mathcal{E}^{(y)}_\ell: \rho_\ell \to \mathbb{C}$ such that, for any spherical harmonic $f$ of degree $\ell$, we have $\mathcal{E}^{(y)}_\ell(f) = f(y)$. We can express it as \begin{equation} \label{eq:eval} \mathcal{E}^{(y)}_\ell(f) = f(y) = \sqrt{d_\ell} \,\left\langle \gamma_\ell^{(y)} , f \right\rangle_{S} \, . \end{equation} To see this, think of $y$ as the north pole, and consider an orthonormal basis that includes $\gamma_\ell^{(y)}$. Since $\gamma_\ell^{(y)}$ is the unique harmonic of degree $\ell$ that is zonal around $y$, all other basis functions are zero at $y$; otherwise they would have a nonzero projection onto $\gamma_\ell^{(y)}$ if we average over the subgroup $\mathsf{SO}_{n-1}$ of rotations that preserve $y$. The normalization $\sqrt{d_\ell}$ then follows from~\eqref{eq:gegen-max}. By summing over all $\ell$, we can similarly express the evaluation map for all functions $f \in L_2(S_{n-1})$. That is, we can define the evaluation map \[ \mathcal{E}^{(y)} = \sum_{\ell \ge 0} \mathcal{E}^{(y)}_\ell \, . \] Then \begin{equation} \label{eq:eval-all} \mathcal{E}^{(y)}(f) = f(y) = \sum_{\ell \ge 0} \sqrt{d_\ell} \,\left\langle \gamma_\ell^{(y)} , f \right\rangle_{S} \, . \end{equation} To put it differently, we can express the Dirac delta function as \begin{equation} \label{eq:dirac} \delta(x-y) = \sum_{\ell \ge 0} \sqrt{d_\ell} \,\gamma_\ell^{(y)}(x) = \sum_{\ell \ge 0} \sqrt{d_\ell} \,\gamma_\ell(y \cdot x) \, . \end{equation} \section{Relating noise sensitivity and spherical sensitivity for randomly rotated functions} In this section we will prove our main transfer theorem, bounding the expected noise sensitivity of a randomly rotated function in terms of its spherical sensitivity. We identify the hypercube $\mathbb{Z}_2^n$ with the set $H = \{\pm 1/\sqrt{n} \}^n$ lying on the unit sphere. If we restrict the inner product to this set, we obtain a cubical inner product, which we write \[ \langle f, g \rangle_{H} = \Exp_{x \in H} f(x)^* g(x) \, . \] In particular, for each frequency vector $k \in \mathbb{Z}_2^n$, if we extend the character $\chi_k$ to the sphere as a multilinear function of degree $|k|$, \[ \chi_k(x) = \prod_{i: k_i=1} \sqrt{n} x_i \, , \] then the $\chi_k$ are orthonormal with respect to the cubical inner product, \[ \langle \chi_k, \chi_{k'} \rangle_{H} = \delta_{k,k'} \, . \] If we define a Boolean function $f|_H$ by restricting a function $f$ to the hypercube, its Fourier coefficients are \[ \widehat{f}|_H(k) = \langle f, \chi_k \rangle_{H} \, . \] The \emph{energy} of $f$ at the character $k$ is $\abs{\widehat{f}|_H(k)}^2$.
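For intuition, the following brute-force computation (ours, with arbitrary choices of $n$ and of the harmonic; purely illustrative) restricts the degree-$2$ harmonic polynomial $p(x) = x_1 x_2$, which satisfies $\Delta_{\mathbb{R}^n} p = 0$, to $H$ and lists its nonzero Fourier coefficients. All of the energy lands on the single character of Hamming weight $2$, anticipating Lemma~\ref{lem:lower} below:
\begin{verbatim}
import numpy as np
from itertools import product

n = 5
H = np.array(list(product([-1/np.sqrt(n), 1/np.sqrt(n)], repeat=n)))

def chi(k, X):
    # multilinear extension of the character chi_k to the sphere
    return np.prod(np.where(np.array(k) == 1, np.sqrt(n)*X, 1.0), axis=1)

f = H[:, 0] * H[:, 1]   # p(x) = x_1 x_2, a harmonic of degree 2

for k in product([0, 1], repeat=n):
    coef = np.mean(chi(k, H) * f)   # cubical inner product <chi_k, f>_H
    if abs(coef) > 1e-12:
        print(k, coef)              # only k = (1,1,0,0,0), coef = 1/n
\end{verbatim}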
In order to bound the noise sensitivity, we need to compute the expected energy of $Rf$ where $R$ is uniformly random, i.e., chosen according to the Haar measure on $\mathsf{SO}_n$; or equivalently, the expected energy of $f$'s restriction to a randomly rotated hypercube. One simple observation is the following. Since uniformly rotating any point on the cube yields a uniformly random point on the sphere, cubical inner products are equal to spherical inner products in expectation, \begin{equation} \label{eq:cube-sphere-exp} \Exp_R \langle Rf, Rg \rangle_{H} = \langle f, g \rangle_{S} \, . \end{equation} In Appendix~\ref{app:kravchuk} we give a precise expression for the expected energy of a randomly rotated spherical harmonic. However, here we just need a few facts. First, if we decompose a function into spherical harmonics, then its expected energy at each $k$ is the sum of the expected energies of its harmonics: \begin{lemma} \label{lem:separable} Let $f \in L_2(S_{n-1})$, and write $f = \sum_{\ell \ge 0} f_\ell$ where each $f_\ell$ is a spherical harmonic of degree $\ell$. Let $R \in \mathsf{SO}_n$ be uniform in the Haar measure. Then for any $k \in \mathbb{Z}_2^n$, \[ \Exp_R \babs{ \widehat{Rf}|_H(k) }^2 = \sum_{\ell \ge 0} \,\Exp_R \,\babs{ \widehat{Rf_\ell}|_H(k) }^2 \, . \] \end{lemma} \begin{proof} First we use the evaluation maps of Section~\ref{sec:schur} to write $\widehat{f}|_H(k)$ as a spherical inner product rather than a cubical one. Using~\eqref{eq:eval-all} we have \[ \widehat{f}|_H(k) = \langle \chi_k, f \rangle_{H} = \Exp_{x \in H} \chi_k(x) \,\mathcal{E}^{(x)}(f) = \left\langle \psi_k, f \right\rangle_{S} \, , \] where \[ \psi_k = \Exp_{x \in H} \chi_k(x) \sum_{\ell \ge 0} \sqrt{d_\ell} \gamma_\ell^{(x)} \, . \] Then applying Schur's lemma~\eqref{eq:schur-separable} to the linear operator $M = \ket{\psi_k} \bra{\psi_k}$ gives \begin{align*} \Exp_R \babs{ \widehat{Rf}|_H(k) }^2 &= \Exp_R \,\langle Rf, \psi_k \rangle_{S} \langle \psi_k, Rf \rangle_{S} \\ &= \sum_{\ell \ge 0} \,\Exp_R \,\langle Rf_\ell, \psi_k \rangle_{S} \langle \psi_k, Rf_\ell \rangle_{S} \\ &= \sum_{\ell \ge 0} \,\Exp_R \,\babs{ \widehat{Rf_\ell}|_H(k) }^2 \, , \end{align*} completing the proof. \end{proof} Secondly, restricting a spherical harmonic of degree $\ell$ to the hypercube can only give it nonzero energy at characters of Hamming weight $\ell$ or less: \begin{lemma} \label{lem:lower} Let $f$ be a spherical harmonic of degree $\ell$. If $|k| > \ell$, then $\widehat{f}|_H(k) = 0$. \end{lemma} \begin{proof} Restricting any polynomial $f$ of degree $\ell$ to $H = \{ \pm 1/\sqrt{n} \}^n$ imposes the relations $x_i^2 = 1/n$ for all $i$, so there is a multilinear polynomial $f'$ of degree at most $\ell$ such that $f|_H = f'|_H$. Each multilinear monomial is proportional to a character $\chi_k$ with $|k| \le \ell$, and these are orthogonal to all $\chi_k$ with $|k| > \ell$. \end{proof} Thirdly, for any $f \in L_2(S_{n-1})$, the expected energy of $Rf$ summed over all characters of $\mathbb{Z}_2^n$ equals its squared norm on the sphere: \begin{lemma} \label{lem:total} Let $f \in L_2(S_{n-1})$, and let $R \in \mathsf{SO}_n$ be uniform in the Haar measure. Then \[ \Exp_R \sum_{k \in \mathbb{Z}_2^n} \babs{ \widehat{Rf}|_H(k) }^2 = \snorm{f}^2 \, .
\] \end{lemma} \begin{proof} Since the $\chi_k$ are orthonormal with respect to the inner product on the hypercube, for any $f$ we have \[ \sum_{k \in \mathbb{Z}_2^n} \babs{ \widehat{f}|_H(k) }^2 = \sum_k \langle f, \chi_k \rangle_{H} \langle \chi_k, f \rangle_{H} = \langle f, f \rangle_{H} \, . \] Then~\eqref{eq:cube-sphere-exp} gives \[ \Exp_R \sum_{k \in \mathbb{Z}_2^n} \babs{ \widehat{Rf}|_H(k) }^2 = \Exp_R \langle Rf, Rf \rangle_{H} = \langle f, f \rangle_{S} = \snorm{f}^2 \, .\qedhere \] \end{proof} We are now ready to prove our transfer theorem, which bounds the expected noise sensitivity and average sensitivity in terms of the spherical sensitivity. \begin{theorem} \label{thm:transfer} Let $f \in L_2(S_{n-1})$, and let $R \in \mathsf{SO}_n$ be uniform in the Haar measure. Then \begin{equation} \label{eq:transfer-ns} \Exp_R \mathbb{NS}_\varepsilon(Rf|_H) \le \SS_t(f) \, , \end{equation} where \begin{equation} \label{eq:t} t = \frac{1}{n} \log \frac{1}{1-2\varepsilon} = \frac{2\varepsilon}{n} \big(1+O(\varepsilon)\big) \, . \end{equation} \end{theorem} \begin{proof} Starting with the Fourier-theoretic expression for the noise sensitivity~\eqref{eq:ns}, we have \begin{align} \Exp_R \mathbb{NS}_\varepsilon(Rf|_H) &= \frac{1}{2} \Exp_R \,\sum_{k \in \mathbb{Z}_2^n} \babs{\widehat{Rf}|_H(k)}^2 \left(1 - (1 - 2\varepsilon)^{|k|}\right) \nonumber \\ &= \frac{1}{2} \Exp_R \,\sum_k \sum_{\ell \ge 0} \babs{\widehat{Rf_\ell}|_H(k)}^2 \left(1 - (1 - 2\varepsilon)^{|k|} \right) \label{eq:separable} \\ &= \frac{1}{2} \Exp_R \,\sum_\ell \sum_{k: |k| \le \ell} \babs{\widehat{Rf_\ell}|_H(k)}^2 \left(1 - (1 - 2\varepsilon)^{|k|} \right) \label{eq:lower} \\ &\le \frac{1}{2} \Exp_R \,\sum_\ell \sum_k \babs{\widehat{Rf_\ell}|_H(k)}^2 \left(1 - (1 - 2\varepsilon)^{\ell} \right) \nonumber \\ &\le \frac{1}{2} \sum_\ell \snorm{f_\ell}^2 \left(1 - (1 - 2\varepsilon)^{\ell} \right) \, . \label{eq:total} \end{align} Here we used Lemma~\ref{lem:separable} in~\eqref{eq:separable}, Lemma~\ref{lem:lower} in~\eqref{eq:lower}, and Lemma~\ref{lem:total} in~\eqref{eq:total}. On the other hand, using $\snorm{f_\ell}^2 = \sum_j \abs{ \widehat{f}(\ell,j) }^2$ in~\eqref{eq:ss} gives \begin{align} \SS_t(f) &= \frac{1}{2} \sum_{\ell} \snorm{f_\ell}^2 \left( 1-\mathrm{e}^{-t \ell (n+\ell-2)} \right) \nonumber \\ &\ge \frac{1}{2} \sum_{\ell} \snorm{f_\ell}^2 \left( 1-\mathrm{e}^{-t \ell n} \right) \, . \label{eq:ss-lower} \end{align} Setting $1-2\varepsilon = \mathrm{e}^{-tn}$ completes the proof of~\eqref{eq:transfer-ns}. \end{proof} Although we don't need it below, we record an analogous theorem regarding the expected average sensitivity. \begin{theorem} \label{thm:transfer-as} Let $f \in L_2(S_{n-1})$, and let $R \in \mathsf{SO}_n$ be uniform in the Haar measure. Then for any $\alpha > 0$, \begin{equation} \label{eq:transfer-as} \Exp_R \mathbb{AS}(Rf|_H) \le \frac{2n}{1-\mathrm{e}^{-\alpha}} \,\SS_{\alpha/n^2}(f) \, . \end{equation} \end{theorem} \begin{proof} Recall the Fourier-theoretic expression for the average sensitivity, \begin{align} \mathbb{AS}(f) &= - \frac{1}{2} \langle f, Lf \rangle \nonumber \\ &= \sum_{k \in \mathbb{Z}_2^n} \abs{\widehat{f}(k)}^2 |k| \, . \label{eq:as} \end{align} Applying the same lemmas to~\eqref{eq:as} as we did to~\eqref{eq:ns} in the proof of the previous theorem, and noting that $|k| \le \min(\ell, n)$, gives \[ \Exp_R \mathbb{AS}(Rf|_H) \le \sum_\ell \snorm{f_\ell}^2 \min(\ell,n) \, .
\] Setting $t=\alpha/n^2$ in~\eqref{eq:ss-lower}, we have for all $0 \le \ell \le n$ \[ \frac{1-\mathrm{e}^{-\alpha \ell/n}}{1-\mathrm{e}^{-\alpha}} \ge \frac{\ell}{n} \, , \] since $1-\mathrm{e}^{-x}$ is concave (and for $\ell > n$ the left-hand side is at least $1$). This completes the proof of~\eqref{eq:transfer-as}. \end{proof} \section{Application: the Gotsman-Linial conjecture on average} In this section we bound the spherical sensitivity of polynomial threshold functions, and apply Theorem~\ref{thm:transfer} to bound their expected noise sensitivity. Our strategy for bounding the spherical sensitivity is similar to that of Kane~\cite{kane-surface-area}, who proved a similar bound on the Gaussian sensitivity of polynomial threshold functions. He used the fact that adding Gaussian noise to a point $u$ can be thought of as choosing a random line through $u$, and then moving along that line by a distance $r$ chosen from a (scaled) chi distribution: then $p(x)$ can only change sign if it has a root on the intervening line segment. Similarly, we add noise on the sphere by choosing a random great circle that passes through $u$, and then moving an angle $r$ along that circle where $r$ is chosen from a distribution derived from the heat equation on $S_{n-1}$. The sensitivity is then at most the probability that the polynomial has a root on the resulting segment of the great circle. First we need to bound the distribution of the angle, or equivalently the geodesic distance, that we travel on the sphere in time $t$. Given two points $u, v \in S_{n-1}$ and a time $t \ge 0$, let $K_t(u,v)$ denote the \emph{heat kernel} on $S_{n-1}$ after time $t$. That is, for any fixed $t$ and $u$, $K_t(u,v)$ is the probability distribution of the position $v$ of a particle that starts at $u$ and undergoes Brownian motion for time $t$. As a linear operator, it is the solution to the partial differential equation \[ \frac{\partial K_t}{\partial t} = \Delta_{S_{n-1}} K_t \] with the initial condition $K_0 = \mathds{1}$, so \[ K_t = \mathrm{e}^{t\Delta_{S_{n-1}}} \, . \] By comparing the heat kernel on $S_{n-1}$ to that on $\mathbb{R}^{n-1}$, we prove the following: \begin{lemma} \label{lem:sphere-flat} Fix $u \in S_{n-1}$, and suppose that $v \in S_{n-1}$ is chosen with probability distribution $K_t(u,v)$. Let $r$ denote the angle between $u$ and $v$. Then \[ \Exp[r] \le \sqrt{2(n-1)t} \, . \] \end{lemma} \begin{proof} In Appendix~\ref{app:sphere-flat} we offer an elementary calculus proof that $\Exp[r^2] \le 2(n-1)t$, which implies the lemma. However, we can prove something much stronger: namely, that $r$ is stochastically dominated by the corresponding process on the flat tangent space $\mathbb{R}^{n-1}$. The heat equation on $\mathbb{R}^{n-1}$ is driven by the Laplacian \begin{equation} \label{eq:heat-flat} \Delta_{\mathbb{R}^{n-1}} = \sum_{i=1}^{n-1} \frac{\partial^2}{\partial x_i^2} \, . \end{equation} Place $u$ at the origin, and let $r$ denote the distance from the origin. Since $K_t(u,v) = f(r)$ is spherically symmetric, transforming to polar coordinates gives \[ \frac{\partial f}{\partial t} = \Delta_{\mathbb{R}^{n-1}} f = \frac{\partial^2 \!f}{\partial r^2} + (n-2) \frac{1}{r} \frac{\partial f}{\partial r} \, . \] Similarly, for $S_{n-1}$, place $u$ at the north pole and let $r$ denote the angle between $u$ and $v$.
Then $K_t(u,v) = f(r)$ is a zonally symmetric function, and applying the Laplace-Beltrami operator gives \begin{equation} \label{eq:heat-sphere} \frac{\partial f}{\partial t} = \Delta_{S_{n-1}} f = \frac{\partial^2 \!f}{\partial r^2} + (n-2) \frac{\cos r}{\sin r} \frac{\partial f}{\partial r} \, . \end{equation} We can view~\eqref{eq:heat-flat} and~\eqref{eq:heat-sphere} as governing the probability distributions of two stochastic processes on $r$. These are well known in the theory of Brownian motion, and are referred to as Bessel and Jacobi processes respectively. Since $\cos r / \sin r \le 1/r$, the comparison theorem of stochastic differential equations~\cite{ikeda-watanabe} implies that the distribution of $r$ on $S_{n-1}$ is stochastically dominated by its distribution on $\mathbb{R}^{n-1}$. In particular, its second moment is at most the sum of the variances of $n-1$ independent variables $x_1,\ldots,x_{n-1} \in \mathbb{R}$, each of variance $2t$, giving \[ \Exp[r^2] \le 2(n-1)t \, . \] Noting that $\Exp[r] \le \sqrt{\Exp[r^2]}$ completes the proof. \end{proof} We remark that for small $n$, we get a small improvement by computing $\Exp[r]$ exactly on $\mathbb{R}^{n-1}$. The fact that $r^2/(2t)$ follows a chi-squared distribution with $n-1$ degrees of freedom implies \[ \Exp[r] \le \frac{2 \Gamma(n/2)}{\Gamma((n-1)/2)} \sqrt{t} = \big( 1-O(1/n) \big) \sqrt{2(n-1)t} \, . \] \begin{lemma} \label{lem:great-circle} Let $p \in \mathbb{R}[x_1, \ldots, x_n]$ be a polynomial of degree $d$, and let $G$ be a great circle on $S_{n-1}$. If $p$ is not identically zero on $G$, then $p$ has no more than $2d$ roots on $G$. \end{lemma} \begin{proof} Since applying a linear transformation to $x_1, \ldots, x_n$ doesn't change $p$'s degree, without loss of generality we can assume that $G$ is the unit circle in the plane spanned by the $x_1$ and $x_2$ axes: that is, the variety, or set of roots, of the polynomial \[ q(x_1,x_2) = x_1^2 + x_2^2 - 1 \, . \] Restricting to this plane, i.e., setting $x_i=0$ for all $i > 2$, yields a polynomial $r(x_1,x_2)$ of degree $d_r \le d$. B\'{e}zout's theorem~\cite{Shafarevich:1994,Schmid:1995} states that two polynomials $q, r$ of degrees $d_q$ and $d_r$ can share at most $d_q d_r$ roots unless they share a common factor. It is easy to check that $q$ is irreducible over $\mathbb{R}$ (and even over $\mathbb{C}$), since if it had a linear factor then $G$ would consist of the union of two lines. Therefore, $q$ and $r$ share a common factor only if $q$ divides $r$, in which case $p$ is identically zero on $G$. If they do not, they share at most $d_q d_r \le 2d$ roots. \end{proof} Putting these lemmas together gives us a bound on the spherical sensitivity of a polynomial threshold function. \begin{theorem} \label{thm:poly-ss} Let $p: \mathbb{R}^n \to \mathbb{R}$ be a polynomial of degree $d$ and let $f(x) = \mathop{\mathrm{sgn}}(p(x))$ where $\mathop{\mathrm{sgn}} z = +1$ for $z \ge 0$ and $\mathop{\mathrm{sgn}} z = -1$ for $z < 0$. Then \[ \SS_t(f) \le \frac{d}{\pi} \sqrt{2nt} \, . \] \end{theorem} \begin{proof} Recall that $\SS_t(f)$ is the probability that $f(u) \ne f(v)$ where $u$ is chosen uniformly and $v$ is chosen from $K_t(u,v)$. Equivalently, we can choose $u$ uniformly, and then arrive at $v$ by choosing a uniformly random great circle $G$ passing through $u$ (by choosing a tangent vector from the Haar measure on $S_{n-2}$), choosing $r$ according to the heat kernel, and moving an angle $r$ along $G$.
If $p$ is identically zero on $S_{n-1}$, then $f$ is constant, in which case $\SS_t(f) = 0$. Otherwise, with probability $1$ we have $p(u) \ne 0$, in which case $p$ is not identically zero on $G$. By Lemma~\ref{lem:great-circle}, there are at most $2d$ roots of $p$ on $G$. The probability that $p(u)$ and $p(v)$ have different signs is then at most the expected number of roots of $p$ on $G$ between $u$ and $v$. Since $u$'s position on $G$ is uniformly random, this is simply $(2d/2\pi) \Exp[r]$, which by Lemma~\ref{lem:sphere-flat} is at most $(d/\pi) \sqrt{2nt}$. \end{proof} Theorems~\ref{thm:transfer} and~\ref{thm:poly-ss} immediately imply our bound on the expected noise sensitivity: \begin{theorem} \label{thm:poly-ns} Let $p$ be a polynomial of degree $d$ and let $f(x) = \mathop{\mathrm{sgn}}(p(x))$. Let $R \in \mathsf{SO}_n$ be uniform in the Haar measure. Then \begin{equation} \label{eq:poly-ns} \Exp_R \mathbb{NS}_\varepsilon(Rf|_H) \le \big( 1+O(\varepsilon) \big) \frac{2}{\pi} \,d \sqrt{\varepsilon} \, . \end{equation} \end{theorem} Similar but simpler reasoning implies a bound on the expected average sensitivity: \begin{theorem} \label{thm:poly-as} Let $p$ be a polynomial of degree $d$ and let $f(x) = \mathop{\mathrm{sgn}}(p(x))$. Let $R \in \mathsf{SO}_n$ be uniform in the Haar measure. Then \begin{equation} \label{eq:poly-as} \Exp_R \mathbb{AS}(Rf|_H) \le \big( 1+O(1/n) \big) \frac{2}{\pi} \, d \sqrt{n} \, . \end{equation} \end{theorem} \begin{proof} The angle between two adjacent corners of the hypercube is \[ r = \cos^{-1} (1-2/n) = \frac{2}{\sqrt{n}} \,\big(1+O(1/n) \big) \, , \] and $\Exp_R \mathbb{AS}(Rf|_H)$ is $n$ times the probability that $p$ changes sign between a uniformly random pair of points $r$ apart. Using Lemma~\ref{lem:great-circle} as in the proof of Theorem~\ref{thm:poly-ss}, this probability is at most $(d/\pi)r$. \end{proof} \begin{remark} For small $\varepsilon$, we can also prove Theorem~\ref{thm:poly-ns} by noting that the angle $r$ between two points $x, y$ on the hypercube where we have flipped each bit independently with probability $\varepsilon$ obeys $\Exp[\cos r] = 1-2\varepsilon$. \end{remark} \begin{remark} The leading constant in~\eqref{eq:poly-as} is better than we would obtain from Theorem~\ref{thm:transfer-as}, even after optimizing the parameter $\alpha$. \end{remark} \begin{remark} Theorems~\ref{thm:poly-ss} and~\ref{thm:poly-ns} immediately generalize from polynomials to any class of functions with a bound on the number of roots lying on a great circle. If there are at most $b$ such roots, the expected noise sensitivity of the corresponding threshold function is $O(b \sqrt{\varepsilon})$ and its expected average sensitivity is $O(b \sqrt{n})$. \end{remark} \paragraph{Acknowledgments} We are grateful to Fabrice Baudoin, Laura De Carli, Costas Efthimiou, Veit Elser, Josh Grochow, Ilia Krasikov, Ryan O'Donnell, Thomas H.\ Parker, Dan Rockmore, and James Stokes for helpful conversations. This work was supported by NSF grants CCF-1117426 and CCF-1219117.
\section{Introduction} We consider the task of identifying fixed-parameter tractable (FPT) problems that admit space-efficient algorithms. Towards this end, we devise algorithms for various parameterized problems that run in time $f(k)\, n^{\Oh{1}}$ and use either $g(k) + \Oh{\log{n}}$ or $\Oh{g(k) \log{n}}$ bits of space, where $n$ denotes the input size, $k$ denotes the parameter, and $f, g: \N \to \N$ are computable functions. \subsubsection{Previous Work} Work on restricted-space classes of parameterized problems was initiated by Cai et al.~\cite{CCDF1997AnnPureApplLogic}, who defined the classes $\cLAdv$ and $\cSlL$, which we call $\cParaL$ and $\cXL$ respectively. Among other things, they showed that $\pVC$ under the usual solution-size parameterization is in $\cParaL$. Continuing this line of work, Flum and Grohe~\cite{FG2003InfComput} showed that the parameterized model-checking problem of first-order formulas on graphs of bounded degree is in $\cParaL$. As a consequence, many standard parameterized graph problems are in $\cParaL$ when restricted to bounded-degree graphs. Then, Elberfeld et al.~\cite{EST2012IPEC} introduced two new restricted-space classes and gave completeness results for those classes, showing that there are fixed-parameter tractable problems which lie outside $\cParaL$ under the assumption that $\cL{} \neq \cNL$. In particular, they identified $\pDFVS$ as one such problem. In contrast, $\pDFVS$ restricted to tournaments can be shown to be in $\cParaL$ (Corollary~\ref{corr:del_dhs}). Later work in this setting includes the paper of Chen and M\"{u}ller~\cite{CM2015TOCT}, who studied additional complexity-theoretic aspects of restricted-space classes and showed that $\pLongPath$ is in $\cParaL$. $\pdHS$, a generalization of $\pVC$ to hypergraphs, was shown by Fafianie and Kratsch~\cite{FK2015MFCS} to be kernelizable in logarithmic space, which puts it in $\cParaL$ (see Lemma~\ref{lemm:para-l_kern}). As a consequence, they showed that various graph deletion problems where the target classes are characterized by finite forbidden sets are also in $\cParaL$. For related results, see~\cite{BT2020TheoryComputSyst,BST2020SWAT}, which give constant-time parallel kernelization algorithms for $\pHS$. \subsubsection{Results, Techniques and Organization} Section~\ref{sect:deletion} begins by showing that $\pHS$ (with possibly unbounded set sizes) can be placed in $\cParaL$ under the restriction that pairwise intersections of sets in the instances have bounded sizes. Then in Section~\ref{sect:deletion_problems}, we extend the results of Fafianie and Kratsch~\cite{FK2015MFCS} by showing that some graph deletion problems where the target classes have infinite forbidden sets are also in $\cParaL$. As applications, we show that $\pDLF$ and $\pDPo$ are in $\cParaL$. Initially, studies in parameterized complexity concerned the solution-size parameter or width parameters such as treewidth and cliquewidth. In recent times, vertex cover number as a parameter has become a subject of serious investigation~\cite{Jan2013thesis,FJP2014JCSS,Jd2020SWAT}. In Section~\ref{sect:vc_parameterization} we give $\cParaL$ algorithms for problems parameterized by vertex cover number. We devise a logarithmic-space variant of a general kernelization algorithm~\cite{FJP2014JCSS} for problems parameterized by vertex cover number, and as a consequence obtain $\cParaL$ algorithms for $\pFVS$, $\pOCT$, $\pChVD$, $\pPlan$, $\pLongPath$, $\pLongCycle$ and $\pdCol$ under this parameterization.
Finally, in Section~\ref{sect:fvs}, we address problems that are not known to be in $\cParaL$, but are known to be in $\cFPT$ as well as in $\cXL$, i.e.\ solvable using $f(k) \cdot \log{n}$ bits of space. We ask whether there are algorithms for those problems that \emph{simultaneously} run in time $f(k)\, n^{\Oh{1}}$ and use $f(k) \log{n}$ bits of space. Towards this end, we devise such an algorithm for $\pFVS$. This is achieved via a careful space-efficient implementation of the iterative-compression algorithm of Chen et al.~\cite{CFL+2008JCSS}. \section{Preliminaries} \subsubsection{Notation and Definitions} For a graph $G$, we denote its vertex set by $V(G)$ and edge set by $E(G)$. A class of graphs $\Pi$ is said to be \emph{characterized} by a set $\Phi$ of induced subgraphs if $\Pi$ consists of precisely those graphs that do not include induced subgraphs isomorphic to any graph in $\Phi$. \subsubsection{Appendix} Details for all items marked $\dagger$ can be found in the Appendix. \subsubsection{Parameterized Problems and Space Classes} Let $A$ be a decision problem over the alphabet $\Sigma$ and $t: \Sigma^* \to \N$ be a computable function. The pair $(A, t)$ is called a \emph{parameterized} problem and $t$ is called the \emph{parameterization}. An instance of the problem is a pair $(I, t(I))$, where $I$ is an instance of $A$. The problem $(A, t)$ is said to be \emph{fixed-parameter tractable} (FPT) if there is an algorithm which solves any instance $(I, t(I))$ in time $f(t(I)) {\abs{I}}^c$, where $f: \N \to \N$ is a computable function, $n = \abs{I}$, and $c > 0$ is a constant. Later on, we refer to such running times as \emph{FPT time}. The problem is \emph{kernelizable} if there is an algorithm which in time polynomial in $n$ computes an instance $I'$ such that $\abs{I'} \le g(t(I))$ for some computable function $g: \N \to \N$ and $I$ is a \YES{} instance if and only if $I'$ is a \YES{} instance. The instance $I'$ is called a \emph{kernel}. Kernels are obtained through what are called \emph{reduction rules}, which transform a given instance into another. A rule is \emph{safe} if the two instances are equivalent. In this paper, we focus on the \emph{space complexity} aspect of parameterized problems. A natural problem class to consider in this setting is $\cParaL$, defined below. \begin{definition}[$\cParaL$; Cai et al.~\cite{CCDF1997AnnPureApplLogic}] $\cParaL$ is the class of all parameterized problems $(A, t)$ for which there is a deterministic algorithm which solves instances $(I, t(I))$ using space $f(k) + \Oh{\log{n}}$, where $f: \N \to \N$ is a computable function, $k = t(I)$ and $n = \abs{I}$. \end{definition} We also consider an analogous class $\cXL$, where the space bound is $f(k) \cdot \log{n}$ instead of $f(k) + \Oh{\log{n}}$. The two classes are known to be distinct unless $\cP{} = \cL$ (Theorem 3.1 of~\cite{CCDF1997AnnPureApplLogic}). \begin{definition}[$\cXL$; Cai et al.~\cite{CCDF1997AnnPureApplLogic}] $\cXL$ is the class of all parameterized problems $(A, t)$ for which there is a deterministic algorithm which solves instances $(I, t(I))$ using space $f(k) \cdot \log{n}$, where $f: \N \to \N$ is a computable function, $k = t(I)$ and $n = \abs{I}$. \end{definition} \subsubsection{Machine Model} We use the standard model for space-efficient algorithms. The input to an algorithm resides in read-only memory and can be randomly accessed, while the output is written to a stream which cannot be read from.
The algorithm uses a small number of read-write cells as auxiliary memory, where each cell can hold a word of size $\Oh{\log{n}}$ bits ($n$ is the input size). The machine is a unit-cost RAM in which basic arithmetic and logic operations involving two words take constant time. A basic result in parameterized complexity theory is that a problem is fixed-parameter tractable if and only if it is kernelizable. The following lemma is an easy consequence of this result. \begin{lemma}\label{naivespace} If a parameterized problem is fixed-parameter tractable, then it can be solved using $f(k) + n^{\Oh{1}}$ bits of space, where $f$ is some computable function of the parameter $k$, and $n$ is the input size. \end{lemma} The next result is implicit in~\cite{CCDF1997AnnPureApplLogic}; we include it here for the sake of clarity. \begin{lemma}\label{lemm:para-l_kern}\exS{$\dagger$} Let $(A, t)$ be a parameterized problem. For any computable function $f: \N \to \N$, the following statements are true. \begin{itemize} \item If $(A, t)$ can be solved in space $f(k) + \Oh{\log{n}}$, then it can be kernelized in space $\Oh{\log{n}}$. \item If $(A, t)$ can be kernelized in space $f(k) + \Oh{\log{n}}$, then it can be solved in space $g(k) + \Oh{\log{n}}$, where $g: \N \to \N$ is a computable function. \end{itemize} \end{lemma} \section{Hitting Set and Graph Deletion Problems}\label{sect:deletion} We begin by examining two known small-space kernelization results. The first is a combination of a logarithmic-space implementation of the Buss rule~\cite{BG1993SICOMP} for $\pVC$ and the observation that the kernel produced is itself a vertex cover. \begin{proposition}[Cai et al.~\cite{CCDF1997AnnPureApplLogic}, Theorem 2.3]\label{prop:vc_para-l} There is an algorithm which takes as input a graph $G$ and $k \in \N$, and using space $\Oh{\log n}$, either answers correctly that $G$ has no vertex cover of size at most $k$ or produces a vertex cover of size at most $2k^2$. \end{proposition} The next result gives an equivalent algorithm for $\pdHS$. \begin{proposition}[Fafianie and Kratsch~\cite{FK2015MFCS}, Theorem 1]\label{prop:dhs_log_kern} There is an algorithm which takes as input an instance of $\pdHS$, i.e. a universe $U$, a family $\mathcal{F}$ of subsets of $U$ of size $d$ each and an integer $k$, and in time $n^{\Oh{d^2}}$ and using $\Oh{d^2 \log{n}}$ bits of space, either answers correctly that the instance has no hitting set of size at most $k$ or produces an instance $(U', \mathcal{F}', k)$ such that
\textbf{Instance:} $(G, k)$, where $G$ is a graph and $k \in \N$\\ \hspace*{\parindent}\textbf{Question:} Is there a set $S \subseteq V$ with $\abs{S} \leq k$ such that $G - S \in \Pi$?\\ When $\Pi$ is characterized by a finite set of forbidden induced subgraphs $\Phi$, instances of $\pDelPi$ can be modelled as $\pdHS$. As an example, consider $\pCVD$, where the objective is to delete up to $k$ vertices in an input graph such that the resulting graph is a cluster graph. The class of cluster graphs is exactly the class of graphs which do not contain the path on three vertices, $P_3$, as an induced subgraph. Because of this, an instance $(G, k)$ of the problem can be encoded as an instance $(U, \mathcal{F}, k)$ of $3$--$\pHS$, where $U = \V{G}$ and $\mathcal{F} = \setb{\V{P}}{P\ \text{is an induced}\ P_3\ \text{in}\ G}$. The family of sets $\mathcal{F}$ can be generated in logarithmic space: enumerate all subsets of $\V{G}$ of size $3$ and output those that induce a $P_3$. It is easy to see that $(U, \mathcal{F}, k)$ is a \YES{} instance of $3$--$\pHS$ if and only if $(G, k)$ is a \YES{} instance of $\pCVD$. This family can now be used as input to a $3$--$\pHS$ algorithm, and because of Corollary~\ref{corr:dhs_para-l}, the instance can be solved in $f(k) + \Oh{\log{n}}$ bits of space. A variety of other problems can be solved in a similar fashion, leading to the following result. \begin{corollary}\label{corr:del_dhs} $\pCVD$, $\pDFVS$ restricted to tournaments, $\pSVD$ and $\pThVD$ are in $\cParaL$. \end{corollary} In what follows, we describe the main result of this section, which extends Corollary~\ref{corr:del_dhs} to also include some variants of $\pDelPi$ where $\Pi$ is characterized by an infinite set of forbidden induced subgraphs. \begin{theorem}\label{thm:restr_para_l}\exS{$\dagger$} Let $\Psi \supseteq \Pi$ be a class of graphs and let $\prob{ResDel\textPi}$ be the restriction of $\pDelPi$ to graphs in $\Psi$. If $\Psi$ is characterized by a finite set $\Phi$ of forbidden induced subgraphs and $\prob{ResDel\textPi} \in \cParaL$, then $\pDelPi \in \cParaL$. \end{theorem} Since $\pDelPi$ restricted to $\Psi$ is in $\cParaL$, there is an algorithm which solves restricted instances $(H, k)$ of $\pDelPi$ using space $f(k) + \Oh{\log{n}}$ ($f: \N \to \N$, a computable function). Let \algo{SolveDel\textPi{}On\textPsi{}} be such an algorithm. Using this as a subroutine, Algorithm~\ref{algo:SolveDelPi} solves $\pDelPi$.
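As a concrete aside (our illustration, not the paper's implementation; the adjacency-matrix input format and the function name are assumptions of this sketch), the enumeration of induced $P_3$'s described above can be written in Python as a generator that stores only a constant number of vertex indices, mirroring the logarithmic-space bound:
\begin{verbatim}
from itertools import combinations

def induced_p3_sets(adj):
    # adj: symmetric 0/1 adjacency matrix (list of lists).
    # Only the three loop indices are stored, mirroring the
    # O(log n) space bound; the output is streamed via yield.
    n = len(adj)
    for a, b, c in combinations(range(n), 3):
        if adj[a][b] + adj[a][c] + adj[b][c] == 2:
            yield (a, b, c)   # exactly two edges: an induced P_3

# (V(G), F, k) with F streamed by induced_p3_sets(adj) is the
# 3-Hitting Set instance equivalent to the CVD instance (G, k).
\end{verbatim}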
\begin{algorithm}[h] \KwIn{$(G, k)$, where $G$ is a graph and $k \in \N$} \KwOut{\YES{} if there is a set $S \subseteq V$ of size at most $k$ such that $G - S \in \Pi$, and \NO{} otherwise} let $c_1, \dotsc, c_t$ be the sizes of the sets in $\Phi$\; $d \gets \max \brc{c_1, \dotsc, c_t},\ V \gets \V{G}$\; $\mathcal{F} \gets \setb{S \subseteq V}{G[S]\ \text{is isomorphic to some}\ H \in \Phi}$\; $(V', \mathcal{F}', k) \gets \algoi{KernelizeDeeHS}{d, V, \mathcal{F}, k}$\; \Return{\algoi{BranchAndCall}{$G, \mathcal{F}', \emptyset, k$}}\; \SetKwFunction{bncproc}{BranchAndCall} \SetKwProg{subproc}{Procedure}{}{} \subproc{\bncproc{$G, \mathcal{F}', S, l$}}{ \If{$l < 0$}{ \Return{\NO{}}\; } \eIf(\tcp*[f]{$G - S$ is in $\Psi$}){$S$ hits all of $\mathcal{F}'$}{ \If{\algoi{SolveDel\textPi{}On\textPsi{}}{$G - S, l$} returns \YES{}}{ \Return{\YES{}}\; } }{ find a set $A \in \mathcal{F}'$ such that $A \cap S = \emptyset$\; \For{$v \in A$}{ \If{\algoi{BranchAndCall}{$G, \mathcal{F}', S \cup \brc{v}, l - 1$} returns \YES{}}{ \Return{\YES{}}\; } } } \Return{\NO{}}\; } \caption{SolveDel\textPi: solve \pDelPi{} with access to \algo{KernelizeDeeHS}, which kernelizes \pdHS{}, and \algo{SolveDel\textPi{}On\textPsi{}}, which solves the restriction of \pDelPi{} to \textPsi{}}\label{algo:SolveDelPi} \end{algorithm} \begin{lemma}\label{lem:correctness}\exS{$\dagger$} Algorithm~\ref{algo:SolveDelPi} solves $\pDelPi$ in $g(k) + \Oh{\log{n}}$ bits of space ($g: \N \to \N$, a computable function). \end{lemma} The claim of Theorem~\ref{thm:restr_para_l} now readily follows from the preceding discussion. The next result is an application of Theorem~\ref{thm:restr_para_l} which makes use of \algo{SolveDel\textPi{}On\textPsi{}} routines for some specific graph classes $\Psi$. It is pertinent here to note that the target classes $\Pi$ do not have characterizations by finite forbidden sets. \begin{corollary}\label{corr:dlf_pw1}\exS{$\dagger$} \pDLF{} and \pDPo{} are in $\cParaL$. \end{corollary} Both problems are related to $\pFVS$ (delete vertices so that the resulting graph is a forest): a solution to either problem is also a feedback vertex set. While we are able to show that \pDLF{} and \pDPo{} are in \cParaL{}, the known kernelization algorithms and fixed-parameter algorithms for $\pFVS$ (where the parameter, i.e.\ the solution size, is smaller) use strategies which appear difficult to carry out in the $\cParaL$ setting. \subsection{$\pHS$ with Bounded Intersection} Proposition~\ref{prop:dhs_log_kern} shows that one can, in logarithmic space, compute kernels for instances of $\pdHS$ in which each set is of size $d$. In this section, we obtain a similar result for the scenario where not the set sizes, but the intersection between any pair of sets is bounded by a constant $s$. Interestingly, the kernelization question for this natural $\pHS$ variant has not been addressed in earlier works, even with no constraints on space. \subsection*{Case $s = 1$} We first consider the case $s = 1$, i.e.\ when the instances are linear hypergraphs~\cite{Ber1989book}. Let $(U, \mathcal{F}, k)$ be an instance of $\pHS$ where the intersection between any pair of sets is of size at most $s = 1$. The algorithm consists of the following sequence of reduction rules. \begin{description} \item[Rule $1$] Let $S$ be the set of all elements that appear in at least $k + 1$ sets in $\mathcal{F}$. If $\abs{S} > k$, return \NO{}. Otherwise, delete all sets in $\mathcal{F}$ that intersect with $S$ and delete all elements of $S$ from $U$.
Set $k \gets k - \abs{S}$. \item[Rule $2$] If the number of sets remaining is more than $k^2$, then return \NO{}. \item[Rule $3$] For each remaining set, determine the elements that appear in no other set. Delete all but one of them from the set and from the universe. \end{description} The following lemma establishes the correctness of the above rules and gives a bound on the size of the final instance. \begin{lemma}\label{lemm:hs_corr}\exS{$\dagger$} Reduction Rules $1$, $2$ and $3$ produce a kernel $(U', \mathcal{F}', k')$, where $\abs{U'} \leq k^4$ and $\abs{\mathcal{F}'} \leq k^2$. \end{lemma} \subsubsection{Space-Efficient Implementation} Reduction Rule $1$ can be implemented using two counters in addition to a constant number of counters for iteration. The first counter counts, for each element, the number of sets it appears in. The second counter counts the number of elements for which the first counter is larger than $k$. If the second counter goes beyond $k$, we return \NO{}. Otherwise, we set $k \gets k - k'$, where $k'$ is the value of the second counter. To implement Reduction Rule $2$, we set up two additional counters and run through all the sets. The first counter checks for each set whether there is any element in it that appears in more than $k$ sets of $\mathcal{F}$. If no element of the set appears in more than $k$ sets, then the set survives after Reduction Rule $1$, and we increment the second counter. Once all sets have been processed in this manner, if the value of the second counter is more than $k^2$, we return \NO{}. To implement Reduction Rule $3$, we first determine, as for Rule $2$, which sets survive the application of Rule $1$. We suppress the sets that do not survive, and output a subset of each surviving set $S$ as follows. Observe that at this point, elements in each surviving set appear at most $k$ times in the entire instance. For each element of $S$, we count the number of times it appears in the instance. We output all elements that appear more than once, and of the ones that appear exactly once, we output the first such element in the set. The latter step can be performed using the following rule for each element $x$ in $S$ that appears exactly once in the instance: output $x$ only if there is no other element in $S$ before $x$ which also appears exactly once in the instance. This only uses a constant number of additional counters. Observe that in all the steps above, only a constant number of additional counters are used, from which it readily follows that the entire procedure uses $\Oh{\log{n}}$ bits of space. Combining this with Lemma~\ref{lemm:hs_corr}, we have the following result. \begin{theorem} Given a number $k \in \N$ and a family of sets from a finite universe of size $n$ where each pair of sets intersects in at most one element, using $\Oh{\log{n}}$ bits of space, one can either determine that the given instance is a \NO{} instance of $\pHS$ or output an equivalent instance consisting of a family of at most $k^2$ sets with at most $k^2$ elements each such that each pair of sets intersects in at most one element. \end{theorem} Combining the above result with Lemma~\ref{lemm:para-l_kern}, we have the following corollary. \begin{corollary} $\pHS$ restricted to instances where any pair of sets intersects in at most one element is in $\cParaL$. \end{corollary} \subsection{Case $s > 1$}\label{ssct:bd_hs_dg1} In this case, the instance can be reduced using similar rules as before, with a sequence of $s - 1$ rules replacing Reduction Rule $1$ in the $s = 1$ case.
\begin{description} \item[Rule $i$ ($i = 1, \dotsc, s - 1$)] If there is an $(s+1-i)$-element subset of the universe that appears as a subset of at least $k^i+1$ sets, then replace all those sets with just the $(s+1-i)$-element set. For example, Reduction Rule $1$ ensures that if an $s$-element subset of the universe appears in $k+1$ sets, then all those sets are replaced with just the common $s$-element subset. \item[Rule $s$] Let $S$ be the set of all elements that appear in at least $k^s + 1$ sets in $\mathcal{F}$. If $\abs{S} > k$, return \NO{}. Otherwise, delete all sets in $\mathcal{F}$ that intersect with $S$ and delete all elements of $S$ from $U$. Set $k \gets k - \abs{S}$. \item[Rule $(s+1)$] If the number of sets remaining is more than $k^{s + 1}$, then return \NO{}. \item[Rule $(s+2)$] For each remaining set, determine the elements that appear in no other set. Delete all but one of them from the set and from the universe. \end{description} As in the $s = 1$ case, the above rules ensure that the size of the final instance is bounded by a function of $k$. In addition, the rules can be applied using $\Oh{s \log{n}}$ bits of space per rule. Owing to space constraints, we defer the space-efficient implementation of the above rules to Appendix~\ref{apdx:deletion} and state our result directly. \begin{theorem}\exS{$\dagger$} Given a number $k \in \N$ and a family of sets from an $n$-element universe where each pair of sets intersects in at most $s$ elements, using $\Oh{s^2 \log{n}}$ bits of space, one can either determine that the given instance is a \NO{} instance of $\pHS$ or output an equivalent family of at most $k^{s + 1}$ sets with at most $s k^{s + 1}$ elements each such that each pair of sets intersects in at most $s$ elements. \end{theorem} Combining the above result with Lemma~\ref{lemm:para-l_kern} yields the following corollary. \begin{corollary} For constant $s$, $\pHS$ restricted to instances where any pair of sets intersects in at most $s$ elements is in $\cParaL$. \end{corollary} \section{Vertex Cover Number as Parameter}\label{sect:vc_parameterization} With respect to their standard parameterizations, problems such as $\pTW$ and $\pFVS$ are not known to be in $\cParaL$. In this section, we show that when parameterized by vertex cover number, these problems are in fact in $\cParaL$. Elberfeld et al.~\cite{EJT2010FOCS} give an algorithm for computing the treewidth of an undirected graph using space $f(k) \cdot \log{n}$, which puts the problem in $\cXL$, but it is not known if the problem is in $\cParaL$. For $\pFVS$, the obvious ``guess and verify'' algorithm puts it in $\cXL$, but does not run in FPT time. The known fixed-parameter algorithms for these problems (see e.g.~\cite{CFL+2008JCSS,RSS2006TALG}) involve complex branching strategies, and it seems unlikely that they can be carried out using space $f(k) + \Oh{\log{n}}$. The following definition captures classes of graphs which are closed under certain modification operations. Such graph classes encompass ``target'' graphs for many graph modification and partitioning problems, and can be used to show the kernelizability of these problems, as shown later. \begin{definition}[Bounded-Adjacency Characterization; Fomin et al.~\cite{FJP2014JCSS}] Let $\Pi$ be a class of graphs and $c_{\Pi} \in \N$ be a constant.
If for any graph $G \in \Pi$ and any vertex $v \in \V{G}$, there is a set of vertices $D \subseteq \V{G} \setminus \brc{v}$ with $\abs{D} \leq c_{\Pi}$ such that adding or removing any number of edges between $v$ and $\V{G} \setminus D$ produces a graph which is also in $\Pi$, then $\Pi$ is said to be characterized by $c_{\Pi}$ adjacencies. \end{definition}
Consider the following problem, defined with respect to a graph class $\Pi$ characterized by $c_{\Pi}$ adjacencies.
\begin{description} \item[$\pIndPiFDS$]\hfill\\ $\Pi$ is a class of \emph{non-empty} graphs and there is a non-decreasing polynomial $p$ such that every vertex-minimal graph $G \in \Pi$ satisfies $\abs{G} \leq p(\tau)$, where $\tau$ is the size of a minimum vertex cover of $G$.\\ \textbf{Instance:} $(G, k)$, where $G$ is a graph and $k \in \N$\\ \textbf{Question:} Is there a set $S \subseteq \V{G}$ with $\abs{S} \leq k$ such that $G - S$ includes no graph in $\Pi$ as an induced subgraph? \end{description}
Fomin et al.~\cite{FJP2014JCSS} give a generic procedure \algo{Reduce} (see Appendix~\ref{apdx:vc_param}), which kernelizes the problem as described in the following proposition.
\begin{proposition}[Theorem 2, Fomin et al.~\cite{FJP2014JCSS}]\label{prop:reduce_pvc_kernels} For an instance $(G, k)$ of $\pIndPiFDS$ with a vertex cover $X$ of size $r \geq k$, \algoi{Reduce}{$G, X, r + p(r), c_{\Pi}$} produces a kernel of size ${r}^{c_{\Pi}} (r + p(r))$. \end{proposition}
We now state the main result of this section, which is obtained by carefully combining the logarithmic-space $\pVC$ kernelization from earlier with a modified \algo{Reduce} procedure.
\begin{theorem}\label{thm:kern_pvc} $\pIndPiFDS$ is in $\cParaL$. \end{theorem}
While the \algo{Reduce} procedure of Fomin et al.~\cite{FJP2014JCSS} can be used to kernelize $\pIndPiFDS$, it is not clear that the procedure can be carried out in small space. We use a modified reduction procedure \algo{ReduceLog} (see Appendix~\ref{apdx:vc_param}) instead, which uses space $\Oh{\log{n}}$, where $n$ is the order of the input graph.
\begin{lemma}\label{lemm:reduce_log_pvc}\exS{$\dagger$} For an instance $(G, k)$ of $\pIndPiFDS$ with a vertex cover $X$ of size $r \geq k$, \algoi{ReduceLog}{$G, X, r + p(r), c_{\Pi}$} produces a kernel of size ${r}^{c_{\Pi}} (r + p(r))$. The procedure uses $\Oh{\log{n}}$ bits of space, where $n = \abs{G}$. \end{lemma}
The proof of Theorem~\ref{thm:kern_pvc} is now straightforward.
\begin{proof}[Theorem~\ref{thm:kern_pvc}] Lemma~\ref{lemm:reduce_log_pvc} shows that given access to the input instance $(G, k)$ and a vertex cover $X$ for $G$, $\pIndPiFDS$ is kernelizable (using \algo{ReduceLog}) in space $\Oh{\log{n}}$, where $n = \abs{G}$. For the parameter value $k$, the $\pVC$ kernelization of Proposition~\ref{prop:vc_para-l} either correctly determines that $G$ has no vertex cover of size $k$ or produces a kernel $G'$ with $\Oh{k^2}$ vertices. In the latter case, the vertex set of $G'$ is also a vertex cover for $G$ (Proposition~\ref{prop:vc_para-l}). Running \algo{ReduceLog} on $(G, k)$ with access to $X = \V{G'}$ ($r = \abs{X} = \Oh{k^2}$) thus produces a kernel of size $r^{c_{\Pi}} (r + p(r)) = \Oh{k^{2 c_{\Pi}} (k^2 + p(k^2))}$ for $\pIndPiFDS$. Oracle access to $X = \V{G'}$ can be provided using space $\Oh{\log{n}}$ and $\algo{ReduceLog}$ uses space $\Oh{\log{n}}$, so the total space used is $\Oh{\log{n}}$.
By Lemma~\ref{lemm:para-l_kern}, any problem that can be kernelized in space $\Oh{\log{n}}$ is in $\cParaL$, and thus the claim is true. \end{proof}
The following corollary to Theorem~\ref{thm:kern_pvc} shows that various deletion problems are in $\cParaL$ via formulations as $\pIndPiFDS$ for suitable $\Pi$.
\begin{corollary}\label{corr:vc_pidel}\exS{$\dagger$} Under the vertex-cover parameterization, $\pPlan$, $\pOCT$ and $\pChVD$ are in $\cParaL$. \end{corollary}
Via formulations as the alternative intermediate problems $\pLIndPiS$ and $\pqPartDisjPiFree$ (defined in Appendix~\ref{apdx:vc_param}), the same \algoi{ReduceLog}{$G, X, l, c_{\Pi}$} procedure with different values for $l$ can also be used to prove the next result.
\begin{corollary}\label{corr:vc_rest_gen}\exS{$\dagger$} Under the vertex-cover parameterization, $\pFVS$, $\pLongPath$, $\pLongCycle$ and $\pdCol$ are in $\cParaL$. \end{corollary}
\section{Solving $\pFVS$ in $5^k \cdot n^{\Oh{1}}$ time and $\Oh{k \cdot \log{n}}$ space}\label{sect:fvs}
It is known (Lemma~\ref{naivespace}) that problems in $\cFPT$ can be solved in space $f(k) + n^{\Oh{1}}$. If the sole objective is space efficiency, many problems can also be solved (na\"{i}vely, by running over all subsets of size at most $k$, for example) in $f(k) \cdot \log{n}$ bits of space, which puts them in $\cXL$. It is pertinent to note that $\cXL$ also contains problems like $\pIS$ and $\pClq$ which are $\probc{W}[1]$-complete, and hence are unlikely to be in $\cFPT$. On the other hand, it is believed that not all problems in $\cFPT$ can be solved in $f(k) \cdot \log{n}$ bits of space~\cite{CCDF1997AnnPureApplLogic}. A third class, which requires problems in it to be solvable simultaneously in time $f(k) \cdot n^{\Oh{1}}$ and in $g(k) \cdot \log{n}$ bits of space, also exists, and contains $\pFVS$~\cite{EST2015Algorithmica} and $\pMOdS$~\cite{ABB+2019FSTTCS}. This class, which we call $\cFPTXL$, has been previously identified in the literature as $D[f \oper{poly}, f \oper{log}]$~\cite{EST2015Algorithmica}.

In what follows, we devise an $\cFPTXL$ algorithm for $\pFVS$ which runs in time $5^k \cdot n^{\Oh{1}}$ and uses $\Oh{k \log{n}}$ bits of space. This improves on an earlier $((2k)^k \cdot n^{\Oh{1}})$-time, $\Oh{k \log{n}}$-space algorithm of Elberfeld et al.~\cite{EST2015Algorithmica}. Along the way, we also devise a much simpler $(3k)^k \cdot n^{\Oh{1}}$-time $\cFPTXL$ algorithm for the problem. Known $\cFPT$ algorithms for $\pFVS$~\cite{RSS2006TALG,CFK+2015book} are based on reduction rules, one of which is the following ``short circuiting'' rule for vertices of degree two.
\subsubsection{Degree--$2$ Reduction Rule} If there is a vertex $x$ of degree $2$ with neighbours $y$ and $z$, delete $x$ and add an edge between $y$ and $z$, while retaining pre-existing edges.

Once the rule is applied, the graph possibly has parallel edges, so we assume throughout this section that we work with multigraphs. Additionally, we can assume that the graph has no isolated or degree-$1$ vertices (as they can be safely removed), no vertices with self-loops (as they must be included in the solution and then removed) and no more than two edges between a pair of vertices (as further parallel edges do not influence the solution and can be removed). The correctness of these assumptions and the rule is easy to see~\cite{CFK+2015book}.
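Operationally, the rule is simple; the following Python sketch (names ours) applies it destructively to an explicit multigraph, which is exactly the behaviour that the space-efficient implementation discussed next must simulate without modifying the read-only input.
\begin{verbatim}
from collections import Counter

def degree(adj, v):
    return sum(adj[v].values())          # multiplicities count towards degree

def short_circuit(adj, x):
    """Degree-2 rule: delete x (of degree exactly 2) and join its neighbours.
    adj maps each vertex to a Counter of neighbours with edge multiplicities."""
    assert degree(adj, x) == 2
    nbrs = [u for u, m in adj[x].items() for _ in range(m)]
    y, z = nbrs
    for u in set(nbrs):
        del adj[u][x]
    del adj[x]
    if y == z:
        adj[y][y] += 1                   # both edges go to y: a self-loop appears
    else:
        adj[y][z] += 1                   # may create a parallel edge
        adj[z][y] += 1
    # the caller should then cap multiplicities at two and handle self-loops,
    # per the assumptions stated above
\end{verbatim}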
We first discuss the space-efficient application of these rules. The degree of a vertex can be found from the adjacency list, and so it is easy to `discard' vertices with degree at most one. Let us remark that our algorithms (in the next subsection) delete some (up to $k$) vertices and recursively apply these rules. Those deleted vertices are stored in a separate array using $\Oh{k \log n}$ bits, and so when checking the degree of a vertex, we also consult that array in addition to the read-only adjacency list of the graph.
\begin{proposition}[Elberfeld et al.~\cite{EST2015Algorithmica}, Theorem 4.13]\label{degreetwoimpl} On a multigraph with $n$ vertices, the Degree--$2$ rule can be applied using $\Oh{\log{n}}$ bits of additional space. \end{proposition}
One can combine this proposition with a simple branching strategy to show that $\pFVS$ is in $\cFPTXL$.
\begin{theorem}\exS{$\dagger$} $\pFVS$ can be solved in time ${(3k)}^k \cdot n^{\Oh{1}}$ using $\Oh{k \log{n}}$ bits of space. \end{theorem}
\subsection{Improved algorithm based on Iterative Compression}
We now describe a restricted-space implementation of an algorithm of Chen et al.~\cite{CFL+2008JCSS} which allows us to prove the following result.
\begin{theorem}\label{iteratefvs} $\pFVS$ can be solved using $\Oh{k \log{n}}$ bits of space and in time $5^k \cdot n^{\Oh{1}}$. \end{theorem}
The algorithm goes through a two-stage process on an instance $(G, k)$.

\textbf{Iterative Stage} In the iterative stage, the algorithm starts with an arbitrary induced subgraph $H_0$ of $G$ with $k + 1$ vertices. The vertex set of this graph is trivially also a feedback vertex set for $H_0$. The algorithm uses a \emph{compression} algorithm which, given a subgraph $H$ of $G$ and a feedback vertex set $C$ for $H$ of size $k+1$, either produces a feedback vertex set $C'$ for $H$ of size at most $k$ or determines correctly that such a feedback vertex set for $H$ does not exist. In the latter case, we correctly conclude that even $G$ has no feedback vertex set of size at most $k$. In the former case, both the subgraph and the (compressed) solution are extended by an additional vertex from the graph, and the compression step is applied again. This process continues until we cover the entire graph. During the iteration, the $(k+1)$-sized solution is stored using $\Oh{k \log{n}}$ bits of space.

\textbf{Compression Stage} The compression algorithm is given a feedback vertex set $S$ of size $k+1$ for the whole graph $G$, and the goal is to determine whether $G$ has a feedback vertex set of size at most $k$. It checks, for every subset $Z$ of $S$ of size at most $k$, whether there is a feedback vertex set of size at most $k$ that contains $Z$ and avoids $P = S \setminus Z$. This requires storing the set $Z$, which uses $\Oh{k \log{n}}$ bits of space; it also requires that $G[P]$ be a forest, which can be checked using $\Oh{\log{n}}$ bits of space (by making $\Oh{n^2}$ connectivity queries on $G[P]$). Recall that $G[V\setminus S]$ is a forest, as $S$ is a feedback vertex set. Next, as a preprocessing step, we include into the solution every vertex $v$ of $V\setminus S$ that has at least two neighbours in the same connected component of $G[P]$; this can be checked using $\Oh{\log{n}}$ bits of space via Reingold's connectivity algorithm~\cite{Rei2008JACM}. Then, for a leaf vertex $v$ in $G[V\setminus S]$, we branch by either picking $v$ into the solution or excluding it from the solution, which amounts to moving $v$ into $P$. If both branches return \NO{}, we return \NO{}.
The depth of the recursion is at most $k$ and each level uses $\Oh{\log{n}}$ bits of space, so the overall process uses $\Oh{k \log{n}}$ bits of space. Observe that both the iterative and compression stages use $\Oh{k \log{n}}$ bits of space overall. Appendix~\ref{apdx:fvs_compress} gives pseudocode (\algo{FVSCompress}, Procedure~\ref{proc:CompressFVS}) for the compression procedure described above. For a proof of correctness, we refer the reader to Chen et al.~\cite{CFL+2008JCSS}, who also prove a running time bound of $5^k \cdot n^{\Oh{1}}$ which readily carries over to this setting. We now have the following lemma.
\begin{lemma} \algo{FVSCompress} solves instances $(G, S, k)$ using $\Oh{k \log{n}}$ bits of space in time $5^k \cdot n^{\Oh{1}}$. \end{lemma}
The compression procedure is called at most $n - k$ times (starting with the first $(k + 1)$-sized subgraph and including one additional vertex each time); in each call the solution is compressed to size at most $k$ and then extended by the newly added vertex, so it has at most $k+1$ vertices at any point of time. Thus, the iterative stage adds only an overhead of $n^{\Oh{1}}$ to the running time of \algo{FVSCompress}. The routine either stops at an intermediate stage and declares that $(G, k)$ is a \NO{} instance, or continues until $i = n$, at which stage it returns the output $T_n$ of the compression routine as a feedback vertex set for $G$ of size at most $k$. Theorem~\ref{iteratefvs} follows.
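For intuition, the following self-contained Python sketch implements the two-stage scheme above on simple graphs. It is ours, not the paper's \algo{FVSCompress}: to stay short it stores vertex sets explicitly (so it is not $\Oh{k \log{n}}$-space), it works on simple graphs rather than multigraphs, and it makes no attempt to reproduce the $5^k$ running-time analysis; but the iterative stage, the subset-guessing over $Z \subseteq S$, the forced deletions, and the leaf branching all follow the description above.
\begin{verbatim}
from itertools import combinations

def _find(parent, x):
    while parent[x] != x:
        parent[x] = parent[parent[x]]
        x = parent[x]
    return x

def acyclic(vertices, edges):
    # Union-find test: is the subgraph induced by `vertices` a forest?
    parent = {v: v for v in vertices}
    for u, w in edges:
        if u in parent and w in parent:
            ru, rw = _find(parent, u), _find(parent, w)
            if ru == rw:
                return False
            parent[ru] = rw
    return True

def components(vertices, edges):
    # Map each vertex to a representative of its connected component.
    parent = {v: v for v in vertices}
    for u, w in edges:
        if u in parent and w in parent:
            parent[_find(parent, u)] = _find(parent, w)
    return {v: _find(parent, v) for v in vertices}

def disjoint(P, F, edges, budget):
    # Find W inside F, |W| <= budget, with G[P + (F - W)] acyclic; G[P] is a forest.
    if budget < 0:
        return None
    comp = components(P, edges)
    for v in F:   # forced: v has two neighbours in one tree of G[P]
        seen = set()
        for u, w in edges:
            if v in (u, w):
                other = w if u == v else u
                if other in comp:
                    if comp[other] in seen:
                        sub = disjoint(P, F - {v}, edges, budget - 1)
                        return None if sub is None else sub | {v}
                    seen.add(comp[other])
    if not F:
        return set()
    deg = {v: 0 for v in F}
    for u, w in edges:
        if u in deg and w in deg:
            deg[u] += 1
            deg[w] += 1
    v = min(F, key=deg.get)                             # a leaf of the forest G[F]
    sub = disjoint(P, F - {v}, edges, budget - 1)       # branch 1: pick v
    if sub is not None:
        return sub | {v}
    return disjoint(P | {v}, F - {v}, edges, budget)    # branch 2: move v into P

def compress(prefix, edges, S, k):
    # Shrink a feedback vertex set S (|S| <= k+1) of G[prefix] to size <= k.
    if len(S) <= k:
        return set(S)
    for r in range(k + 1):
        for Z in combinations(sorted(S), r):    # part of S kept in the solution
            P = S - set(Z)
            if acyclic(P, edges):
                W = disjoint(P, prefix - S, edges, k - r)
                if W is not None:
                    return set(Z) | W
    return None

def feedback_vertex_set(vertices, edges, k):
    order = sorted(vertices)
    if len(order) <= k:
        return set(order)
    T = set(order[:k + 1])              # trivial fvs of the first k+1 vertices
    for i in range(k + 1, len(order) + 1):
        if i > k + 1:
            T = T | {order[i - 1]}      # still an fvs of the grown prefix
        T = compress(set(order[:i]), edges, T, k)
        if T is None:
            return None                 # G has no fvs of size at most k
    return T
\end{verbatim}
For example, on the triangle with a pendant vertex, \texttt{feedback\_vertex\_set(\{1,2,3,4\}, [(1,2),(2,3),(3,1),(3,4)], 1)} returns a one-vertex feedback vertex set.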
\section{Introduction} We fix $(\Omega,\FF,(\FF_t)_{t \geq 0},\P)$ a filtered probability space satisfying the usual conditions. Let $T>0$ be a finite time horizon, $d,d_1 \in \N^*$ with $d_1 \geq d,$ and $(B_t)_{t \geq 0} $ a $(\FF_t)_{t \geq 0}$-Brownian motion of dimension $d_1$. We consider the Itô process on $\R^d$ defined, for $t \in [0,T],$ by \begin{equation}\label{itoprocess} X_t := X_0 + \int_0^t b_s \, ds + \int_0^t \sigma_s \, dB_s,\end{equation} where $X_0 \in L^2(\Omega,\FF_0;\R^d)$, and $b: [0,T]\times \Omega\rightarrow \R^d$ and $\sigma : [0,T]\times \Omega \rightarrow \R^{d \times d_1}$ are progressively measurable processes. In the following, we will denote by $\mu_t$ the law of $X_t$ and by $a$ the matrix $\sigma \sigma^*.$\\ Let us fix a real-valued function $u$ defined on the 2-Wasserstein space $\PPP_2(\R^d),$ i.e. the space of probability measures on $\R^d$ having a finite moment of order $2.$ In this paper, we are interested in Itô's formula for $u$ and the flow of probability measures $(\mu_t)_{t \in [0,T]}.$ This formula describes the dynamics of $t \mapsto u(\mu_t),$ essentially by computing its derivative (see \eqref{formulaito11} below). It thus requires differential calculus on the space of measures $\PPP_2(\R^d).$ There exist several notions of differentiability for functions defined on $\PPP_2(\R^d).$ The L-derivative, which was introduced by Lions in his lectures at Collège de France \cite{LionscourscollegedeFrance}, is well-adapted to establishing Itô's formula for a flow of measures. We say that $u$ is L-differentiable if its lifting, defined by $$ \tilde{u} : X \in L^2(\Omega;\R^d) \mapsto u(\mathcal{L}(X)) \in \R,$$ where $ \mathcal{L}(X)$ denotes the law of $X,$ is Fréchet differentiable on $L^2(\Omega;\R^d).$ In this case, there exists an $\R^d$-valued function $\partial_{\mu} u$ defined on $\PPP_2(\R^d) \times \R^d$ such that the gradient of $\tilde{u}$ at $X \in L^2(\Omega;\R^d)$ is given by the random variable $\partial_{\mu}u (\mathcal{L}(X))(X).$ The function $\partial_{\mu}u$ is called the L-derivative of $u.$ The advantage of the L-derivative is that it permits the use of standard tools of differential calculus on Banach spaces. However, another notion of differentiability is quite appealing for proving Itô's formula for a flow of measures: the linear (functional) derivative. This is a standard notion of differentiability for functions of measures, relying on the convexity of $\PPP_2(\R^d).$ We say that $u$ has a linear derivative if there exists a real-valued and continuous function $\del$ defined on $\PPP_2(\R^d)\times \R^d,$ at most of quadratic growth (with respect to the space variable) on each compact set of $\PPP_2(\R^d),$ and such that for all $\mu,\nu \in \PPP_2(\R^d)$ $$ u(\mu) - u(\nu) = \int_0^1 \int_{\R^d} \del(t\mu + ( 1-t) \nu)(v) \, d (\mu - \nu)(v) \, dt.$$ Of course, there is a link between the L-derivative and the linear derivative of $u.$ Indeed, in general, the L-derivative $\partial_{\mu} u(\mu)(\cdot)$ is equal to the gradient of the linear derivative $\partial_v \del (\mu)(\cdot)$ (see Propositions 5.48 and 5.51 in \cite{CarmonaProbabilisticTheoryMean2018} for the precise assumptions).
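To fix ideas, here is a standard example (not taken from the paper, included for concreteness) relating the two derivatives. For $u(\mu) = \int_{\R^d} |x|^2 \, d\mu(x),$ the lifting is $\tilde{u}(X) = \E |X|^2,$ whose Fréchet gradient at $X$ is $2X,$ so that
\begin{equation*}
\partial_{\mu} u(\mu)(v) = 2v, \qquad \del(\mu)(v) = |v|^2, \qquad \partial_v \del(\mu)(v) = 2v = \partial_{\mu} u(\mu)(v),
\end{equation*}
where the middle identity follows from $u(\mu) - u(\nu) = \int_{\R^d} |v|^2 \, d(\mu - \nu)(v),$ which already has the required form since the integrand does not depend on the interpolation parameter.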
In this paper, we have chosen to work with the linear derivative (see below and Remark \ref{rqchoicelinearderivative} for a justification of this choice).\\ The standard Itô formula for a flow of measures can be found in \cite{BuckdahnMeanfieldstochasticdifferential2014} (see Theorem 6.1) or in Section $3$ of \cite{ChassagneuxProbabilisticapproachclassical2015} and Chapter 5 of \cite{CarmonaProbabilisticTheoryMean2018} under less restrictive assumptions. The common point of these results is that the function $u$ considered has to be $\CC^2$ in some sense. More precisely, it is always assumed that for all $\mu \in \PPP_2(\R^d),$ the L-derivative $\partial_{\mu}u(\mu)(\cdot)$ belongs to $\CC^1(\R^d)$ or, equivalently, that the linear derivative $\del (\mu)(\cdot)$ belongs to $\CC^2(\R^d).$\\ This paper aims to prove Itô's formula for functions $u$ having a linear derivative $\del$ that is not $\CC^2$ with respect to the space variable. Let us write the form of Itô's formula both with the linear derivative and with the L-derivative in order to make the following discussion clearer. It states that for all $t \in [0,T]$ \begin{equation} \left\{ \begin{aligned} \label{formulaito11} u(\mu_t) &= u(\mu_0) + \int_0^t \E \left( \partial_v \del (\mu_s)(X_s)\cdot b_s\right) \,ds + \frac{1}{2} \int_0^t \E \left( \partial^2_v \del (\mu_s)(X_s)\cdot a_s\right) \,ds \\ u(\mu_t) &= u(\mu_0) + \int_0^t \E \left( \partial_{\mu}u(\mu_s)(X_s)\cdot b_s\right) \,ds + \frac{1}{2} \int_0^t \E \left( \partial_v \partial_{\mu}u (\mu_s)(X_s)\cdot a_s\right) \,ds, \end{aligned} \right. \end{equation} where $x\cdot y$ denotes the usual scalar product of two vectors $x,y \in \R^d$ and $A\cdot B := \text{Tr}(A^*B)$ the usual scalar product of two matrices $A,B \in \R^{d\times d}.$ \\ We now fix the assumptions on the Itô process $(X_t)_{t \in [0,T]}.$ In this paper, we always assume that the drift $b$ and the diffusion matrix $\sigma$ in \eqref{itoprocess} satisfy the following properties. \begin{enumerate} \item[] \textbf{(A)} There exists $K>0$ such that almost surely $$ \forall t \in [0,T], \, |b_t| + | \sigma_t| \leq K.$$ \item[] \textbf{(B)} There exists $\delta >0$ such that almost surely $$ \forall t \in [0,T], \, \forall \lambda \in \R^d, \, a_t\lambda\cdot\lambda \geq \delta |\lambda|^2.$$ \end{enumerate} \vspace{4pt} Assumptions \textbf{(A)} and \textbf{(B)} are taken from Section $2.10$ of \cite{KrylovControlledDiffusionProcesses2009}. Therein, Krylov deals with controlled diffusion processes and needs to apply the standard Itô formula to the so-called pay-off function, which is not $\CC^2.$ That is why he proves an extension of the classical Itô formula for the Itô process $(X_t)_{t \in [0,T]}$ satisfying Assumptions \textbf{(A)} and \textbf{(B)}, and for a function $g: \R^d \rightarrow \R$ belonging to an appropriate Sobolev space. The crucial point is that $(X_t)_t$ satisfies the non-degeneracy Assumption \textbf{(B)}. It ensures that the noise does not degenerate and allows it to produce a regularizing effect. Let us explain how. The non-degeneracy assumption leads to Krylov's inequality (see Theorem \ref{inegalitekrylov}, taken from Section $2.3$ of \cite{KrylovControlledDiffusionProcesses2009}). This inequality, in turn, implies that for almost all $t \in [0,T],$ $\mu_t,$ the law of $X_t,$ has a density $p(t,\cdot)$ with respect to the Lebesgue measure (see Proposition \ref{densiteexistence}).
Moreover, this density belongs to $L^{(d+1)'}([0,T]\times \R^d),$ where $(d+1)'$ denotes the conjugate exponent of $d+1$ defined in Section \ref{notations}. The existence of densities, together with this integrability property, makes it possible to assume only Sobolev regularity for the function $g.$ More precisely, Itô-Krylov's formula is established under the assumption that $g$ is continuous on $\R^d$ and that $\partial g$ belongs to the Sobolev space $W_{\text{loc}}^{1,k}(\R^d),$ for $ k \geq d+1,$ i.e. that $\partial g$ and $\partial^2 g$ are in $L_{\text{loc}}^{k}(\R^d)$ (see Section 2.10 of \cite{KrylovControlledDiffusionProcesses2009}). \\ Our goal here is to take advantage of the regularizing effect of the noise, stemming from the existence of the densities $p(t,\cdot)$ and their integrability property, to establish an analogue of Itô-Krylov's formula in the measure-dependent case. Looking at Itô's formula for a flow of measures \eqref{formulaito11}, the regularizing effect comes from the presence of expectations which average, with respect to the space variable, the derivatives of $\del$ over all the trajectories of $(X_t)_t.$ Indeed, the regularization by noise will only appear through the space variable of the linear derivative, but not through its measure variable. This is not surprising since the space of measures $\PPP_2(\R^d)$ is, in some sense, infinite-dimensional while the noise is finite-dimensional. Thus, we cannot expect a true regularization in the measure variable of $\del.$ The fact that a finite-dimensional noise cannot have a regularizing effect in the space $\PPP_2(\R^d)$ is explained in \cite{marx2020infinitedimensional} in the context of McKean-Vlasov Stochastic Differential Equations (SDEs). \\ In order to prove Itô's formula \eqref{formulaito11} for $u$ (with the linear derivative), it is clear that $u$ needs to admit a linear derivative with at least distributional derivatives of order $1$ and $2$ with respect to the space variable in $L^k(\R^d)$ for some $k,$ as for the standard Itô-Krylov formula. Let us describe our assumptions on $u$ more precisely. As said before, for almost all $t \in [0,T],$ the law $\mu_t$ has a density $p(t,\cdot)$ such that $p$ belongs to $L^{(d+1)'}([0,T]\times \R^d).$ Denoting by $\PP(\R^d)$ the space of measures $\mu \in \PPP_2(\R^d)$ having a density with respect to the Lebesgue measure in $ L^{(d+1)'}(\R^d),$ our assumptions on the derivatives of $\del(\mu)(\cdot)$ are only made for measures $\mu$ belonging to $\PP(\R^d).$ This is natural since for almost all $t \in [0,T],$ $\mu_t$ belongs to $\PP(\R^d),$ and the derivatives of $\del$ are evaluated along the flow $(\mu_t)_{t \in [0,T]}$ and integrated in time. Moreover, because of the integrability property of the densities $p(t,\cdot),$ the derivatives of $\del (\mu)(\cdot)$ do not need to be defined and continuous on the whole space $\R^d$ because they are somehow integrated against the densities $p(t,\cdot)$ (see \eqref{formulaito11}). We say ``somehow'' because this is not exactly the case, since $b$ and $a$ are random; but as they are bounded, they can essentially be disregarded.
More precisely, the integrability property of the densities leads us to assume that $u$ admits a linear derivative such that for all $\mu \in \PP(\R^d),$ $\partial_v \del(\mu)(\cdot)$ belongs to the Sobolev space $W^{1,k}(\R^d)$ defined in Section \ref{notations}, with $k \geq d+1.$ This is exactly the same condition as in the standard Itô-Krylov formula, except that we replace $W_{\text{loc}}^{1,k}(\R^d)$ by $W^{1,k}(\R^d).$ This is essentially explained by the expectations in Itô's formula \eqref{formulaito11}. Indeed, the process $(X_t)_t$ cannot be localized by stopping times. Moreover, we assume that the map $\mu \in \PP(\R^d) \mapsto \partial_v \del(\mu)(\cdot) \in W^{1,k}(\R^d)$ is continuous for a distance on $\PP(\R^d)$ satisfying the assumptions of Definition \ref{defespaceP}. This continuity assumption can be interpreted as reflecting the fact that the noise has no regularizing effect in the measure variable of the linear derivative, as explained above. The precise assumptions of our Itô-Krylov formula are given in Definition \ref{sobolevW1} and Theorem \ref{ito_krylov}. Finally, in Theorem \ref{extensionito}, we extend our formula to functions depending also on the time and space variables, satisfying the assumptions of Definition \ref{sobolevW}.\\ Let us explain our choice to work with the linear derivative. Under the assumptions mentioned in the preceding paragraph, the Sobolev embedding theorem ensures that for all $\mu \in \PP(\R^d),$ $\del(\mu)(\cdot)$ belongs to $\CC^1(\R^d;\R),$ and that $\partial_v \del(\mu)(\cdot)$ is continuous and bounded on $\R^d.$ We would be tempted to deduce that $u$ admits an L-derivative given, as recalled above, by $\partial_v \del(\mu)(\cdot).$ However, this term is assumed to exist only for measures $\mu \in \PP(\R^d)$ and not for $\mu \in \PPP_2(\R^d).$ This is the case in Example \ref{exquadratic}, where this term is not clearly well-defined for general $\mu \in \PPP_2(\R^d)$ (see Remark \ref{rqchoicelinearderivative}). Working with the L-derivative therefore seems more restrictive, which motivates our choice to work with the linear derivative. \\ We now focus on some applications of Itô's formula for a flow of measures. This formula has been developed alongside the increasing interest in Mean-Field Games and McKean-Vlasov SDEs over the last decade. Mean-Field Games were initiated independently by Caines, Huang and Malhame in \cite{CainesLargepopulationstochastic2006} and by Lasry and Lions in \cite{LasryMeanfieldgames2007}. The notion of Master equations was introduced by Lions in his lectures at Collège de France \cite{LionscourscollegedeFrance} in order to describe Mean-Field Games. Master equations are Partial Differential Equations (PDEs) on the space of probability measures and can be derived with the help of Itô's formula. We refer to Lions' lectures \cite{LionscourscollegedeFrance}, the notes written by Cardaliaguet \cite{CardaliaguetnotesMFG2013}, and the books of Carmona-Delarue \cite{CarmonaProbabilisticTheoryMean2018,CarmonaProbabilisticTheoryMean2018a} for more details on Mean-Field Games and Master equations. We also mention Bensoussan, Frehse and Yam \cite{BensoussanMasterEquationMean2014} and Carmona and Delarue \cite{CarmonaMasterEquationLarge2014}, where Master equations are derived, with the help of Itô's formula in \cite{CarmonaMasterEquationLarge2014}.
The question of existence and uniqueness of classical solutions to Master equations was addressed by Cardaliaguet, Delarue, Lasry and Lions in \cite{CardaliaguetMasterEquationConvergence2015} and by Chassagneux, Crisan and Delarue in \cite{ChassagneuxProbabilisticapproachclassical2015}. From a different point of view, Mou and Zhang deal with the well-posedness of Master equations in some weaker senses in \cite{MouWellposednessSecondOrder2020}. \\ Moreover, Itô's formula appears to be the natural way to connect a McKean-Vlasov SDE (more precisely the associated semigroup $(P_t)_t$ acting on the space of functions of measures) to a PDE on the space of probability measures (the Master equation), in the same manner as for classical SDEs. It turns out to be a crucial tool to study the stochastic flow generated by a McKean-Vlasov SDE, as explained in Chapter $5$ of \cite{CarmonaProbabilisticTheoryMean2018}. The link between McKean-Vlasov SDEs and PDEs on the space of measures is at the heart of the work of Buckdahn, Li, Peng and Rainer \cite{BuckdahnMeanfieldstochasticdifferential2014}, where the authors prove that the PDE admits a unique classical solution expressed in terms of the flow of measures associated with the McKean-Vlasov SDE. Moreover, in the parallel work \cite{ChassagneuxProbabilisticapproachclassical2015}, Chassagneux, Crisan and Delarue adopt a similar approach and study the flow generated by a forward-backward stochastic system of McKean-Vlasov type under weaker assumptions on the coefficients of the equation. Both works are motivated by Mean-Field Games, and Itô's formula plays a key role. Furthermore, the problem of propagation of chaos for the interacting particle system associated with the McKean-Vlasov SDE can also be addressed with the help of the associated PDE on the space of measures (see Chapter 5 of \cite{CarmonaProbabilisticTheoryMean2018}). This makes it possible to obtain quantitative weak propagation of chaos estimates between the law of the solution to the McKean-Vlasov SDE and the empirical measure of the associated particle system. This approach was adopted for example by Chaudru de Raynal and Frikha in \cite{deraynal2021wellposedness,frikha2021backward}, by Delarue and Tse in \cite{delarue2021uniform} and by Chassagneux, Szpruch and Tse in \cite{chassagneux2019weak}. Finally, Itô's formula for a flow of measures is also important for dealing with McKean-Vlasov control problems because it allows one to derive a dynamic programming principle describing the value function of the problem, as presented in Chapter $6$ of \cite{CarmonaProbabilisticTheoryMean2018}.\\ Recently, Itô's formula has been extended to flows of measures generated by càdlàg semi-martingales. This was achieved independently by Guo, Pham and Wei in \cite{GuoItoFormulaFlow2020}, who studied McKean-Vlasov control problems with jumps, and by Talbi, Touzi and Zhang in \cite{talbi2021dynamic}, who worked on mean-field optimal stopping problems. In both works, dynamic programming principles are established thanks to Itô's formula for a flow of measures.\\ The paper is organized as follows. Section \ref{notations} gathers some notations and definitions used throughout the paper. In Section \ref{sectionspacesandformula}, more precisely in Definitions \ref{sobolevW1} and \ref{sobolevW}, we define the spaces of functions for which we will establish Itô-Krylov's formula.
These formulas are given in Theorem \ref{ito_krylov} for functions defined on $\PPP_2(\R^d)$ and in Theorem \ref{extensionito} for functions depending also on the time and space variables. Moreover, we give examples of functions for which our formulas hold and we discuss our assumptions through them. The proofs of these examples are postponed to Appendix \ref{sectionappendix} for ease of reading. In Section \ref{sectionpreliminaries}, we give some preliminary results. We start with Krylov's inequality and its consequences on the existence of densities for the flow of measures $(\mu_t)_{t\in[0,T]}$ in Proposition \ref{densiteexistence}. Then we recall some classical results on convolution and regularization. Finally, Sections \ref{sectionproof1} and \ref{sectionproof2} are respectively dedicated to the proofs of Theorems \ref{ito_krylov} and \ref{extensionito}.\\ \section{Notations and definitions}\label{notations} \subsection{General notations} \noindent Let us introduce some notations used several times in the article. \begin{enumerate} \item[-]$B_R$ is the open ball centered at $0$ and of radius $R$ in $\R^d$ for the Euclidean norm. \item[-] $p'$ is the conjugate exponent of $p \in [1,+\infty]$, defined by $\frac1p + \frac{1}{p'} = 1.$ \item[-] $L^p_{\text{loc}}(\R^d)$ is the space of functions $f$ such that for all $R>0$, $f\in L^p(B_R).$ \item[-] $W^{m,k}(\mathcal{O})$ is the Sobolev space of functions $u \in L^{k}(\mathcal{O})$ admitting distributional derivatives of order between $1$ and $m$ in $ L^{k}(\mathcal{O}),$ where $\mathcal{O}$ is open in $\R^d.$ It is equipped with the norm $$ \|u\|_{W^{m,k}(\mathcal{O})} = \sum_{\alpha \in \N^d, \, |\alpha| \leq m} \|\partial^{\alpha} u \|_{L^k(\mathcal{O})}.$$ \item[-]$W^{m,k}_{\text{loc}}(\R^d)$ is the space of functions $u$ such that for all $R>0$, $u$ belongs to $W^{m,k}(B_R).$ \item[-] $(\rho_n)_n$ is a mollifying sequence on $\R^d$, that is, a sequence of non-negative $\CC^{\infty}$ functions such that for all $n$, $\int_{\R^d} \rho_n(x) \, dx =1$ and $\rho_n$ is equal to $0$ outside $B_{1/n}.$ We assume that $\rho_n(x) = \rho_n(-x)$ for all $x.$ \item[-] $f*g$ denotes the convolution of two functions $f$ and $g$, when it is well-defined. \item[-] $\mu \star \nu$ denotes the convolution of two probability measures on $\R^d.$ \item[-] $\BB(E)$ is the Borel $\sigma$-algebra, where $E$ is a metric space. \item[-] $A^*$ denotes the transpose of the matrix $A \in \R^{d\times d}.$ \item[-] $A\cdot B$ denotes the usual scalar product of two matrices $A,B \in \R^{d \times d}$ given by $A\cdot B := \text{Tr}(A^*B).$ \item[-] $\PP(\R^d)$ is defined in Definition \ref{defespaceP}. \item[-] $\mathcal{W}_1(\R^d)$ is defined in Definition \ref{sobolevW1}. \item[-] $\mathcal{W}_2(\R^d)$ is defined in Definition \ref{sobolevW}. \end{enumerate} \subsection{Spaces of measures and linear derivative} The set $\PPP(\R^d)$ is the space of probability measures on $\R^d$ equipped with the topology of weak convergence. The Wasserstein space $\PPP_2(\R^d)$ denotes the set of measures $\mu \in \PPP(\R^d)$ such that $\int_{\R^d} |x|^2 \,d\mu(x) < + \infty,$ equipped with the $2$-Wasserstein distance $W_2$ defined for $\mu, \nu \in \PPP_2(\R^d)$ by $$ W_2(\mu,\nu) = \inf_{\pi \in \Pi(\mu,\nu)} \left(\int_{\R^d\times \R^d} |x-y|^2 \, d\pi(x,y)\right)^{1/2},$$ where $\Pi(\mu,\nu)$ is the subset of measures in $\PPP_2(\R^d \times \R^d)$ with marginal distributions $\mu$ and $\nu.$ We will work with the standard notion of linear derivative for functions of measures.
\begin{Def}[Linear derivative] A function $ u: \PPP_2(\R^d) \rightarrow \R$ is said to have a linear derivative if there exists a continuous function $(\mu,v) \in \PPP_2(\R^d) \times \R^d \mapsto \del(\mu)(v) \in \R,$ satisfying the following properties. \begin{enumerate} \item For all compact $ \KK \subset \PPP_2(\R^d)$ $ \displaystyle\sup_{v \in \R^d}\displaystyle\sup_{\mu \in \KK} \left\{ (1 + |v|^2)^{-1}\left| \del (\mu)(v)\right|\right\} < + \infty.$\item For all $\mu,\nu \in \PPP_2(\R^d),$ $u(\mu) - u(\nu) = \displaystyle\int_0^1 \int_{\R^d} \del(t\mu + (1-t) \nu)(v) \, d(\mu - \nu)(v) \, dt.$ \end{enumerate} \end{Def} \begin{Rq}\label{Rqlinearderivative}Instead of the second point of the previous definition, it is equivalent to assume that for all $\mu,\nu \in \PPP_2(\R^d),$ $t \in [0,1] \mapsto u(t\mu + (1-t)\nu)$ is of class $\CC^1$ with $$\forall t \in [0,1], \, \frac{d}{dt} u(t\mu + (1-t)\nu) = \int_{\R^d} \del(t\mu + (1-t)\nu)(v) \, d(\mu-\nu)(v).$$ \end{Rq} One can find more details in Chapter $5$ of \cite{CarmonaProbabilisticTheoryMean2018}, in particular the connection with the L-derivative. \begin{Def}\label{defespaceP} Let us define $\PP(\R^d)$ as the space of measures $\mu \in \PPP_2(\R^d)$ which admit a density $\frac{d\mu}{dx}$ with respect to the Lebesgue measure belonging to $L^{(d+1)'}(\R^d).$ We endow $\PP(\R^d)$ with a general distance $d_{\PP}$ satisfying the following properties. \begin{enumerate} \item[] \textbf{(H1)} For any $n \geq 1$, $\mu \in (\PPP_2(\R^d),W_2) \mapsto \mu \star \rho_n \in (\PP(\R^d),d_{\PP})$ is continuous. \item[] \textbf{(H2)} For any $\mu \in \PP(\R^d),$ $\mu \star \rho_n \underset{n \rightarrow + \infty}{\longrightarrow} \mu$ for $d_{\PP}.$ \end{enumerate} \end{Def} Note that for all $n \geq 1$ and for all $\mu \in \PPP_2(\R^d)$, $\mu \star \rho_n \in \PP(\R^d).$ Indeed, its density is given by $x \mapsto \rho_n \star \mu(x) = \int_{\R^d} \rho_n(x-y) \, d\mu(y),$ and Jensen's inequality ensures that it belongs to $L^{(d+1)'}(\R^d).$ The space $(\PP(\R^d),d_{\PP})$ arises naturally from Assumptions \textbf{(A)} and \textbf{(B)} on the Itô process $X.$ As explained in the introduction, they imply the existence of a density $p \in L^{1}([0,T] \times \R^d;\R^+) \cap L^{(d+1)'}([0,T] \times \R^d;\R^+)$ such that for almost all $t \in [0,T],$ the law of $X_t$ is equal to $ p(t,\cdot)\,dx$ and belongs to $\PP(\R^d)$ (see Proposition \ref{densiteexistence}). Let us give two examples for the distance $d_{\PP}.$ \begin{Ex}\label{choicedistance} The Wasserstein distance $W_2$ clearly satisfies Assumptions \textbf{(H1)} and \textbf{(H2)} in Definition \ref{defespaceP}. Another family of examples is given by the distance $d_k$ defined, for $ k \in [d+1,+\infty[$ and $\mu,\nu \in \PP(\R^d),$ by $$ d_k(\mu,\nu) = \left\Vert \frac{d\mu}{dx} - \frac{d\nu}{dx} \right\Vert_{L^{k'}(\R^d)}.$$ \end{Ex} Note that $d_k$ is well-defined since for any $\mu \in \PP(\R^d)$, $\frac{d\mu}{dx} \in L^1(\R^d) \cap L^{(d+1)'}(\R^d),$ which is included in $L^{k'}(\R^d)$ by interpolation. The proof that $d_k$ satisfies \textbf{(H1)} and \textbf{(H2)} is postponed to the Appendix (Section \ref{proofchoicedistance}).\\ \section{Itô-Krylov's formula, ad hoc spaces of functions and examples}\label{sectionspacesandformula} Let us now introduce the Sobolev-type space of functions on $\PPP_2(\R^d)$ for which we will prove Itô's formula for a flow of measures.
\begin{Def}\label{sobolevW1} Let $\mathcal{W}_1(\R^d)$ be the space of continuous functions $u: \PPP_2(\R^d) \rightarrow \R$ having a linear derivative $\del$ such that for all $\mu \in \PP(\R^d)$, the function $ \del (\mu)(\cdot)$ admits distributional derivatives of order $1$ and $2$ in $L^{k}(\R^d)$, for a certain $k \geq d+1,$ and satisfies the following properties. \begin{enumerate} \item $ \mu \in (\PP(\R^d),d_{\PP}) \mapsto \partial_v \del (\mu)(\cdot) \in \left(W^{1,k}(\R^d)\right)^d$ is continuous for a certain distance $d_{\PP}$ satisfying \textbf{(H1)} and \textbf{(H2)}. \item There exists $\alpha \in \N$ such that $ k\geq (1+\alpha)d$ and, for all compact $\KK \subset \PPP_2(\R^d),$ there exists $C_{\KK} > 0$ such that for any $ \mu \in \KK \cap \PP(\R^d)$ $$ \left\Vert \partial_v \del (\mu)(\cdot) \right\Vert_{L^k(\R^d)} + \left\Vert \partial_v^2 \del (\mu)(\cdot) \right\Vert_{L^k(\R^d)} \leq C_{\KK} \left( 1 + \left\Vert \frac{d\mu}{dx} \right\Vert_{L^{k'}(\R^d)}^{\alpha}\right).$$ \end{enumerate} \end{Def} \begin{Rq}\label{rqW1} - The space $\mathcal{W}_1(\R^d)$ contains the functions which satisfy Assumption $(1)$ in Definition \ref{sobolevW1} with $(\PPP_2(\R^d),W_2)$ instead of $(\PP(\R^d),d_{\PP}).$ Indeed, the second point is clearly satisfied with $\alpha =0$ since $\KK$ is compact. \\ \noindent - Assumption $(2)$ in Definition \ref{sobolevW1} allows us to control the growth of $\left\Vert \partial_v \del(\mu)(\cdot)\right\Vert_{W^{1,k}(\R^d)}$ with respect to the measure $\mu.$ It allows us to take advantage of the continuity of the flow in $\PPP_2(\R^d)$ (because the control is assumed on compact subsets of $\PPP_2(\R^d)$), but also of its integrability properties proved in Lemmas \ref{integrability1} and \ref{integrability2}. The form of the inequality suggests the integration of functions in $L^k(\R^d)$ with respect to $\mu,$ at least when the function $u$ is linear in $\mu.$ \end{Rq} Having this definition at hand, we can now state Itô-Krylov's formula for functions in $\mathcal{W}_1(\R^d).$ \begin{Thm}[Itô-Krylov's formula]\label{ito_krylov}\ Let $ u $ be a function in $\mathcal{W}_1(\R^d),$ which was defined in Definition \ref{sobolevW1}. We have for all $t \in [0,T]$ \begin{align}\label{formulaito1} u(\mu_t) &= u(\mu_0) + \int_0^t \E \left( \partial_v \del (\mu_s)(X_s)\cdot b_s\right) \,ds + \frac{1}{2} \int_0^t \E \left( \partial^2_v \del (\mu_s)(X_s)\cdot a_s\right) \,ds, \end{align} where $\partial^2_v \del (\mu_s)(X_s)\cdot a_s := \text{Tr}\Big(\partial^2_v \del (\mu_s)(X_s)a_s\Big)$ is the usual scalar product on $\R^{d\times d}.$ \\ \end{Thm} Now, we focus on examples of functions belonging to $\mathcal{W}_1(\R^d).$ Let us start with the linear case. \begin{Ex}\label{exlinear} Fix $g \in \CC^0(\R^d;\R)$ admitting a distributional derivative such that $\partial g \in (W^{1,k}(\R^d))^d$ for some $k \geq d+1.$ Then, the function $$u : \left\{ \begin{array}{rll} \PPP_2(\R^d) &\rightarrow \R \\ \mu&\mapsto \displaystyle\int_{\R^d } g(x) \, d\mu(x), \end{array} \right. $$ belongs to the space $\mathcal{W}_1(\R^d).$ \end{Ex} Indeed, the Sobolev embedding theorem (see Corollary 9.14 in \cite{BrezisFunctionalAnalysisSobolev2010}) implies that $\partial g \in L^{\infty}(\R^d)$ since $ k \geq d+1.$ Thus $g$ is at most of linear growth, so that for all $\mu \in \PPP_2(\R^d),$ $\del(\mu) = g,$ which clearly satisfies Assumptions $(1)$ and $(2)$ (with $\alpha = 0$) in Definition \ref{sobolevW1}.
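As a numerical sanity check of Theorem \ref{ito_krylov} in this linear setting (our own illustration, not part of the paper's argument), take $d = 1$, $X_t = x_0 + B_t$ (so $b = 0$ and $\sigma = 1$ satisfy \textbf{(A)} and \textbf{(B)}) and $g = \tanh,$ whose derivative $\tanh' = \cosh^{-2}$ belongs to $W^{1,k}(\R)$ for every $k \geq 2.$ Formula \eqref{formulaito1} then reduces to $\E g(X_t) = g(x_0) + \frac12 \int_0^t \E g''(X_s) \, ds,$ which the following Python sketch verifies by Monte Carlo (up to sampling and time-discretization error; all names are ours):
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)
N, T, steps, x0 = 200_000, 1.0, 200, 0.5
dt = T / steps

g = np.tanh
d2g = lambda v: -2.0 * np.tanh(v) / np.cosh(v) ** 2   # g'' for g = tanh

X = np.full(N, x0)            # N samples of X_0; mu_0 = delta_{x0}
rhs = 0.0
for _ in range(steps):
    rhs += 0.5 * d2g(X).mean() * dt                   # (1/2) E[g''(X_s)] ds
    X += np.sqrt(dt) * rng.standard_normal(N)         # X_t = x0 + B_t

lhs = g(X).mean() - g(x0)     # u(mu_T) - u(mu_0)
print(lhs, rhs)               # the two sides agree up to Monte Carlo error
\end{verbatim}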
Let us now focus on the quadratic case. \begin{Ex}\label{exquadratic} Fix $g \in \CC^0(\R^d \times \R^d;\R)$ such that \begin{enumerate} \item[-] there exists $C>0$ such that for all $v,y \in \R^d, \,|g(v,y)| \leq C(1+|v|^2 +|y|^2),$ \item[-] the distributional derivative $\partial g$ belongs to $(W^{1,k}(\R^{2d}))^{2d}$ for a certain $k \in [2d, + \infty[.$ \end{enumerate} Then, the function $$u : \left\{ \begin{array}{rll} \PPP_2(\R^d) &\rightarrow \R \\ \mu&\mapsto \displaystyle\int_{\R^d \times \R^d} g(x,y) \, d\mu(x) \, d\mu(y), \end{array} \right. $$ belongs to the space $\mathcal{W}_1(\R^d)$ for $d_{\PP}=d_k.$ \end{Ex} The proof is postponed to the Appendix (Section \ref{proofexquadratic}). \begin{Rq}\label{rqchoicelinearderivative} - In Definition \ref{sobolevW1}, the distributional derivatives of the linear derivative $\del(\mu)$ are not necessarily integrable functions for all $\mu \in \PPP_2(\R^d).$ Of course, in Example \ref{exlinear}, it is the case for all $\mu \in \PPP_2(\R^d)$ as the linear derivative does not depend on the measure $\mu.$ However, in Example \ref{exquadratic}, the linear derivative is given by \begin{equation}\label{linderquadra} \del (\mu)(v) = \int_{\R^d} g(v,y) \, d\mu(y) + \int_{\R^d} g(y,v) \, d\mu(y).\end{equation} Formally, the derivative with respect to $v$ of the first integral in \eqref{linderquadra} is $$ \int_{\R^d} \partial_v g(v,y) \, d\mu(y).$$ This term is not well-defined for general measures $\mu \in \PPP_2(\R^d)$ because we have only assumed that $\partial g\in (W^{1,k}(\R^{2d}))^{2d}$ with $k \geq 2d.$ Indeed, for $k=2d,$ we just know by the Sobolev embedding theorem that $\partial g$ belongs to $(L^r(\R^{2d}))^{2d}$ with $r \in [2d,+\infty[$ (see Corollary 9.11 in \cite{BrezisFunctionalAnalysisSobolev2010}). As we will see in the proof (Section \ref{proofexquadratic} of the Appendix), it is well-defined as an integrable function of $v$ if we restrict to measures $\mu \in \PP(\R^d).$ This also justifies why we have chosen to work with the linear derivative instead of the L-derivative. Indeed, the L-derivative of $u$ would be equal to the gradient of the linear derivative $\partial_v \del(\mu)(\cdot),$ which is not well-defined for all $\mu \in \PPP_2(\R^d).$ Thus, the function $u$ does not need to be L-differentiable in the usual sense in our setting.\\ \noindent - Our assumptions on the derivatives of $\del$ in Definition \ref{sobolevW1} deal with $\PP(\R^d)$ instead of the whole space $\PPP_2(\R^d)$ essentially because in Itô's formula \eqref{formulaito1}, these derivatives only appear under integrals along the flow $(\mu_s)_{s \in [0,T]},$ which belongs to $\PP(\R^d)$ for almost all $s\in [0,T].$ However, we assume that $u$ is continuous on $\PPP_2(\R^d)$ since the flow $s \in [0,T] \mapsto \mu_s \in \PPP_2(\R^d)$ is continuous but $\mu_t$ does not necessarily belong to $\PP(\R^d)$ for all $t \in [0,T].$ \end{Rq} The next example focuses on the particular case of convolution. \begin{Ex}\label{exconvol} Let $f \in \CC^0(\R^d;\R)$ be a function such that the distributional derivative $\partial f$ belongs to $(W^{1,k+1}(\R^d))^d,$ for a certain $ k\geq d.$ Then, the function $$u: \left\{ \begin{array}{rll} \PPP_2(\R^d) &\rightarrow \R \\ \mu&\mapsto \displaystyle\int_{\R^d} f \star \mu \, d\mu, \end{array} \right.$$ belongs to $\mathcal{W}_1(\R^d)$ for $d_{\PP}=W_2.$ \end{Ex} Here, the particular structure of convolution enables us to work on the whole space $\PPP_2(\R^d)$ instead of $\PP(\R^d),$ as explained in the first point of Remark \ref{rqW1}.
The proof is again postponed to the Appendix (Section \ref{proofexconvol}). \\ We now deal with the extension of Itô's formula to functions depending also on the time and space variables. First, we define the space of functions generalizing the space $\mathcal{W}_1(\R^d).$ \begin{Def}\label{sobolevW} Let $\mathcal{W}_2(\R^d)$ be the set of continuous functions $u: [0,T] \times \R^d \times \PPP_2(\R^d)\rightarrow \R$ satisfying the following properties for a certain distance $d_{\PP}$ satisfying \textbf{(H1)} and \textbf{(H2)}. \begin{enumerate} \item For all $(x,\mu) \in \R^d \times\PPP_2(\R^d)$, $u(\cdot,x,\mu)\in \CC^{1}([0,T])$ and $\partial_t u$ is continuous on $[0,T] \times \R^d \times \PPP_2(\R^d)$. \item There exists $k_1 \geq d+1$ such that for all $(t,\mu) \in [0,T] \times \PP(\R^d)$, $u(t,\cdot,\mu) \in W^{2,k_1}_{\text{loc}}(\R^d)$ and for all $ t \in [0,T]$ and $R>0$ $$ \mu \in (\PP(\R^d), d_{\PP})\mapsto \partial_x u(t,\cdot,\mu) \in \left(W^{1,k_1}(B_R)\right)^d,$$ is continuous and $\partial_x u$ and $\partial_x^2 u$ are measurable with respect to $(t,x,\mu) \in [0,T]\times \R^d\times \PP(\R^d) .$ \item For all $(t,x) \in [0,T] \times \R^d$, $u(t,x,\cdot)$ admits a linear derivative $\del (t,x,\cdot)(\cdot)$ which is continuous on $[0,T]\times \R^d \times \PPP_2(\R^d)\times \R^d,$ and such that for all $\KK \subset \R^d \times\PPP_2(\R^d)$ compact and $t\in [0,T]$, there exists $C>0$ such that for all $v \in \R^d$ $$ \sup_{(x,\mu) \in \KK }\left|\del (t,x,\mu)(v) \right| \leq C (1+|v|^2).$$\item There exists $k_2 \geq 2d$ such that for all $(t,\mu) \in [0,T] \times \PP(\R^d),$ $\del (t, \cdot,\mu)(\cdot)$ admits distributional derivatives with respect to $v$ of order $1$ and $2$ such that for all $t$ and $R>0 $ $$\mu\in (\PP(\R^d),d_{\PP}) \mapsto \left(\partial_v \del (t,\cdot,\mu)(\cdot)\,,\, \partial^2_v \del (t,\cdot,\mu)(\cdot)\right) \in (L^{k_2}(B_R\times \R^d))^d \times (L^{k_2}(B_R\times \R^d))^{d \times d},$$ is continuous and measurable with respect to $(t,x,\mu,v) \in [0,T]\times \R^d \times \PP(\R^d) \times \R^d$. \item There exist $\alpha_1, \alpha_2 \in \N$ with $k_1\geq (2\alpha_1+1)d$, $ k_2 \geq (\alpha_2 +2)d$ such that for all $\KK \subset \PPP_2(\R^d)$ compact and $R>0$, there exists $C_{\KK,R}>0$ such that for all $\mu \in \KK \cap \PP(\R^d)$ $$\left\{ \begin{aligned} &\sup_{t\leq T} \left\{ \left\Vert \partial_x u (t,\cdot,\mu) \right\Vert_{L^{k_1}(B_R)} + \left\Vert \partial_x^2 u (t,\cdot,\mu) \right\Vert_{L^{k_1}(B_R)}\right\} \leq C_{\KK,R} \left( 1 + \left\Vert \frac{d\mu}{dx} \right\Vert_{L^{k_1'}(\R^d)}^{\alpha_1}\right) \\&\sup_{t\leq T} \left\{ \left\Vert \partial_v \del (t,\cdot,\mu)(\cdot) \right\Vert_{L^{k_2}(B_R\times\R^d)} + \left\Vert \partial_v^2 \del (t,\cdot,\mu)(\cdot) \right\Vert_{L^{k_2}(B_R\times\R^d)}\right\} \leq C_{\KK,R} \left( 1 + \left\Vert \frac{d\mu}{dx} \right\Vert_{L^{k_2'}(\R^d)}^{\alpha_2}\right). \end{aligned} \right.$$ \end{enumerate} \end{Def} \begin{Rq}\label{CorW_2} - The space $\mathcal{W}_2(\R^d)$ contains the functions satisfying the first four assumptions of Definition \ref{sobolevW} with $(\PP(\R^d),d_{\PP})$ replaced by $ (\PPP_2(\R^d),W_2)$ and also assuming that the functions in Assumptions $(2)$ and $(4)$ are continuous with respect to $(t,\mu)\in[0,T]\times \PPP_2(\R^d)$. Indeed, Assumption $(5)$ is automatically satisfied with $\alpha_1=\alpha_2 = 0$ because $\KK$ is compact. \\ \noindent - The bound in Assumption $(3)$ is quite natural.
If the supremum in this bound were taken only over a compact set of $\PPP_2(\R^d)$, it would be the growth bound in the definition of the linear derivative. But we also need to control $\del$ locally uniformly in the space variable $x \in \R^d$ because of our regularization procedure through a convolution both in the space and measure variables. Assumptions $(2)$, $(4)$ and $(5)$ are generalizations of those in Definition \ref{sobolevW1}, adapted to the presence of the space and time variables. In Assumption $(5)$, the condition on $k_2$ and $\alpha_2$ changes a bit compared to the analogous assumption in Definition \ref{sobolevW1}, essentially because it deals with functions on $\R^{2d}$ instead of $\R^d.$ Let us mention that Assumption $(5)$ in Definition \ref{sobolevW} can be replaced by the integrability properties \eqref{bound1} established in Step $1$ of the proof of the next theorem (see Section \ref{sectionproof2}). \end{Rq} The next theorem is the natural extension of the formula for functions in $\mathcal{W}_2(\R^d).$ Let $(\eta_s)_{s \in [0,T]} $ and $(\gamma_s)_{s \in [0,T]} $ be two progressively measurable processes, taking values respectively in $\R^d$ and $\R^{d \times d_1}$ and satisfying Assumptions \textbf{(A)} and \textbf{(B)}. We set, for all $t \leq T$ $$\xi_t = \xi_0 + \int_0^t \eta_s \, ds + \int_0^t \gamma_s \, dB_s,$$ where $\xi_0$ is an $\FF_0$-measurable random variable with values in $\R^d.$ \begin{Thm}[Extension of Itô-Krylov's formula]\label{extensionito} Let $u$ be a function in $\mathcal{W}_2(\R^d),$ which was defined in Definition \ref{sobolevW}. We have almost surely, for all $t \in [0,T]$ \begin{align}\label{formulaito2} \notag u(t,\xi_t,\mu_t) &= u(0,\xi_0, \mu_0) + \int_0^t ( \partial_t u(s,\xi_s,\mu_s) + \partial_x u(s,\xi_s,\mu_s)\cdot\eta_s) \, ds + \frac12 \int_0^t \partial^2_x u(s,\xi_s,\mu_s)\cdot \gamma_s\gamma_s^* \, ds \\ &+ \int_0^t \tilde{\E} \left(\partial_v \del (s,\xi_s,\mu_s)(\tilde{X}_s)\cdot\tilde{b}_s\right) \, ds + \frac12 \int_0^t \tilde{\E} \left(\partial^2_v \del (s,\xi_s,\mu_s)(\tilde{X}_s)\cdot \tilde{a}_s\right) \, ds \\ \notag &+ \int_0^t \partial_x u(s,\xi_s,\mu_s)\cdot(\gamma_s \, dB_s), \end{align} where $(\tilde{\Omega},\tilde{\FF}, \tilde{\P})$ is a copy of $(\Omega, \FF,\P)$ and $(\tilde{X},\tilde{b}, \tilde{\sigma})$ is an independent copy of $(X,b,\sigma)$. \end{Thm} Let us now give examples of functions belonging to the space $\mathcal{W}_2(\R^d).$ \begin{Ex}\label{exlinear2} Let $g \in \CC^0(\R^{2d};\R)$ be a function such that its distributional derivative $\partial g$ belongs to $(W^{1,k}(\R^{2d}))^{2d}$ for some $k \geq 5d.$ Then, the function $$u: \left\{ \begin{array}{rll} \R^d \times \PPP_2(\R^d) &\rightarrow \R \\ (x,\mu)&\mapsto \displaystyle\int_{\R^d} g(x,y) \, d\mu(y) \end{array} \right.$$ belongs to $\mathcal{W}_2(\R^d)$ for $d_{\PP}=d_k.$ \end{Ex} The proof is postponed to the Appendix (Section \ref{proofexlinear2}).
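To unpack the notation of Theorem \ref{extensionito} on this example (a routine specialization, written out here only for the reader's convenience), note that $\del(x,\mu)(v) = g(x,v),$ so that $\partial_x u(x,\mu) = \int_{\R^d} \partial_x g(x,y)\,d\mu(y)$ and the $v$-derivatives of $\del$ are derivatives of $g$ in its second argument. With $u$ independent of time, formula \eqref{formulaito2} reads
\begin{align*}
u(\xi_t,\mu_t) &= u(\xi_0,\mu_0) + \int_0^t \tilde{\E}\left(\partial_x g(\xi_s,\tilde{X}_s)\right)\cdot \eta_s \, ds + \frac12 \int_0^t \tilde{\E}\left(\partial_x^2 g(\xi_s,\tilde{X}_s)\right)\cdot \gamma_s\gamma_s^* \, ds\\
&\quad + \int_0^t \tilde{\E}\left(\partial_y g(\xi_s,\tilde{X}_s)\cdot\tilde{b}_s\right) ds + \frac12 \int_0^t \tilde{\E}\left(\partial_y^2 g(\xi_s,\tilde{X}_s)\cdot\tilde{a}_s\right) ds + \int_0^t \tilde{\E}\left(\partial_x g(\xi_s,\tilde{X}_s)\right)\cdot(\gamma_s \, dB_s),
\end{align*}
where $\partial_y$ denotes differentiation in the second argument of $g$, and the expectations $\tilde{\E}$ are taken with respect to the copy $(\tilde{X},\tilde{b},\tilde{\sigma})$ only, $\xi_s$ being fixed.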
\begin{Ex}\label{ex1} Let $F \in \CC^1(\R^d\times \R ; \R)$ be a function such that for all $R>0$ $$ y \in \R \mapsto \partial F(\cdot,y) \in (W^{1,k_1}(B_R))^{d+1},$$ is well-defined and continuous for some $k_1 \geq d+1.$ Let $g \in \CC^0(\R^d;\R)$ be such that the distributional derivative $\partial g$ belongs to $(W^{1,k_2}(\R^d))^d$ for some $k_2 \geq 2d.$ Then $$u: \left\{ \begin{array}{cll} \R^d \times \PPP_2(\R^d) &\rightarrow \R \\ (x,\mu)&\mapsto F\left(x,\int_{\R^d} g \, d\mu \right) \end{array} \right.$$ belongs to $\mathcal{W}_2(\R^d)$ for $d_{\PP}=W_2.$ \end{Ex} The proof is again postponed to the Appendix (Section \ref{proofex1}). \begin{Rq}In the abstract, we said that our Itô-Krylov formula for a flow of measures was almost the analogue of the standard Itô-Krylov formula. We used the word ``almost'' because Assumption $(1)$ in Definition \ref{sobolevW} is not completely satisfactory. Indeed, we do not assume Sobolev regularity with respect to time, as is the case in Itô-Krylov's formula for functions defined on $[0,T]\times \R^d.$ Of course, if $u$ is of the form $u(t,\mu)=\int_{\R^d} g(t,x) \, d\mu(x)$ with $g\in \CC^0([0,T]\times \R^d;\R)$ at most of quadratic growth in $x$ uniformly in $t$, and such that the distributional derivatives $\partial_t g,$ $\partial_x g$ and $ \partial_x^2 g$ are in $L^k([0,T]\times \R^d)$ for some $k \geq d+1,$ we will succeed in proving Itô-Krylov's formula for $u.$\\ Let us give the idea of the proof. We regularize $u$ by setting $u^n(t,\mu) := \int_{\R^d} g*\rho_n(t,x) \, d\mu(x),$ where $(\rho_n)_n$ is a mollifying sequence on $\R \times \R^d.$ The function $u^n$ clearly satisfies the assumptions of the standard Itô formula for a flow of measures (see Proposition 5.102 in \cite{CarmonaProbabilisticTheoryMean2018}). It ensures that for all $t \in [0,T]$ \begin{align}\label{regularityintime} \notag u^n(t,\mu_t) &= u^n(0,\mu_0) + \int_0^t \E (\partial_t g * \rho_n (s,X_s)) \, ds + \int_0^t \E \left( \partial_x g * \rho_n(s,X_s) \cdot b_s \right) \, ds \\ &+ \frac{1}{2} \int_0^t \E \left( \partial^2_x g * \rho_n(s,X_s) \cdot a_s \right) \, ds. \end{align} As $g$ is continuous, $g * \rho_n$ converges to $g$ uniformly on compact sets. It follows from the growth assumption on $g$ that $u^n$ converges pointwise to $u$. Using that $\partial_t g* \rho_n$ converges in $L^k([0,T]\times \R^d)$ to $\partial_t g$ as $n \rightarrow + \infty,$ we deduce with Krylov's inequality in Corollary \ref{corkrylov} that for all $t \in [0,T]$ $$ \int_0^t \E (\partial_t g * \rho_n (s,X_s)) \, ds \rightarrow \int_0^t \E (\partial_t g (s,X_s)) \, ds.$$ The same holds for the two other integrals in \eqref{regularityintime}. Taking the limit $n \rightarrow + \infty$ in \eqref{regularityintime} yields for all $t \in [0,T]$ \begin{align*} u(t,\mu_t) &= u(0,\mu_0) + \int_0^t \E (\partial_t g (s,X_s)) \, ds + \int_0^t \E \left( \partial_x g(s,X_s) \cdot b_s \right) \, ds \\ &+ \frac{1}{2} \int_0^t \E \left( \partial^2_x g (s,X_s) \cdot a_s \right) \, ds. \end{align*} In the general case, when the dependence of the function $u$ on $\mu$ is not explicit, we cannot apply Krylov's inequality. Indeed, consider a function $ u: [0,T] \times \PPP_2(\R^d) \rightarrow \R$ such that, for all $\mu \in \PPP_2(\R^d),$ $u(\cdot,\mu) \in W^{1,k}([0,T]).$ In Itô's formula for $u$, as in the classical formula, there should be the term $ \int_0^t \partial_t u(s,\mu_s) \, ds$. The preceding assumption does not imply that this term is well-defined.
One possible hypothesis is to assume that for all compact $\KK \subset \PPP_2(\R^d),$ $\sup_{\mu \in \KK} |\partial_tu(\cdot,\mu)| \in L^1([0,T]).$ Following our strategy to prove Itô-Krylov's formula, we would consider the mollified version of $u$ defined by $ u^n(t,\mu) := u(\cdot,\mu\star \rho^1_n)*\rho^2_n(t),$ where $(\rho^1_n)_n$ and $(\rho^2_n)_n$ are mollifying sequences on $\R^d$ and on $\R$ respectively. Assume that we have proved Itô's formula for $u^n.$ In order to take the limit and deduce Itô's formula for $u,$ we would like to show that \begin{equation*}\label{hypothesistime} \int_0^T |\partial_t u(\cdot,\mu_s\star\rho_n^1)*\rho_n^2(s) - \partial_t u(s,\mu_s)| \, ds \rightarrow0. \end{equation*} However, this convergence is not obvious in the general case since the presence of $\mu_s$ prevents us from using the classical results on convolution, and we cannot apply Krylov's inequality if the dependence on the measure argument is not linear. \end{Rq} \section{Preliminaries}\label{sectionpreliminaries} \subsection{Krylov's inequality and densities.} The key ingredient in the proof of the theorem is Krylov's inequality. We recall it in the next theorem, taken from \cite{KrylovControlledDiffusionProcesses2009} (see Theorem $4$ in Section $2.3$). \begin{Thm}[Krylov's inequality]\label{inegalitekrylov} Let $b: \R^+ \times \Omega \rightarrow \R^d$ and $ \sigma: \R^+ \times \Omega \rightarrow \R^{d \times d_1}$ be two progressively measurable functions. We assume that $p,d_1 \geq d$. Moreover, assume that there exist $K>0$ and $\delta >0$ such that \begin{enumerate} \item[] \textbf{(A1)} $\forall (t,\omega) \in \R^+ \times \Omega, \, |b_t(\omega)| + | \sigma_t (\omega)| \leq K$ \item[] \textbf{(A2)} $\forall (t,\omega) \in \R^+ \times \Omega, \, \forall \lambda \in \R^d, \, a_t(\omega) \lambda \cdot \lambda \geq \delta |\lambda|^2,$ where $a = \sigma \sigma^*$. \end{enumerate} For $X_0$ an $\R^d$-valued $\FF_0$-measurable random variable, we define the Itô process $X=(X_t)_t,$ for all $t \in [0,T],$ by $$ X_t = X_0 + \int_0^t b_s \, ds + \int_0^t \sigma_s \, dB_s.$$ Let $\lambda >0$ be a constant. Then, there exists a constant $N = N( d,p,\lambda, \delta, K)$ such that for every measurable function $f: \R^+ \times \R^d \rightarrow \R$ $$\E \displaystyle\int_0^{\infty} e^{-\lambda t} |f(t,X_t)| \, dt \leq N \Vert f \Vert_{L^{p+1}(\R^+ \times \R^d)}.$$ \end{Thm} We will use the following corollary for a finite horizon of time. \begin{Cor}\label{corkrylov} If $b$ and $\sigma$ satisfy Assumptions \textbf{(A)} and \textbf{(B)}, there exists $N_1 = N_1(d,p,\delta, K,T)$ such that for every measurable function $f: [0,T] \times \R^d \rightarrow \R,$ we have $$ \E \int_0^T |f(s,X_s)| \, ds \leq N_1(d,p,\delta, K,T) \Vert f \Vert_{L^{p+1}([0,T] \times \R^d)}.$$ \end{Cor} \begin{dem} We set $b_t=b_T$ and $\sigma_t = \sigma_T$ for $t > T$ to guarantee that Assumptions \textbf{(A1)} and \textbf{(A2)} are satisfied, without changing the process $X$ on $[0,T].$ It remains to apply Krylov's inequality with $\lambda = 1$ to $\tilde{f}(t,x) := f(t,x) \1_{ t\in[0,T]}$: since $e^{-\lambda t} \geq e^{-T}$ on $[0,T],$ this gives the existence of $N_1=N_1(d,p,\delta,K)$ such that $$ e^{-T} \E \int_0^T |f(s,X_s)| \, ds \leq N_1(d,p,\delta, K) \Vert f \Vert_{L^{p+1}([0,T] \times \R^d)}.$$ \end{dem} Krylov's inequality also provides the existence of a density with respect to the Lebesgue measure for $\mu_s$, for almost all $s \in [0,T]$.
\begin{Prop}\label{densiteexistence} Under Assumptions $\textbf{(A)}$ and $\textbf{(B)}$ on the coefficients $b$ and $\sigma$, there exists a function $p \in L^{1}([0,T] \times \R^d;\R^+)\cap L^{(d+1)'}([0,T]\times \R^d;\R^+)$ such that for all $f:[0,T]\times \R^d \rightarrow \R^+$ measurable \begin{equation} \int_0^T \E f(s,X_s) \, ds = \int_{[0,T] \times \R^d} f(s,x) p(s,x) \, dx\,ds. \end{equation} If $\tau$ is a stopping time such that $(X_t)_{t \in [0,T]}$ belongs to $B_R$ almost surely on the set $\{ \tau >0\}$, then \begin{equation}\label{inegkrylovdensity} \E \int_0^{\tau \wedge T} f(s,X_s) \, ds \leq \int_{[0,T] \times B_R} f(s,x) p(s,x) \, dx\,ds.\end{equation} Moreover, for almost all $s \in [0,T]$, $\mu_s = \LL(X_s)$ is equal to $p(s,\cdot) \,dx.$ \end{Prop} We give the proof for the sake of completeness. \begin{dem} We denote by $\mu$ the push-forward measure of $\lambda \otimes \P$, where $\lambda$ is the Lebesgue measure on $[0,T],$ by the measurable map $(t,\omega) \in [0,T] \times \Omega \mapsto (t,X_t(\omega)) \in [0,T] \times \R^d$, defined, for any $A \in \BB([0,T])\otimes\BB(\R^d),$ by $$ \mu(A) = \int_0^T \E \1_A(s,X_s) \, ds.$$ Note that $\mu$ is a finite measure on $[0,T]\times \R^d.$ The monotone convergence theorem and Krylov's inequality (Corollary \ref{corkrylov} applied with $p = d$) ensure that for all $f:[0,T]\times \R^d \rightarrow \R^+$ measurable $$ \int_0^T \E f(s,X_s) \, ds = \int_{[0,T]\times \R^d} f(s,x) \, d\mu(s,x) \leq C\|f\|_{L^{d+1}([0,T] \times \R^d)}.$$ Taking $f= \1_{A},$ for $A \in \BB([0,T])\otimes\BB(\R^d)$ with Lebesgue measure $0$, we deduce that $\mu(A)=0.$ Thus $\mu$ is absolutely continuous with respect to the Lebesgue measure on $[0,T]\times\R^d.$ The Radon-Nikodym theorem provides the existence of $p \in L^1([0,T]\times \R^d;\R^+)$ such that for every measurable function $f:[0,T]\times \R^d \rightarrow \R^+$ \begin{equation}\label{equationdensity} \int_0^T \E f(s,X_s) \, ds = \int_{[0,T] \times \R^d} f(s,x) p(s,x) \, dx\,ds. \end{equation} Krylov's inequality shows precisely that the map $ f \in L^{d+1}([0,T]\times \R^d) \mapsto \displaystyle\int_{[0,T] \times \R^d} f(s,x) p(s,x) \, dx\,ds$ is a continuous linear form. Since the dual space of $L^{d+1}([0,T]\times \R^d)$ is $L^{(d+1)'}([0,T] \times \R^d),$ $p$ belongs to $ L^{(d+1)'}([0,T]\times \R^d).$ \\ \noindent To prove \eqref{inegkrylovdensity}, it is enough to notice that $$ \E \int_0^{\tau \wedge T} f(s,X_s) \, ds \leq \E \int_0^T f(s,X_s) \1_{B_R}(X_s) \, ds.$$ Next, we establish that for almost all $s \in [0,T]$, $\mu_s = p(s,\cdot) \,dx.$ We fix $s \in [0,T]$, $n\geq 1$ large enough and $A \in \BB(\R^d).$ Applying \eqref{equationdensity} with $f= \1_{[s-1/n,s+1/n]\times A},$ and using Fubini-Tonelli's theorem, we deduce that $$ \frac n2\int_{s-1/n}^{s+1/n} \P(X_t\in A) \, dt = \frac n2 \int_{s-1/n}^{s+1/n}\int_A p(t,x) \, dx \, dt.$$ Since $t \mapsto \P(X_t \in A)$ is bounded and as Fubini's theorem implies that $ t \mapsto \int_A p(t,x)\, dx$ belongs to $L^1([0,T]),$ it follows from the Lebesgue differentiation theorem (see Theorem 7.7 in \cite{RudinRealcomplexanalysis1987}) that for almost all $s \in [0,T]$ $$ \P(X_s \in A) = \int_A p(s,x) \, dx.$$ We denote by $\mathcal{R}$ the set of all Borel sets in $\R^d$ of the form $ \prod_{i=1}^{d}]a_i,b_i[,$ with $a_i < b_i$ two rational numbers for all $i$. The set $\mathcal{R}$ is at most countable, thus for almost all $s \in [0,T]$ $$ \forall A \in \mathcal{R}, \quad \P(X_s \in A) = \int_A p(s,x) \, dx.$$ The monotone class theorem enables us to conclude.
Note that for almost all $s \in [0,T]$, $p(s,\cdot) \in L^{(d+1)'}(\R^d)$ by Fubini-Tonelli's theorem. We deduce the following corollary. \begin{Cor}\label{cordensite} For almost all $s \in [0,T]$, $ \mu_s \in \PP(\R^d).$ \end{Cor} We now prove two lemmas dealing with the integrability of the density $p.$ \begin{Lemme} \label{integrability1} Let $p$ be the density given by Proposition \ref{densiteexistence}. Then for all $k \geq d+1$ $$ s \in [0,T] \mapsto \Vert p(s,\cdot) \Vert_{L^{k'}(\R^d)} \in L^{k/d}([0,T]).$$ \end{Lemme} \begin{dem}Using Jensen's inequality, which applies since $\frac{k}{dk'} \geq 1$ because $\frac{k}{k'} = k-1\geq d,$ we obtain that $$\begin{aligned} \int_0^T \left(\int_{\R^d} p(s,x)^{k'} \, dx\right)^{\frac{k}{dk'}} \, ds &=\int_0^T \left(\int_{\R^d} p(s,x)^{k'-1} p(s,x) \, dx\right)^{\frac{k}{dk'}} \, ds \\ &\leq \int_0^T\int_{\R^d} p(s,x)^{\frac{k}{dk'}(k'-1) +1} \, dx \, ds. \end{aligned}$$ By definition of the conjugate exponent, $\frac{k}{dk'}(k'-1) = \frac 1d$, so $$ \int_0^T\int_{\R^d} p(s,x)^{\frac{k}{dk'}(k'-1) +1} \, dx \, ds = \int_0^T\int_{\R^d} p(s,x)^{\frac{1}{d} +1} \, dx \, ds,$$ which is finite since $(d+1)' = \frac{1}{d} +1$ and $p \in L^{(d+1)'}([0,T]\times \R^d).$ \end{dem} \begin{Lemme}\label{integrability2} Let $p$ and $q$ be the densities, given by Proposition \ref{densiteexistence}, of two Itô processes of the form \eqref{itoprocess} satisfying \textbf{(A)} and \textbf{(B)}. Then for $k,\alpha \in \N$ such that $ k \geq \max \{d+1, d(\alpha +1)\},$ we have $$ \int_0^T \Vert p(s,\cdot) \Vert_{L^{k'}(\R^d)}^{\alpha} \Vert q(s,\cdot) \Vert_{L^{k'}(\R^d)} \, ds < + \infty.$$ \end{Lemme} \begin{dem} Owing to Lemma \ref{integrability1}, the function $s \mapsto \Vert q(s,\cdot) \Vert_{L^{k'}(\R^d)}$ belongs to $L^1([0,T]) \cap L^{k/d}([0,T]).$ Using Hölder's inequality, the proof is complete once we prove that $ s\mapsto \Vert p(s,\cdot) \Vert_{L^{k'}(\R^d)}^{\alpha}$ belongs to $L^r([0,T])$ for some $r \geq \left( \frac{k}{d}\right)'.$ Lemma \ref{integrability1} ensures that $ s\mapsto \Vert p(s,\cdot) \Vert_{L^{k'}(\R^d)}^{\alpha} \in L^{\frac{k}{\alpha d}}([0,T]),$ so it suffices to check that $ \left( \frac{k}{d}\right)' \leq \frac{k}{\alpha d},$ which is equivalent to our assumption $k \geq d(\alpha +1).$ \end{dem} \subsection{Classical results on convolution and regularization.} Fix $p \in [1,+\infty[.$ We will need the two following well-known lemmas. \begin{Lemme}[Convolution]\label{convolution} \begin{enumerate} \item[-] For all $f \in L^p(\R^d)$ and for all $g \in L^1(\R^d)$, the convolution $f * g$ is well-defined and belongs to $L^p(\R^d)$. Moreover, we have $\Vert f * g \Vert_{L^p} \leq \Vert f \Vert_{L^p} \Vert g \Vert_{L^1}.$ \item[-] For all $f \in L^p(\R^d)$ and for all $g \in L^{p'}(\R^d)$, the convolution $f * g$ is well-defined and belongs to $L^{\infty}(\R^d)$. Moreover, we have $\Vert f * g \Vert_{L^{\infty}} \leq \Vert f \Vert_{L^p} \Vert g \Vert_{L^{p'}}.$ \end{enumerate} \end{Lemme} \begin{Lemme}[Regularization]\label{regularisation} \ Recall that $(\rho_n)_n$ is a mollifying sequence. \begin{enumerate} \item[-] Let $f \in L^1_{\text{loc}}(\R^d)$ and $\rho \in \CC^{\infty}_c(\R^d)$. Then $f * \rho \in \CC^{\infty}(\R^d)$ and $ \forall\alpha \in \N^d,$ $ \partial^{\alpha} ( f * \rho) = f * \partial^{\alpha} \rho.$ \item[-] If $f \in L^p(\R^d)$, then $f * \rho_n \overset{L^p}{\longrightarrow} f$, and if $f \in \CC^0(\R^d)$, $f * \rho_n\rightarrow f$ uniformly on compact sets. \item[-] If $f \in L^p_{loc}(\R^d)$, then for all $R>0$, $f * \rho_n \rightarrow f $ in $L^p(B_R).$ \end{enumerate} \end{Lemme}
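\begin{Rq} The second point of Lemma \ref{regularisation} is illustrated by the following numerical sketch (the grid sizes and the test function are ad hoc choices, and the mollifier is only normalized at the discrete level): for a discontinuous $f \in L^2(\R)$, the $L^2$ distance between $f * \rho_n$ and $f$ decreases to $0$ as $n$ grows.
\begin{verbatim}
import numpy as np

x = np.linspace(-3.0, 3.0, 6001)
dx = x[1] - x[0]

def rho_n_vals(n):
    # Values of rho_n(x) = n rho(n x) on the grid, rho the standard bump,
    # renormalized so that the discrete integral equals 1
    u = n * x
    v = np.where(np.abs(u) < 1.0,
                 np.exp(-1.0 / (1.0 - np.clip(u * u, 0.0, 1.0 - 1e-12))),
                 0.0)
    return v / (v.sum() * dx)

f = np.sign(x) * (np.abs(x) <= 1.0)   # jumps at -1, 0 and 1

for n in (2, 8, 32, 128):
    f_n = np.convolve(f, rho_n_vals(n), mode="same") * dx   # f * rho_n
    err = np.sqrt(np.sum((f_n - f) ** 2) * dx)              # L^2 error
    print(n, err)
\end{verbatim}
\end{Rq}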
The following proposition will also be useful. \begin{Prop}\label{deriveefaible} Let $f \in \CC^0(\R^d)$ be a function admitting distributional derivatives of orders $1$ and $2$ in $L^1_{loc}(\R^d).$ Then $f * \rho_n \in \CC^{\infty}(\R^d)$ and for all $i,j \in \{1, \dots d \}$ $$ \left\{ \begin{array}{rll} \partial_{x_i} ( f * \rho_n) &= \partial_{x_i}f * \rho_n \\ \partial_{x_i \, x_j} ( f * \rho_n) &= \partial_{x_i \, x_j}f * \rho_n. \end{array}\right.$$ \end{Prop} The next lemma deals with the convolution of a function $f \in L^p$ with $\mu \in \PPP(\R^d).$ \begin{Lemme}\label{continuiteconvolution} Let $f \in L^{p}(\R^d)$. Then $\mu \in \PPP(\R^d) \mapsto f \star \mu \in L^p(\R^d)$ is continuous, where $\PPP(\R^d)$ is endowed with the topology of weak convergence. \end{Lemme} \begin{dem}Note that the convolution $f \star \mu$ is well-defined as an element of $L^p(\R^d)$ thanks to Jensen's inequality, which shows that $$ \forall f \in L^p(\R^d), \, \forall \mu \in \PPP(\R^d),\, \Vert f \star \mu \Vert_{L^p} \leq \Vert f \Vert_{L^p}.$$ Let $(\mu_n)_n$ be a sequence of $\PPP(\R^d)$ weakly convergent to $\mu \in \PPP(\R^d)$. Using Skorokhod's representation theorem (see Theorem $6.7$ in \cite{BillingsleyConvergenceProbabilityMeasures1999}), there exist a probability space $(\Omega',\FF', \P')$ and a sequence of random variables $(X_n)_n$ converging $\P'$-almost surely to a random variable $X$ such that the law of $X_n$ is $\mu_n$ for all $n$ and the law of $X$ is $\mu$. For any $a \in \R^d,$ let us denote by $\tau_a f$ the translation of $f$ defined, for all $x \in \R^d,$ by $\tau_a f(x) := f(x-a).$ Jensen's inequality and Fubini-Tonelli's theorem yield \begin{align*} \Vert f \star \mu_n - f \star \mu \Vert_{L^p}^p &= \int_{\R^d} \left\vert \E'(f(x-X_n) - f(x-X))\right\vert^p \, dx \\ &\leq \int_{\R^d} \E'(\left\vert f(x-X_n) - f(x-X)\right\vert^p) \, dx \\ &= \E' (\Vert \tau_{X_n-X}f - f \Vert_{L^p}^p). \end{align*} It follows from the almost sure convergence of $(X_n)_n$ to $X$ and the continuity of the translation operator in $L^p$ that $\Vert \tau_{X_n-X}f - f \Vert_{L^p}^p \overset{a.s.}{\longrightarrow} 0.$ Moreover, the inequality \begin{align*} \Vert \tau_{X_n-X}f - f \Vert_{L^p}^p &\leq 2^{p-1} ( \Vert \tau_{X_n-X}f \Vert_{L^p}^p + \Vert f \Vert_{L^p}^p )\\ &= 2^{p} \Vert f \Vert_{L^p}^p, \end{align*}enables us to conclude with the dominated convergence theorem. \end{dem} \subsection{Convolution of probability measures} \begin{Lemme}[Contraction inequality]\label{convolutionmesure} Fix $\mu,\nu,m \in \PPP_2(\R^d)$. Then, we have $$ W_2(\mu \star m, \nu \star m) \leq W_2 ( \mu, \nu).$$ \end{Lemme} \begin{dem}Let $\pi \in \PPP_2(\R^d \times \R^d)$ be an optimal coupling between $\mu$ and $\nu$. We consider a couple of random variables $(X,Y)$ with law $\pi,$ and a random variable $Z$ independent of $(X,Y)$ with law $m$. The law of $X+Z$ being $\mu \star m$ and the law of $Y+Z$ being $\nu \star m$, one has \begin{align*} W_2(\mu \star m, \nu \star m) \leq \Vert (X+Z)-(Y+Z) \Vert_{L^2} = \Vert X - Y \Vert_{L^2} = W_2(\mu,\nu). \end{align*} \end{dem}
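\begin{Rq} The contraction inequality is easy to observe on empirical measures (a minimal sketch in dimension $1$, where the $W_2$ distance between two empirical measures with the same number of atoms is given exactly by the sorted, i.e. quantile, coupling; the distributions below are ad hoc choices):
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(1)
n = 10000

def w2_1d(x, y):
    # W_2 between the empirical measures of x and y (quantile coupling)
    return np.sqrt(np.mean((np.sort(x) - np.sort(y)) ** 2))

x = rng.normal(0.0, 1.0, n)       # samples from mu
y = rng.exponential(1.0, n)       # samples from nu
z = rng.uniform(-1.0, 1.0, n)     # samples from m, independent of (x, y)

print(w2_1d(x, y))                # approximates W_2(mu, nu)
print(w2_1d(x + z, y + z))        # approximates W_2(mu*m, nu*m): smaller
\end{verbatim}
Adding the common independent noise $Z$ reproduces the coupling used in the proof: $X+Z \sim \mu \star m$ and $Y+Z \sim \nu \star m$. \end{Rq}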
The next corollary follows from the fact that $\rho_n \overset{W_2}{\longrightarrow} \delta_0.$ \begin{Cor}\label{regularisationconvolutionmeasure} For all $\mu \in \PPP_2(\R^d),$ $ \mu \star \rho_n \overset{W_2}{\longrightarrow} \mu.$ \end{Cor} \subsection{Measurability} We will need the following lemma to guarantee that, for $u \in \mathcal{W}_1(\R^d),$ we can find versions of $\partial_v \del$ and $\partial^2_v \del$ that are measurable with respect to $(\mu,v) \in \PP(\R^d) \times \R^d$. \begin{Lemme}\label{measurabilty} Let $u: E \rightarrow L^{k}(\R^d)$ be a continuous function, where $E$ is a metric space and $k >1$. Then, for all $x \in E$, we can find a version of $u(x)$ such that $(x,v) \in E \times \R^d \mapsto u(x)(v)$ is measurable with respect to $\BB(E)\otimes \BB(\R^d).$ \end{Lemme} \begin{dem} For $(x,v) \in E \times \R^d$, we define $$\tilde{u}(x,v) = \underset{n \rightarrow + \infty}{\underline{\lim}} \frac{1}{\lambda(B(v,1/n))} \int_{B(v,1/n)} u(x)(y) \, dy = \underset{n \rightarrow + \infty}{\underline{\lim}} u^n(x,v),$$ where $\lambda$ denotes the Lebesgue measure on $\R^d$. From the Lebesgue differentiation theorem (see Theorem 7.7 in \cite{RudinRealcomplexanalysis1987}), we deduce that for all $x \in E$, $\tilde{u}(x,\cdot) = u(x)$ $\lambda$-almost everywhere. We now prove that for all $n\geq 1$, $u^n$ is continuous, which yields the measurability of $\tilde{u}$. Note that $\frac{1}{\lambda(B(v,1/n))}$ does not depend on $v$. The continuity of $u^n$ follows from the continuity of $x \in E \mapsto u(x) \in L^{k}(\R^d)$, of $v \in \R^d \mapsto \1_{B(v,1/n)} \in L^{k'}(\R^d)$ (coming from the dominated convergence theorem), and of $(f,g) \in L^{k}(\R^d) \times L^{k'}(\R^d) \mapsto \int_{\R^d} fg \, dx.$ \end{dem} \section{Proof of Theorem \ref{ito_krylov}}\label{sectionproof1} The proof is divided into three parts. Step 1 is dedicated to proving that all the terms in Itô-Krylov's formula \eqref{formulaito1} are well-defined. In Step 2, we regularize $u$ by convolution of the measure argument with a mollifying sequence $(\rho_n)_n.$ The effect of replacing $u(\mu)$ by $u(\mu \star \rho_n)$ is that the linear derivative is regularized by convolution in its space variable. Then, we apply the standard Itô formula for a flow of measures. We finally take the limit $n \rightarrow + \infty$ in Step $3$ with the help of Krylov's inequality. \\ \noindent\textbf{Step 1: All the terms in \eqref{formulaito1} are well-defined.}\\ Let us show that the two integrals in \eqref{formulaito1} are well-defined.\\ \noindent\textbf{Measurability.} Thanks to Lemma \ref{measurabilty}, we can find a version of $\partial_v \del $ which is measurable with respect to $(\mu,v) \in \PP(\R^d)\times \R^d.$ To conclude, we prove that $s \mapsto \mu_s \in \PP(\R^d)$ is measurable. Indeed, if this is the case, the function $(s,\omega) \in [0,T]\times \Omega \mapsto \partial_v \del(\mu_s)(X_s(\omega))\cdot b_s(\omega)$ is measurable by composition. First, note that $\mu_s \in \PP(\R^d)$ for almost all $s \in [0,T]$ (see Corollary \ref{cordensite}), so we can change $\mu_s$ on a negligible set of times $s$ to ensure that $\mu_s \in \PP(\R^d)$ for all $s \in [0,T].$ But $\mu_s = \displaystyle\lim_{n\rightarrow+\infty} \mu_s\star \rho_n$ for $d_{\PP}$ by Assumption \textbf{(H2)} in Definition \ref{defespaceP}. It remains to show that $s \mapsto \mu_s \star \rho_n \in \PP(\R^d)$ is continuous and thus measurable for all $n$.
This follows from the continuity of $s \mapsto \mu_s \in \PPP_2(\R^d)$ and from Assumption \textbf{(H1)} in Definition \ref{defespaceP}. \\ \noindent \textbf{Integrability.} We can omit the coefficients $b$ and $a$ to prove the integrability properties because they are uniformly bounded. Taking advantage of the existence of a density coming from Proposition \ref{densiteexistence}, we have by Hölder's inequality \begin{align*} \int_0^T \E \left| \partial_v \del (\mu_s)(X_s) \right| \, ds & = \int_0^T \int_{\R^d} \left| \partial_v \del (\mu_s)(x) \right| p(s,x) \, dx\, ds \\ &\leq \int_0^T \left\Vert \partial_v \del(\mu_s)(\cdot)\right\Vert_{L^k(\R^d)} \Vert p(s,\cdot)\Vert_{L^{k'}(\R^d)} \, ds \\ &\leq \int_0^T C \left(1+ \Vert p(s,\cdot) \Vert_{L^{k'}(\R^d)}^{\alpha}\right) \Vert p(s,\cdot)\Vert_{L^{k'}(\R^d)} \, ds, \end{align*} for some constant $C$ coming from Assumption $(2)$ in Definition \ref{sobolevW1}, since the flow $(\mu_s)_{s\leq T}$ is compact in $\PPP_2(\R^d)$ and $\mu_s$ belongs to $\PP(\R^d)$ for almost all $s.$ The last bound is finite thanks to Lemma \ref{integrability1} since $ k\geq \max \{d(\alpha+1),d+1\}.$ The same properties hold for the term involving $\partial^2_v \del.$\\ \noindent\textbf{Step 2: Itô's formula for the mollification of $\boldsymbol{u}.$} \\[1pt] For $n \geq 1$, we set $u^n: \mu \in \PPP_2(\R^d) \mapsto u (\mu \star \rho_n).$ By standard arguments, for each $ n \geq 1,$ $u^n$ has a linear derivative given by \begin{align*} \frac{\delta u^n}{\delta m} (\mu)(v) = \int_{\R^d} \del (\mu \star \rho_n)(x) \rho_n(v-x)\, dx = \del (\mu \star \rho_n) * \rho_n (v). \end{align*} Now, we aim at applying the standard Itô formula for a flow of probability measures (see for example Theorem 5.99 in Chapter $5$ of \cite{CarmonaProbabilisticTheoryMean2018} with the L-derivative) to $u^n$ for a fixed $n \geq 1.$ \\ \textbf{(i) Regularity of $\boldsymbol{\frac{\delta u^{n}}{\delta m}(\mu)}$ for a fixed $\boldsymbol{\mu \in \PPP_2(\R^d)}.$} Since for all $\mu \in \PPP_2(\R^d)$, $ \mu \star \rho_{n} \in \PP(\R^d),$ Proposition \ref{deriveefaible} implies that $\frac{\delta u^{n}}{\delta m}(\mu)(\cdot) \in \CC^{\infty}(\R^d)$ and for all $i,j \in \{1, \dots, d\}$ $$ \partial_{v_i} \frac{\delta u^{n}}{\delta m}(\mu) = \partial_{v_i} \frac{\delta u}{\delta m}(\mu \star \rho_{n}) * \rho_{n} \quad \text{and}\quad \partial_{v_i \, v_j} \frac{\delta u^{n}}{\delta m}(\mu) = \partial_{v_i \, v_j}\del (\mu \star \rho_{n}) * \rho_{n}.$$
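To illustrate the expression of $\frac{\delta u^n}{\delta m}$ and point (i) on the simplest possible example (a sanity check, not used below), take $u(\mu) = \int_{\R^d} \varphi \, d\mu$ with $\varphi$ continuous and bounded, so that $\del(\mu)(v) = \varphi(v)$ does not depend on $\mu$. If $\rho_n$ is even, then $$ u^n(\mu) = \int_{\R^d} \varphi \, d(\mu \star \rho_n) = \int_{\R^d} \varphi * \rho_n \, d\mu, \quad \text{whence} \quad \frac{\delta u^n}{\delta m}(\mu)(v) = \varphi * \rho_n(v) = \del(\mu \star \rho_n) * \rho_n(v),$$ in accordance with the formula above.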
\textbf{(ii) Continuity of $\boldsymbol{\partial_v\frac{\delta u^{n}}{\delta m}}$ and $\boldsymbol{\partial^2_v\frac{\delta u^{n}}{\delta m}}$ with respect to $\boldsymbol{(\mu,v)}.$} Let $ i \in \{1, \dots ,d \}$, $(\mu_m)_m \in \PPP_2(\R^d)^{\N}$ and $(v_m)_m \in (\R^d)^{\N}$ be sequences converging respectively to $\mu$ and $v$. We have \begin{align*} &\left\vert \partial_{v_i} \frac{\delta u^{n}}{\delta m}(\mu_m)(v_m) - \partial_{v_i} \frac{\delta u^{n}}{\delta m}(\mu)(v) \right\vert \\&\leq \left\vert \partial_{v_i} \frac{\delta u^{n}}{\delta m}(\mu_m)(v_m) - \partial_{v_i} \frac{\delta u^{n}}{\delta m}(\mu)(v_m) \right\vert+ \left\vert \partial_{v_i} \frac{\delta u^{n}}{\delta m}(\mu)(v_m) - \partial_{v_i} \frac{\delta u^{n}}{\delta m}(\mu)(v) \right\vert \\ &=: D_1 + D_2. \end{align*} $D_2$ converges to $0$ when $m \rightarrow + \infty$ by $(i).$ For $D_1$, the convolution inequality $L^k*L^{k'}$ gives that \begin{align*} D_1 & = \left\vert \partial_{v_i} \frac{\delta u}{\delta m}(\mu_m \star \rho_{n}) * \rho_{n} (v_m) - \partial_{v_i} \frac{\delta u}{\delta m}(\mu \star \rho_{n}) * \rho_{n}(v_m) \right\vert \\ &\leq \left\Vert \partial_{v_i} \frac{\delta u}{\delta m}(\mu_m \star \rho_{n}) - \partial_{v_i} \frac{\delta u}{\delta m}(\mu \star \rho_{n}) \right\Vert_{L^{k}} \Vert \rho_{n} \Vert_{L^{k'}}. \end{align*} Assumption \textbf{(H1)} in Definition \ref{defespaceP} provides that $ \mu_m \star \rho_{n} \overset{d_{\PP}}{\longrightarrow} \mu \star \rho_{n}$ when $ m \rightarrow + \infty.$ Finally, using the first assumption in Definition \ref{sobolevW1}, we conclude that $D_1$ converges to $0$ when $m \rightarrow + \infty$. This shows the continuity of $\partial_v\frac{\delta u^{n}}{\delta m}$ on $\PPP_2(\R^d) \times \R^d$. The same reasoning proves the joint continuity of $\partial^2_v\frac{\delta u^{n}}{\delta m}$.\\ \textbf{(iii) Boundedness of $\boldsymbol{\partial_v\frac{\delta u^{n}}{\delta m}}$ and $\boldsymbol{\partial^2_v\frac{\delta u^{n}}{\delta m}}.$} Let $\KK \subset \PPP_2(\R^d)$ be a compact set. For $\mu \in \KK$ and $v \in \R^d$, one has \begin{align*} \left\vert \partial_{v_i}\frac{\delta u^{n}}{\delta m}(\mu)(v) \right\vert &\leq \left\Vert \partial_{v_i} \frac{\delta u}{\delta m}(\mu \star \rho_{n}) \right\Vert_{L^k} \Vert \rho_{n} \Vert_{L^{k'}}. \end{align*} The set $ \{\mu \star \rho_{n}, \, \mu \in \KK \}$ is compact in $(\PP(\R^d),d_{\PP})$ as the image of the compact set $\KK$ under the map $ \mu \in \PPP_2(\R^d) \mapsto \mu \star \rho_{n} \in \PP(\R^d)$, which is continuous by Assumption (\textbf{H1}) in Definition \ref{defespaceP}. The first assumption in Definition \ref{sobolevW1} guarantees that $ \sup_{\mu \in \KK} \left\Vert \partial_{v_i} \frac{\delta u}{\delta m}(\mu \star \rho_{n})\right\Vert_{L^{k}(\R^d)} < + \infty $ and thus $$ \underset{v \in \R^d}{\sup}\sup_{\mu \in \KK} \left\vert \partial_{v}\frac{\delta u^{n}}{\delta m}(\mu)(v) \right\vert< \infty.$$ The same property holds for $\partial^2_{v}\frac{\delta u^{n}}{\delta m}.$\\ \noindent We can thus apply Itô's formula of \cite{CarmonaProbabilisticTheoryMean2018} to obtain that for all $n \geq 1$ and for all $t \in [0,T]$ \begin{equation}\label{formulau_nito1} u^n(\mu_t) = u^n(\mu_0) + \int_0^t \E \left( \partial_v \frac{\delta u^n}{\delta m} (\mu_s)(X_s)\cdot b_s\right) \,ds + \frac{1}{2} \int_0^t \E \left( \partial^2_v \frac{\delta u^n}{\delta m}(\mu_s)(X_s)\cdot a_s\right) \,ds.\end{equation} \noindent \textbf{Step 3: Letting $\boldsymbol{n \rightarrow + \infty}$.} \\[1pt] Our aim is now to take the limit $n \rightarrow + \infty$ in \eqref{formulau_nito1}. As for all $\mu \in \PPP_2(\R^d),$ $\mu \star \rho_n \overset{W_2}{\longrightarrow} \mu$ and $u$ is continuous on $\PPP_2(\R^d),$ we deduce that $(u^n)_n$ converges pointwise to $u$. It remains to take the limit in the two integrals of \eqref{formulau_nito1}.
We show that \begin{equation}\label{term1Ito1} \int_0^t \E \left( \partial_v \frac{\delta u^n}{\delta m} (\mu_s)(X_s)\cdot b_s\right) \,ds \rightarrow \int_0^t \E \left( \partial_v \frac{\delta u}{\delta m} (\mu_s)(X_s)\cdot b_s\right) \,ds.\end{equation} Since $b$ is uniformly bounded, it is enough to prove that $$ \E \int_0^T \left| \partial_v \del (\mu_s \star \rho_n)* \rho_n (X_s) - \partial_v \del (\mu_s)(X_s) \right| \, ds \rightarrow 0.$$ By Proposition \ref{densiteexistence}, Hölder's inequality and then the $L^1*L^k$ convolution inequality, one has \begin{align*} &\E \int_0^T \left| \partial_v\del (\mu_s \star \rho_n)* \rho_n (X_s) - \partial_v \del (\mu_s)(X_s) \right| \, ds \\ &\leq \int_0^T \left\Vert \partial_v \del (\mu_s \star \rho_n) - \partial_v \del (\mu_s) \right\Vert_{L^k(\R^d)} \Vert p(s,\cdot) \Vert_{L^{k'}(\R^d)} \, ds \\ &+ \int_0^T \left\Vert \partial_v \del (\mu_s)* \rho_n - \partial_v \del (\mu_s) \right\Vert_{L^k(\R^d)} \Vert p(s,\cdot) \Vert_{L^{k'}(\R^d)} \, ds\\ &=: I_1 + I_2. \end{align*} The integrand in $I_1$ converges to $0$ for almost all $s$ using Assumption (1) in Theorem \ref{ito_krylov} and the fact that $ \mu_s \star \rho_n \overset{d_{\PP}}{\longrightarrow} \mu_s$ for almost all $s$ thanks to Assumption \textbf{(H2)} in Definition \ref{defespaceP}. Let us now prove that the dominated convergence theorem applies. The integrand is bounded by $$ \left[\sup_{n\geq 1} \left\Vert \partial_v \del (\mu_s \star \rho_n) \right\Vert_{L^k(\R^d)} + \left\Vert \partial_v \del (\mu_s ) \right\Vert_{L^k(\R^d)}\right] \Vert p(s,\cdot) \Vert_{L^{k'}(\R^d)}.$$ Note that the set $\{\mu_s \star \rho_n, \, s \in [0,T], \, n \geq 1 \} \cup \{ \mu_s, \, s \in [0,T] \}$ is compact in $\PPP_2(\R^d).$ Indeed, if $(s_k)_k \in [0,T]^{\N}$ and $(n_k)_k \in \N^{\N}$ are two sequences, we have to extract a convergent subsequence of $(\mu_{s_k}\star \rho_{n_k})_k$. Up to an extraction, we can assume that $(s_k)_k$ converges to some $s \in [0,T]$. There are two cases. If there exists $l$ such that $n_k =l$ infinitely often, then $\mu_{s_k} \star \rho_l \overset{W_2}{\longrightarrow} \mu_s \star \rho_l$ by the contraction inequality (see Lemma \ref{convolutionmesure}). Otherwise, we can assume that $(n_k)_k$ converges to $+ \infty$. We use the triangle inequality to get \begin{align*} W_2 (\mu_{s_k} \star \rho_{n_k}, \mu_s) &\leq W_2(\mu_{s_k} \star \rho_{n_k},\mu_{s} \star \rho_{n_k}) + W_2(\mu_{s} \star \rho_{n_k}, \mu_s).
\end{align*} The last term converges to $0$ owing to Corollary \ref{regularisationconvolutionmeasure}, and the first one is bounded by $W_2(\mu_{s_k},\mu_s)$ by the contraction inequality (see Lemma \ref{convolutionmesure}), which converges to $0.$ Thus Assumption $(2)$ in Definition \ref{sobolevW1} ensures that there exists $C>0$ such that for almost all $s \in [0,T]$ and for all $n$ $$\left\Vert \partial_v \del (\mu_s \star \rho_n) \right\Vert_{L^k(\R^d)} + \left\Vert \partial_v \del (\mu_s ) \right\Vert_{L^k(\R^d)} \leq C (1+\Vert p(s,\cdot)*\rho_n \Vert_{L^{k'}(\R^d)}^{\alpha} + \Vert p(s,\cdot) \Vert_{L^{k'}(\R^d)}^{\alpha}).$$ It follows from the convolution inequality $L^{k'}*L^1$ that for almost all $s$ \begin{align*} &\left[\sup_{n\geq 1} \left\Vert \partial_v \del (\mu_s \star \rho_n) \right\Vert_{L^k(\R^d)} + \left\Vert \partial_v \del (\mu_s ) \right\Vert_{L^k(\R^d)}\right] \Vert p(s,\cdot) \Vert_{L^{k'}(\R^d)} \\ &\leq 2C (1 + \Vert p(s,\cdot) \Vert_{L^{k'}(\R^d)}^{\alpha} )\Vert p(s,\cdot) \Vert_{L^{k'}(\R^d)}, \end{align*} which is integrable on $[0,T]$ thanks to Lemma \ref{integrability1} since $ k\geq \max \{d(\alpha+1),d+1\}.$ We conclude by the dominated convergence theorem that $I_1$ converges to $0$. The term $I_2$ also converges to $0$ following the same method. Indeed, for almost all $s$, $\partial_v\del(\mu_s)(\cdot) \in L^k(\R^d)$, thus the integrand converges to $0$ by Lemma \ref{regularisation}, and we conclude with the dominated convergence theorem. Therefore \eqref{term1Ito1} is proved. Following the same lines, we take the limit $n \rightarrow + \infty$ in the last integral of \eqref{formulau_nito1} to obtain that for all $t \in [0,T]$ $$ \int_0^t \E \left( \partial^2_v \frac{\delta u^n}{\delta m} (\mu_s)(X_s)\cdot a_s\right) \,ds \rightarrow \int_0^t \E \left( \partial^2_v \frac{\delta u}{\delta m} (\mu_s)(X_s)\cdot a_s\right) \,ds.$$ This concludes the proof of Theorem \ref{ito_krylov}. \hfill$\square$
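\begin{Rq} As an elementary consistency check of the shape of \eqref{formulaito1} (this $u$ is smooth and covered by the classical formula, so it does not require the $L^k$ assumptions of Theorem \ref{ito_krylov}), take $d=1$, $X_t = B_t$, i.e. $b \equiv 0$ and $a \equiv 1$, and $u(\mu) = \int_{\R} v^2 \, d\mu(v)$. Then $\del(\mu)(v) = v^2$, $\partial_v \del(\mu)(v) = 2v$ and $\partial^2_v \del(\mu)(v) = 2$, so that \eqref{formulaito1} reads $$ u(\mu_t) = u(\mu_0) + \frac12 \int_0^t \E(2) \, ds = t,$$ which is indeed $\E B_t^2.$ \end{Rq}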
\section{Proof of Theorem \ref{extensionito}}\label{sectionproof2} The strategy of the proof is the following. In Step 1, we prove some integrability results coming from Assumption $(5)$ in Definition \ref{sobolevW}. Step 2 is devoted to proving that all the terms in Itô-Krylov's formula \eqref{formulaito2} are well-defined, using a localization argument, Krylov's inequality, and Step 1. Moreover, we see that it is enough to prove the formula up to random times localizing the process $\xi.$ Step 3 is dedicated to regularizing $u$ using convolutions both in the space and measure variables. In Steps 4 and 5, we follow the strategy of the proof of Theorem $5.102$ in \cite{CarmonaProbabilisticTheoryMean2018} to prove Itô-Krylov's formula for $u^n$, the mollified version of $u.$ Finally, Step 6 aims at taking the limit $ n \rightarrow + \infty$ thanks to Krylov's inequality.\\ Note that there are three kinds of integrals in Itô's formula \eqref{formulaito2}: the terms involving standard time and space derivatives in the first line, those involving the linear derivative in the second line, and the martingale term in the third line. We will treat them separately. \\ \noindent\textbf{Step 1: Useful integrability results.}\\ It follows from Assumption $(5)$ in Definition \ref{sobolevW} and Lemma \ref{integrability2} that for any $M>0$ the following quantities are finite: \begin{align} \label{bound1}&J_1(M):= \displaystyle\int_0^T \left[\sup_{n \geq 1} \left\Vert \partial_x u(s,\cdot,\mu_s\star \rho_n) \right\Vert_{L^{k_1}(B_M)} + \sup_{n \geq 1}\left\Vert\partial^2_x u (s,\cdot,\mu_s\star \rho_n) \right\Vert_{L^{k_1}(B_M)} \right] \Vert q(s,\cdot)\Vert_{L^{k_1'}(B_M)} \, ds,\\ \notag&J_2(M):=\displaystyle\int_0^T \sup_{n \geq 1} \left\Vert \partial_x u(s,\cdot,\mu_s\star \rho_n) \right\Vert_{L^{2k_1}(B_M)}^2 \Vert q(s,\cdot)\Vert_{L^{k_1'}(B_M)} \, ds,\\\notag&J_3(M):=\displaystyle\int_0^T \sup_{n \geq 1} \left\Vert \partial_v \del(s,\cdot,\mu_s\star \rho_n)(\cdot) \right\Vert_{L^{k_2}(B_M\times\R^{d})} \Vert q(s,\cdot)\Vert_{L^{k_2'}(B_M)} \Vert p(s,\cdot)\Vert_{L^{k_2'}(\R^d)} \, ds,\\\notag&J_4(M):=\displaystyle\int_0^T \sup_{n \geq 1}\left\Vert\partial^2_v \del (s,\cdot,\mu_s\star \rho_n)(\cdot) \right\Vert_{L^{k_2}(B_M\times\R^{d})} \Vert q(s,\cdot)\Vert_{L^{k_2'}(B_M)} \Vert p(s,\cdot)\Vert_{L^{k_2'}(\R^d)} \, ds. \end{align} To prove this, we follow the method employed in Step $3$ of the preceding proof to justify the dominated convergence theorem. We only give details for $J_2(M)$ since it requires a bit more attention. Owing to Assumption $(2)$ in Definition \ref{sobolevW}, we know that for all $(t,\mu) \in [0,T] \times \PP(\R^d),$ $\partial_x u(t,\cdot,\mu) \in W^{1,k_1}(B_M).$ The Sobolev embedding theorem (see Corollary 9.14 in \cite{BrezisFunctionalAnalysisSobolev2010}) ensures that the embedding $ W^{1,k_1}(B_M) \hookrightarrow L^{2k_1}(B_M)$ is continuous since $ k_1 \geq d+1.$ Thus there exists $C>0$ such that $$ \forall t \in [0,T], \, \forall \mu \in \PP(\R^d),\, \Vert \partial_x u(t,\cdot,\mu) \Vert_{L^{2k_1}(B_M)} \leq C \left(\Vert\partial_x u(t,\cdot,\mu) \Vert_{L^{k_1}(B_M)} + \Vert\partial_x^2 u(t,\cdot,\mu) \Vert_{L^{k_1}(B_M)}\right).$$ Thanks to Assumption $(5)$ in Definition \ref{sobolevW}, there exists a constant $C_{M}>0$ such that for almost all $s$ $$ \sup_{n\geq 1} \Vert \partial_x u(s,\cdot,\mu_s\star \rho_n) \Vert^2_{L^{2k_1}(B_M)} \leq C_{M} \left(1 + \Vert p(s,\cdot) \Vert_{L^{k_1'}(\R^d)}^{2\alpha_1}\right),$$ where we used the fact that $\{ \mu_s \star \rho_n, \, s \in [0,T], \, n \geq 1 \}$ is relatively compact in $\PPP_2(\R^d)$ and the convolution inequality $L^{k_1'}*L^1.$ We conclude with Lemma \ref{integrability2} since $k_1 \geq \max \{ d(2\alpha_1+1),d+1\}.$ Note that these integrability properties remain true if we replace $\mu_s \star \rho_n$ by $\mu_s$ and remove the supremum. We justify this only for $J_2(M)$. It follows from the continuity assumption $(2)$ in Definition \ref{sobolevW} that for almost all $s \in [0,T]$ $$ \partial_x u(s,\cdot,\mu_s\star \rho_n) \overset{W^{1,k_1}(B_M)}{\longrightarrow} \partial_x u(s,\cdot,\mu_s),$$ because $\mu_s \star \rho_n \overset{d_{\PP}}{\longrightarrow} \mu_s$ for almost all $s$.
The Sobolev embedding theorem then guarantees that, for almost all $s \in [0,T]$, $$ \Vert \partial_x u(s,\cdot,\mu_s\star \rho_n) \Vert_{L^{2k_1}(B_M)} \rightarrow \Vert \partial_x u(s,\cdot,\mu_s) \Vert_{L^{2k_1}(B_M)}.$$ Thus we obtain \begin{equation*} \int_0^T \Vert \partial_x u(s,\cdot,\mu_s) \Vert_{L^{2k_1}(B_M)}^2 \Vert q(s,\cdot)\Vert_{L^{k_1'}(B_M)} \, ds \leq J_2(M) <+ \infty.\end{equation*} \\ \noindent\textbf{Step 2: Meaning of the terms in \eqref{formulaito2} and localization.}\\ Let $(T_M)_M$ be the sequence of stopping times converging almost surely to $T$ defined by $$T_M= \inf \{ t \in[0,T],\, |\xi_t| \geq M \}\wedge T.$$ Let $\xi^M_t = \xi_{t\wedge T_M},$ which is bounded by $M$ on the set $\{T_M > 0 \}$. \\ \noindent\textbf{(i) Terms involving standard derivatives in \eqref{formulaito2}.} We prove that almost surely $$\int_0^T |\partial_x u(s,\xi_s,\mu_s)\cdot\eta_s| \, ds < + \infty.$$ By Proposition \ref{densiteexistence} and Hölder's inequality, one has \begin{align*}\E \int_0^{T\wedge T_M} |\partial_x u (s, \xi_s,\mu_s)| \,ds &\leq \int_0^T \int_{B_{M}} |\partial_x u (s, x,\mu_s)|q(s,x) \, dx\, ds\\ &\leq \int_0^T \Vert \partial_x u (s, \cdot,\mu_s)\Vert_{L^{k_1}(B_M)} \Vert q(s,\cdot) \Vert_{L^{k_1'}(B_M)} \, ds \\ &\leq J_1(M), \end{align*}which is finite (see \eqref{bound1} in Step $1$). We deduce that almost surely, for all $ M \geq 1$ $$\int_0^{T\wedge T_M} |\partial_x u (s, \xi_s,\mu_s)| \,ds < \infty.$$ But it is clear that for almost all $\omega \in \Omega$ and for $M$ bigger than some random constant $M(\omega) \geq 1$, $T_M(\omega) =T.$ Thus, since $\eta$ is uniformly bounded, $\int_0^T |\partial_x u(s,\xi_s,\mu_s)\cdot\eta_s| \, ds$ is finite almost surely. The other terms in the first line of Itô's formula \eqref{formulaito2} are treated with the same method.\\ \noindent \textbf{(ii) Martingale term in \eqref{formulaito2}.} We need to prove that $\int_0^T |\partial_x u(s,\xi_s,\mu_s) |^2 \, ds$ is almost surely finite. Reasoning as before, this is a consequence of the fact that $J_2(M)$ is finite, since we have $$ \int_0^T \Vert \partial_x u (s, \cdot,\mu_s)\Vert_{L^{2k_1}(B_M)}^2 \Vert q(s,\cdot) \Vert_{L^{k_1'}(B_M)} \, ds \leq J_2(M). $$ Therefore the martingale term in \eqref{formulaito2} is well-defined.\\ \noindent \textbf{(iii) Terms involving the linear derivative in \eqref{formulaito2}.} We remark that $\tilde{X}$ and $\xi$ can be seen as independent processes on the product space $\Omega \times \tilde{\Omega}$ with $\LL(\tilde{X}_s) = p(s,\cdot) \, dx$ and $\LL(\xi_s) = q(s,\cdot) \, dx$ for almost all $s.$ Hölder's inequality gives that \begin{align*}&\E\int_0^{T \wedge T_M} \tilde{\E} \left|\partial_v \del (s,\xi_s,\mu_s)(\tilde{X_s})\right| \, ds \\&\leq \int_0^T \int_{B_M \times \R^d} \left|\partial_v \del (s,x,\mu_s)(v)\right|q(s,x) p(s,v) \, dx \, dv \, ds \\ &\leq \int_0^T \left\Vert \partial_v \del (s,\cdot,\mu_s)(\cdot)\right\Vert_{L^{k_2}(B_M\times \R^d)} \Vert q(s,\cdot) \Vert_{L^{k_2'}(B_M)} \Vert p(s,\cdot) \Vert_{L^{k_2'}(\R^d)} \, ds \\ &=J_3(M),\end{align*} which was defined in \eqref{bound1} and is finite. We deduce as previously that $\int_0^T \tilde{\E} \left|\partial_v \del (s,\xi_s,\mu_s)(\tilde{X_s})\cdot\tilde{b_s}\right| \, ds$ is almost surely finite. The term involving $\partial_v^2 \del$ is dealt with similarly.
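\begin{Rq} Before closing Step 2, the localizing stopping times $T_M$ used above are easy to visualize numerically (a purely illustrative sketch with ad hoc parameters, in which $\xi$ is a one-dimensional Brownian motion):
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(3)
T, n_steps, n_paths = 1.0, 1000, 10000
dt = T / n_steps

# T_M = inf{t : |xi_t| >= M} (capped at T) keeps xi_{. ^ T_M} inside B_M
for M in (1.0, 2.0, 3.0):
    xi = np.zeros(n_paths)
    t_hit = np.full(n_paths, T)
    alive = np.ones(n_paths, dtype=bool)
    for k in range(1, n_steps + 1):
        xi[alive] += np.sqrt(dt) * rng.standard_normal(alive.sum())
        exited = alive & (np.abs(xi) >= M)
        t_hit[exited] = k * dt
        alive &= ~exited
    print(M, (t_hit == T).mean())   # P(T_M = T) increases to 1 with M
\end{verbatim}
This is the quantitative counterpart of the observation that, almost surely, $T_M = T$ for $M$ larger than a random constant. \end{Rq}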
\\ Since all the terms in \eqref{formulaito2} are well-defined, it is enough to prove Itô-Krylov's formula for \newline $u(t\wedge T_M, \xi_{t \wedge T_M}, \mu_{t\wedge T_M})$ almost surely for all $t \in [0,T],$ and then to take the limit $ M \rightarrow + \infty$ using the continuity of the integrals in Itô-Krylov's formula with respect to $t$. So we fix $\tau := T_M$ for $M \geq 1$ and we want to prove the formula up to time $\tau.$\\ \noindent\textbf{Step 3: Mollification of $\boldsymbol{u}$.}\\ Let $u^n$ be the function defined by $u^n(t,x,\mu):= u(t,\cdot,\mu\star \rho_n)*\rho_n(x).$ It is clearly continuous on $[0,T]\times \R^d \times \PPP_2(\R^d),$ as is $u.$ Since $\partial_t u$ is jointly continuous, it follows from Leibniz's rule that $u^n$ is $\CC^1$ with respect to $t$ and that we can differentiate under the integral sign, i.e. for all $(t,x,\mu) \in [0,T]\times \R^d \times \PPP_2(\R^d)$ $$ \partial_t u^n(t,x,\mu) = \partial_t u(t,\cdot, \mu \star \rho_n) * \rho_n(x),$$ which is also jointly continuous. As a result of Lemma \ref{regularisation} and Proposition \ref{deriveefaible}, $u^n$ is $\CC^2$ with respect to $x$ and we have $$\partial_x u^n(t,x,\mu) = \partial_x u(t,\cdot,\mu \star \rho_n)* \rho_n(x) \quad \text{and} \quad \partial^2_x u^n(t,x,\mu) = \partial^2_x u(t,\cdot,\mu \star \rho_n) * \rho_n(x).$$ These two functions are continuous on $[0,T]\times \R^d \times \PPP_2(\R^d)$ by the dominated convergence theorem and the fact that $u$ is jointly continuous. We define $\tilde{\rho_n}$ by $\tilde{\rho_n}(x,v) := \rho_n(x) \rho_n(v)$ for all $x,v \in \R^d.$ It is easy to see that $(\tilde{\rho_n})_n$ is a mollifying sequence on $\R^{2d}.$ Next, we claim that for all $(t,x) \in [0,T] \times \R^d$, $u^n(t,x,\cdot)$ has a linear derivative given by \begin{equation}\label{eqderlin} \frac{\delta u^n}{\delta m} (t,x,\mu)(v) := \del (t,\cdot, \mu \star \rho_n)(\cdot) *\tilde{\rho_n}(x,v).\end{equation} This convolution is well-defined as $\del$ is jointly continuous. To prove \eqref{eqderlin}, note first that the bound in Assumption $(3)$ in Definition $\ref{sobolevW}$ implies that for all $(t,x) \in [0,T] \times \R^d$, $\frac{\delta u^n}{\delta m} (t,x,\mu)(\cdot)$ is at most of quadratic growth, uniformly in $\mu$ on each compact set. Since for all $(t,x) \in [0,T] \times \R^d$, $\del(t,x,\cdot)(\cdot)$ is continuous on $\PPP_2(\R^d) \times \R^d$, the dominated convergence theorem proves that $\frac{\delta u^n}{\delta m} (t,x,\cdot)(\cdot)$ is continuous.
As explained in Remark \ref{Rqlinearderivative}, it is enough to compute, for $\mu,\nu \in \PPP_2(\R^d)$ and $\lambda \in [0,1]$, the derivative with respect to $\lambda$ of $u^n(t,x,m_{\lambda})$, where $m_{\lambda} = \lambda \mu + (1- \lambda) \nu.$ As recalled in the proof of Theorem \ref{ito_krylov}, when $(t,x)$ is fixed $$\frac{d }{d\lambda} u(t,x,m_{\lambda} \star \rho_n) = \int_{\R^d}\del(t,x,m_{\lambda}\star \rho_n)*\rho_n(v) \, d(\mu - \nu)(v).$$ Thanks to the bound in Assumption $(3)$ in Definition $\ref{sobolevW}$, for every compact $K \subset \R^d$ one has $$ \sup_{x \in K}\sup_{\lambda \in [0,1]} \left| \frac{d }{d\lambda} u(t,x,m_{\lambda} \star \rho_n)\right| \leq C \left( 1 + \int_{\R^d} |v|^2 \, d(\mu + \nu)(v)\right).$$ We can conclude with the help of Leibniz's rule and Fubini's theorem that $$\frac{d }{d\lambda} u^n(t,x,m_{\lambda}) = \int_{\R^d}\del(t,\cdot,m_{\lambda}\star \rho_n)*\tilde{\rho}_n(x,v) \, d(\mu - \nu)(v).$$ It follows from the joint continuity of $\del$ and Leibniz's rule that $\frac{\delta u^n}{\delta m}$ is $\CC^2$ with respect to $v$ and that $$\left\{ \begin{aligned} \partial_v \frac{\delta u^n}{\delta m}(t,x,\mu)(v) &= \frac{\delta u}{\delta m}(t,\cdot,\mu\star\rho_n)(\cdot) * \partial_v\tilde{\rho_n}(x,v) = \partial_v\frac{\delta u}{\delta m}(t,\cdot,\mu\star\rho_n)(\cdot) * \tilde{\rho_n}(x,v) \\\vspace{10pt} \partial_v^2 \frac{\delta u^n}{\delta m}(t,x,\mu)(v) &= \frac{\delta u}{\delta m}(t,\cdot,\mu\star\rho_n)(\cdot) * \partial^2_v\tilde{\rho_n}(x,v) = \partial_v^2\frac{\delta u}{\delta m}(t,\cdot,\mu\star\rho_n)(\cdot) * \tilde{\rho_n}(x,v). \end{aligned}\right.$$ Note that $\partial_v \frac{\delta u^n}{\delta m}$ and $\partial^2_v \frac{\delta u^n}{\delta m}$ are continuous on $[0,T] \times \R^d \times \PPP_2(\R^d) \times \R^d$ thanks to the dominated convergence theorem and the joint continuity of $\del.$ Moreover, for all compact $ \KK \subset \PPP_2(\R^d)$ and for all $M>0$ \begin{equation}\label{eqbound} \sup_{t \in[0,T]} \sup_{\mu \in \KK} \sup_{|x| \leq M}\sup_{v \in \R^d} \left| \partial_v \frac{\delta u^n}{\delta m} (t,x,\mu)(v)\right| + \left| \partial^2_v\frac{\delta u^n}{\delta m} (t,x,\mu)(v)\right| < + \infty. \end{equation} Indeed, Hölder's inequality ensures that \begin{align*} &\sup_{|x| \leq M}\sup_{v \in \R^d} \left| \partial_v \frac{\delta u^n}{\delta m} (t,x,\mu)(v)\right| + \left| \partial^2_v\frac{\delta u^n}{\delta m} (t,x,\mu)(v)\right| \\&\leq \left[ \left\Vert \partial_v\frac{\delta u}{\delta m}(t,\cdot,\mu\star\rho_n)(\cdot) \right\Vert_{L^{k_2}(B_{M+1}\times \R^d)} + \left\Vert \partial_v^2\frac{\delta u}{\delta m}(t,\cdot,\mu\star\rho_n)(\cdot) \right\Vert_{L^{k_2}(B_{M+1}\times \R^d)} \right] \Vert \tilde{\rho_n}\Vert_{L^{k_2'}(\R^{2d})},\end{align*} the ball $B_{M+1}$ coming from the fact that the support of $\tilde{\rho_n}$ is included in $B_1.$ Since $\KK \star \rho_n$ is compact in $\PPP_2(\R^d)$ and included in $\PP(\R^d)$, Assumption $(5)$ in Definition \ref{sobolevW} ensures that there exists $C>0$ such that for all $\mu \in \KK$ $$ \begin{aligned} \sup_{t \in [0,T]}\sup_{|x| \leq M}\sup_{v \in \R^d} \left| \partial_v \frac{\delta u^n}{\delta m} (t,x,\mu)(v)\right| + \left| \partial^2_v\frac{\delta u^n}{\delta m} (t,x,\mu)(v)\right| &\leq C \left( 1 + \left\Vert \frac{d \mu \star \rho_n}{dx} \right\Vert_{L^{k_2'}(\R^d)}^{\alpha_2}\right) \Vert \tilde{\rho_n}\Vert_{L^{k_2'}(\R^{2d})}.
\end{aligned}$$ But we know that $ \frac{d \mu \star \rho_n}{dx} (x) = \displaystyle\int_{\R^d} \rho_n (x-y) \, d\mu(y).$ We conclude with Jensen's inequality that $$ \left\Vert \frac{d \mu \star \rho_n}{dx} \right\Vert_{L^{k_2'}(\R^d)}^{\alpha_2} \leq \Vert \rho_n \Vert_{L^{k_2'}(\R^d)}^{\alpha_2}.$$ This proves \eqref{eqbound}.\\ \noindent\textbf{Step 4: Itô's formula \eqref{formulaito2} for $\boldsymbol{u^n}$ when the coefficients $\boldsymbol{b}$ and $\boldsymbol{\sigma}$ are continuous. } \\ We claim that $(t,x) \mapsto U^n(t,x) := u^n(t,x,\mu_t) \in \CC^{1,2}([0,T] \times \R^d)$. The regularity with respect to $x$ is clear from the preceding properties of $u^n$. Let us thus focus on the regularity with respect to the time variable. For $(t,x) \in [0,T]\times \R^d$ fixed, the regularity assumption on $u$ with respect to $t$ and the standard Itô formula for a flow of measures applied to $u^n(t,x,\cdot)$ (see Theorem 5.99 in Chapter $5$ of \cite{CarmonaProbabilisticTheoryMean2018}) ensure that we have for $h \in \R$ satisfying $t+h \in [0,T]$ \begin{align}\label{formulaitou_n2} \notag u^n(t+h,x,\mu_{t+h}) - u^n(t,x,\mu_t)&= u^n(t+h,x,\mu_{t+h}) - u^n(t,x,\mu_{t+h}) + u^n(t,x,\mu_{t+h}) - u^n(t,x,\mu_t) \\ &=\int_t^{t+h} \partial_t u^n(s,x,\mu_{t+h}) \, ds + \int_t^{t+h} \E \left( \partial_v \frac{\delta u^n}{\delta m} (t,x,\mu_s)(X_s)\cdot b_s\right) \, ds\\ \notag&+ \frac12 \int_t^{t+h} \E \left( \partial^2_v \frac{\delta u^n}{\delta m} (t,x,\mu_s)(X_s)\cdot a_s\right) \, ds. \end{align} The function $(s,x,\mu) \in [0,T] \times \R^d \times \PPP_2(\R^d) \mapsto \partial_t u^n(s,x,\mu)$ is continuous so $$ \frac1h \int_t^{t+h} \partial_t u^n(s,x,\mu_{t+h}) \, ds \underset{h \rightarrow 0}{\longrightarrow} \partial_t u^n(t,x,\mu_{t}).$$ The two other terms in \eqref{formulaitou_n2} can be dealt with similarly. Indeed, the dominated convergence theorem, justified by \eqref{eqbound}, ensures that the functions $(s,x) \in [0,T] \times \R^d \mapsto \E \left( \partial_v \frac{\delta u^n}{\delta m} (s,x,\mu_s)(X_s)\cdot b_s\right) $ and $(s,x) \in [0,T] \times \R^d \mapsto \E \left( \partial^2_v \frac{\delta u^n}{\delta m} (s,x,\mu_s)(X_s)\cdot a_s\right) $ are continuous. It then follows that $U^n \in \CC^{1,2}([0,T] \times \R^d)$ and that for all $(t,x) \in [0,T] \times \R^d $ $$ \partial_t U^n(t,x) = \partial_t u^n(t,x,\mu_t)+ \E \left( \partial_v \frac{\delta u^n}{\delta m} (t,x,\mu_t)(X_t)\cdot b_t\right) + \frac12 \E \left( \partial^2_v \frac{\delta u^n}{\delta m} (t,x,\mu_t)(X_t)\cdot a_t\right).$$ We can now apply the classical Itô formula to $U^n$ and $\xi$, up to the random time $\tau$ defined at the end of Step 2, to obtain that almost surely, for all $t \in [0,T]$ \begin{align}\label{eqstep4} \notag u^n(t\wedge \tau,\xi_{t\wedge \tau},\mu_{t\wedge \tau}) &= u^n(0,\xi_0, \mu_0) + \int_0^{t\wedge \tau} \partial_t u^n(s,\xi_s,\mu_s) + \partial_x u^n(s,\xi_s,\mu_s)\cdot\eta_s + \frac12 \partial^2_x u^n(s,\xi_s,\mu_s) \cdot \gamma_s\gamma_s^* \, ds \\ &+ \int_0^{t\wedge \tau} \tilde{\E} \left(\partial_v \frac{\delta u^n}{\delta m} (s,\xi_s,\mu_s)(\tilde{X_s})\cdot \tilde{b_s}\right) \, ds + \frac12 \int_0^{t\wedge \tau} \tilde{\E} \left(\partial^2_v \frac{\delta u^n}{\delta m} (s,\xi_s,\mu_s)(\tilde{X_s})\cdot \tilde{a_s}\right) \, ds \\ \notag&+ \int_0^{t\wedge \tau} \partial_x u^n(s,\xi_s,\mu_s)\cdot(\gamma_s \, dB_s).
\end{align} Note that \eqref{eqstep4} does not require Assumptions \textbf{(A)} and \textbf{(B)} on the Itô process $X.$ These assumptions will only be used in Step 6.\\ \noindent\textbf{Step 5: Removing the continuity hypothesis on the coefficients $\boldsymbol{b}$ and $\boldsymbol{\sigma}$.} \\ We consider two sequences $(b^m)_m$ and $(\sigma^m)_m$ of continuous and progressively measurable processes such that $$ \E \int_0^T |b^m_s - b_s|^2 + |\sigma^m_s - \sigma_s|^4 \, ds \rightarrow 0.$$ We set, for $t \leq T, $ $X^m_t := X_0 + \int_0^t b^m_s \, ds + \int_0^t \sigma^m_s \, dB_s,$ and $\mu^m_t$ the law of $X^m_t$. Owing to Step $4$, Itô's formula \eqref{eqstep4} holds true for $X^m$ and $\xi$. Now, we aim at taking the limit $m \rightarrow +\infty$ in \eqref{eqstep4}. Note that the set $\KK:=\{\mu^m_s, \, s\leq T, \, m \geq 1 \} \cup \{ \mu_s, \, s \leq T\}$ is compact in $ \PPP_2(\R^d)$. Indeed, using Jensen's inequality and the Burkholder-Davis-Gundy (BDG) inequalities, it is clear that $\E \, \underset{t \leq T}{\sup} |X^m_t - X_t|^2 \rightarrow 0,$ thus $\underset{t \leq T}{\sup} \,W_2(\mu^m_t,\mu_t) \rightarrow 0$. We deduce that almost surely, for all $t \in [0,T] $ $$u^n(t,\xi_t,\mu^m_t) \underset{m \rightarrow + \infty}{\longrightarrow} u^n(t,\xi_t,\mu_t).$$ Now, we take the limit $m \rightarrow + \infty$ in the integrals in Itô's formula \eqref{eqstep4}.\\ \noindent\textbf{(i) Martingale term in \eqref{eqstep4}.} Using BDG's inequality, there exists $C>0$ such that \begin{align*} &\E \sup_{t \leq T} \left| \int_0^{t\wedge \tau} (\partial_x u^n(s,\xi_s,\mu^m_s) - \partial_x u^n(s,\xi_s,\mu_s))\cdot(\gamma_s \, dB_s)\right|^2\\ &\leq C \E \int_0^{T\wedge \tau}|\partial_x u^n(s,\xi_s,\mu^m_s) - \partial_x u^n(s,\xi_s,\mu_s)|^2 |\gamma_s|^2 \, ds \\ &\leq C \E \int_0^{T}|\partial_x u^n(s,\xi_s,\mu^m_s) - \partial_x u^n(s,\xi_s,\mu_s)|^2\1_{B_M}(\xi_s) |\gamma_s|^2 \, ds. \end{align*} The dominated convergence theorem can be applied since $\gamma$ is bounded and $\partial_x u^n$ is jointly continuous on $[0,T]\times \R^d \times \PPP_2(\R^d).$ It shows that, up to an extraction, almost surely $$ \forall t \leq T, \, \int_0^{t\wedge \tau} \partial_x u^n(s,\xi_s,\mu^m_s)\cdot (\gamma_s \, dB_s) \underset{m \rightarrow + \infty}{\longrightarrow} \int_0^{t\wedge \tau} \partial_x u^n(s,\xi_s,\mu_s)\cdot (\gamma_s \, dB_s).$$ \noindent \textbf{(ii) Terms involving the linear derivative in \eqref{eqstep4}.} Let us write \begin{align*} &\left|\int_0^{t\wedge \tau} \tilde{\E} \left( \partial_v \frac{\delta u^n}{\delta m}(s,\xi_s,\mu^m_s)(\tilde{X^m_s})\cdot \tilde{b_s^m} \right) \, ds - \int_0^{t\wedge \tau} \tilde{\E} \left( \partial_v \frac{\delta u^n}{\delta m}(s,\xi_s,\mu_s)(\tilde{X_s})\cdot \tilde{b_s} \right) \, ds \right| \\ &\leq \int_0^{T\wedge \tau} \tilde{\E} \left| \partial_v \frac{\delta u^n}{\delta m}(s,\xi_s,\mu^m_s)(\tilde{X_s^m})\right||\tilde{b_s^m} -\tilde{b_s}| \, ds \\ &+ \int_0^{T\wedge \tau} \tilde{\E} \left|\partial_v \frac{\delta u^n}{\delta m}(s,\xi_s,\mu^m_s)(\tilde{X_s^m}) - \partial_v \frac{\delta u^n}{\delta m}(s,\xi_s,\mu_s)(\tilde{X_s}) \right||\tilde{b_s} | \, ds \\ &=: I_1+I_2. \end{align*} Cauchy-Schwarz's inequality ensures that \begin{align*} I_1 \leq \left( \int_0^{T\wedge \tau} \tilde{\E} \left|\partial_v \frac{\delta u^n}{\delta m}(s, \xi_s,\mu^m_s)(\tilde{X_s^m})\right|^2 \, ds\right)^{1/2}\left(\int_0^{T} \tilde{\E}|\tilde{b_s^m} - \tilde{b_s}|^2 \, ds\right)^{1/2}.
\end{align*} We conclude that $I_1$ converges to $0$ thanks to the bound \eqref{eqbound} proved in Step $3$ and since $\xi$ is bounded by $M$ on the set $\{\tau >0 \}.$ To show that $I_2 \rightarrow 0,$ we use the fact that $b$ is bounded by $K$ to get \begin{align*} I_2 \leq K\int_0^{T\wedge \tau} \tilde{\E} \left| \partial_v \frac{\delta u^n}{\delta m}(s,\xi_s,\mu^m_s)(\tilde{X_s^m}) - \partial_v \frac{\delta u^n}{\delta m}(s,\xi_s,\mu_s)(\tilde{X_s})\right|\, ds. \end{align*} The continuity of $\partial_v \frac{\delta u^n}{\delta m}$ and the convergence in $L^2$ of $(\tilde{X^m_s})_m$ to $\tilde{X_s}$ ensure that for all $\omega \in \Omega,$ \newline $ \left| \partial_v \frac{\delta u^n}{\delta m}(s,\xi_s(\omega),\mu^m_s)(\tilde{X_s^m}) - \partial_v \frac{\delta u^n}{\delta m}(s,\xi_s(\omega),\mu_s)(\tilde{X_s})\right|$ converges in probability on $\tilde{\Omega}$ to $0$ as $m$ goes to infinity. Using a uniform integrability argument coming from \eqref{eqbound}, we deduce that $I_2$ converges to $0.$ Following the same strategy, one has for all $t \in [0,T]$ $$ \int_0^{t\wedge \tau} \tilde{\E} \left( \partial^2_v \frac{\delta u^n}{\delta m}(s,\xi_s,\mu^m_s)(\tilde{X^m_s})\cdot \tilde{a_s^m} \right) \, ds \underset{m \rightarrow + \infty}{\longrightarrow} \int_0^{t\wedge \tau} \tilde{\E} \left( \partial^2_v \frac{\delta u^n}{\delta m}(s,\xi_s,\mu_s)(\tilde{X_s})\cdot\tilde{a_s} \right) \, ds.$$ \noindent \textbf{(iii) Terms involving standard derivatives in \eqref{eqstep4}.} It follows from the dominated convergence theorem that almost surely, for all $t \leq T$ \begin{align*}&\int_0^{t\wedge \tau} ( \partial_t u^n(s,\xi_s,\mu^m_s) + \partial_x u^n(s,\xi_s,\mu^m_s)\cdot\eta_s) \, ds + \frac12 \int_0^{t\wedge \tau} \partial^2_x u^n(s,\xi_s,\mu^m_s)\cdot \gamma_s\gamma_s^* \, ds\\ &\underset{m \rightarrow + \infty}{\longrightarrow} \int_0^{t\wedge \tau} ( \partial_t u^n(s,\xi_s,\mu_s) + \partial_x u^n(s,\xi_s,\mu_s)\cdot \eta_s) \, ds + \frac12 \int_0^{t\wedge \tau} \partial^2_x u^n(s,\xi_s,\mu_s)\cdot \gamma_s\gamma_s^* \, ds. \end{align*} Indeed the functions $ \partial_t u^n$, $\partial_x u^n$ and $\partial_x^2 u^n$ are jointly continuous on $[0,T]\times \R^d \times \PPP_2(\R^d)$ and thus uniformly bounded on $[0,T]\times B_M \times \{\mu_s^m, \, s \in [0,T],\, m \geq 1 \}.$ Moreover, $\eta$ and $\gamma$ are also uniformly bounded.\\ This concludes Step $5$.\\ \noindent\textbf{Step 6: Letting $\boldsymbol{n \rightarrow + \infty.}$}\\ From Step $5$, we deduce that Itô's formula \eqref{eqstep4} in Step 4 holds for $u^n$ up to time $\tau.$ To conclude the proof, we need to take the limit $n \rightarrow + \infty$ in each term of \eqref{eqstep4}. Then it remains to remove the stopping time $\tau$ as explained at the end of Step $2$ (i.e. letting $\tau \rightarrow T$). 
The continuity of $u$ ensures that almost surely, for all $t \leq T$, $u^n(t,\xi_t,\mu_t) \rightarrow u(t,\xi_t,\mu_t).$ We now focus on the integrals in Itô's formula \eqref{eqstep4}.\\ \noindent \textbf{(i) Martingale term in \eqref{eqstep4}.} Thanks to BDG's inequality, Hölder's inequality, and the boundedness of $\gamma,$ we have \begin{align*} &\E \sup_{t \leq T} \left| \int_0^{t \wedge \tau} (\partial_x u^n(s,\xi_s,\mu_s) - \partial_x u(s,\xi_s,\mu_s))\cdot(\gamma_s \, dB_s)\right|^2 \\&\leq C \E\int_0^{T } |\partial_x u(s,\cdot,\mu_s\star \rho_n)*\rho_n(\xi_s) - \partial_x u(s,\xi_s,\mu_s)|^2\1_{B_M}(\xi_s) \, ds\\ &= C \int_0^T\int_{B_M} |\partial_{x} u(s,\cdot,\mu_s\star \rho_n)*\rho_n(x) -\partial_{x} u(s,x,\mu_s)|^2 q(s,x) \, dx \,ds \\ &\leq C\int_0^T \Vert \partial_{x} u(s,\cdot,\mu_s\star \rho_n)*\rho_n -\partial_{x} u(s,\cdot,\mu_s) \Vert_{L^{2k_1}(B_M)}^2 \Vert q(s,\cdot) \Vert_{L^{k_1'}(B_M)} \, ds\\ &\leq C\int_0^T \Vert \partial_{x} u(s,\cdot,\mu_s\star \rho_n)*\rho_n -\partial_{x} u(s,\cdot,\mu_s)* \rho_n \Vert_{L^{2k_1}(B_M)}^2 \Vert q(s,\cdot) \Vert_{L^{k_1'}(B_M)} \, ds \\ &+ C\int_0^T \Vert \partial_{x} u(s,\cdot,\mu_s)*\rho_n -\partial_{x} u(s,\cdot,\mu_s) \Vert_{L^{2k_1}(B_M)}^2 \Vert q(s,\cdot) \Vert_{L^{k_1'}(B_M)} \, ds \\ &=: I_1 + I_2. \end{align*} We prove that $I_1$ and $I_2$ converge to $0.$ First note that, due to the convolution inequality $L^r*L^1$, we have for $f \in L^r_{\text{loc}}(\R^d)$ and for all $R>0,$ $\|f * \rho_n \|_{L^r(B_R)} \leq \|f\|_{L^r(B_{R+1})},$ the control on $B_{R+1}$ following from the fact that the support of each $\rho_n$ is included in $B_1$. Hence \begin{align*} I_1 & \leq C \int_0^T \Vert \partial_{x} u(s,\cdot,\mu_s\star \rho_n) -\partial_{x} u(s,\cdot,\mu_s) \Vert_{L^{2k_1}(B_{M+1})}^2 \Vert q(s,\cdot) \Vert_{L^{k_1'}(B_{M+1})} \, ds \, =: \tilde{I_1}. \end{align*} As a consequence of the Sobolev embedding theorem, for all $t$, the function $$ \mu \in (\PP(\R^d),d_{\PP}) \mapsto \partial_x u(t,\cdot,\mu) \in L^{\infty}(B_{M+1})$$ is continuous. Since $\mu_s \in \PP(\R^d)$ for almost all $s$ and thanks to Assumption \textbf{(H2)} in Definition \ref{defespaceP}, we deduce that the integrand in $\tilde{I_1}$ converges to $0$ for almost all $s$. It follows from the dominated convergence theorem (see \eqref{bound1} in Step 1) that $\tilde{I_1}$ converges to $0$, as well as $I_1.$ We now focus on $I_2.$ The integrand in $I_2$ converges to $0$ for almost all $s$ by Lemma \ref{regularisation}, since $\partial_x u (s,\cdot,\mu_s) \in L^{2k_1}_{loc}(\R^d).$ We conclude with the dominated convergence theorem as previously. This shows that, up to an extraction, almost surely $$ \sup_{t \leq T} \left| \int_0^{t \wedge \tau} \partial_x u^n(s,\xi_s,\mu_s) \cdot(\gamma_s \, dB_s) - \int_0^{t \wedge \tau} \partial_x u(s,\xi_s,\mu_s) \cdot(\gamma_s \, dB_s) \right| \rightarrow 0.
$$ \noindent \textbf{(ii) Terms involving the linear derivative in \eqref{eqstep4}.} Following the same strategy, we obtain using Hölder's inequality \begin{align*} &\E \, \sup_{t \leq T} \left|\int_0^{t \wedge \tau}\tilde{\E} \left(\partial_v \frac{\delta u^n}{\delta m} (s,\xi_s,\mu_s)(\tilde{X_s})\cdot\tilde{b_s}\right) \, ds - \int_0^{t \wedge \tau}\tilde{\E} \left(\partial_v \frac{\delta u}{\delta m} (s,\xi_s,\mu_s)(\tilde{X_s})\cdot\tilde{b_s}\right) \, ds \right|\\ &\leq \E\tilde{\E}\int_0^{T \wedge \tau}\left|\partial_v \frac{\delta u^n}{\delta m} (s,\xi_s,\mu_s)(\tilde{X_s})\cdot\tilde{b_s} -\partial_v \frac{\delta u}{\delta m} (s,\xi_s,\mu_s)(\tilde{X_s})\cdot\tilde{b_s}\right| \, ds \\ &\leq \E\tilde{\E}\int_0^{T }\left|\partial_v \frac{\delta u^n}{\delta m} (s,\xi_s,\mu_s)(\tilde{X_s})\cdot\tilde{b_s} -\partial_v \frac{\delta u}{\delta m} (s,\xi_s,\mu_s)(\tilde{X_s})\cdot\tilde{b_s}\right|\1_{B_M}(\xi_s) \, ds\\ &\leq C \int_0^T \int_{B_M\times \R^d} \left|\partial_v \frac{\delta u^n}{\delta m} (s,x,\mu_s)(v) -\partial_v \frac{\delta u}{\delta m} (s,x,\mu_s)(v)\right| q(s,x)p(s,v) \, dx \, dv \, ds \\ &\leq C \int_0^T \left\Vert \partial_v \frac{\delta u}{\delta m} (s,\cdot,\mu_s\star \rho_n)(\cdot)* \tilde{\rho_n} -\partial_v \frac{\delta u}{\delta m} (s,\cdot,\mu_s)(\cdot) \right\Vert_{L^{k_2}(B_M\times\R^d)} \Vert q(s,\cdot) \Vert_{L^{k_2'}(B_M)} \Vert p(s,\cdot) \Vert_{L^{k_2'}(\R^d)} \, ds. \end{align*} The dominated convergence theorem, justified by Assumption $(4)$ in Definition \ref{sobolevW} and \eqref{bound1} in Step $1$, ensures that this term converges to $0.$ The same argument holds true for the term involving $\partial^2_v \del$. \\ \noindent \textbf{(iii) Terms involving standard derivatives in \eqref{eqstep4}.} The convergence of the term involving $\partial_tu^n$ in \eqref{eqstep4} follows from the continuity of $\partial_t u$ on $[0,T] \times \R^d \times \PPP_2(\R^d)$ and the dominated convergence theorem, since almost surely on the set $\{ \tau >0 \}$ $$ \sup_{s \in [0,T]}\sup_{n\geq1} |\partial_t u^n(s,\xi_s,\mu_s) | \leq \sup_{s \in [0,T]}\sup_{n\geq 1} \sup_{|x| \leq M+1} |\partial_t u(s,x,\mu_s\star \rho_n) | <+ \infty.$$ For the spatial derivatives, Hölder's inequality ensures that \begin{align*} &\E \, \sup_{t \leq T} \left| \int_0^{t \wedge \tau} \partial_x u^n(s,\xi_s,\mu_s)\cdot\eta_s \, ds - \int_0^{t \wedge \tau} \partial_x u(s,\xi_s,\mu_s)\cdot\eta_s \, ds \right| \\ &\leq C\, \int_0^{T} \Vert \partial_x u(s,\cdot,\mu_s\star \rho_n)*\rho_n-\partial_x u(s,\cdot,\mu_s)\Vert_{L^{k_1}(B_M)} \Vert q(s,\cdot)\Vert_{L^{k_1'}(B_M)} \, ds. \end{align*} The right-hand side converges to $0$ by the same reasoning as before.
This shows that, up to an extraction, one has almost surely $$ \sup_{t \leq T} \left| \int_0^{t \wedge \tau} \partial_x u^n(s,\xi_s,\mu_s)\cdot\eta_s \, ds - \int_0^{t \wedge \tau}\partial_x u(s,\xi_s,\mu_s)\cdot\eta_s \, ds \right| \underset{n \rightarrow + \infty}{\longrightarrow} 0.$$ The term involving $\partial^2_x u$ in \eqref{eqstep4} is dealt with similarly.\\ Taking the limit $ n \rightarrow + \infty$ in \eqref{eqstep4}, up to an extraction, we conclude that almost surely, for all $t \in [0,T]$ \begin{align*} u(t\wedge \tau,\xi_{t\wedge \tau},\mu_{t\wedge \tau}) &= u(0,\xi_0, \mu_0) \\ &+ \int_0^{t\wedge \tau} ( \partial_t u(s,\xi_s,\mu_s) + \partial_x u(s,\xi_s,\mu_s)\cdot\eta_s) \, ds + \frac12 \int_0^{t\wedge \tau} \partial^2_x u(s,\xi_s,\mu_s) \cdot \gamma_s\gamma_s^* \, ds \\ &+ \int_0^{t\wedge \tau} \tilde{\E} \left(\partial_v \frac{\delta u}{\delta m} (s,\xi_s,\mu_s)(\tilde{X_s})\cdot\tilde{b_s}\right) \, ds + \frac12 \int_0^{t\wedge \tau} \tilde{\E} \left(\partial^2_v \frac{\delta u}{\delta m} (s,\xi_s,\mu_s)(\tilde{X_s})\cdot \tilde{a_s}\right) \, ds \\ &+ \int_0^{t\wedge \tau} \partial_x u(s,\xi_s,\mu_s)\cdot(\gamma_s \, dB_s). \end{align*} This ends the proof, as explained in Step 2. \hfill$\square$
\section{I. Josephson spectroscopy} In the calculation for the Josephson current shown in Fig.~2, we assumed that only two out of the six possible dot-dot tunnel couplings are nonzero (shown as arrows in Fig.~1(b)). Now, we compare this with the case where all six dot-dot couplings are present, see Fig.~\ref{fig:Josephson_cross}. We take the same tunneling amplitude $t$ for all four dot-dot nearest neighbors, and a smaller amplitude $t'=t/2$ for the remaining two cross-tunneling terms. The result is almost identical to Fig.~2. The only difference is a slight shift of the maximum position $\alpha_\text{max}$. The presence of triplet correlations is still clearly visible. \begin{figure}[ht!] \begin{minipage}[b]{0.49\textwidth} \includegraphics[width=\columnwidth]{figS1a.pdf} \end{minipage} \begin{minipage}[b]{0.49\textwidth} \includegraphics[width=\columnwidth]{figS1b.pdf} \end{minipage} \caption{\label{fig:Josephson_cross}Josephson current for the same system as in Fig.~2 but now including tunneling between all dot-dot pairs, with tunneling amplitude $t$ for dot-dot nearest neighbors and $t'=t/2$ for cross tunneling.} \end{figure} \section{II. Andreev spectroscopy} We now turn to Andreev spectroscopy for the device shown in Fig.~1(c). \subsection{A. Weak tunnel coupling to the superconducting lead} In the calculation for Fig.~3, we assumed a rather large coupling $\Gamma_\text{S}$ between the double quantum dot and the superconducting lead. As a consequence, we obtained a strong proximity effect with pronounced anticrossings in the Andreev addition energies. Superconducting triplet correlations are, however, also visible in the limit of weak coupling $\Gamma_\text{S}$, as shown in Fig.~\ref{fig:Andreev_small_gamS}, where we took the same parameters as in Fig.~3 but reduced $\Gamma_\text{S}$ by a factor of $10$. In this regime, a perturbative treatment of $\Gamma_\text{S}$ to lowest order would be sufficient. The Andreev addition spectrum is, then, independent of $\alpha$, Fig.~\ref{fig:Andreev_small_gamS}(b), and the anticrossings are no longer resolved, Fig.~\ref{fig:Andreev_small_gamS}(a). Nevertheless, superconducting triplet correlations are clearly indicated by the nonmonotonic behavior of the differential conductance as a function of $\alpha$, taken at $\delta=0.4 U$ in Fig.~\ref{fig:Andreev_small_gamS}(b). For this choice of $\delta$, we meet the resonance condition $\delta=\epsilon_\text{B}$ derived in Eqs.~(2a)-(2c) for the limit $\Delta B\gg \Gamma_\text{S}$, which is satisfied here. Interestingly, we find in Fig.~\ref{fig:Andreev_small_gamS}(a) not only resonances at $\delta=\epsilon_\text{B}$ but also at $\delta=-\epsilon_\text{B}$. The latter can be derived similarly to the former by diagonalizing the Hamiltonian in the singlet-triplet subspace but then selecting the state with the {\it highest} energy $\delta+\epsilon_\text{B}$, coupling it to the empty state, and, finally, picking the lower-energy state of this effective two-state system. This state is obviously not the ground state but can, nevertheless, be accessed at finite bias voltage. \begin{figure}[ht!]
\begin{minipage}[b]{0.47\textwidth} \includegraphics[width=\columnwidth]{figS2a.pdf} \end{minipage} \begin{minipage}[b]{0.47\textwidth} \includegraphics[width=\columnwidth]{figS2b.pdf} \end{minipage} \caption{\label{fig:Andreev_small_gamS}Differential Andreev conductance for the same system as in Fig.~3 but for a weaker coupling to the superconductor, $\Gamma_\text{S}=0.05U$, and, in panel (b), a different choice of $\delta=0.4U$ in order to match the resonance condition. Again, the nonmonotonic dependence on $\alpha$ is a signature of superconducting triplet correlations. } \end{figure} \vspace*{-.5cm} \subsection{B. Varying a global magnetic field in the presence of a fixed inhomogeneous one} In the main text, we discussed the magnetic-field dependence by keeping the magnitudes of the local fields equal and fixed, $|{\bf B}_\text{L}|=|{\bf B}_\text{R}|$, and varying the angle $\alpha$ between their directions. Experimentally, it may be easier to generate non-collinear magnetic fields by varying a global (homogeneous) external magnetic field that is superimposed on a fixed inhomogeneous one. To be specific, we choose ${\bf B}_\text{L}={\bf B}_\text{i}+{\bf B}_\text{g}$ and ${\bf B}_\text{R}=-{\bf B}_\text{i}+{\bf B}_\text{g}$ with an angle $\gamma$ between ${\bf B}_\text{g}$ and ${\bf B}_\text{i}$. Varying $B_\text{g}$ leads to a crossover from a nearly antiparallel (for $B_\text{g} \ll B_\text{i}$) to a noncollinear (for $B_\text{g} \sim B_\text{i}$) and, then, to a nearly parallel (for $B_\text{g} \gg B_\text{i}$) configuration. In such a scenario, the local fields will, in general, differ in magnitude, $|{\bf B}_\text{L}|\neq |{\bf B}_\text{R}|$, such that now all three triplet states can couple to the empty state. The only exception is the range of small angles $\gamma$ that satisfy $B_\text{g} \sin \gamma \ll t$. In this case, the interdot tunnel coupling $t$ dominates over the left-right asymmetry introduced into the spectrum by ${\bf B}_\text{g}$, which yields $|{\bf B}_\text{L}| \approx |{\bf B}_\text{R}|$ (and one triplet state decouples from the other doubly-occupied states). In Fig.~\ref{fig:Andreev_delB} we show the dependence of (a) the current and (b) the conductance on $B_\text{g}$ for $\gamma=0$. We find that in the regimes of nearly parallel ($B_\text{g} \gg B_\text{i}$) and nearly antiparallel ($B_\text{g} \ll B_\text{i}$) magnetic configurations, the current is suppressed as compared to the noncollinear case ($B_\text{g} \sim B_\text{i}$), which is a clear signature of triplet correlations, in agreement with what we find in Fig.~3. \begin{figure}[ht!] \includegraphics[width=0.98\columnwidth]{figS3.pdf} \caption{\label{fig:Andreev_delB} (a) Andreev current and (b) differential conductance for antisymmetrically applied bias voltage, and (c) Andreev current and (d) differential conductance for symmetrically applied bias voltage. The parameters are the same as in Fig.~3 with $B_\text{i}=0.4 U$, as well as $\gamma=0$ in (a) and (b) while $\gamma= \pi/4$ in (c) and (d). } \end{figure} \subsection{C. Symmetrically applied bias voltage} The differential conductances shown in Fig.~3 were calculated for the three-terminal device shown in Fig.~1(c) with an {\it antisymmetrically} applied bias voltage, $\mu_\text{L} = - \mu_\text{R} = \mu_\text{N} = eV$, relative to the superconductor at $\mu_\text{S}=0$.
\subsection{C. Symmetrically applied bias voltage}
The differential conductances shown in Fig.~3 were obtained for the three-terminal device shown in Fig.~1(c) with {\it antisymmetrically} applied bias voltage, $\mu_\text{L} = - \mu_\text{R} = \mu_\text{N} = eV$, relative to the superconductor at $\mu_\text{S}=0$. We now consider the case of a {\it symmetrically} applied bias voltage, $\mu_\text{L} = \mu_\text{R} = \mu_\text{N} = eV$, i.e., there is only a bias between the normal leads and the superconductor but not between the two normal leads, which are short-circuited, making the system effectively a two-terminal device. The case of an antisymmetrically applied bias voltage has the advantage that, in the absence of a noncollinear magnetic field, transport is suppressed, which is interpreted as a signature of triplet correlations. For a symmetrically applied bias voltage, the situation is different. In this case, transport occurs already in the absence of any magnetic field [42]. There is, however, a special feature that still allows for the identification of triplet correlations. As discussed in Ref.~[42], transport becomes blocked for bias voltages such that the electrons entering the double dot from the normal leads cannot get back but have to go into the superconductor. Since two electrons entering the double dot from the normal leads are with finite probability in a triplet state, they cannot enter the superconductor (triplet blockade) unless superconducting triplet correlations are induced. The triplet blockade leads to an absence of Andreev current for large and positive $\mu_\text{N}$ in the regimes of almost collinear magnetic fields, $B_\text{g} \gg B_\text{i}$ and $B_\text{g} \ll B_\text{i}$. In the regime $B_\text{g} \sim B_\text{i}$, however, superconducting triplet correlations are induced and the triplet blockade is lifted. The triplet blockade for $B_\text{g} \gg B_\text{i}$ is clearly visible in Fig.~\ref{fig:Andreev_delB}(c) and (d). For $B_\text{g} \sim B_\text{i}$, the triplet blockade is lifted (we choose the angle $\gamma$ large enough that $B_\text{g} \sin \gamma \gtrsim t$, which guarantees that {\it all} triplet states can couple to the empty state in the regime $B_\text{g} \sim B_\text{i}$). For the chosen parameters, a pronounced triplet blockade around $B_\text{g}=0$ occurs only in a small range that is not resolved in Fig.~\ref{fig:Andreev_delB}(c) and (d). For negative $\mu_\text{N}$, the electrons are transferred from the superconductor to the normal leads, and the triplet blockade does not appear for any value of $B_\text{g}$.
\section{III. Superconducting order parameters}
Andreev spectroscopy as shown in Fig.~3 gives indirect access to the superconducting order parameters. From the $\alpha$-dependence of the conductance, we could deduce the presence of superconducting triplet correlations without having detailed information about the relative importance of the various superconducting order parameters as a function of gate and bias voltage. In the following, we provide this information by plotting the absolute values of the complex scalars $\Delta_\text{e/o}^S$ and vectors $\boldsymbol{\Delta}_\text{e/o}^T$.
Similarly, for large bias voltage $|\mu_\text{N}|$, both the odd-frequency singlet and the odd-frequency triplet order parameters survive while the even-frequency counterparts are suppressed. With the help of Eqs.~(5a) and (5b), we can conclude that unconventional superconductivity is, in this case, generated by the terms involving a finite spin and a left-right asymmetry of the occupations in the DQD. For large $|\delta|$ and small $|\mu_\text{N}|$, on the other hand, the odd-frequency singlet order parameter vanishes since in equilibrium the left-right symmetry is restored.
\begin{figure}[ht!] \includegraphics[width=0.98\columnwidth]{figS4.pdf} \caption{\label{fig:order_deleps0} Absolute values of the superconducting order parameters as a function of $\delta$ and $\mu_\text{N}$ for zero detuning $\Delta \epsilon$ between the quantum-dot levels. The parameters are the same as in Fig.~3(a).} \end{figure} \hfill \pagebreak
\subsection{B. Finite detuning between dot levels}
A finite detuning $\Delta \epsilon=\epsilon_\text{L}-\epsilon_\text{R}$ that is smaller than the interdot tunneling amplitude $t$ does not change the results qualitatively. The situation becomes different for $\Delta\epsilon \gg t$. In this case, an antisymmetrically applied bias voltage tends to favor a singly occupied state, which is incompatible with even-frequency order parameters, as clearly displayed in Fig.~\ref{fig:order_deleps02}. (For a symmetrically applied bias voltage, as considered in Ref.~[42], the relative magnitude of $\Delta \epsilon$ and $t$ is not important.) Odd-frequency order parameters, on the other hand, can still be finite due to the terms in the second line of Eqs.~(5a) and (5b). The suppression of the even-frequency order parameters is lifted by increasing the interdot tunneling such that $t\gtrsim \Delta \epsilon$. (We remark that breaking the left-right symmetry by $|{\bf B}_\text{L}| \neq |{\bf B}_\text{R}|$ while keeping $\epsilon_\text{L}=\epsilon_\text{R}$ leads to a similar suppression of even-frequency pairing. This was the motivation for choosing a small angle $\gamma$ in Fig.~\ref{fig:Andreev_delB}(a) and (b). For Fig.~\ref{fig:Andreev_delB}(c) and (d), i.e., for symmetrically applied bias voltage, this suppression is not an issue.)
\begin{figure}[ht!] \includegraphics[width=0.98\columnwidth]{figS5.pdf} \caption{\label{fig:order_deleps02} Same as Fig.~\ref{fig:order_deleps0} but for finite detuning $\Delta \epsilon=0.2U$ between the quantum-dot levels.} \end{figure}
\end{document}
\section{Introduction} \label{sec:level1} The electric double layer resulting from a charged surface in an aqueous solution affects a wealth of structural and dynamic properties in a wide range of physicochemical, colloidal, soft-matter and biophysical systems\cite{Israelachvili,Gelbart,Levinreview,Honig,Luo}. The standard textbook description of electrical double layers is based on the mean-field Poisson-Boltzmann (PB) theory. At large surface-charge density, high counter-ion valency and high ion concentration -- the so-called strong-coupling limit -- it is well recognized that PB theory fails to capture a number of qualitative effects, such as like-charge attraction\cite{Kjellander,Bowen,Jho} and charge inversion\cite{Kekicheff,Shklovskii1,Shklovskii2,Besteman}. Liquid-state theories\cite{Kjellander2,Monica} and other strong-coupling theories\cite{Jho,Netz} have been employed to account for the strong ion-ion correlations in this regime. In the opposite limit -- the weak-coupling regime -- it is generally accepted that the electric double layer is well described by the PB theory\cite{Netz,Netzandorland,Ninham1,Ninham2,Podgornik,Podgornik2,Netz2,Neu,Kanduc,Podgornikrev}. Performing a loop-wise perturbation expansion\cite{Netzandorland} in the coupling parameter (to be defined below), Netz\cite{Netz2} demonstrated that the PB theory is the leading-order theory in the weak-coupling limit, and becomes exact in the limit of zero coupling strength. Applying Netz's approach explicitly to surfaces with dielectric discontinuity, Kandu$\check{\rm c}$ and Podgornik\cite{Kanduc} concluded that, under the weak-coupling condition, the image force only enters as a small correction to the leading PB theory, which vanishes in the limit of zero coupling. In particular, the self-energy due to the image charge interaction was shown not to appear in the Boltzmann factor for the ion distributions. Although these demonstrations were performed explicitly for counterion-only systems, the conclusions are generally believed to hold when salt ions are added\cite{Podgornikrev}. Thus, many researchers in the electrolyte community take the weak-coupling theory to mean the PB theory; in other words, weak coupling is considered synonymous with the validity of the PB theory. Physically, however, a single ion in solution next to the surface of a lower-dielectric plate should obviously feel the image charge repulsion even in the absence of any surface charge, and the ion distribution -- the probability of finding the ion at any location -- should reflect the image charge interaction through the Boltzmann factor. This was the case studied in the pioneering work of Wagner\cite{wagner}, and Onsager and Samaras\cite{onsager} (WOS) for the surface tension of electrolyte solutions. It is rather odd that this interaction should become absent from the Boltzmann factor for the distribution of mobile ions in the weak-coupling limit when the surface becomes charged. It is also rather curious that the image interaction, which is absent from the Boltzmann factor in the Netz-Kandu$\check{\rm c}$-Podgornik (NKP) approach\cite{Netz,Netz2,Kanduc} in the weak-coupling limit, ``re-emerges" in the Boltzmann factor in the strong-coupling limit, though in a different form (through a fugacity expansion)\cite{Jho,Kanduc,Podgornikrev}. Taking zero surface charge as the limiting case of the {\it physical} weak-coupling condition, it is clear that the NKP and WOS approaches give drastically different descriptions of the same system.
It is also difficult to physically reconcile the absence of the image interaction from the Boltzmann factor in the weak-coupling limit with its ``re-emergence" in the strong-coupling limit in the NKP approach. Furthermore, ion depletion near a weakly charged dielectric interface has been observed in Monte Carlo simulations\cite{Netz,Levinsimulation} as well as predicted by the hypernetted chain approximation (HNC) integral equation theory that includes the image charge interactions\cite{HNC}. In this work, we clarify the origin of these discrepancies by re-examining the role of the image charge interaction in the {\it physical} weak-coupling limit. We show that in the presence of a dielectric discontinuity, the {\it physical} weak-coupling limit is not described by the so-called weak-coupling theory if the latter is meant to be the PB theory or the PB theory with small fluctuation corrections. The image charge repulsion creates a boundary layer which cannot be captured by the NKP approach. A nonperturbative approach yields a modified Poisson-Boltzmann equation, in which a screened, self-consistently determined image charge interaction appears in the Boltzmann factor for the ion concentration for any surface charge density. The WOS theory is an approximation of the more general framework presented here in the special case of zero surface charge. To see the origin of the boundary layer, we start with an analysis of the relevant length scales for the counterion-only system. Consider a charged planar surface at $z=0$ with charge density $\sigma$ separating an aqueous solution ($z>0$) from a semi-infinite plate ($z<0$). The solvent and plate are taken to be dielectric continua with dielectric constants $\varepsilon_S$ and $\varepsilon_P$, respectively, with $\varepsilon_P\ll\varepsilon_S$. Now consider a counterion of valency $q$ at distance $z$ away from the surface. The attraction between the test ion and the charged surface is $E_{sur}=2\pi q l_B \sigma z = z/l_{GC}$, whereas the repulsion due to its image charge is $E_{im}=f q^2 l_B /(2z)$, where $l_B=e^2/(4\pi \varepsilon_0\varepsilon_S kT)$ is the Bjerrum length with $\varepsilon_0$ denoting the vacuum permittivity, $l_{GC}=1/(2 \pi q \sigma l_{B})$ is the Gouy-Chapman length and $f=(\varepsilon_S-\varepsilon_P)/(\varepsilon_S+\varepsilon_P)$ represents the dielectric contrast between the two media. Balancing $E_{sur}$ with $E_{im}$ results in a characteristic length: \begin{equation} d=\left(f/2\right)^{1/2}q \left(l_B l_{GC}\right)^{1/2} \label{eq1.1} \end{equation} Introducing the coupling parameter $\Xi=q^2 l_B/l_{GC}$,\cite{Netz2} we see that \begin{equation} d \sim l_B\, \Xi^{-1/2} \quad {\rm and} \quad d/l_{GC} \sim \Xi^{1/2} \label{eq1.2} \end{equation} Thus, as the coupling strength $\Xi$ goes to zero, $d$ itself diverges, but the ratio of $d$ to $l_{GC}$ (noting that $l_{GC}$ is the characteristic length scale for the double layer in the PB theory) goes to zero. This is a typical feature of a {\it boundary layer}. Physically, the competition between the surface charge attraction and the image charge repulsion gives rise to a depletion boundary layer. Since the perturbation approach performs an expansion in powers of $\Xi$\cite{Netz,Netz2,Kanduc} (which results from nondimensionalizing all the lengths by the longest length scale $l_{GC}$), information on the smaller length scale -- the depletion boundary layer -- is lost.
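As a concrete illustration, the following minimal numerical sketch evaluates these length scales (an illustration only, assuming the room-temperature value $l_B \approx 0.7$ nm for water and the dielectric constants and surface charges used later in Fig.~1):
\begin{verbatim}
import numpy as np

l_B = 0.7                          # Bjerrum length of water at room T [nm]
f = (80.0 - 2.5) / (80.0 + 2.5)    # dielectric contrast (eps_S=80, eps_P=2.5)
for q in (1, 2, 3):                # counterion valency
    sigma = 1.0 / (100.0 * q)      # surface charge density [e/nm^2]
    l_GC = 1.0 / (2.0 * np.pi * q * sigma * l_B)     # Gouy-Chapman length
    Xi = q**2 * l_B / l_GC                           # coupling parameter
    d = np.sqrt(f / 2.0) * q * np.sqrt(l_B * l_GC)   # boundary layer, Eq. (1)
    print(f"q={q}: l_GC={l_GC:5.1f} nm, Xi={Xi:5.3f}, "
          f"d={d:4.2f} nm, d/l_GC={d / l_GC:5.3f}")
\end{verbatim}
For $q=1$ this reproduces $l_{GC}=22.7$ nm and $\Xi=0.031$, and gives $d\approx 2.7$ nm: the boundary layer is small compared with $l_{GC}$, yet grows linearly with the valency $q$ at fixed $l_{GC}$, in line with Eq.~\ref{eq1.1}.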
Although this analysis is performed explicitly for the counterion-only system, the depletion boundary layer persists when salt ions are introduced. \section{A Gaussian Variational Approach} \label{sec:level2} The presence of a boundary layer necessitates a nonperturbative treatment. Using the renormalized Gaussian variational approach\cite{Orland}, one of us\cite{Wang1} derived a general theory for electrolyte solutions with dielectric inhomogeneity. In this section, we first recapitulate the key steps in the derivation of the general theory and then specialize to the case of a charged plate with dielectric discontinuity. \subsection{General Theory} \label{sec:levelA} We consider a general system with a fixed charge distribution $e \rho_{ex}({\bf r})$ in the presence of small mobile cations of valency $q_+$ and anions of valency $q_-$ in a dielectric medium with a spatially varying dielectric function $\varepsilon ({\bf r})$; $e$ is the elementary charge. The charge on an ion is assumed to have a finite spread given by a short-range distribution function $h_{\pm} ({\bf r}-{\bf r}_i)$ for the {\it i}th ion, with the point-charge model corresponding to $h_{\pm} ({\bf r}-{\bf r}_i)=q_{\pm} \delta ({\bf r}-{\bf r}_i)$. The introduction of a finite charge distribution on the ion avoids the divergence of the short-range component of the self energy -- the local solvation energy -- resulting from the point-charge model, and reproduces the Born solvation energy\cite{Wang1}. However, as the emphasis of this work is on the long-range component of the self energy -- the image charge interaction -- which is finite for point charges, we will eventually take the point-charge limit for the ion. The diverging but constant local solvation energy in the point-charge limit can be regularized by subtracting the same-point Green function in the bulk, as discussed below. Since we work in the low-concentration regime for the ions ($c \le 0.1M$, the Debye-H\"uckel regime), the excluded-volume effects of the ions are unimportant, and so we treat the ions as volumeless. The total charge density, including both the fixed charges and the mobile ions, is \begin{eqnarray} e \rho({\bf r}) = &e& \rho_{ex}({\bf r}) +e \int d{\bf r}' [ h_+ ({\bf r}'-{\bf r})\hat{c}_+({\bf r}') \nonumber \\ &-&h_- ({\bf r}'-{\bf r})\hat{c}_-({\bf r}') ] \label{eq2.1} \end{eqnarray} with $ \hat{c}_{\pm} ({\bf r}) = \sum_{i=1}^{n_{\pm}} \delta ({\bf r}-{\bf r}_i) $ the particle density operator for the ions. The Coulomb energy of the system, {\it including the self energy}, is \begin{equation} H= \frac{e^2}{2 } \int d{\bf r} d {\bf r}' \rho({\bf r})G_0 ({\bf r},{\bf r}' )\rho({\bf r}') \label{eq2.2} \end{equation} where $G_0 ({\bf r},{\bf r}' )$ is the Coulomb operator given by \begin{equation} - \nabla \cdot \left[\varepsilon_0 \varepsilon ({\bf r}) \nabla G_0({\bf r},{\bf r}' ) \right] = \delta({\bf r}-{\bf r}' ) \label{eq2.3} \end{equation} It is convenient to work with the grand canonical partition function \begin{eqnarray} \Omega=\sum_{n_+=0}^{\infty} \sum_{n_-=0}^{\infty} \frac{{\rm e} ^{n_+ \mu_+} {\rm e}^{ n_- \mu_-}}{n_+!n_-! v_+^{n_+} v_-^{n_-}} \nonumber \\ \times \int \prod_{i=1}^{n_+} d {\bf r}_i \prod_{j=1}^{n_-} d {\bf r}_j \exp \left(- \beta H \right) \label{eq2.4} \end{eqnarray} where $\mu_{\pm}$ are the chemical potentials of the cations and anions, and $v_{\pm}$ are characteristic volume scales, which have no thermodynamic consequence.
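Before proceeding, we record the Gaussian functional identity that underlies the transformation performed next (a standard result, quoted here schematically and up to normalization; $\epsilon =\varepsilon_0 \varepsilon/(\beta e^2)$ is the scaled permittivity defined below Eq.~\ref{eq2.7}):
\[
{\rm e}^{-\beta H} = \exp \left[ - \frac{\beta e^{2}}{2} \int d{\bf r}\, d{\bf r}'\, \rho({\bf r})\, G_{0}({\bf r},{\bf r}')\, \rho({\bf r}') \right] = \frac{1}{Z_{0}} \int D\phi\, \exp \left[ - \frac{1}{2} \int d{\bf r}\, \epsilon \left( \nabla \phi \right)^{2} - i \int d{\bf r}\, \rho\, \phi \right]
\]
This identity trades the pairwise Coulomb interaction for a fluctuating field $\phi$ coupled linearly to the total charge density; carrying out the sums over $n_{\pm}$ in Eq.~\ref{eq2.4} then exponentiates the ionic degrees of freedom.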
We perform the usual Hubbard-Stratonovich transformation to Eq.~\ref{eq2.4} by introducing a field variable $\phi({\bf r})$, which yields \begin{equation} \Omega= \frac{1}{Z_0} \int D \phi \exp \left\{ - L [\phi] \right\} \label{eq2.5} \end{equation} The ``action" $L$ is \begin{eqnarray} L= \int d {\bf r} &[& \frac{1}{2} \epsilon (\nabla \phi)^2 + i \rho_{ex} \phi - \Gamma \lambda_+ {\rm e}^{- i {\hat h}_+ \phi }\nonumber \\ &-& \Gamma \lambda_- {\rm e}^{ i {\hat h}_-\phi } ] \label{eq2.6} \end{eqnarray} $Z_0$ is the normalization factor given by \begin{equation} Z_0= \int D \phi \exp \left[- \frac{1}{2} \int d {\bf r} \epsilon (\nabla \phi)^2 \right] = \left[\det {\bf G_0}\right]^{1/2} \label{eq2.7} \end{equation} where $G_0^{-1}= \nabla_{{\bf r}} \cdot \left[ \epsilon ({\bf r}) \nabla_{{\bf r}'} \delta ({\bf r}-{\bf r}') \right]$ is the inverse of the Coulomb operator in Eq.~\ref{eq2.3}, $\epsilon =\varepsilon_0 \varepsilon/(\beta e^2)$ is the scaled permittivity, and $\lambda_{\pm}={\rm e}^{ \mu_{\pm}}/ v_{\pm}$ are the fugacities of the ions. We have used the shorthand notation ${\hat h}_{\pm} \phi$ to represent the local spatial averaging of $\phi$ by the charge distribution function: ${\hat h}_{\pm} \phi= \int d{\bf r}' h_{\pm} ({\bf r}'-{\bf r}) \phi ({\bf r}')$. The function $\Gamma({\bf r})$ in Eq.~\ref{eq2.6} is introduced to constrain the mobile ions to the solvent region. Equations \ref{eq2.5} and \ref{eq2.6} are the exact field-theoretic representation of the partition function. Because the action is nonlinear, the partition function cannot be evaluated exactly. The lowest-order approximation corresponds to taking the saddle-point contribution of the functional integral, which results in the Poisson-Boltzmann equation. A systematic loop expansion can be developed to account for fluctuations around the saddle point order by order. In practice, most theoretical treatments only include one-loop corrections. The loop expansion involves expanding the action around the saddle point in polynomial form. However, the fluctuation part of the electrostatic potential due to the image charge interaction becomes very large near the dielectric interface; thus any finite-order expansion of the $\langle {\rm e}^{\mp i {\hat h}_{\pm} \phi } \rangle$ term, which becomes the Boltzmann factor in the ion distribution (see Eq.~\ref{eq2.11}), is problematic. The absence of the image-charge self-energy in the Boltzmann factor in the perturbation approaches\cite{Ninham1,Ninham2,Netz2,Kanduc,Podgornik,Podgornik2,Podgornikrev} is thus a consequence of the low-order expansion of the exponential of an imaginary variable that can become quite large. To develop a nonperturbative theory, we perform a variational calculation of Eq.~\ref{eq2.5} using the Gibbs-Feynman-Bogoliubov bound for the grand free energy $W = -\ln \Omega$, which yields \begin{equation} W \le -\ln \Omega_{ref} + \langle L[\phi]- L_{ref}[\phi] \rangle \label{eq2.8} \end{equation} where \begin{equation} \Omega_{ref}=\frac{1}{Z_0} \int D \phi \exp \left\{ -L_{ref}\left[\phi \right]\right\} \label{eq2.9} \end{equation} The average $\langle \cdots \rangle$ is taken in the reference ensemble with the action $L_{ref}$.
We take the reference action to be of the Gaussian form centered around the mean $-{i \psi}$ \begin{equation} L_{ref}=\frac{1}{2} \int d {\bf r} d {\bf r}' [\phi ({\bf r})+ {i \psi} ({\bf r})] G^{-1} ({\bf r},{\bf r}') [\phi ({\bf r}') +{i \psi}({\bf r}')] \label{eq2.10} \end{equation} where $G^{-1}$ is the functional inverse of the Green function $G$, and the factor $i$ is introduced to keep the mean electrostatic potential $\psi$ real. $\psi$ and $G$ are taken to be the variational parameters of the grand free energy functional. With the Gaussian reference action Eq.~\ref{eq2.10}, all the terms on the r.h.s. of Eq.~\ref{eq2.8} can be evaluated analytically (see Appendix A for the detailed derivation). The lower bound of the free energy is obtained by extremizing the r.h.s. of Eq.~\ref{eq2.8} with respect to $\psi$ and $G$, which results in the following two variational conditions: \begin{equation} - \nabla \cdot \left( \epsilon \nabla \psi \right) = \rho_{ex} + \Gamma \lambda_+ q_+ {\rm e}^{ - q_+ \psi - u_+ } - \Gamma \lambda_- q_- {\rm e}^{q_- \psi - u_- } \label{eq2.11} \end{equation} \begin{equation} - \nabla \cdot \left[ \epsilon \nabla G({\bf r},{\bf r}') \right] + 2 I({\bf r}) G({\bf r},{\bf r}')= \delta ({\bf r}-{\bf r}') \label{eq2.12} \end{equation} where $u_{\pm}$ is the self energy of the ions \begin{equation} u_{\pm} ({\bf r})= \frac{1}{2} \int d {\bf r}' d {\bf r}'' h_{\pm} ({\bf r}-{\bf r}') G({\bf r}',{\bf r}'') h_{\pm} ({\bf r}''-{\bf r}) \label{eq2.13} \end{equation} $I({\bf r})=\left[q_+^2 c_+({\bf r})+q_-^2 c_-({\bf r})\right]/2$ is the local ionic strength, with the concentrations of cations and anions given by \begin{equation} c_{\pm} ({\bf r}) =\lambda_{\pm} \Gamma \exp \left[ \mp q_{\pm} \psi ({\bf r}) -u_{\pm} ({\bf r}) \right] \label{eq2.14} \end{equation} Eqs. \ref{eq2.11}-\ref{eq2.13} form a set of self-consistent equations for the mean electrostatic potential $\psi({\bf r})$, the correlation function (Green function) $G({\bf r},{\bf r}')$ and the self energy $u_\pm({\bf r})$ of the ions, which are the key equations for weakly coupled electrolytes~\cite{Wang1,Carnie}. Eq.~\ref{eq2.11} has the same form as the PB equation, but now with the self-energy of the ions appearing in the Boltzmann factor. The appearance of the self energy in the Boltzmann factor reflects the nonlinear feedback of the fluctuation effects, an aspect that is missing in a perturbation expansion. The self-energy given by Eq.~\ref{eq2.13} is a unified expression that includes the Born energy of the ion, the interaction between the ion and its ionic atmosphere, as well as the distortion of the electric field by a spatially varying dielectric function, the latter taking the form of the image charge interaction near a dielectric discontinuity. In general, the self energy is spatially varying if there is spatial inhomogeneity in either the dielectric constant or the ionic strength. Making use of the variational conditions Eqs. \ref{eq2.11} and \ref{eq2.12} and evaluating the fluctuation part of the free energy arising from the Gaussian integrals by the charging method (as shown in Appendix B), we obtain a simple expression for the equilibrium grand free energy: \begin{eqnarray} W &=&- \int d {\bf r}\left( c_+ + c_- \right) + \frac{1}{2} \int d {\bf r} \psi \left( \rho_{ex} - q_+ c_+ + q_- c_- \right) \nonumber \\ &+& \int d {\bf r} I({\bf r}) \int_0^1 d \eta \left[ G ({\bf r},{\bf r}; \eta) - G ({\bf r},{\bf r})\right] \label{eq2.15} \end{eqnarray} where $\eta$ is a ``charging" variable.
$G ({\bf r},{\bf r}; \eta)$ is the same-point Green function obtained by solving Eq.~\ref{eq2.12} but with the term $I({\bf r})$ replaced by $\eta I({\bf r})$. Note that the free energy expression Eq.~\ref{eq2.15} is finite even in the point-charge limit. Although both $G ({\bf r},{\bf r}; \eta)$ and $G ({\bf r},{\bf r})$ are infinite, their divergent parts exactly cancel; the remaining difference is finite and accounts for the leading-order ion-ion correlation effect. Unlike previous field-theoretical treatments\cite{Netz2,Podgornik2,Dean}, {\em no microscopic cut-off is needed in our theory}. \subsection{Weakly Charged Plate} \label{sec:levelB} We now specialize to the case of a charged plate with dielectric discontinuity in contact with an electrolyte solution. The fixed external charge density is then $ \rho_{ex} ({\bf r})= \sigma \delta (z)$. For concreteness, we take the surface charge to be positive. Both $\Gamma$ and $\varepsilon({\bf r})$ are now step functions: $\Gamma=0$ and $\varepsilon({\bf r})=\varepsilon_{P}$ for $z<0$; $\Gamma=1$ and $\varepsilon({\bf r})=\varepsilon_{S}$ for $z>0$. In the solvent region ($z>0$), Eq.~\ref{eq2.11} becomes \begin{equation} -\epsilon_{S} \frac{\partial^2 \psi (z)}{\partial z^2}= \lambda_+ q_+ {\rm e}^{ - q_+ \psi - u_+ } - \lambda_- q_- {\rm e}^{q_- \psi - u_- } \label{eq2.16} \end{equation} with the boundary condition $(\partial \psi / \partial z)_{z=0}=- \sigma /\epsilon_{S}$, which is obtained by integrating Eq.~\ref{eq2.11} between $z=0^-$ and $z=0^+$ and noting that $(\partial \psi / \partial z) = 0$ for $z<0$. Since the solvent has a uniform dielectric constant, the Born energy is constant and can be absorbed into the reference chemical potential. It is then convenient to single out this constant contribution by rewriting Eq.~\ref{eq2.13} as \begin{eqnarray} &&u_{\pm} ({\bf r}) = \frac{1}{2} \int d {\bf r}' d {\bf r}'' h_{\pm} ({\bf r}-{\bf r}') \frac {1}{4 \pi \epsilon_{S} \vert {\bf r}' - {\bf r}'' \vert } h_{\pm} ({\bf r}''-{\bf r})\nonumber \\ &&+ \frac{1}{2} \int d {\bf r}' d {\bf r}'' h_{\pm} ({\bf r}-{\bf r}') \left[ G({\bf r}',{\bf r}'')- \frac {1}{4 \pi \epsilon_{S} \vert {\bf r}' - {\bf r}'' \vert }\right] \nonumber \\ && \times h_{\pm} ({\bf r}''-{\bf r})\label{eq2.13a} \end{eqnarray} The first term gives the constant Born energy of the ion, $q_{\pm}^2/(8\pi \epsilon_{S} a_{\pm})$, with $a_{\pm}$ the Born radius of the ion\cite{Wang1}. The remaining term is finite in the point-charge limit. We can thus take the $a_{\pm} \to 0$ limit for this term in the final expression or, equivalently and more conveniently, directly take the point-charge limit in the distribution $h_{\pm} ({\bf r}-{\bf r}')= \delta ({\bf r}-{\bf r}')$. The nontrivial and nonsingular part of the self energy $u_{\pm}^{*}$ is then \begin{equation} u_{\pm}^{*} ({\bf r})=\frac {q_{\pm}^2 } {2} \lim_{{\bf r}' \to {\bf r}} \left[ G({\bf r}, {\bf r}')- \frac{1}{4 \pi \epsilon_{S} \vert {\bf r} - {\bf r}' \vert}\right] \label{eq2.17} \end{equation} To avoid the complexity of solving the equation for the Green function (\ref{eq2.12}), previous work usually invoked approximate schemes, e.g., replacing the spatially varying screening length by the bulk Debye length \cite{Monica,onsager,Levinsimulation,Levin1,Levin2} or using a WKB-like approximation\cite{Carnie,Buff,Levine}. However, the screening of the image force at the dielectric interface is inhomogeneous, long-ranged and cumulative, and cannot be captured fully by these approximate methods.
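For orientation, the simplest of these approximations -- screening the bare image interaction with the constant bulk Debye length $\kappa_b^{-1}$ -- gives the familiar Onsager-Samaras form (quoted here for comparison rather than derived from the present framework; in units of $kT$, with $l_B$ the Bjerrum length, and with a numerical prefactor that differs from the order-of-magnitude scaling estimate $E_{im}$ of the Introduction):
\[
u_{\pm}^{*}(z)\,\approx\,\frac{f\, q_{\pm}^{2}\, l_{B}}{4z}\,{\rm e}^{-2\kappa_{b}z}
\]
This replaces the self-consistent, laterally resolved screening contained in Eq.~\ref{eq2.17} by a single bulk decay constant $2\kappa_{b}$.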
In this work, we perform the full numerical solution of the Green function, which provides the most accurate treatment of the inhomogeneous screening effect at the dielectric interface. To solve for the Green function in the planar geometry, it is convenient to work in a cylindrical coordinate system $(r,z)$. Noting the isotropy and translational invariance in the directions parallel to the surface, we can perform a Fourier transform in the parallel directions to write \begin{equation} G(r,z,z')=\frac{1}{2\pi} \int _0 ^\infty kdk J_0(k r) {\hat G}(k,z,z') \label{eq2.18} \end{equation} where $J_0$ is the zeroth-order Bessel function. ${\hat G}(k,z,z')$ now satisfies \begin{equation} -\frac{\partial^2 {\hat G}(k,z,z')}{\partial z^2}+\left[\kappa^2 (z)+k^2\right] {\hat G}(k,z,z')=\frac{1}{\epsilon_S} \delta(z-z') \label{eq2.19} \end{equation} for $z>0$, with the boundary condition $\epsilon_S \partial {\hat G} / \partial z- k \epsilon_P {\hat G} =0$ at $z=0$. $\kappa(z)=\left[2 I (z)/\epsilon_{S}\right]^{1/2}$ can be considered the inverse of the local Debye screening length. Eq.~\ref{eq2.19} is solved numerically using the finite difference method\cite{Jiaotong,numerical}; a schematic implementation is sketched at the end of this subsection. The free-space Green function satisfying $-\partial^2{\hat G}_0/\partial z^2 + k^2 {\hat G}_0 = \delta(z-z')/\epsilon_S$, though analytically solvable, is also solved numerically along with Eq.~\ref{eq2.19} to ensure consistent numerical accuracy in removing the singularity of the same-point Green function. The nondivergent part of the self energy is then \begin{equation} u_{\pm}^{*} (z)=\frac{q_{\pm}^2}{4\pi} \int _0 ^\infty \left[ {\hat G}(k,z,z) - {\hat G}_0(k,z,z)\right] kdk \label{eq2.20} \end{equation} Far away from the plate surface ($z \to \infty$), the ion concentration approaches the bulk value $c^b_{\pm}$, and from Eq.~\ref{eq2.14} (where we set $\psi_b=0$, or equivalently absorb a constant $\psi_b$ into the definition of the fugacity), the fugacity of the ions is given by $\lambda_\pm=c^b_{\pm} \exp\left[ -q^2_\pm \kappa_b/(8\pi \epsilon_{S}) \right]$, where $\kappa_b$ is the inverse screening length in the bulk. Note that this relationship automatically takes into account the Debye-H\"uckel correction to the fugacity due to ion-ion correlations. The theory presented above is derived explicitly with added salt. However, application to the counterion-only system is straightforward through an ensemble transformation\cite{Netz2}.
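For concreteness, a minimal finite-difference sketch of the solver for Eq.~\ref{eq2.19} is given below (an illustration rather than the production code used for the figures; the grid, units and boundary placement are arbitrary choices):
\begin{verbatim}
import numpy as np

def G_same_point(k, kappa2, h, eps_S=1.0, eps_P=2.5/80.0):
    """Same-point Green function of the transformed equation above.

    kappa2 holds kappa^2(z) on a uniform grid z_j = j*h; the Robin
    condition eps_S dG/dz = k eps_P G at z = 0 is imposed with a
    one-sided difference, and G = 0 at the (distant) outer boundary.
    """
    N = len(kappa2)
    A = (np.diag(2.0 / h**2 + kappa2 + k**2)
         - np.diag(np.ones(N - 1), 1) / h**2
         - np.diag(np.ones(N - 1), -1) / h**2)
    A[0, :] = 0.0
    A[0, 0] = eps_S / h + k * eps_P   # Robin condition at the interface
    A[0, 1] = -eps_S / h
    A[-1, :] = 0.0
    A[-1, -1] = 1.0                   # Dirichlet condition far from the wall
    # Column j of inv(A) solves for a discrete delta of weight 1/(eps_S*h)
    # at z_j; the diagonal gives the same-point values (interior points).
    return np.diag(np.linalg.inv(A)) / (eps_S * h)
\end{verbatim}
The integrand of Eq.~\ref{eq2.20} then follows by subtracting the free-space same-point value, obtained from the analogous discretization with $\kappa^2=0$ (analytically, ${\hat G}_0(k,z,z)=1/(2k\epsilon_S)$ in the bulk), and the self energy is assembled by quadrature over $k$. In the actual calculation, this solver sits inside the self-consistent loop of Eqs.~\ref{eq2.11}-\ref{eq2.14}.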
\section{Numerical Results and Discussions} \label{sec:level3} In this section, we apply the theory presented in the last section to an electrolyte solution in contact with a weakly charged plate. We first examine the counterion-only system to highlight the depletion-boundary-layer issue and then study the consequences of the depletion boundary layer for the structure and thermodynamic properties of the electric double layer with added salts. \subsection{Counterion-only Case} For the counterion-only system, the PB theory admits an analytical solution for the counterion distribution, $c(z)=1/\left[2 \pi l_B q^2 (z^2+l_{GC}^2)\right]$, which is characterized by a single length scale, the Gouy-Chapman length. The counterion concentration profile is shown in Fig. 1 as the dashed line; it decays monotonically. In contrast, when there is a dielectric discontinuity, our theory predicts qualitatively different behavior. The presence of the depletion boundary layer inside the Gouy-Chapman length is obvious, and is consistent with results from Monte Carlo simulation\cite{Netz,Levinsimulation}. Within the depletion boundary layer ($z<d \sim l_B \Xi^{-1/2}$), image charge repulsion is dominant and ions are excluded from the plate surface. In the point-charge model, the self energy diverges at the plate surface; thus the ion concentration vanishes at $z=0$. The vanishing of the ion concentration obviously contradicts the PB prediction and is also incapable of being captured by any perturbative correction around the PB limit\cite{Ninham1,Ninham2,Netz2,Kanduc,Podgornik,Podgornik2,Podgornikrev}. Simply put, these perturbative approaches fail to satisfy the boundary condition for the ion concentration at $z=0$, as is typical of boundary layer problems. Beyond the depletion boundary layer ($z>d$), surface charge attraction prevails and the ion concentration approaches the PB profile sufficiently far away from the surface. \begin{figure}[hbt] \centering \includegraphics[width=0.45\textwidth]{Fig1.eps} \caption{Ion concentration for the counterion-only system for different ion valencies $q$. $\varepsilon_S=80$ and $\varepsilon_P=2.5$. $\sigma=1/(100q) (e/nm^2)$. The Gouy-Chapman length is kept constant ($l_{GC}=22.7nm$) for counterions of different valencies. The coupling parameter $\Xi$ is $0.031 q^2$. \label{fig:1}} \end{figure} The PB theory predicts a universal profile $q^2 c(z)$ for counterions of different valencies when the Gouy-Chapman length is kept fixed. However, from our scaling analysis in the Introduction, the depletion boundary layer should increase linearly with the valency; see Eq.~\ref{eq1.1}. This prediction is borne out by our numerical results, as shown in Fig. 1. Therefore, the boundary layer problem becomes more severe for ions of high valency. The scaling of the boundary layer thickness with the coupling parameter predicted by Eq.~\ref{eq1.2} is also confirmed by our numerical results (data not shown). \subsection{With Symmetric Salt} When there are added salt ions in the solution, the image force affects the distribution of both the counterions and the coions. The PB theory predicts that the double layer structure is characterized by the Debye screening length $\kappa^{-1}$ under the condition $\kappa^{-1} \ll l_{GC}$, with a monotonically decreasing counterion and a monotonically increasing coion distribution. In contrast, in our theory both the counterion and coion concentrations must vanish at the surface, but their approach to the bulk concentration differs: the coion concentration increases monotonically, while the counterion concentration goes through an overshoot. Furthermore, we find two regimes depending on the relative width of the screening length and the boundary layer thickness, the latter being itself affected by the screening. At low salt concentration, $\kappa^{-1} \gg d$ and ion depletion is confined to a boundary layer very close to the plate surface; both the ion distribution and the electrostatic potential approach the profiles predicted by PB beyond the boundary layer. As the salt concentration increases, the width of the depletion boundary layer becomes comparable to the screening length and the two length scales remain comparable thereafter; the image charge interaction then affects the entire range of the double layer. In Figure 2 we show the ion distribution of a $0.1M$ 1:1 electrolyte calculated by our theory. The contrast with the PB result is quite striking. \begin{figure}[hbt] \centering \includegraphics[width=0.45\textwidth]{Fig2.eps} \caption{Ion concentration profile for a 1:1 electrolyte solution with $c^b=0.1M$.
$\varepsilon_S=80$, $\varepsilon_P=2.5$ and $\sigma=1e/100nm^2$. \label{fig:2}} \end{figure} \begin{figure}[b] \centering \includegraphics[width=0.45\textwidth]{Fig3.eps} \caption{The surface excess free energy $f_s$ as a function of the salt concentration for a 1:1 electrolyte solution. $\varepsilon_S=80$, $\varepsilon_P=2.5$ and $\sigma=1e/100nm^2$. \label{fig:3}} \end{figure} The change in the double layer structure will affect a wealth of interfacial properties. As an example, we show in Figure 3 the surface excess free energy $f_s= \int_0 ^{\infty} (w-w^{b})dz$ (where $w$ is the grand free energy density and $w^{b}$ is its bulk value) as a function of the salt concentration. The PB theory predicts a monotonic decrease of $f_s$ that scales approximately with $(c^b)^{-1/2}$, which arises from the electric-field contribution to the free energy due to the surface charge\cite{Onuki,Wang4}. With the inclusion of the image charge interaction, our theory shows that $f_s$ varies nonmonotonically. At low salt concentration ($c^b < 10^{-3}M$), $f_s$ calculated by our theory follows the PB result closely; this is because the region affected by the image charge repulsion is narrow compared to the screening length, giving a relatively small contribution to the surface excess energy when integrated over the entire solution. As the salt concentration increases ($c^b > 10^{-2}M$), our theory predicts a sign change in the slope of $f_s$ vs. $c^b$: $f_s$ increases with increasing $c^b$, {\em opposite to} the PB result. In this concentration regime, the width of the depletion boundary layer is comparable to the Debye screening length, and the entire double layer region is affected by the image charge interaction, as shown in Figure 2. The increase in $f_s$ is now largely due to the depletion (i.e., negative adsorption) of mobile ions. The slope of $\log(f_s)$ vs $\log(c^b)$ is less than $1$ because of the increased screening of the image force as the salt concentration increases. The sign change of $\partial f_s /\partial c^b$ corresponds to the crossover in the length-scale relationship from $\kappa^{-1} \gg d$ to $\kappa^{-1} \approx d$. As the excess surface energy determines the spreading of a liquid drop on a solid surface, this result implies a qualitatively different behavior for the spreading of a drop of electrolyte solution from that predicted by the PB theory. We also note that the nonmonotonic behavior discussed here shares the same physics as the Jones-Ray effect\cite{Onuki,JonesRay,Bier,Wang4} for the interfacial tension observed at the water/air and water/oil interfaces. \subsection{With Asymmetric Salt} The effects of the image charge become more complex if the salt ions are of unequal valency. Because of the quadratic dependence of the image force on the valency, the higher-valent ions are pushed further away from the surface, necessitating a compensation by the lower-valent ions in the space in between. The difference in the image force between the counterions and the coions induces additional charge separation, and hence an electric field, within the depletion boundary layer. The induced net charge within the boundary layer alters the effective surface charge, which can affect the double layer structure outside the boundary layer. For the case where the coions are of higher valency than the counterions, the induced electric field due to unequal ion depletion counteracts the field generated by the surface charge.
With increasing salt concentration, the induced field can exceed that generated by the bare surface charge, leading to a sign change in the effective surface charge known as charge inversion. The double layer structure then becomes qualitatively different from that predicted by the PB theory, as shown in Figure 4: the electrostatic potential is of the opposite sign to the PB result. Excess counterions accumulate in the depletion boundary layer, overcharging the plate surface, while the coions are enriched outside the boundary layer, serving to screen the inverted surface charge. In this case, the PB theory fails qualitatively to describe the entire double layer structure. \begin{figure}[th] \centering \includegraphics[width=0.45\textwidth]{Fig4a.eps} \includegraphics[width=0.45\textwidth]{Fig4b.eps} \caption{Charge inversion for a 0.05M 2:1 electrolyte solution near a positively charged plate. (a) Dimensionless electrostatic potential and (b) net charge density ($q_+ c_+ -q_- c_-$). $\varepsilon_S=80$, $\varepsilon_P=2.5$ and $\sigma=1e/100nm^2$. \label{fig:4}} \end{figure} \subsection{Uncharged Surface: Image Charge vs. Correlation Effect} The case of an electrolyte solution next to an uncharged surface ($\sigma=0$) reduces to the problem treated by Wagner, Onsager and Samaras. The self energy due to the image charge repulsion appears in the Boltzmann factor and is responsible for the depletion layer in the ion distribution near the surface, as shown in Figure 5. Note, however, that in the original WOS theory, as well as in subsequent treatments \cite{Monica,onsager,Levinsimulation,Levin1,Levin2}, the image charge term was added to the Boltzmann factor {\em ad hoc} based on physical intuition, whereas in our theory its appearance is the result of a systematic derivation. Therefore, our theory not only recovers the WOS theory (upon making additional approximations, e.g., using the constant bulk screening length for the image force potential) but also provides the means for systematically improving it. First, our theory captures the anisotropic screening cloud around an ion near the interface due to the spatially varying ion concentration near the surface. The inhomogeneous ionic cloud in the depletion layer and its effect on the screening of the test ion are treated self-consistently in our theory, whereas this inhomogeneous screening is missing in the WOS theory. Second, by including the mean electrostatic potential generated by charge separation, our theory can describe salt solutions with ions of unequal valency, such as the 2:1 electrolyte shown in Figure 5(b). Finally, our theory provides a more accurate expression for the excess free energy by properly accounting for the inhomogeneous screening effect and the fluctuation contribution to the free energy. Thus, we expect our theory to better predict the surface tension of electrolyte solutions in comparison with the WOS theory, especially at higher salt concentrations (where an accurate treatment of the screening becomes more important). The inhomogeneous screening results in a correlation effect that can lead to ion depletion near the surface\cite{Monica}: an ion interacts more favorably with its full ionic atmosphere far away from the surface than in the vicinity of the surface. This correlation effect is stronger for multivalent ions, pushing them further from the interface than the monovalent ions.
The correlation-induced ion depletion near the surface can take place both with and without the dielectric contrast, and is well captured by our theory, as shown in Figure 5. While the ion depletion without the dielectric contrast is induced by the correlation alone, the ion depletion in the presence of the dielectric contrast is due to both the correlation effect and the image charge effect, which enhance each other. As a result, both the ion depletion and, in the case of the 2:1 electrolyte, the charge separation are more pronounced in the presence of the image charge than with correlation alone. Ion depletion due to correlation alone is most noticeable when the surface is uncharged. When the surface is charged, the surface attraction for the counterions dominates over this correlation effect in the absence of image charge repulsion. In contrast, depletion due to image charge repulsion persists for both the counterions and the coions even when the surface is charged. \begin{figure}[ht] \centering \includegraphics[width=0.45\textwidth]{Fig5a.eps} \includegraphics[width=0.45\textwidth]{Fig5b.eps} \caption{(Color online) Ion concentration (scaled by the bulk ion concentration $c_{\pm}^{b}$) for (a) a 0.01M 1:1 electrolyte solution ($c_{+}=c_{-}$) and (b) a 0.01M 2:1 electrolyte solution near an uncharged interface ($\sigma=0$) with dielectric contrast ($\varepsilon_S=80$, $\varepsilon_P=2.5$), in comparison with the case without dielectric contrast ($\varepsilon_S=\varepsilon_P=80$). Profiles calculated by our theory are shown by colored lines; results from the PB theory are given as black dotted lines. \label{fig:5}} \end{figure} \section{Conclusions} \label{sec:level4} In this work, we have shown that the image charge repulsion creates a depletion boundary layer near a dielectric surface, which cannot be captured by a regular perturbation method. Using a nonperturbative approach based on a variational functional formulation, we find that the self energy of the ion, which includes contributions from both the image charge interaction and the interionic correlation, appears explicitly in the Boltzmann factor for the ion distribution, resulting in a self-energy-modified Poisson-Boltzmann equation as the appropriate theory for describing the {\em physical} weak-coupling condition. This image-charge self energy is not diminished by reducing the surface charge or the ionic strength of the solution; in the presence of a significant dielectric discontinuity, there is no limiting condition for which the PB theory is valid. For zero surface charge, our theory reduces to the WOS theory upon further approximations. Thus, our theory provides both the justification for the WOS theory and the means for systematically improving it, for example by including the mean electrostatic potential generated by the charge separation in salt solutions with unequal valency or other asymmetries between the cations and anions, such as different sizes and polarizabilities\cite{Levin1}. The weak-coupling condition in the presence of a dielectric discontinuity covers many soft-matter and biophysical systems. Many phenomena, such as the surface tension of electrolyte solutions\cite{Surfacetension1,Surfacetension2}, salt effects on bubble coalescence\cite{Bubble}, and the ion conductivity in artificial and biological ion channels\cite{Channel1,Channel2,Channel3}, cannot be explained, even qualitatively, by the PB theory.
The presence of the image charge interaction results in a very different picture of the electrical double layer from that provided by the PB theory, and can give rise to such phenomena as like-charge attraction and charge inversion even under the weak-coupling condition\cite{Wang3}; these phenomena have usually been associated with the strong-coupling condition. The PB theory has played a foundational role in colloid and interface science: the DLVO theory, the interpretation of the zeta potential, and the experimental determination of the surface charge and the Hamaker constant are all based on the PB theory\cite{Israelachvili}. With the inclusion of the image charge interaction, some of the well-known and accepted results will have to be reexamined. \begin{acknowledgments} Acknowledgment is made to the donors of the American Chemical Society Petroleum Research Fund for partial support of this research. \end{acknowledgments}
\section{Introduction} \label{sec:level1} Plasma is the dominant constituent of matter in the universe. The properties of a plasma are entirely different from those of ordinary gases and solids. Due to the presence of clusters of charged particles, a plasma exhibits collective behavior, which reflects the long-range Coulomb force between the plasma particles. There are two types of interactions in a plasma, namely charge-charge interactions and charge-neutral interactions. In charge-charge interactions, the charged particles interact according to Coulomb's law, while in charge-neutral interactions electric polarization fields are generated, which may be produced by the distortion of a neutral atom when it comes into contact with a charged particle. The range of this polarization field is limited to the order of the diameter of the atom, i.e., it is effective only over interatomic distances, where it perturbs the orbital motion of the electrons. This interaction also involves induced or permanent dipole moments. Furthermore, to explore the properties of a plasma, it is important to study the influence of applied electric and magnetic fields. Due to the high mobility of the electrons, plasmas are generally good electrical and thermal conductors. Charged particles in a plasma diffuse from regions of high density to regions of low density, driven by the particle density gradient. The electron, due to its lower mass and higher mobility, diffuses more readily than the ions. Moreover, a plasma can also sustain wave phenomena owing to its charged particles. In the low-frequency region, Alfven waves and magnetosonic waves are studied, whereas in the high-frequency region longitudinal electrostatic waves and transverse electromagnetic waves are studied. Many researchers have studied the behavior of electrically conducting fluid plasmas in the presence of a magnetic field. Alfven \cite{key-1} proposed the theory of magnetohydrodynamics (MHD) and suggested that an electrically conducting fluid can support the propagation of shear waves. Meyer-Vernet \cite{key-2} discussed how electromagnetic waves propagate in a cold plasma that contains both electric and magnetic charges. Correspondingly, Kambe \cite{key-3} constructed a mathematical formulation for compressible fluids, which provides an analogue of the Maxwell equations for viscous fluids. The magnetic field plays the role of a vorticity field, whereas the electric field plays the role of the Lamb vector field. This gives a complete analogue of electromagnetism in terms of fluid mechanics, where the fluid flow obeys Galilean symmetry whereas the electromagnetic field obeys Lorentz symmetry. Further, Thompson and Moeller \cite{key-4} have also formulated Maxwell-like equations for plasma particles. In mathematical physics, the study of four-dimensional particles like dyons, tachyons, etc., in various media can be carried out with the help of division algebras. There exist four normed division algebras \cite{key-5}: the real numbers, the complex numbers, the quaternions and the octonions. The quaternionic algebra \cite{key-6}, which is an extension of the complex numbers, can be expressed in four-dimensional Euclidean space \cite{key-7,key-8}. The quaternionic algebra has vast applications in multiple branches of theoretical physics.
Maxwell's equations in the presence of magnetic monopoles, as well as other classical equations of motion, have already been formulated in terms of quaternionic algebra \cite{key-9}. Moreover, Bisht \textit{et al}. \cite{key-10} discussed the MHD equations of a plasma of massive dyons carrying both electric and magnetic charge. Thus, keeping in mind the properties of the quaternionic algebra and its applications in theoretical physics, in this paper we discuss the behavior of the hydro-electromagnetic fields of a dyonic cold plasma and its conservation laws in terms of quaternionic fields. We propose the quaternionic energy-momentum conservation laws for dyonic plasma particles. In this case, the conservation of energy is related to a Bernoulli-like equation, while the conservation of momentum is related to a Navier-Stokes-like equation for the dynamics of the dyonic plasma particles. Further, the quaternionic expression for dyonic plasma waves shows that there are two types of wave propagation, namely the Langmuir-like waves due to the electrons and the 't Hooft-Polyakov-like waves due to the magnetic monopoles. The present theory thus unifies the Langmuir and 't Hooft-Polyakov-like waves in a single quaternionic framework. \section{Preliminaries} In the microscopic description of plasma particles, we consider the plasma particles as point-like classical particles for which quantum effects are negligible. Let us start with a single plasma particle whose spatial distribution is governed by the Dirac delta function \cite{key-11} \begin{align} \delta[\boldsymbol{r}-\boldsymbol{r}(t)]\,\,= & \,\delta[x-x(t)]\,\delta[y-y(t)]\,\delta[z-z(t)]\,,\label{eq:1} \end{align} where $\boldsymbol{r}\,(x,\,y,\,z)$ is a fixed coordinate and $\boldsymbol{r}(t)$ is the trajectory of the moving plasma particle. In this case, the velocity-space distribution in the six-dimensional phase space of the plasma particle is $\delta[\boldsymbol{v}-\boldsymbol{v}(t)]$.
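As a discretized illustration of how such delta-function densities are represented numerically (anticipating the $N$-particle density of Eq.~(\ref{eq:3}) below; a minimal sketch in which the domain, grid and particle number are arbitrary choices and each delta becomes a cell-sized spike):
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)
x_j = rng.uniform(0.0, 10.0, size=1000)  # positions of N = 1000 particles (1D)
edges = np.linspace(0.0, 10.0, 101)      # 100 cells of width h = 0.1
counts, _ = np.histogram(x_j, bins=edges)
n = counts / np.diff(edges)              # density: sum of deltas, cell-averaged
print(n.sum() * 0.1)                     # integrates back to N = 1000
\end{verbatim}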
The microscopic distribution of the $N$ charged particles of a plasma in the given phase space can be written as \begin{align} f\,(\boldsymbol{r},\,\boldsymbol{v},\,t)\,\,=\,\, & \sum_{j=1}^{N}\delta[\boldsymbol{r}-\boldsymbol{r}_{j}(t)]\,\delta[\boldsymbol{v}-\boldsymbol{v}_{j}(t)]\,,\label{eq:2} \end{align} where the particle density becomes \begin{align} \mathsf{n}\,(\boldsymbol{r},\,t)\,\,= & \,\,\int d^{3}v\,f\,(\boldsymbol{r},\,\boldsymbol{v},\,t)\,\,=\,\,\sum_{j=1}^{N}\delta[\boldsymbol{r}-\boldsymbol{r}_{j}(t)]\,.\label{eq:3} \end{align} The equation of motion of the $j$-th charged particle of the plasma, under the influence of the Lorentz force due to the electric ($\boldsymbol{E}$) and magnetic induction ($\boldsymbol{B}$) fields along the particle trajectory ($\boldsymbol{r}_{j}(t)$, $\boldsymbol{v}_{j}(t)$), can be written as \begin{align} m\frac{d\boldsymbol{v}_{j}(t)}{dt}\,\,= & \,\,q_{j}\left[\boldsymbol{E}(\boldsymbol{r}_{j},\,t)+\boldsymbol{v}_{j}\times\boldsymbol{B}(\boldsymbol{r}_{j},\,t)\right]\,\,,\label{eq:4}\\ \frac{d\boldsymbol{r}_{j}(t)}{dt}\,\,= & \,\,\boldsymbol{v}_{j}\,,\,\,\,\,(\forall\,j=1,2,.........N)\,.\label{eq:5} \end{align} The electric and magnetic fields satisfy the Maxwell equations \begin{align} \boldsymbol{\nabla\cdot E}\, & =\,\rho_{c}\,,\label{eq:6}\\ \boldsymbol{\nabla\cdot B}\, & =\,0\,,\label{eq:7}\\ \boldsymbol{\nabla\times E} & \,=-\frac{\partial\boldsymbol{B}}{\partial t}\,,\label{eq:8}\\ \boldsymbol{\nabla\times B}\, & =\,\frac{\partial\boldsymbol{E}}{\partial t}+\boldsymbol{J}\,,\label{eq:9} \end{align} where we use natural units ($\hbar=c=1$). The charge and current densities $(\rho_{c},\,\boldsymbol{J})$ can be expressed as \begin{align} \rho_{c}\,(\boldsymbol{r},\,t)\,\,= & \,\sum_{s}q_{s}\int d^{3}v\,f\,(\boldsymbol{r},\,\boldsymbol{v},\,t)\,\,=\,\,\sum_{s}q_{s}\sum_{j=1}^{N}\delta[\boldsymbol{r}-\boldsymbol{r}_{j}(t)]\,,\label{eq:10}\\ \boldsymbol{J}\,(\boldsymbol{r},\,t)\,\,= & \,\sum_{s}q_{s}\int d^{3}v\,\boldsymbol{v}\,f\,(\boldsymbol{r},\,\boldsymbol{v},\,t)\,\,=\,\,\sum_{s}q_{s}\sum_{j=1}^{N}\boldsymbol{v}_{j}(t)\delta[\boldsymbol{r}-\boldsymbol{r}_{j}(t)]\,,\label{eq:11} \end{align} where $q_{s}$ is the effective charge of the $s$-species. The total time derivative of equation (\ref{eq:2}) gives the complete microscopic description of the plasma for the $s$-species \cite{key-12}, \begin{align} \frac{df_{s}}{dt}\,\,= & \,\frac{\partial f_{s}}{\partial t}+\boldsymbol{v}\cdot\frac{\partial f_{s}}{\partial\boldsymbol{r}}+\frac{q_{s}}{m_{s}}(\boldsymbol{E}(\boldsymbol{r},\,t)+\boldsymbol{v}\times\boldsymbol{B}(\boldsymbol{r},\,t))\cdot\frac{\partial f_{s}}{\partial\boldsymbol{v}}\,=\,\,0\,.\label{eq:12} \end{align} This equation is called the Klimontovich equation; it describes the motion of all $N$ particles in a single equation. Coulomb collisions can also affect the motion of the plasma particles because of their charge dependence, but for some plasma processes the Coulomb collision effect can be neglected.
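To illustrate the particle dynamics of Eqs.~(\ref{eq:4})-(\ref{eq:5}) in practice, the following minimal sketch advances one particle with the standard Boris scheme (an illustration only: uniform $\boldsymbol{E}$ and $\boldsymbol{B}$ fields in arbitrary units stand in for the self-consistent fields, and the step size is an arbitrary choice):
\begin{verbatim}
import numpy as np

def boris_push(r, v, q_over_m, E, B, dt):
    """One step of m dv/dt = q(E + v x B), dr/dt = v (Boris scheme)."""
    v_minus = v + 0.5 * dt * q_over_m * E        # first half electric kick
    t_vec = 0.5 * dt * q_over_m * B              # magnetic rotation vector
    s_vec = 2.0 * t_vec / (1.0 + t_vec @ t_vec)
    v_prime = v_minus + np.cross(v_minus, t_vec)
    v_plus = v_minus + np.cross(v_prime, s_vec)  # rotated velocity
    v_new = v_plus + 0.5 * dt * q_over_m * E     # second half electric kick
    return r + dt * v_new, v_new

# E x B drift check: the mean drift should approach E x B / |B|^2.
r, v = np.zeros(3), np.zeros(3)
E, B = np.array([0.0, 1e-2, 0.0]), np.array([0.0, 0.0, 1.0])
for _ in range(2000):
    r, v = boris_push(r, v, -1.0, E, B, dt=0.05)
print(r[0] / (2000 * 0.05))  # ~ E_y/B_z = 1e-2, independent of the charge sign
\end{verbatim}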
To describe a collisionless plasma, the kinetic equation can be written using the averaged Boltzmann distribution function \cite{key-13}, i.e., $\left\langle f_{s}\right\rangle \rightarrow f$, $\left\langle q_{s}\right\rangle \rightarrow q$, $\left\langle m_{s}\right\rangle \rightarrow m$, as \begin{align} \frac{df}{dt}=\frac{\partial f}{\partial t}+\boldsymbol{v}\cdot\frac{\partial f}{\partial\boldsymbol{r}}+\frac{q}{m}(\boldsymbol{E}+\boldsymbol{v}\times\boldsymbol{B})\cdot\frac{\partial f}{\partial\boldsymbol{v}} & \,=\,\,0\,\,.\label{eq:13} \end{align} On the other hand, a plasma can also be described by fluid theory in terms of two interpenetrating fluids, the electron fluid and the ion fluid. In the two-fluid theory of plasma, the continuity equations express the mass and charge conservation laws, \begin{align} \frac{\partial\rho_{M}}{\partial t}+\boldsymbol{\nabla}\cdot\boldsymbol{J}_{M} & \,=\,\,0\,,\label{eq:14}\\ \frac{\partial\rho_{c}}{\partial t}+\boldsymbol{\nabla}\cdot\boldsymbol{J}_{c} & \,=\,\,0\,,\label{eq:15} \end{align} where the mass and charge densities $\left(\rho_{M},\,\rho_{c}\right)$ are given by \begin{align} \rho_{M}\,\,= & \,\,\,m_{e}n_{e}+m_{i}n_{i}\,,\label{eq:16}\\ \rho_{c}\,\,= & \,\,\,q_{e}n_{e}+q_{i}n_{i}\,.\label{eq:17} \end{align} Here $\left(m_{e},\,n_{e},\,q_{e}\right)$ and $\left(m_{i},\,n_{i},\,q_{i}\right)$ denote the mass, number density and charge of the electron and ion fluids, respectively. Similarly, the mass and charge current densities $\left(\boldsymbol{J}_{M},\,\boldsymbol{J}_{c}\right)$ of the two-fluid plasma can be written as \begin{align} \boldsymbol{J}_{M}\,\,= & \,\,\rho_{M}\boldsymbol{v}\,\,=\,\,m_{e}n_{e}\boldsymbol{v}_{e}+m_{i}n_{i}\boldsymbol{v}_{i}\,,\label{eq:18}\\ \boldsymbol{J}_{c}\,\,= & \,\,\rho_{c}\boldsymbol{v}\,\,=\,\,\,q_{e}n_{e}\boldsymbol{v}_{e}+q_{i}n_{i}\boldsymbol{v}_{i}\,\,,\label{eq:19} \end{align} where the center-of-mass fluid velocity $\boldsymbol{v}$ is \begin{align} \boldsymbol{v}\,\,= & \,\,\,\frac{1}{\rho_{M}}\left(\boldsymbol{v}_{e}m_{e}n_{e}+\boldsymbol{v}_{i}m_{i}n_{i}\right)\,.\label{eq:20} \end{align} Another equation of the fluid theory is the force equation, which governs the motion of the plasma fluid and can be written as \begin{align} m_{s}\frac{d\boldsymbol{v}_{s}(\boldsymbol{r},\,t)}{dt}\,\,= & \,\,F_{s}(\boldsymbol{r},\,t)\,,\label{eq:21} \end{align} where $F_{s}(\boldsymbol{r},\,t)$ is the total force acting on the fluid species at the space-time point $(\boldsymbol{r},\,t)$, and the acceleration of the conducting fluid species is \begin{align} \frac{d\boldsymbol{v}_{s}(\boldsymbol{r},\,t)}{dt}\,\,=\,\, & \left(\frac{\partial}{\partial t}+\boldsymbol{v}_{s}\cdot\boldsymbol{\nabla}\right)\boldsymbol{v}_{s}\,.\label{eq:22} \end{align} Here, the term $\left(\boldsymbol{v}_{s}\cdot\boldsymbol{\nabla}\right)\boldsymbol{v}_{s}$ represents the convective acceleration of the fluid particles. In equation (\ref{eq:21}), the total force acting on the plasma fluid species is the resultant of the pressure-gradient force and the Lorentz electromagnetic force. Therefore, \begin{align} \rho_{M}\left(\frac{\partial}{\partial t}+\boldsymbol{v}_{s}\cdot\boldsymbol{\nabla}\right)\boldsymbol{v}_{s}\,\,= & -\boldsymbol{\nabla}p_{s}+\frac{q_{s}}{m_{s}}\boldsymbol{E}+\frac{q_{s}}{m_{s}}(\boldsymbol{v}_{s}\times\boldsymbol{B})\,,\label{eq:23} \end{align} where $\boldsymbol{\nabla}p_{s}$ denotes the pressure force arising from the inhomogeneity of the plasma.
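As a quick numerical check of Eqs.~(\ref{eq:16})-(\ref{eq:20}) (a minimal sketch with toy values in arbitrary units; a quasineutral hydrogen-like plasma is assumed):
\begin{verbatim}
import numpy as np

m_e, m_i = 1.0, 1836.0                 # electron and ion masses
n_e, n_i = 1.0e6, 1.0e6                # number densities (quasineutral)
q_e, q_i = -1.0, 1.0                   # charges
v_e = np.array([0.30, 0.0, 0.0])       # electron fluid velocity
v_i = np.array([0.01, 0.0, 0.0])       # ion fluid velocity

rho_M = m_e * n_e + m_i * n_i             # Eq. (16): mass density
rho_c = q_e * n_e + q_i * n_i             # Eq. (17): charge density (zero here)
J_M = m_e * n_e * v_e + m_i * n_i * v_i   # Eq. (18): mass current
J_c = q_e * n_e * v_e + q_i * n_i * v_i   # Eq. (19): charge current
v_cm = J_M / rho_M                        # Eq. (20): center-of-mass velocity
print(rho_c, J_c[0], v_cm[0])
\end{verbatim}
The output illustrates that a quasineutral plasma ($\rho_{c}=0$) can still carry a finite charge current, set by the relative electron-ion drift, while the center-of-mass velocity is dominated by the heavy ions.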
The generalized Ohm's law \cite{key-13} for the plasma fluid species can also be written as \begin{align} \frac{m_{e}m_{i}}{\rho_{M}\,e^{2}}\frac{\partial\boldsymbol{J}_{c}}{\partial t} & \,\,=\,\,\frac{m_{i}}{2\rho_{M}\,e}\boldsymbol{\nabla}p_{e,i}+\boldsymbol{E}+\left(\boldsymbol{v}_{e,i}\times\boldsymbol{B}\right)-\frac{m_{i}}{\rho_{M}\,e}\left(\boldsymbol{J}_{c}\times\boldsymbol{B}\right)-\frac{\boldsymbol{J}_{c}}{\sigma}\,,\label{eq:24} \end{align} where $\sigma$ denotes the conductivity of the plasma fluid. When the conducting plasma fluid is combined with the electromagnetic field, the resulting fluid theory is called MHD \cite{key-14}. In MHD, the simplest system of macroscopic transport equations for the fluid plasma is known as the cold plasma model. We introduce the following approximations of the fluid parameters for the case of cold plasma \cite{key-15,key-16,key-17} \begin{align} T_{e,i}\,\,\, & \sim\,\,\,0,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\boldsymbol{\nabla}p\,\,\sim\,\,\,0\,,\nonumber \\ \mathscr{E}_{e}\,\,\, & \sim\,\,\,\mathscr{E}_{i}\,,\,\,\,\,\,\,\,\,\,\,\,\,\boldsymbol{v}_{e}\,\,\,\sim\,\,\,\boldsymbol{v}_{i}\,,\label{eq:25}\\ \rho_{e}\,\,\, & \sim\,\,\,\rho_{i}\,,\,\,\,\,\,\,\,\,\,\,\,\,\,\,n_{e}\,\,\,\sim\,\,\,n_{i}\,.\nonumber \end{align} Here $T$ is the temperature and $\mathscr{E}$ the effective energy of the fluid particles. Therefore, the Navier-Stokes and continuity equations for the cold plasma fluid become \begin{align} \rho\left(\frac{\partial}{\partial t}+\boldsymbol{v}\cdot\boldsymbol{\nabla}\right)\boldsymbol{v}\,\,= & \,\,\frac{q}{m}\left[\boldsymbol{E}+(\boldsymbol{v}\times\boldsymbol{B})\right]\,,\label{eq:26} \end{align} \begin{align} \frac{\partial\rho}{\partial t}+\boldsymbol{\nabla}\cdot\boldsymbol{J} & \,=\,\,0\,,\label{eq:27} \end{align} where $\rho$ is the cold mass density, $\boldsymbol{v}$ the cold fluid velocity, $m$ the fluid mass, and $q$ the cold charge. As such, the Ohm's law associated with the cold current source $\boldsymbol{J}$ reads \begin{align} \frac{m^{2}}{\rho\,e^{2}}\frac{\partial\boldsymbol{J}}{\partial t} & \,\,=\,\,\boldsymbol{E}+\left(\boldsymbol{v}\times\boldsymbol{B}\right)-\frac{m}{\rho\,e}\left(\boldsymbol{J}\times\boldsymbol{B}\right)-\frac{\boldsymbol{J}}{\sigma}\,.\label{eq:28} \end{align} Equations (\ref{eq:26})-(\ref{eq:28}) for the cold plasma yield a temperature-independent dispersion relation; equivalently, the thermal velocity of the particles is small compared with the wave phase velocity. In the cold plasma approximation we do not consider the individual motion of the electrons or ions; rather, we take the electrons and ions to move together, with temperature $T=0$. \section{The quaternionic field} Hyper-complex algebras are widely used to formulate many theories \cite{key-18}-\cite{key-26} related to high-energy physics. Among the hyper-complex algebras, the quaternions form a four-dimensional normed division algebra over the field of real numbers $\mathbb{R}$, invented by Hamilton \cite{key-6}.
A quaternionic variable ($\mathbb{Q}$) can be expressed as the unification of a scalar and a vector part, i.e., \begin{align} \mathbb{Q}\,\,= & \,\,\left(q_{0},\,\boldsymbol{q}\right)\,\,\,\simeq\,\,\,S(q)+\boldsymbol{V}(q)\,,\,\,\,\,\forall\,\mathbb{Q}\in\mathbb{H}\,,\nonumber \\ = & \,\,e_{0}q_{0}+\sum_{j=1}^{3}e_{j}q_{j}\,,\,\,\,\left(\forall\,q_{0}\in\mathbb{R},\,\,q_{j}\in\mathbb{R}^{3}\right)\,\,,\label{eq:29} \end{align} where $S(q)$ is the scalar part and $\boldsymbol{V}(q)$ the vector part in Hamilton space ($\mathbb{H}$), associated with the quaternionic unit elements ($e_{0}$, $e_{1}$, $e_{2}$, $e_{3}$). The quaternionic conjugate $\bar{\mathbb{Q}}$ in the same $\mathbb{H}$-space can be written as \begin{align} \bar{\mathbb{Q}}\,\,= & \,\,S(q)-\boldsymbol{V}(q)\,\,=\,\,\,\,e_{0}q_{0}-\sum_{j=1}^{3}e_{j}q_{j}\,\,.\label{eq:30} \end{align} From equations (\ref{eq:29}) and (\ref{eq:30}) we can also define the real and imaginary parts of a quaternion, viz. $\text{Re}\,(\mathbb{H}):\longmapsto q_{0}=(\mathbb{Q}+\bar{\mathbb{Q}})/2$ and $\text{Im}\,(\mathbb{H}):\longmapsto\boldsymbol{V}(q)=(\mathbb{Q}-\bar{\mathbb{Q}})/2$. The quaternionic basis vectors satisfy the multiplication rules \begin{align} e_{0}e_{0}\, & =\,\,e_{0}^{2}=\,1\,,\,\,e_{A}^{2}=\,-1\,,\,\,e_{0}e_{A}=\,e_{A}e_{0}=e_{A}\,,\nonumber \\ e_{A}e_{B} & =\,\,-\delta_{AB}e_{0}+f_{ABC}e_{C}\,,\,\,\,\,\,\,\,(\forall\,A,B,C=1,2,3)\,,\label{eq:31} \end{align} where $\delta_{AB}$ and $f_{ABC}$ are the Kronecker delta and the Levi-Civita symbol, respectively. As such, the commutation and anti-commutation relations for the quaternionic basis vectors are expressed as \begin{align} \left[e_{A},\,\,e_{B}\right] & \,=\,2\,f_{ABC}\,e_{C}\,,\,\,\,\,\,\,\,\,\,\,\,\,(\text{commutation relation})\label{eq:32}\\ \left\{ e_{A},\,\,e_{B}\right\} & \,=\,-2\,\delta_{AB}e_{0}\,,\,\,\,\,\,\,\,\,(\text{anti-commutation relation})\,.\label{eq:33} \end{align} Quaternionic multiplication obeys the associative law, i.e., \begin{align} e_{A}(\,e_{B}\,e_{C}) & \,=\,(e_{A}\,e_{B}\,)\,e_{C}\,.\label{eq:34} \end{align} The addition and multiplication of any two quaternions are given by \begin{align} \mathbb{Q}\pm\mathbb{P}\,\,= & \,\left(q_{0}\pm p_{0}\right)+\left(\boldsymbol{q}\pm\boldsymbol{p}\right)\nonumber \\ = & \,e_{0}\left(q_{0}\pm p_{0}\right)+e_{1}\left(q_{1}\pm p_{1}\right)+e_{2}\left(q_{2}\pm p_{2}\right)+e_{3}\left(q_{3}\pm p_{3}\right)\,\,,\label{eq:35} \end{align} \begin{align} \mathbb{Q}\circ\mathbb{P}\,\,= & \,\left[q_{0}+\boldsymbol{q}\right]\left[p_{0}+\boldsymbol{p}\right]\nonumber \\ = & \,\,e_{0}(q_{0}p_{0}-\boldsymbol{q}\cdot\boldsymbol{p})+e_{j}\left(q_{0}\boldsymbol{p}+p_{0}\boldsymbol{q}+(\boldsymbol{q}\times\boldsymbol{p})\right)\,\,,\,\,(\forall\,j=1,2,3)\,\,,\label{eq:36} \end{align} where we note that quaternionic multiplication is non-commutative, i.e., $\mathbb{Q}\circ\mathbb{P}\,\neq\,\mathbb{P}\circ\mathbb{Q}$, because $\boldsymbol{q}\times\boldsymbol{p}\neq0$ and $\boldsymbol{q}\times\boldsymbol{p}\neq\boldsymbol{p}\times\boldsymbol{q}$.
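The product rule (\ref{eq:36}) and its non-commutativity can be checked mechanically with a computer algebra system; the following sympy fragment is a sketch of such a check (the variable names are ours).
\begin{verbatim}
import sympy as sp
from sympy.algebras.quaternion import Quaternion

q0, q1, q2, q3, p0, p1, p2, p3 = sp.symbols('q0:4 p0:4', real=True)
Q = Quaternion(q0, q1, q2, q3)
P = Quaternion(p0, p1, p2, p3)

# Eq. (36): scalar part q0*p0 - q.p; x-component q0*p1 + p0*q1 + (q x p)_x
prod = Q * P
print(sp.simplify(prod.a - (q0*p0 - q1*p1 - q2*p2 - q3*p3)))   # 0
print(sp.simplify(prod.b - (q0*p1 + p0*q1 + q2*p3 - q3*p2)))   # 0

# Non-commutativity: the vector part of Q*P - P*Q equals 2 (q x p)
print(sp.simplify((Q*P - P*Q).b - 2*(q2*p3 - q3*p2)))          # 0
\end{verbatim}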
Further, the quaternionic Euclidean scalar product $\mathbb{H}\times\mathbb{H}\longmapsto\mathbb{R}$ can also be written as \begin{align} \left\langle \mathbb{Q},\,\mathbb{P}\right\rangle \,=\,\,\text{Re}\,(\mathbb{Q}\circ\bar{\mathbb{P}}) & \,=\,\left(q_{0}p_{0}+q_{1}p_{1}+q_{2}p_{2}+q_{3}p_{3}\right)\,.\label{eq:37} \end{align} The quaternionic modulus $\mid\mathbb{Q}\mid$ and quaternionic inverse $\mathbb{Q}^{-1}$ are, respectively, \begin{align} \mid\mathbb{Q}\mid\,= & \,\sqrt{q_{0}^{2}+q_{1}^{2}+q_{2}^{2}+q_{3}^{2}}\,,\label{eq:38}\\ \mathbb{Q}^{-1}\,= & \,\frac{\bar{\mathbb{Q}}}{\mid\mathbb{Q}\mid^{2}}\,.\label{eq:39} \end{align} The rules for the conjugate and the norm of a product are \begin{align} \overline{\mathbb{Q}_{1}\circ\mathbb{Q}_{2}}\,\,= & \,\,\overline{\mathbb{Q}_{2}}\,\circ\,\overline{\mathbb{Q}_{1}}\,\,,\label{eq:40} \end{align} \begin{align} N\left(\mathbb{Q}_{1}\circ\mathbb{Q}_{2}\right)\,= & \,N\left(\mathbb{Q}_{1}\right)\,N\left(\mathbb{Q}_{2}\right)\,\,,\label{eq:41} \end{align} where the conjugation of a product reverses the order of the factors. The quaternionic unit elements have a non-Abelian structure, so the quaternions form a non-commutative division ring. Moreover, concerning applications in physics, Girard \cite{key-27} discussed the role of the quaternion group in modern physics, i.e., the effect of quaternions in SO(3), the Clifford algebra SU(2), the Lorentz group and the conformal group. Recently, the quaternionic formulation has been applied to describe the quantized equations of electromagnetism for dyons \cite{key-28,key-29}. \section{Generalized dual MHD of cold plasma in Hamilton space} The dual MHD field consists not only of electrons and ions but also of magnetic monopoles and their ionic partners, the magneto-ions \cite{key-30}. In the dyonic cold plasma field there are thus dual-mass and dual-charge species in the presence of dyons. Many authors \cite{key-31,key-32,key-33} have discussed the generalized fields associated with dyons. Following equation (\ref{eq:25}), in the cold plasma approximation we consider electrons and magnetic monopoles (constituting \textit{dyons}) to be equivalent to ions and magneto-ions (constituting \textit{i-dyons}).
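Both rules can again be verified symbolically; the sketch below (our notation, with a hand-written conjugation helper) confirms that conjugation reverses the order of the factors and that the norm is multiplicative.
\begin{verbatim}
import sympy as sp
from sympy.algebras.quaternion import Quaternion

q0, q1, q2, q3, p0, p1, p2, p3 = sp.symbols('q0:4 p0:4', real=True)
Q = Quaternion(q0, q1, q2, q3)
P = Quaternion(p0, p1, p2, p3)

def qconj(q):  # quaternionic conjugate, eq. (30)
    return Quaternion(q.a, -q.b, -q.c, -q.d)

# Eq. (40): conj(Q o P) = conj(P) o conj(Q)
diff = qconj(Q*P) - qconj(P)*qconj(Q)
print([sp.simplify(c) for c in (diff.a, diff.b, diff.c, diff.d)])  # [0,0,0,0]

# Eq. (41): N(Q o P) = N(Q) N(P), compared via squared norms
print(sp.simplify((Q*P).norm()**2 - Q.norm()**2 * P.norm()**2))    # 0
\end{verbatim}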
Therefore, the dyonic equivalents of the cold plasma equations are written as follows: \begin{align} \varrho^{D}(\varrho^{e},\,\varrho^{\mathfrak{m}})\,\simeq\, & \left(m^{e}n^{e}+m^{\mathfrak{m}}n^{\mathfrak{m}}\right)\,,\,\,\,\,\,\,\,(\text{dual-mass density})\label{eq:42}\\ \rho^{D}(\rho^{e},\,\rho^{\mathfrak{m}})\,\simeq\, & \left(q^{e}n^{e}+q^{\mathfrak{m}}n^{\mathfrak{m}}\right)\,,\,\,\,\,\,\,\,(\text{dual-charge density})\label{eq:43}\\ \boldsymbol{v}^{D}(\boldsymbol{v}^{e},\,\boldsymbol{v}^{\mathfrak{m}})\,\,\simeq & \,\,\,\frac{1}{\varrho^{D}}\left(\boldsymbol{v}^{e}m^{e}n^{e}(x)+\boldsymbol{v}^{\mathfrak{m}}m^{\mathfrak{m}}n^{\mathfrak{m}}(x)\right)\,,\,\,\,\,\,\,\,(\text{dual-mass velocity})\label{eq:44}\\ \frac{\partial\varrho^{D}}{\partial t}+\boldsymbol{\nabla}\cdot(\varrho^{D}\boldsymbol{v}^{D})\, & =\,\,0\,,\,\,\,\,\,\,\,(\text{dual-mass conservation law})\label{eq:45}\\ \frac{\partial\rho^{D}}{\partial t}+\boldsymbol{\nabla}\cdot\boldsymbol{J}^{D}\, & =\,\,0\,,\,\,\,\,\,\,\,(\text{dual-charge conservation law})\label{eq:46} \end{align} where $(\varrho^{e},\,\varrho^{\mathfrak{m}})$ and $(\rho^{e},\,\rho^{\mathfrak{m}})$ are the electric and magnetic mass and charge densities, respectively, while $(\boldsymbol{J}^{e}=\,q^{e}n^{e}\boldsymbol{v}^{e}$,~$\boldsymbol{J}^{\mathfrak{m}}=\,q^{\mathfrak{m}}n^{\mathfrak{m}}\boldsymbol{v}^{\mathfrak{m}})$ are the two current densities associated with the electric and magnetic charges of the dyons. Similarly, $(m^{e},\,n^{e},\,q^{e})$ and $(m^{\mathfrak{m}},\,n^{\mathfrak{m}},\,q^{\mathfrak{m}})$ are the mass, number density and charge of the electrons and of the magnetic monopoles, respectively. The dual Lorentz force equation for dyons can also be expressed as \begin{align} \boldsymbol{F}^{D}\,\,= & \,\,\rho^{e}\boldsymbol{E}+\left(\boldsymbol{J}^{e}\boldsymbol{\times B}\right)+\rho^{\mathfrak{m}}\boldsymbol{B}-\left(\boldsymbol{J}^{\mathfrak{m}}\boldsymbol{\times E}\right)\,,\label{eq:47} \end{align} where we neglect the dyonic pressure-gradient term $\left(\boldsymbol{\nabla}p\right)^{D}$ in the cold plasma approximation \cite{key-17}. Equations (\ref{eq:42})-(\ref{eq:47}) are the well-known equations for the dual field of massive dyons. In order to discuss the quaternionic space-time evolution of these dual field equations for the cold dyonic fluid plasma, let us write the quaternion-valued differential operator and its quaternionic conjugate as \begin{align} \mathrm{\mathbb{D\,}}(e_{1},e_{2},e_{3};\,\,\,e_{0})\,\,= & \,\,\left(e_{\mathrm{1}}\frac{\partial}{\partial x}+e_{2}\frac{\partial}{\partial y}+e_{3}\frac{\partial}{\partial z}\right)-e_{0}\frac{i}{a_{0}}\frac{\partial}{\partial t}\,,\label{eq:48} \end{align} \begin{align} \bar{\mathbb{D}}\,(e_{1},e_{2},e_{3};\,\,\,e_{0})\,\,= & \,\,-\left(e_{\mathrm{1}}\frac{\partial}{\partial x}+e_{2}\frac{\partial}{\partial y}+e_{3}\frac{\partial}{\partial z}\right)-e_{0}\frac{i}{a_{0}}\frac{\partial}{\partial t}\,,\label{eq:49} \end{align} where $a_{0}$ denotes the speed of the fluid particles.
The d'Alembert operator $\square$ can be expressed as \begin{alignat}{1} \square\,\,\longmapsto\,\,\,\mathbb{\left(D\circ\bar{D}\right)}\,\,\,= & \,\,\,\frac{\partial^{2}}{\partial x^{2}}+\frac{\partial^{2}}{\partial y^{2}}+\frac{\partial^{2}}{\partial z^{2}}-\frac{1}{a_{0}^{2}}\frac{\partial^{2}}{\partial t^{2}}\nonumber \\ = & \,\,\,\boldsymbol{\nabla}^{2}-\frac{1}{a_{0}^{2}}\frac{\partial^{2}}{\partial t^{2}}\,\,\simeq\,\,\mathbb{\bar{D}\circ D}\,.\label{eq:50} \end{alignat} In the generalized MHD field of dyonic particles, the quaternion-valued dual velocities can be written as \begin{align} \mathbb{U}\left(e_{1},\,e_{2},\,e_{3};\,\,\,e_{0}\right)\,= & \,\left\{ u_{x},\,u_{y},\,u_{z};\,\,-\frac{i}{a_{0}}h\right\} \,,\label{eq:51}\\ \mathbb{V}\left(e_{1},\,e_{2},\,e_{3};\,\,\,e_{0}\right)\,= & \,\left\{ \upsilon_{x},\,\upsilon_{y},\,\upsilon_{z};\,\,-ia_{0}k\right\} \,,\label{eq:52} \end{align} where $\mathbb{U}$ represents the four-component quaternionic velocity of the electrons, while $\mathbb{V}$ represents the four-component quaternionic velocity of the magnetic monopoles. Because of their different masses, the two velocities are taken to be different. The scalar components ($h$, $k$) represent the two enthalpies of the dyons. Now, the bi-quaternionic (complex quaternion) generalization of the dyonic fluid velocity $\mathbb{W}$ can be written as \begin{align} \mathbb{W}\,\left(e_{1},\,e_{2},\,e_{3};\,\,\,e_{0}\right)= & \,\,\left(\mathbb{U}-\frac{i}{a_{0}}\mathbb{V}\right)\nonumber \\ = & \,\,\,e_{1}\left(u_{x}-\frac{i}{a_{0}}\upsilon_{x}\right)+e_{2}\left(u_{y}-\frac{i}{a_{0}}\upsilon_{y}\right)+e_{3}\left(u_{z}-\frac{i}{a_{0}}\upsilon_{z}\right)-\frac{i}{a_{0}}e_{0}(h-ia_{0}k)\,.\label{eq:53} \end{align} Using equations (\ref{eq:48}) and (\ref{eq:53}), we can write the quaternionic hydrodynamic field equation for the dyonic fluid plasma \begin{align} \mathbb{D\,\circ W}\,\,= & \,\,\boldsymbol{\Psi}\,\,\left(e_{1},\,e_{2},\,e_{3};\,\,\,e_{0}\right)\nonumber \\ \simeq & \,\,\,e_{1}\left(B_{x}+\frac{i}{a_{0}}E_{x}\right)+e_{2}\left(B_{y}+\frac{i}{a_{0}}E_{y}\right)+e_{3}\left(B_{z}+\frac{i}{a_{0}}E_{z}\right)-e_{0}\left(B_{0}-\frac{i}{a_{0}}E_{0}\right)\,,\label{eq:54} \end{align} where $\boldsymbol{\Psi}$ is the quaternionic generalized hydro-electromagnetic (HEM) field of the dyonic cold plasma.
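Since partial derivatives commute, the operator identity (\ref{eq:50}) can be verified by treating the derivatives as ordinary commuting symbols; the following sympy sketch (our construction) confirms that the Hamilton product $\mathbb{D}\circ\bar{\mathbb{D}}$ has the d'Alembertian as its scalar part and a vanishing vector part.
\begin{verbatim}
import sympy as sp
from sympy.algebras.quaternion import Quaternion

# Commuting partial derivatives treated as plain symbols
dx, dy, dz, dt, a0 = sp.symbols('d_x d_y d_z d_t a_0')

D    = Quaternion(-sp.I/a0*dt,  dx,  dy,  dz, real_field=False)  # eq. (48)
Dbar = Quaternion(-sp.I/a0*dt, -dx, -dy, -dz, real_field=False)  # eq. (49)

prod = D * Dbar
print(sp.expand(prod.a))       # dx**2 + dy**2 + dz**2 - dt**2/a0**2, eq. (50)
print(prod.b, prod.c, prod.d)  # 0 0 0: the vector part vanishes
\end{verbatim}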
One can define the components of the bi-quaternionic hydrodynamic field as \begin{align} \psi_{1}^{\text{HEM}}:\longmapsto\left[B_{x}+\frac{i}{a_{0}}E_{x}\right]\,= & \,\left\{ \left(\boldsymbol{\nabla}\times\boldsymbol{u}\right)_{x}-\frac{1}{a_{0}^{2}}\frac{\partial\upsilon_{x}}{\partial t}-\frac{\partial k}{\partial x}\right\} +\frac{i}{a_{0}}\left\{ -\left(\boldsymbol{\nabla}\times\boldsymbol{\upsilon}\right)_{x}-\frac{\partial u_{x}}{\partial t}-\frac{\partial h}{\partial x}\right\} \,,\label{eq:55}\\ \psi_{2}^{\text{HEM}}:\longmapsto\left[B_{y}+\frac{i}{a_{0}}E_{y}\right]\,= & \,\left\{ \left(\boldsymbol{\nabla}\times\boldsymbol{u}\right)_{y}-\frac{1}{a_{0}^{2}}\frac{\partial\upsilon_{y}}{\partial t}-\frac{\partial k}{\partial y}\right\} +\frac{i}{a_{0}}\left\{ -\left(\boldsymbol{\nabla}\times\boldsymbol{\upsilon}\right)_{y}-\frac{\partial u_{y}}{\partial t}-\frac{\partial h}{\partial y}\right\} \,,\label{eq:56}\\ \psi_{3}^{\text{HEM}}:\longmapsto\left[B_{z}+\frac{i}{a_{0}}E_{z}\right]\,= & \,\left\{ \left(\boldsymbol{\nabla}\times\boldsymbol{u}\right)_{z}-\frac{1}{a_{0}^{2}}\frac{\partial\upsilon_{z}}{\partial t}-\frac{\partial k}{\partial z}\right\} +\frac{i}{a_{0}}\left\{ -\left(\boldsymbol{\nabla}\times\boldsymbol{\upsilon}\right)_{z}-\frac{\partial u_{z}}{\partial t}-\frac{\partial h}{\partial z}\right\} \,,\label{eq:57}\\ \psi_{0}^{\text{HEM}}:\longmapsto\left[B_{0}-\frac{i}{a_{0}}E_{0}\right]\,= & \left\{ \left(\boldsymbol{\nabla}\cdotp\boldsymbol{u}+\frac{1}{a_{0}^{2}}\frac{\partial h}{\partial t}\right)-\frac{i}{a_{0}}\left(\boldsymbol{\nabla}\cdotp\boldsymbol{\upsilon}+\frac{\partial k}{\partial t}\right)\right\} \,.\label{eq:58} \end{align} The hydro-electric field vector ($\boldsymbol{E}$) is identified with the generalized Lamb vector field, while the hydro-magnetic field vector ($\boldsymbol{B}$) is identified with the generalized vorticity field \cite{key-34,key-35,key-36} of the dyonic fluid plasma. The Lorenz gauge conditions are equivalent to continuity-like equations in the dyonic fluid plasma; i.e., setting the scalar component $\psi_{0}^{\text{HEM}}\simeq\left[B_{0}-\frac{i}{a_{0}}E_{0}\right]=0$ gives \begin{alignat}{1} \boldsymbol{\nabla}\cdotp\boldsymbol{u}+\frac{1}{a_{0}^{2}}\frac{\partial h}{\partial t}\, & =\,\,0\,,\label{eq:59}\\ \boldsymbol{\nabla}\cdotp\boldsymbol{\upsilon}+\frac{\partial k}{\partial t}\, & =\,\,0\,.\label{eq:60} \end{alignat} Equations (\ref{eq:59}) and (\ref{eq:60}) represent the conditions for the dynamics of a compressible fluid, in which the divergences of the two fluid velocities are not equal to zero. Thus, these equations lead to the non-conservation form of the two enthalpies. We summarize the quaternionic hydro-electromagnetic field equations (i.e., the dual field $(\psi_{j},\,\chi_{j})$ for $j=0,1,2,3$) in Table 1.
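The components (\ref{eq:55})-(\ref{eq:58}) follow from a single Hamilton product. As a consistency check, the following sympy sketch (our own implementation of the product, with the entries of $\mathbb{D}$ acting as derivatives) expands $\mathbb{D}\circ\mathbb{W}$ so that its output can be compared term by term with the equations above.
\begin{verbatim}
import sympy as sp

x, y, z, t = sp.symbols('x y z t', real=True)
a0 = sp.symbols('a_0', positive=True)
ux, uy, uz, h = [sp.Function(n)(x, y, z, t) for n in ('u_x', 'u_y', 'u_z', 'h')]
vx, vy, vz, k = [sp.Function(n)(x, y, z, t) for n in ('v_x', 'v_y', 'v_z', 'k')]

# Bi-quaternionic velocity W of eq. (53): scalar part w0 and vector part w
w0 = -sp.I/a0*(h - sp.I*a0*k)
w = [ux - sp.I/a0*vx, uy - sp.I/a0*vy, uz - sp.I/a0*vz]

# D acts by differentiation: scalar entry -(i/a0) d/dt, vector (dx, dy, dz)
d0 = lambda f: -sp.I/a0*sp.diff(f, t)
dv = [lambda f: sp.diff(f, x), lambda f: sp.diff(f, y), lambda f: sp.diff(f, z)]

# Hamilton product (s1,v1)o(s2,v2) = (s1 s2 - v1.v2, s1 v2 + s2 v1 + v1 x v2),
# with each entry of D applied as a derivative to the entries of W
scalar = d0(w0) - sum(dv[i](w[i]) for i in range(3))
cross = [dv[1](w[2]) - dv[2](w[1]),
         dv[2](w[0]) - dv[0](w[2]),
         dv[0](w[1]) - dv[1](w[0])]
vector = [d0(w[i]) + dv[i](w0) + cross[i] for i in range(3)]

print(sp.expand(scalar))     # -(B0 - i/a0 E0) of eq. (58)
print(sp.expand(vector[0]))  # B_x + (i/a0) E_x of eq. (55)
\end{verbatim}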
\\ \begin{table}[H] \begin{doublespace} \begin{centering} \begin{tabular}{ccc} \hline \textbf{Lamb field components} & \textbf{Vorticity field components} & \textbf{Corresponding $\mathbb{Q}$-field}\tabularnewline \hline \hline $\psi_{1}:\longmapsto\left(\boldsymbol{\nabla}\times\boldsymbol{u}\right)_{x}-\frac{1}{a_{0}^{2}}\frac{\partial\upsilon_{x}}{\partial t}-\frac{\partial k}{\partial x}$ & $\chi_{1}:\longmapsto-\left(\boldsymbol{\nabla}\times\boldsymbol{\upsilon}\right)_{x}-\frac{\partial u_{x}}{\partial t}-\frac{\partial h}{\partial x}$ & $e_{1}(\psi_{1}+\frac{i}{a_{0}}\chi_{1})$\tabularnewline $\psi_{2}:\longmapsto\left(\boldsymbol{\nabla}\times\boldsymbol{u}\right)_{y}-\frac{1}{a_{0}^{2}}\frac{\partial\upsilon_{y}}{\partial t}-\frac{\partial k}{\partial y}$ & $\chi_{2}:\longmapsto-\left(\boldsymbol{\nabla}\times\boldsymbol{\upsilon}\right)_{y}-\frac{\partial u_{y}}{\partial t}-\frac{\partial h}{\partial y}$ & $e_{2}(\psi_{2}+\frac{i}{a_{0}}\chi_{2})$\tabularnewline $\psi_{3}:\longmapsto\left(\boldsymbol{\nabla}\times\boldsymbol{u}\right)_{z}-\frac{1}{a_{0}^{2}}\frac{\partial\upsilon_{z}}{\partial t}-\frac{\partial k}{\partial z}$ & $\chi_{3}:\longmapsto-\left(\boldsymbol{\nabla}\times\boldsymbol{\upsilon}\right)_{z}-\frac{\partial u_{z}}{\partial t}-\frac{\partial h}{\partial z}$ & $e_{3}(\psi_{3}+\frac{i}{a_{0}}\chi_{3})$\tabularnewline $\psi_{0}:\longmapsto\boldsymbol{\nabla}\cdotp\boldsymbol{u}+\frac{1}{a_{0}^{2}}\frac{\partial h}{\partial t}\,=\,\,0\,,$ & $\chi_{0}:\longmapsto\boldsymbol{\nabla}\cdotp\boldsymbol{\upsilon}+\frac{\partial k}{\partial t}\,=\,\,0$ & $e_{0}(\psi_{0}-\frac{i}{a_{0}}\chi_{0})$\tabularnewline \hline \end{tabular} \par\end{centering} \end{doublespace} \centering{}\caption{Quaternionic generalization of the Lamb-vorticity field components in the presence of dyons} \end{table} In order to find the field-source equations for the dyonic plasma fluid, we operate with $\bar{\mathbb{D}}$ on the hydro-electromagnetic field $\boldsymbol{\Psi}$ and obtain \begin{align} \bar{\mathbb{D}}\circ\boldsymbol{\Psi}\,\,= & \,-\mathbb{J}\,\left(e_{1},\,e_{2},\,e_{3};\,\,\,e_{0}\right)\,,\nonumber \\ = & \,\,\mu\left(e_{1}J_{x}^{e}+e_{2}J_{y}^{e}+e_{3}J_{z}^{e}-e_{0}\rho^{\mathfrak{m}}\right)-\frac{i}{a_{0}\epsilon}\left(e_{1}J_{x}^{\mathfrak{m}}+e_{2}J_{y}^{\mathfrak{m}}+e_{3}J_{z}^{\mathfrak{m}}+e_{0}\rho^{e}\right)\,,\label{eq:61} \end{align} so that the bi-quaternionic components of the dyonic plasma source are expressed as \begin{alignat}{1} \mathcal{J}_{1}^{\text{HEM}}:\longmapsto\left[\mu J_{x}^{e}-\frac{i}{a_{0}\epsilon}J_{x}^{\mathfrak{m}}\right]\,\,= & \,\,\,\left[\left\{ \left(\boldsymbol{\nabla}\times\boldsymbol{B}\right)_{x}-\frac{1}{a_{0}^{2}}\frac{\partial E_{x}}{\partial t}\right\} +\frac{i}{a_{0}}\left\{ \left(\boldsymbol{\nabla}\times\boldsymbol{E}\right)_{x}+\frac{\partial B_{x}}{\partial t}\right\} \right]\,,\label{eq:62}\\ \mathcal{J}_{2}^{\text{HEM}}:\longmapsto\left[\mu J_{y}^{e}-\frac{i}{a_{0}\epsilon}J_{y}^{\mathfrak{m}}\right]\,\,= & \,\,\,\left[\left\{ \left(\boldsymbol{\nabla}\times\boldsymbol{B}\right)_{y}-\frac{1}{a_{0}^{2}}\frac{\partial E_{y}}{\partial t}\right\} +\frac{i}{a_{0}}\left\{ \left(\boldsymbol{\nabla}\times\boldsymbol{E}\right)_{y}+\frac{\partial B_{y}}{\partial t}\right\} \right]\,,\label{eq:63}\\ \mathcal{J}_{3}^{\text{HEM}}:\longmapsto\left[\mu J_{z}^{e}-\frac{i}{a_{0}\epsilon}J_{z}^{\mathfrak{m}}\right]\,\,= & \,\,\,\left[\left\{ \left(\boldsymbol{\nabla}\times\boldsymbol{B}\right)_{z}-\frac{1}{a_{0}^{2}}\frac{\partial E_{z}}{\partial t}\right\} +\frac{i}{a_{0}}\left\{ \left(\boldsymbol{\nabla}\times\boldsymbol{E}\right)_{z}+\frac{\partial B_{z}}{\partial t}\right\} \right]\,,\label{eq:64}\\ \mathcal{J}_{0}^{\text{HEM}}:\longmapsto\left[\mu\rho^{\mathfrak{m}}-\frac{i}{a_{0}\epsilon}\rho^{e}\right]\,\,= & \,\,\,\left[\boldsymbol{\nabla}\cdotp\boldsymbol{B}-\frac{i}{a_{0}}\boldsymbol{\nabla}\cdotp\boldsymbol{E}\right]\,,\label{eq:65} \end{alignat} where ($\boldsymbol{J}^{e}$, $\rho^{e}$) denote the electric source current and source density, ($\boldsymbol{J}^{\mathfrak{m}}$, $\rho^{\mathfrak{m}}$) denote the magnetic source current and source density of the dyonic fluid plasma, and ($\epsilon,\,\mu$) are the permittivity and permeability. Interestingly, the complex-quaternionic form of the dyonic source equations reduces to \begin{align} \mathcal{\boldsymbol{J}}^{\text{HEM}}= & \,\,\,\left[\left\{ \left(\boldsymbol{\nabla}\times\boldsymbol{B}\right)-\frac{1}{a_{0}^{2}}\frac{\partial\boldsymbol{E}}{\partial t}\right\} +\frac{i}{a_{0}}\left\{ \left(\boldsymbol{\nabla}\times\boldsymbol{E}\right)+\frac{\partial\boldsymbol{B}}{\partial t}\right\} \right]\,,\,\,\,\,\,\,(\text{dyonic source current})\label{eq:66}\\ \rho^{\text{HEM}}= & \,\,\,\left[\boldsymbol{\nabla}\cdotp\boldsymbol{B}-\frac{i}{a_{0}}\boldsymbol{\nabla}\cdotp\boldsymbol{E}\right]\,,\,\,\,\,\,\,(\text{dyonic source density)}\,\,.\label{eq:67} \end{align} Equation (\ref{eq:61}) then leads to the following relations \begin{align} \boldsymbol{\nabla}\cdotp\boldsymbol{E}\,\,= & \,\,\frac{\rho^{e}}{\epsilon}\,\,,\,\,\,\,\boldsymbol{\nabla}\cdotp\boldsymbol{B}\,\,=\,\,\mu\rho^{\mathfrak{m}}\,\,,\label{eq:68}\\ \boldsymbol{\nabla}\times\boldsymbol{E}\,=\,-\frac{\partial\boldsymbol{B}}{\partial t} & -\frac{1}{\epsilon}\boldsymbol{J}^{\mathfrak{m}}\,\,,\,\,\,\,\boldsymbol{\nabla}\times\boldsymbol{B}\,=\,\frac{1}{a_{0}^{2}}\frac{\partial\boldsymbol{E}}{\partial t}+\mu\boldsymbol{J}^{e}\,\,.\label{eq:69} \end{align} Equations (\ref{eq:68}) and (\ref{eq:69}) represent the generalized Dirac-Maxwell equations for the hydro-electromagnetic fields of the dyonic cold plasma. They incorporate the motion of all dyonic particles in the cold plasma fluid, but by themselves they are an incomplete description of the dyonic cold plasma. When these generalized Dirac-Maxwell equations are combined with the Bernoulli, Navier-Stokes and continuity equations, the resulting plasma fluid equations provide a complete description of quaternionic MHD. Therefore, in the next sections we discuss the quaternionic form of the Bernoulli, Navier-Stokes and continuity equations for the cold dyonic plasma fluid. \section{Generalized quaternionic Bernoulli and Navier-Stokes like equation} The Bernoulli and Navier-Stokes equations are the fundamental differential equations that express the conservation of energy and the conservation of momentum for the flow of a fluid \cite{key-37}.
In order to derive the quaternionic Bernoulli and Navier-Stokes-like force equations for the dyonic cold plasma fluid, we operate from the left with $\boldsymbol{\bar{\Psi}}$ on the quaternionic field equation (\ref{eq:61}): \begin{alignat}{1} \boldsymbol{\bar{\Psi}}\circ(\mathbb{\bar{D}}\circ\boldsymbol{\Psi})\,\,= & \,-\boldsymbol{\bar{\Psi}}\circ\mathbb{J}\,.\label{eq:70} \end{alignat} We first simplify the left-hand side of the quaternionic field equation (\ref{eq:70}) as \begin{alignat}{1} \boldsymbol{\bar{\Psi}}\circ(\mathbb{\bar{D}}\circ\boldsymbol{\Psi})\,\,\left\{ e_{1},\,e_{2},\,e_{3};\,\,\,e_{0}\right\} \,\,= & \,\,\ensuremath{e_{1}}L+\ensuremath{e_{2}}M+\ensuremath{e_{3}}N+\ensuremath{e_{0}}O\,\,,\,\,\,\forall\,(L,\,M,\,N,\,O)\in\mathbb{C},\label{eq:71} \end{alignat} where the real and imaginary quaternionic components $(L,M,N,\,\text{and\,}O)$ are expressed by \begin{alignat}{1} \text{Re}\left\{ e_{1}L\right\} \,\,=\,\,\frac{1}{a_{0}^{2}}\left\{ \boldsymbol{B}\times\frac{\partial\boldsymbol{E}}{\partial t}\right\} _{x}-\frac{1}{a_{0}^{2}}\left\{ \boldsymbol{E}\times\frac{\partial\boldsymbol{B}}{\partial t}\right\} _{x}-\left\{ \boldsymbol{B}\times\left(\boldsymbol{\nabla\times}\boldsymbol{B}\right)\right\} _{x}-\frac{1}{a_{0}^{2}}\left\{ \boldsymbol{E}\times\left(\boldsymbol{\nabla\times}\boldsymbol{E}\right)\right\} _{x}\nonumber \\ +\left\{ \boldsymbol{B}\left(\boldsymbol{\nabla\cdot}\boldsymbol{B}\right)\right\} _{x}+\frac{1}{a_{0}^{2}}\left\{ \boldsymbol{E}\left(\boldsymbol{\nabla\cdot}\boldsymbol{E}\right)\right\} _{x}\,\,\boldsymbol{\longmapsto}\,\,\,\,\left(\text{Real coefficients of \ensuremath{e_{1}}}\right)\,\,,\label{eq:72}\\ \text{Im}\left\{ e_{1}L\right\} \,\,=\,\,-\left\{ \boldsymbol{B}\times\frac{\partial\boldsymbol{B}}{\partial t}\right\} _{x}-\frac{1}{a_{0}^{2}}\left\{ \boldsymbol{E}\times\frac{\partial\boldsymbol{E}}{\partial t}\right\} _{x}-\left\{ \boldsymbol{B}\times\left(\boldsymbol{\nabla\times}\boldsymbol{E}\right)\right\} _{x}+\left\{ \boldsymbol{E}\times\left(\boldsymbol{\nabla\times}\boldsymbol{B}\right)\right\} _{x}\,\nonumber \\ -\left\{ \boldsymbol{E}\left(\boldsymbol{\nabla\cdot}\boldsymbol{B}\right)\right\} _{x}+\left\{ \boldsymbol{B}\left(\boldsymbol{\nabla\cdot}\boldsymbol{E}\right)\right\} _{x}\,\,\boldsymbol{\longmapsto}\,\,\,\,\,\text{\ensuremath{\left(\text{Imaginary coefficients of }\ensuremath{e_{1}}\right)}}\,\,,\label{eq:73} \end{alignat} \begin{alignat}{1} \text{Re}\left\{ e_{2}M\right\} \,\,=\,\,\frac{1}{a_{0}^{2}}\left\{ \boldsymbol{B}\times\frac{\partial\boldsymbol{E}}{\partial t}\right\} _{y}-\frac{1}{a_{0}^{2}}\left\{ \boldsymbol{E}\times\frac{\partial\boldsymbol{B}}{\partial t}\right\} _{y}-\left\{ \boldsymbol{B}\times\left(\boldsymbol{\nabla\times}\boldsymbol{B}\right)\right\} _{y}-\frac{1}{a_{0}^{2}}\left\{ \boldsymbol{E}\times\left(\boldsymbol{\nabla\times}\boldsymbol{E}\right)\right\} _{y}\nonumber \\ +\left\{ \boldsymbol{B}\left(\boldsymbol{\nabla\cdot}\boldsymbol{B}\right)\right\} _{y}+\frac{1}{a_{0}^{2}}\left\{ \boldsymbol{E}\left(\boldsymbol{\nabla\cdot}\boldsymbol{E}\right)\right\} _{y}\,\boldsymbol{\longmapsto}\,\,\,\text{\ensuremath{\left(\text{Real\,coefficients\,of}\ensuremath{\,e_{2}}\right)}}\,\,,\label{eq:74}\\ \text{Im}\left\{ e_{2}M\right\} \,\,=\,\,-\left\{ \boldsymbol{B}\times\frac{\partial\boldsymbol{B}}{\partial t}\right\} _{y}-\frac{1}{a_{0}^{2}}\left\{ \boldsymbol{E}\times\frac{\partial\boldsymbol{E}}{\partial t}\right\} _{y}-\left\{ \boldsymbol{B}\times\left(\boldsymbol{\nabla\times}\boldsymbol{E}\right)\right\} _{y}+\left\{ \boldsymbol{E}\times\left(\boldsymbol{\nabla\times}\boldsymbol{B}\right)\right\} _{y}\,\nonumber \\ -\left\{ \boldsymbol{E}\left(\boldsymbol{\nabla\cdot}\boldsymbol{B}\right)\right\} _{y}+\left\{ \boldsymbol{B}\left(\boldsymbol{\nabla\cdot}\boldsymbol{E}\right)\right\} _{y}\,\,\boldsymbol{\longmapsto}\,\,\text{\ensuremath{\left(\text{Imaginary\,coefficients\,of}\,\ensuremath{e_{2}}\right)}}\,\,,\label{eq:75} \end{alignat} \begin{alignat}{1} \text{Re}\left\{ e_{3}N\right\} \,\,=\,\,\frac{1}{a_{0}^{2}}\left\{ \boldsymbol{B}\times\frac{\partial\boldsymbol{E}}{\partial t}\right\} _{z}-\frac{1}{a_{0}^{2}}\left\{ \boldsymbol{E}\times\frac{\partial\boldsymbol{B}}{\partial t}\right\} _{z}-\left\{ \boldsymbol{B}\times\left(\boldsymbol{\nabla\times}\boldsymbol{B}\right)\right\} _{z}-\frac{1}{a_{0}^{2}}\left\{ \boldsymbol{E}\times\left(\boldsymbol{\nabla\times}\boldsymbol{E}\right)\right\} _{z}\nonumber \\ +\left\{ \boldsymbol{B}\left(\boldsymbol{\nabla\cdot}\boldsymbol{B}\right)\right\} _{z}+\frac{1}{a_{0}^{2}}\left\{ \boldsymbol{E}\left(\boldsymbol{\nabla\cdot}\boldsymbol{E}\right)\right\} _{z}\,\,\boldsymbol{\longmapsto}\,\text{\ensuremath{\left(\text{Real\,coefficients\,of\,}\ensuremath{e_{3}}\right)}}\,\,,\label{eq:76}\\ \text{Im}\left\{ e_{3}N\right\} \,\,=\,\,-\left\{ \boldsymbol{B}\times\frac{\partial\boldsymbol{B}}{\partial t}\right\} _{z}-\frac{1}{a_{0}^{2}}\left\{ \boldsymbol{E}\times\frac{\partial\boldsymbol{E}}{\partial t}\right\} _{z}-\left\{ \boldsymbol{B}\times\left(\boldsymbol{\nabla\times}\boldsymbol{E}\right)\right\} _{z}+\left\{ \boldsymbol{E}\times\left(\boldsymbol{\nabla\times}\boldsymbol{B}\right)\right\} _{z}\,\nonumber \\ -\left\{ \boldsymbol{E}\left(\boldsymbol{\nabla\cdot}\boldsymbol{B}\right)\right\} _{z}+\left\{ \boldsymbol{B}\left(\boldsymbol{\nabla\cdot}\boldsymbol{E}\right)\right\} _{z}\,\boldsymbol{\longmapsto}\left(\text{Imaginary coefficients of \,}e_{3}\right)\,\,,\label{eq:77} \end{alignat} along with \begin{alignat}{1} \text{Re}\left\{ e_{0}O\right\} \,\,=\,\,-\frac{1}{a_{0}^{2}}\left(\boldsymbol{B}\cdot\frac{\partial\boldsymbol{E}}{\partial t}\right)+\frac{1}{a_{0}^{2}}\left(\boldsymbol{E}\cdot\frac{\partial\boldsymbol{B}}{\partial t}\right)+\boldsymbol{B}\cdot\left(\boldsymbol{\nabla\times B}\right)\nonumber \\ +\frac{1}{a_{0}^{2}}\left\{ \boldsymbol{E}\cdot\left(\boldsymbol{\nabla\times E}\right)\right\} \,\boldsymbol{\longmapsto}\,\, & \left(\text{Real coefficients of \ensuremath{\,e_{0}}}\right)\,\,,\label{eq:78}\\ \text{Im}\left\{ e_{0}O\right\} \,\,=\,\,\left(\boldsymbol{B}\cdot\frac{\partial\boldsymbol{B}}{\partial t}\right)+\frac{1}{a_{0}^{2}}\left(\boldsymbol{E}\cdot\frac{\partial\boldsymbol{E}}{\partial t}\right)+\boldsymbol{B}\cdot\left(\boldsymbol{\nabla\times E}\right)\nonumber \\ -\boldsymbol{E}\cdot\left(\boldsymbol{\nabla\times B}\right)\,\boldsymbol{\longmapsto}\,\, & \left(\text{Imaginary coefficients of\ensuremath{\,e_{0}}}\right)\,\,.\label{eq:79} \end{alignat} Similarly, the right-hand side of equation (\ref{eq:70}) can be expressed in the following quaternionic form \begin{align} -\boldsymbol{\bar{\Psi}}\circ\mathbb{J}\,\,\left\{ e_{1},\,e_{2},\,e_{3};\,\,\,e_{0}\right\} \,\,= & \,\,\ensuremath{e_{1}}L'+\ensuremath{e_{2}}M'+\ensuremath{e_{3}}N'+\ensuremath{e_{0}}O'\,\,,\,\,\,\forall\,(L',\,M',\,N',\,O')\in\mathbb{C}\label{eq:80} \end{align} where the real and imaginary quaternionic components $(L',M',N',\,\text{and\,}O')$ are \begin{align} \text{Re}\left\{ e_{1}L'\right\} \,\,= & \,\,-\mu\left(\boldsymbol{B\times J^{e}}\right)_{x}+\frac{1}{a_{0}^{2}\epsilon}\left(\boldsymbol{E\times J^{\mathfrak{m}}}\right)_{x}+\mu\left(\rho^{\mathfrak{m}}B_{x}\right)\nonumber \\ & +\frac{1}{a_{0}^{2}\epsilon}\left(\rho^{e}E_{x}\right)\,\,\longmapsto\left(\text{Real coefficients of}\,e_{1}\right)\,\,,\label{eq:81}\\ \text{Im}\left\{ e_{1}L'\right\} \,\,= & \,\,\frac{1}{a_{0}}\biggl\{\mu\left(\boldsymbol{E\times J^{e}}\right)_{x}+\frac{1}{\epsilon}\left(\boldsymbol{B\times J^{\mathfrak{m}}}\right)_{x}-\mu\left(\rho^{\mathfrak{m}}E_{x}\right)\nonumber \\ & +\frac{1}{\epsilon}\left(\rho^{e}B_{x}\right)\biggr\}\,\,\longmapsto\left(\text{Imaginary}\text{ coefficients of }\,e_{1}\right)\,\,,\label{eq:82}\\ \text{Re}\left\{ e_{2}M'\right\} \,\,= & \,\,-\mu\left(\boldsymbol{B\times J^{e}}\right)_{y}+\frac{1}{a_{0}^{2}\epsilon}\left(\boldsymbol{E\times J^{\mathfrak{m}}}\right)_{y}+\mu\left(\rho^{\mathfrak{m}}B_{y}\right)\nonumber \\ & +\frac{1}{a_{0}^{2}\epsilon}\left(\rho^{e}E_{y}\right)\,\,\longmapsto\left(\text{Real coefficients of}\,e_{2}\right)\,\,,\label{eq:83}\\ \text{Im}\left\{ e_{2}M'\right\} \,\,= & \,\,\frac{1}{a_{0}}\biggl\{\mu\left(\boldsymbol{E\times J^{e}}\right)_{y}+\frac{1}{\epsilon}\left(\boldsymbol{B\times J^{\mathfrak{m}}}\right)_{y}-\mu\left(\rho^{\mathfrak{m}}E_{y}\right)\nonumber \\ & +\frac{1}{\epsilon}\left(\rho^{e}B_{y}\right)\biggr\}\,\,\longmapsto\left(\text{Imaginary}\text{ coefficients of }\,e_{2}\right)\,\,,\label{eq:84}\\ \text{Re}\left\{ e_{3}N'\right\} \,\,= & \,\,-\mu\left(\boldsymbol{B\times J^{e}}\right)_{z}+\frac{1}{a_{0}^{2}\epsilon}\left(\boldsymbol{E\times J^{\mathfrak{m}}}\right)_{z}+\mu\left(\rho^{\mathfrak{m}}B_{z}\right)\nonumber \\ & +\frac{1}{a_{0}^{2}\epsilon}\left(\rho^{e}E_{z}\right)\,\,\longmapsto\left(\text{Real coefficients of}\,e_{3}\right)\,\,,\label{eq:85}\\ \text{Im}\left\{ e_{3}N'\right\} \,\,= & \,\,\frac{1}{a_{0}}\biggl\{\mu\left(\boldsymbol{E\times J^{e}}\right)_{z}+\frac{1}{\epsilon}\left(\boldsymbol{B\times J^{\mathfrak{m}}}\right)_{z}-\mu\left(\rho^{\mathfrak{m}}E_{z}\right)\nonumber \\ & +\frac{1}{\epsilon}\left(\rho^{e}B_{z}\right)\biggr\}\,\,\longmapsto\left(\text{Imaginary}\text{ coefficients of }\,e_{3}\right)\,\,,\label{eq:86} \end{align} and \begin{align} \text{Re}\left\{ e_{0}O'\right\} \,\,= & \,\,\mu\left(\boldsymbol{B}\cdot\boldsymbol{J}^{e}\right)-\frac{1}{\epsilon}\left(\boldsymbol{E}\cdot\boldsymbol{J}^{\mathfrak{m}}\right)\,\,\longmapsto\,\,\,\,\,\,\,\left(\text{Real}\text{ coefficients of }\,e_{0}\right)\,\,,\label{eq:87}\\ \text{Im}\left\{ e_{0}O'\right\} \,\,= & \,\,\frac{1}{a_{0}}\left\{ -\frac{1}{\epsilon}\left(\boldsymbol{B}\cdot\boldsymbol{J}^{\mathfrak{m}}\right)-\mu\left(\boldsymbol{E}\cdot\boldsymbol{J}^{e}\right)\right\} \,\,\longmapsto\,\,\,\left(\text{Imaginary}\text{ coefficients of }\,e_{0}\,\right)\,\,.\label{eq:88} \end{align} The above quaternionic analysis shows that the left- and right-hand sides of equation (\ref{eq:70}) match one another if the quaternionic coefficients $(L,M,N,O)$ and $(L',M',N',O')$ coincide, i.e., \begin{align} e_{1}L(\text{Re,\,Im})\,\,\cong & \,\,\,e_{1}L'(\text{Re,\,Im})\nonumber \\ e_{2}M(\text{Re,\,Im})\,\,\cong & \,\,\,e_{2}M'(\text{Re,\,Im})\nonumber \\ e_{3}N(\text{Re,\,Im})\,\,\cong & \,\,\,e_{3}N'(\text{Re,\,Im})\nonumber \\ e_{0}O(\text{Re,\,Im})\,\,\cong & \,\,\,e_{0}O'(\text{Re,\,Im})\,\,.\label{eq:89} \end{align} First, we equate the scalar components, i.e., $e_{0}O(\text{Re,\,Im})\,\,\cong\,\,\,e_{0}O'(\text{Re,\,Im})$.
To effect this matching, we equate the imaginary parts of the quaternionic scalar coefficient ($e_{0}$), which gives the conservation of energy required for the flow of the hydro-electromagnetic dyonic plasma: \begin{alignat}{1} \boldsymbol{B}\cdot\frac{\partial\boldsymbol{B}}{\partial t}+\frac{1}{a_{0}^{2}}\left(\boldsymbol{E}\cdot\frac{\partial\boldsymbol{E}}{\partial t}\right)+\boldsymbol{B}\cdot\left(\boldsymbol{\nabla\times E}\right)-\boldsymbol{E}\cdot\left(\boldsymbol{\nabla\times B}\right)+\frac{1}{\epsilon}\left(\boldsymbol{B}\cdot\boldsymbol{J}^{\mathfrak{m}}\right)+\mu\left(\boldsymbol{E}\cdot\boldsymbol{J}^{e}\right)\,=\,\, & 0\,,\label{eq:90} \end{alignat} which further reduces to \begin{alignat}{1} \frac{1}{2}\left(\frac{\partial B^{2}}{\partial t}+\frac{1}{a_{0}^{2}}\frac{\partial E^{2}}{\partial t}\right)+\boldsymbol{\nabla}\cdot\left(\boldsymbol{E}\times\boldsymbol{B}\right)+\frac{1}{\epsilon}\left(\boldsymbol{B}\cdot\boldsymbol{J}^{\mathfrak{m}}\right)+\mu\left(\boldsymbol{E}\cdot\boldsymbol{J}^{e}\right)\,=\,\, & 0\,\,.\label{eq:91} \end{alignat} Equation (\ref{eq:91}) expresses the energy theorem, also known as \textit{Poynting's theorem}, for the generalized electromagnetic fluid of dyons: the first term represents the energy of the hydro-electric and hydro-magnetic fields, the second term represents the average energy flux, and the third and fourth terms represent the work done by the field on the magnetic monopoles and the electrons. Interestingly, equation (\ref{eq:91}) resembles \textit{Bernoulli's theorem}, which expresses the conservation of energy for the dyonic fluid flow. If we instead equate the real parts of the quaternionic unit $e_{0}$ in equation (\ref{eq:70}), the complexified Dirac-Maxwell equations for the plasma fluid are obtained.
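The passage from (\ref{eq:90}) to (\ref{eq:91}) uses the vector identity $\boldsymbol{B}\cdot(\boldsymbol{\nabla}\times\boldsymbol{E})-\boldsymbol{E}\cdot(\boldsymbol{\nabla}\times\boldsymbol{B})=\boldsymbol{\nabla}\cdot(\boldsymbol{E}\times\boldsymbol{B})$, which the following sympy sketch verifies for arbitrary smooth fields (the helper \texttt{field} is ours).
\begin{verbatim}
import sympy as sp
from sympy.vector import CoordSys3D, curl, divergence

N = CoordSys3D('N')
def field(name):  # generic smooth vector field
    return sum(sp.Function(name + c)(N.x, N.y, N.z)*b
               for c, b in zip('xyz', (N.i, N.j, N.k)))

E, B = field('E'), field('B')

# Identity used between (90) and (91): B.(curl E) - E.(curl B) = div(E x B)
lhs = B.dot(curl(E)) - E.dot(curl(B))
rhs = divergence(E.cross(B))
print(sp.simplify(lhs - rhs))   # 0
\end{verbatim}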
Now, to find the force equation, i.e., the conservation of momentum, for the hydro-electromagnetic field of the dyonic cold plasma, we proceed by equating the real coefficients of $e_{j}X_{j}(\text{Re,\,Im})\,\,\cong\,\,\,e_{j}X_{j}'(\text{Re,\,Im})\,,$ $\forall\,j=1,2,3\,\text{and}\,X_{j}\simeq\left(L,M,N\right),$ $X_{j}'\simeq\left(L',M',N'\right)$: \begin{alignat}{1} \frac{1}{a_{0}^{2}}\left\{ \boldsymbol{B}\times\frac{\partial\boldsymbol{E}}{\partial t}\right\} -\frac{1}{a_{0}^{2}}\left\{ \boldsymbol{E}\times\frac{\partial\boldsymbol{B}}{\partial t}\right\} & -\left\{ \boldsymbol{B}\times\left(\boldsymbol{\nabla\times}\boldsymbol{B}\right)\right\} -\frac{1}{a_{0}^{2}}\left\{ \boldsymbol{E}\times\left(\boldsymbol{\nabla\times}\boldsymbol{E}\right)\right\} +\left\{ \boldsymbol{B}\left(\boldsymbol{\nabla\cdot}\boldsymbol{B}\right)\right\} \nonumber \\ +\frac{1}{a_{0}^{2}}\left\{ \boldsymbol{E}\left(\boldsymbol{\nabla\cdot}\boldsymbol{E}\right)\right\} \,=\,- & \mu\left(\boldsymbol{B}\times\boldsymbol{J}^{e}\right)+\frac{1}{a_{0}^{2}\epsilon}\left(\boldsymbol{E}\times\boldsymbol{J}^{\mathfrak{m}}\right)+\mu\left(\rho^{\mathfrak{m}}\boldsymbol{B}\right)+\frac{1}{a_{0}^{2}\epsilon}\left(\rho^{e}\boldsymbol{E}\right)\,,\label{eq:92} \end{alignat} which simplifies to \begin{alignat}{1} -\frac{1}{a_{0}^{2}}\frac{\partial\boldsymbol{\mathcal{H}}}{\partial t}-\frac{1}{2}\boldsymbol{\nabla}\left(B^{2}+\frac{1}{a_{0}^{2}}E^{2}\right)+\left(\boldsymbol{B}\boldsymbol{\cdot\nabla}\right)\boldsymbol{B}+\frac{1}{a_{0}^{2}}\left(\boldsymbol{E}\boldsymbol{\cdot\nabla}\right)\boldsymbol{E}+\boldsymbol{B}\left(\boldsymbol{\nabla\cdot B}\right)+\frac{1}{a_{0}^{2}}\boldsymbol{E}\left(\boldsymbol{\nabla\cdot E}\right)\nonumber \\ =\,\,\,\mu\left(\rho^{\mathfrak{m}}\boldsymbol{B}\right)+\frac{1}{a_{0}^{2}\epsilon}\left(\rho^{e}\boldsymbol{E}\right)-\mu\left(\boldsymbol{B}\times\boldsymbol{J}^{e}\right)+\frac{1}{a_{0}^{2}\epsilon}\left(\boldsymbol{E}\times\boldsymbol{J}^{\mathfrak{m}}\right)\,,\label{eq:93} \end{alignat} where $\boldsymbol{\mathcal{H}}=\left(\boldsymbol{E}\times\boldsymbol{B}\right)$ represents the fluidic power flux (or fluidic Poynting vector), which accounts for the energy transported in the plasma fluid by the hydro-electromagnetic field per unit volume per unit time. Interestingly, equation (\ref{eq:93}) represents the force per unit volume due to the generalized hydro-electromagnetic energy of the dyonic cold plasma, so that \begin{alignat}{1} \boldsymbol{\mathcal{F}}\,\,=\,\, & \frac{1}{a_{0}^{2}}\frac{\partial\boldsymbol{\mathcal{H}}}{\partial t}+\frac{1}{2}\boldsymbol{\nabla}\left(B^{2}+\frac{1}{a_{0}^{2}}E^{2}\right)-\left(\boldsymbol{B}\boldsymbol{\cdot\nabla}\right)\boldsymbol{B}-\frac{1}{a_{0}^{2}}\left(\boldsymbol{E}\boldsymbol{\cdot\nabla}\right)\boldsymbol{E}-\left(\boldsymbol{\nabla\cdot B}\right)\boldsymbol{B}\nonumber \\ & -\frac{1}{a_{0}^{2}}\left(\boldsymbol{\nabla\cdot E}\right)\boldsymbol{E}+\mu\left(\rho^{\mathfrak{m}}\boldsymbol{B}\right)+\frac{1}{a_{0}^{2}\epsilon}\left(\rho^{e}\boldsymbol{E}\right)-\mu\left(\boldsymbol{B}\times\boldsymbol{J}^{e}\right)+\frac{1}{a_{0}^{2}\epsilon}\left(\boldsymbol{E}\times\boldsymbol{J}^{\mathfrak{m}}\right)\,\,,\label{eq:94} \end{alignat} where $\boldsymbol{\mathcal{F}}$ is the generalized quaternionic fluid force per unit volume, composed of three contributions: the stress tensor, the fluidic power flux, and the dynamics of the dyonic particles per unit volume in the cold plasma.
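The simplification from (\ref{eq:92}) to (\ref{eq:93}) rests on the identities $-\boldsymbol{E}\times(\boldsymbol{\nabla}\times\boldsymbol{E})=(\boldsymbol{E}\cdot\boldsymbol{\nabla})\boldsymbol{E}-\frac{1}{2}\boldsymbol{\nabla}E^{2}$ (and likewise for $\boldsymbol{B}$) and $\boldsymbol{B}\times\partial_{t}\boldsymbol{E}-\boldsymbol{E}\times\partial_{t}\boldsymbol{B}=-\partial_{t}(\boldsymbol{E}\times\boldsymbol{B})$. A sketch of the first identity in sympy (the helper \texttt{conv} is ours):
\begin{verbatim}
import sympy as sp
from sympy.vector import CoordSys3D, curl, gradient

N = CoordSys3D('N')
Ex, Ey, Ez = [sp.Function(n)(N.x, N.y, N.z) for n in ('E_x', 'E_y', 'E_z')]
E = Ex*N.i + Ey*N.j + Ez*N.k

def conv(F):  # convective derivative (E.grad)F, computed component-wise
    return sum((Ex*sp.diff(F.dot(b), N.x) + Ey*sp.diff(F.dot(b), N.y)
                + Ez*sp.diff(F.dot(b), N.z))*b for b in (N.i, N.j, N.k))

# -E x (curl E) = (E.grad)E - (1/2) grad(E^2)
lhs = -E.cross(curl(E))
rhs = conv(E) - gradient(E.dot(E))*sp.Rational(1, 2)
print([sp.simplify((lhs - rhs).dot(b)) for b in (N.i, N.j, N.k)])  # [0, 0, 0]
\end{verbatim}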
Moreover, equation (\ref{eq:94}) leads to the following compact form, \begin{align} \boldsymbol{\mathcal{F}}\,\,= & \,\,\left(\boldsymbol{\nabla}\cdot\boldsymbol{\overleftrightarrow{T}}\right)+\boldsymbol{F}_{ff}+\boldsymbol{F}_{dyons}\,,\label{eq:95} \end{align} where the divergence of the viscous stress tensor, which acts analogously to the Maxwell stress tensor, is \begin{align} \boldsymbol{\nabla\cdot}\boldsymbol{\overleftrightarrow{T}}\,\,=- & \frac{1}{a_{0}^{2}}\left[\left(\boldsymbol{\nabla\cdot E}\right)\boldsymbol{E}+\left(\boldsymbol{E}\boldsymbol{\cdot\nabla}\right)\boldsymbol{E}-\frac{1}{2}\boldsymbol{\nabla}E^{2}\right]-\left[\left(\boldsymbol{B}\boldsymbol{\cdot\nabla}\right)\boldsymbol{B}+\left(\boldsymbol{\nabla\cdot B}\right)\boldsymbol{B}-\frac{1}{2}\boldsymbol{\nabla}B^{2}\right]\,\,,\label{eq:96} \end{align} and the forces arising from the fluidic power flux ($\boldsymbol{F}_{ff}$) and from the electromagnetic dyonic fluid particles ($\boldsymbol{F}_{dyons}$) are \begin{align} \boldsymbol{F}_{ff}\,\, & \simeq\,\,\frac{1}{a_{0}^{2}}\frac{\partial\boldsymbol{\mathcal{H}}}{\partial t}\,\,,\label{eq:97}\\ \boldsymbol{F}_{dyons}\,\, & \simeq\,\,\mu\left(\rho^{\mathfrak{m}}\boldsymbol{B}\right)+\frac{1}{a_{0}^{2}\epsilon}\left(\rho^{e}\boldsymbol{E}\right)-\mu\left(\boldsymbol{B}\times\boldsymbol{J}^{e}\right)+\frac{1}{a_{0}^{2}\epsilon}\left(\boldsymbol{E}\times\boldsymbol{J}^{\mathfrak{m}}\right)\,.\label{eq:98} \end{align} Thus, equation (\ref{eq:95}) represents the quaternionic generalization of the Navier-Stokes-like equation for the dyonic cold plasma. We can also write the simplified form of the Navier-Stokes-like equation by inserting the value of the quaternionic fluid force, i.e., \begin{align} \boldsymbol{\mathcal{F}}\,\,\simeq\,\,\rho\left(\frac{\partial}{\partial t}+\boldsymbol{v}\cdot\boldsymbol{\nabla}\right)\boldsymbol{v}\,\, & =\,\,\,\left(\boldsymbol{\nabla}\cdot\boldsymbol{\overleftrightarrow{T}}\right)+\boldsymbol{F}_{ff}+\boldsymbol{F}_{dyons}\,.\label{eq:99} \end{align} Therefore, by combining the above Navier-Stokes-like equation (\ref{eq:99}) with the Dirac-Maxwell equations (\ref{eq:68})-(\ref{eq:69}), the resulting MHD fluid equations provide a complete description of the dyonic cold plasma. In order to obtain the conservation law for the fluid momentum, we may write equation (\ref{eq:99}) in terms of linear momentum as \begin{alignat}{1} \frac{\partial\boldsymbol{P}_{\text{mech}}}{\partial t}\,\,= & \,\left(\boldsymbol{\nabla}\cdot\boldsymbol{\overleftrightarrow{T}}\right)+\frac{\partial\boldsymbol{P}_{\text{hydroem}}}{\partial t}\,,\label{eq:100} \end{alignat} where $\boldsymbol{P}_{\text{mech}}$ represents the mechanical momentum and $\boldsymbol{P}_{\text{hydroem}}$ the total generalized hydro-electromagnetic momentum of the dyonic cold plasma. Here we define the total generalized hydro-electromagnetic force as $\boldsymbol{F}_{\text{hydroem}}\,=\,\left(\boldsymbol{F}_{ff}+\boldsymbol{F}_{dyons}\right)\,\simeq\,\partial\boldsymbol{P}_{\text{hydroem}}/\partial t$.
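Equation (\ref{eq:96}) gives only the divergence of $\boldsymbol{\overleftrightarrow{T}}$; a natural explicit candidate consistent with it is $T_{ij}=-\frac{1}{a_{0}^{2}}\left(E_{i}E_{j}-\frac{1}{2}\delta_{ij}E^{2}\right)-\left(B_{i}B_{j}-\frac{1}{2}\delta_{ij}B^{2}\right)$. The sketch below (the explicit form of $T_{ij}$ is our assumption, not stated in the text) confirms that its row-wise divergence reproduces the right-hand side of (\ref{eq:96}).
\begin{verbatim}
import sympy as sp

x, y, z = sp.symbols('x y z', real=True)
a0 = sp.symbols('a_0', positive=True)
E = sp.Matrix([sp.Function(n)(x, y, z) for n in ('E_x', 'E_y', 'E_z')])
B = sp.Matrix([sp.Function(n)(x, y, z) for n in ('B_x', 'B_y', 'B_z')])
X = (x, y, z)
delta = sp.eye(3)

# Hypothetical stress tensor consistent with eq. (96)
T = sp.Matrix(3, 3, lambda i, j:
      -(E[i]*E[j] - delta[i, j]*E.dot(E)/2)/a0**2
      - (B[i]*B[j] - delta[i, j]*B.dot(B)/2))

divT = sp.Matrix([sum(sp.diff(T[i, j], X[j]) for j in range(3))
                  for i in range(3)])

def rhs_i(i):  # right-hand side of eq. (96), component-wise
    gradE2 = sp.diff(E.dot(E), X[i])/2
    gradB2 = sp.diff(B.dot(B), X[i])/2
    convE = sum(E[j]*sp.diff(E[i], X[j]) for j in range(3))
    convB = sum(B[j]*sp.diff(B[i], X[j]) for j in range(3))
    divE = sum(sp.diff(E[j], X[j]) for j in range(3))
    divB = sum(sp.diff(B[j], X[j]) for j in range(3))
    return (-(divE*E[i] + convE - gradE2)/a0**2
            - (convB + divB*B[i] - gradB2))

print([sp.simplify(divT[i] - rhs_i(i)) for i in range(3)])   # [0, 0, 0]
\end{verbatim}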
Therefore, we get \begin{alignat}{1} \frac{\partial\mathbb{G}}{\partial t}+\boldsymbol{\nabla}\cdot\boldsymbol{\overleftrightarrow{T}}\,\,=\, & 0\,,\label{eq:101} \end{alignat} where the resultant momentum is $\mathbb{G}\rightarrow\left(\boldsymbol{P}_{\text{hydroem}}-\boldsymbol{P}_{\text{mech}}\right)$. Equation (\ref{eq:101}) represents a generalized continuity equation for the generalized hydro-electromagnetic fluid momentum, in which the viscous stress tensor $\boldsymbol{\overleftrightarrow{T}}$ plays the role of the source current and the term $\mathbb{G}$ that of the source density of the system. Correspondingly, if we equate the imaginary coefficients of the quaternionic units $e_{j}$ $\left(\forall\,j=1,2,3\right)$ in equation (\ref{eq:89}), we again obtain the complexified Dirac-Maxwell-like equations. An interesting feature of the present quaternionic formalism is thus that the generalized energy-momentum conservation of the hydro-electromagnetic fluid of the dyonic cold plasma is invariant under the duality and Lorentz transformations. \section{Quaternionic wave equations for cold plasma fluid} In this section, we describe the wave equations of the electromagnetic fluid plasma consisting of dyons. To obtain the dyonic wave equations for the cold plasma fluid, we operate from the left with $\mathbb{D}$ on the quaternionic field equation: \begin{alignat}{1} \mathbb{D}\circ(\mathbb{\bar{D}}\circ\boldsymbol{\Psi})\,\,= & \,-\mathbb{D}\circ\mathbb{J}\,.\label{eq:102} \end{alignat} The quaternionic expression for the left-hand side of equation (\ref{eq:102}) becomes \begin{align} \mathbb{D}\circ(\mathbb{\bar{D}}\circ\boldsymbol{\Psi})\,\,\left\{ e_{1},\,e_{2},\,e_{3};\,\,\,e_{0}\right\} \,\,= & \,\,\ensuremath{e_{1}}P+\ensuremath{e_{2}}Q+\ensuremath{e_{3}}R+\ensuremath{e_{0}}S\,\,,\,\,\,\forall\,(P,\,Q,\,R,\,S)\in\mathbb{C}\label{eq:103} \end{align} where the real and imaginary components of the quaternionic coefficients $(P,\,Q,\,R,\,S)$ are \begin{alignat}{1} \text{Re}\left\{ e_{1}P\right\} \,\,= & \,\,\left(\boldsymbol{\nabla}^{2}B_{x}-\frac{1}{a_{0}^{2}}\frac{\partial^{2}B_{x}}{\partial t^{2}}\right)\,,\,\,\,\,\text{Im}\left\{ e_{1}P\right\} \,\,=\,\,\frac{1}{a_{0}}\left(\boldsymbol{\nabla}^{2}E_{x}-\frac{1}{a_{0}^{2}}\frac{\partial^{2}E_{x}}{\partial t^{2}}\right)\,,\nonumber \\ \text{Re}\left\{ e_{2}Q\right\} \,\,= & \,\,\left(\boldsymbol{\nabla}^{2}B_{y}-\frac{1}{a_{0}^{2}}\frac{\partial^{2}B_{y}}{\partial t^{2}}\right)\,,\,\,\,\,\text{Im}\left\{ e_{2}Q\right\} \,\,=\,\,\frac{1}{a_{0}}\left(\boldsymbol{\nabla}^{2}E_{y}-\frac{1}{a_{0}^{2}}\frac{\partial^{2}E_{y}}{\partial t^{2}}\right)\,,\nonumber \\ \text{Re}\left\{ e_{3}R\right\} \,\,= & \,\,\left(\boldsymbol{\nabla}^{2}B_{z}-\frac{1}{a_{0}^{2}}\frac{\partial^{2}B_{z}}{\partial t^{2}}\right)\,,\,\,\,\,\text{Im}\left\{ e_{3}R\right\} \,\,=\,\,\frac{1}{a_{0}}\left(\boldsymbol{\nabla}^{2}E_{z}-\frac{1}{a_{0}^{2}}\frac{\partial^{2}E_{z}}{\partial t^{2}}\right)\,,\nonumber \\ \text{Re}\left\{ e_{0}S\right\} \,\,= & 0\,,\,\,\,\,\text{Im}\left\{ e_{0}S\right\} \,\,=\,\,0\,\,.\label{eq:104} \end{alignat} Equation (\ref{eq:104}) is associated with the classical wave equations for the hydro-electric and hydro-magnetic fields, without any source terms.
Correspondingly, the quaternionic source expression for the right-hand side of equation (\ref{eq:102}) can be written as \begin{align} -\mathbb{D}\circ\mathbb{J}\,\,\left\{ e_{1},\,e_{2},\,e_{3};\,\,\,e_{0}\right\} \,\,= & \,\,\ensuremath{e_{1}}P'+\ensuremath{e_{2}}Q'+\ensuremath{e_{3}}R'+\ensuremath{e_{0}}S'\,,\,\,\,\forall\,(P',\,Q',\,R',\,S')\in\mathbb{C}\label{eq:105} \end{align} where the real and imaginary components are \begin{alignat}{1} \text{Re}\left\{ e_{1}P'\right\} \,\,= & \,\,\mu\left(\frac{\partial J_{z}^{e}}{\partial y}-\frac{\partial J_{y}^{e}}{\partial z}-\frac{1}{a_{0}^{2}\mu\epsilon}\frac{\partial J_{x}^{\mathfrak{m}}}{\partial t}-\frac{\partial\rho^{\mathfrak{m}}}{\partial x}\right)\,\,,\nonumber \\ \text{Im}\left\{ e_{1}P'\right\} \,\,= & -\frac{1}{a_{0}\epsilon}\left(\frac{\partial J_{z}^{\mathfrak{m}}}{\partial y}-\frac{\partial J_{y}^{\mathfrak{m}}}{\partial z}+\mu\epsilon\frac{\partial J_{x}^{e}}{\partial t}+\frac{\partial\rho^{e}}{\partial x}\right)\,\,,\nonumber \\ \text{Re}\left\{ e_{2}Q'\right\} \,\,= & \,\,\mu\left(\frac{\partial J_{x}^{e}}{\partial z}-\frac{\partial J_{z}^{e}}{\partial x}-\frac{1}{a_{0}^{2}\mu\epsilon}\frac{\partial J_{y}^{\mathfrak{m}}}{\partial t}-\frac{\partial\rho^{\mathfrak{m}}}{\partial y}\right)\,\,,\nonumber \\ \text{Im}\left\{ e_{2}Q'\right\} \,\,= & -\frac{1}{a_{0}\epsilon}\left(\frac{\partial J_{x}^{\mathfrak{m}}}{\partial z}-\frac{\partial J_{z}^{\mathfrak{m}}}{\partial x}+\mu\epsilon\frac{\partial J_{y}^{e}}{\partial t}+\frac{\partial\rho^{e}}{\partial y}\right)\,\,,\nonumber \\ \text{Re}\left\{ e_{3}R'\right\} \,\,= & \,\,\mu\left(\frac{\partial J_{y}^{e}}{\partial x}-\frac{\partial J_{x}^{e}}{\partial y}-\frac{1}{a_{0}^{2}\mu\epsilon}\frac{\partial J_{z}^{\mathfrak{m}}}{\partial t}-\frac{\partial\rho^{\mathfrak{m}}}{\partial z}\right)\,\,,\nonumber \\ \text{Im}\left\{ e_{3}R'\right\} \,\,= & -\frac{1}{a_{0}\epsilon}\left(\frac{\partial J_{y}^{\mathfrak{m}}}{\partial x}-\frac{\partial J_{x}^{\mathfrak{m}}}{\partial y}+\mu\epsilon\frac{\partial J_{z}^{e}}{\partial t}+\frac{\partial\rho^{e}}{\partial z}\right)\,\,,\nonumber \\ \text{Re}\left\{ e_{0}S'\right\} \,\,= & \,\,-\mu\left(\frac{\partial J_{x}^{e}}{\partial x}+\frac{\partial J_{y}^{e}}{\partial y}+\frac{\partial J_{z}^{e}}{\partial z}+\mu\epsilon\frac{\partial\rho^{e}}{\partial t}\right)\,\,,\nonumber \\ \text{Im}\left\{ e_{0}S'\right\} \,\,= & \,\,\frac{1}{a_{0}\epsilon}\left(\frac{\partial J_{x}^{\mathfrak{m}}}{\partial x}+\frac{\partial J_{y}^{\mathfrak{m}}}{\partial y}+\frac{\partial J_{z}^{\mathfrak{m}}}{\partial z}+\frac{\partial\rho^{\mathfrak{m}}}{\partial t}\right)\,\,.\label{eq:106} \end{alignat} The physical significance of the quaternionic analysis emerges when the left- and right-hand sides of equation (\ref{eq:102}) match one another, i.e., the quaternionic coefficients $(P,\,Q\,,R\,,S)$ and $(P'\,,Q'\,,R'\,,S')$ coincide: \begin{align} e_{1}P(\text{Re,\,Im})\,\,\cong & \,\,\,e_{1}P'(\text{Re,\,Im})\nonumber \\ e_{2}Q(\text{Re,\,Im})\,\,\cong & \,\,\,e_{2}Q'(\text{Re,\,Im})\nonumber \\ e_{3}R(\text{Re,\,Im})\,\,\cong & \,\,\,e_{3}R'(\text{Re,\,Im})\nonumber \\ e_{0}S(\text{Re,\,Im})\,\,\cong & \,\,\,e_{0}S'(\text{Re,\,Im})\,\,.\label{eq:107} \end{align} Now, equating the real and imaginary parts of $e_{0}S(\text{Re,\,Im})\,\,\cong\,\,\,e_{0}S'(\text{Re,\,Im})$ in equation (\ref{eq:107}), we obtain \begin{alignat}{1} \boldsymbol{\nabla}\cdot\boldsymbol{J}^{e}+\frac{1}{a_{0}^{2}}\frac{\partial\rho^{e}}{\partial t} & \,\,=\,\,0\,,\label{eq:108}\\ \boldsymbol{\nabla}\cdot\boldsymbol{J}^{\mathfrak{m}}+\frac{\partial\rho^{\mathfrak{m}}}{\partial t} & \,\,=\,\,0\,.\label{eq:109} \end{alignat} These equations represent the continuity equations, i.e., the conservation of electric and magnetic charge, for the dynamics of the cold electrons and the cold magnetic monopoles in the dyonic plasma. Therefore, we obtain the Lorenz-gauge-like conditions for the compressible cold plasma fluid ($\boldsymbol{\nabla}\cdot\boldsymbol{J}^{D}\neq0$), i.e. \begin{alignat}{1} \boldsymbol{\nabla}\cdot\left(q^{e}n^{e}\boldsymbol{u}\right)+\frac{1}{a_{0}^{2}}\frac{\partial(q^{e}n^{e}h)}{\partial t} & \,\,=\,\,0\,,\label{eq:110}\\ \boldsymbol{\nabla}\cdot\left(q^{\mathfrak{m}}n^{\mathfrak{m}}\boldsymbol{v}\right)+\frac{\partial(q^{\mathfrak{m}}n^{\mathfrak{m}}k)}{\partial t} & \,\,=\,\,0\,.\label{eq:111} \end{alignat} Correspondingly, equating in equation (\ref{eq:107}) the coefficients $e_{j}Y_{j}(\text{Re,\,Im})\,\,\cong\,\,\,e_{j}Y_{j}'(\text{Re,\,Im})\,,$ $\forall\,j=1,2,3\,\text{and}\,Y_{j}\simeq\left(P,Q,R\right),$ $Y_{j}'\simeq\left(P',Q',R'\right)$, we get \begin{alignat}{1} \boldsymbol{\nabla}^{2}\boldsymbol{E}-\frac{1}{a_{0}^{2}}\frac{\partial^{2}\boldsymbol{E}}{\partial t^{2}}-\frac{1}{\epsilon}\left(\boldsymbol{\nabla}\rho^{e}\right)-\mu\frac{\partial\boldsymbol{J}^{e}}{\partial t}-\frac{1}{\epsilon}\left(\boldsymbol{\nabla}\times\boldsymbol{J}^{\mathfrak{m}}\right)\,\, & =\,\,0\,,\label{eq:112}\\ \boldsymbol{\nabla}^{2}\boldsymbol{B}-\frac{1}{a_{0}^{2}}\frac{\partial^{2}\boldsymbol{B}}{\partial t^{2}}-\mu\left(\boldsymbol{\nabla}\rho^{\mathfrak{m}}\right)-\frac{1}{a_{0}^{2}\epsilon}\frac{\partial\boldsymbol{J}^{\mathfrak{m}}}{\partial t}+\mu\left(\boldsymbol{\nabla}\times\boldsymbol{J}^{e}\right)\,\, & =\,\,0\,.\label{eq:113} \end{alignat} Equations (\ref{eq:112}) and (\ref{eq:113}) represent the generalized hydro-electric and hydro-magnetic wave equations for the cold electrons and the cold magnetic monopoles traveling in the dyonic plasma fluid. In terms of the fluid variables, the hydro-electromagnetic wave components can also be expressed as \begin{align} \boldsymbol{\nabla}^{2}\boldsymbol{E}-\frac{1}{a_{0}^{2}}\frac{\partial^{2}\boldsymbol{E}}{\partial t^{2}}-\frac{1}{\epsilon}\left(\boldsymbol{\nabla}(q^{e}n^{e}h)\right)-\mu\frac{\partial\left(q^{e}n^{e}\boldsymbol{u}\right)}{\partial t}-\frac{1}{\epsilon}\left(\boldsymbol{\nabla}\times\left(q^{\mathfrak{m}}n^{\mathfrak{m}}\boldsymbol{v}\right)\right)\,\, & =\,\,0\,,\label{eq:114}\\ \boldsymbol{\nabla}^{2}\boldsymbol{B}-\frac{1}{a_{0}^{2}}\frac{\partial^{2}\boldsymbol{B}}{\partial t^{2}}-\mu\left(\boldsymbol{\nabla}(q^{\mathfrak{m}}n^{\mathfrak{m}}k)\right)-\frac{1}{a_{0}^{2}\epsilon}\frac{\partial\left(q^{\mathfrak{m}}n^{\mathfrak{m}}\boldsymbol{v}\right)}{\partial t}+\mu\left(\boldsymbol{\nabla}\times\left(q^{e}n^{e}\boldsymbol{u}\right)\right)\,\, & =\,\,0\,.\label{eq:115} \end{align} In vacuum, equations (\ref{eq:114}) and (\ref{eq:115}) reduce to the free hydro-electromagnetic wave equations of the cold plasma, i.e., \begin{align} \square\boldsymbol{E} & \,\,=\,\,0\,,\,\,\,\,\,\,\,\text{and}\,\,\,\,\,\square\boldsymbol{B}\,\,=\,\,0\,.\label{eq:116} \end{align} However, we may treat the dyonic fluid within the two-fluid theory, in which both electrons and magnetic monopoles propagate through the cold plasma fluid.
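Equations (\ref{eq:112}) and (\ref{eq:113}) follow by taking the curl of the Faraday- and Amp\`ere-type laws (\ref{eq:69}) and eliminating $\boldsymbol{\nabla}\times\boldsymbol{B}$ and $\boldsymbol{\nabla}\times\boldsymbol{E}$, using the identity $\boldsymbol{\nabla}\times(\boldsymbol{\nabla}\times\boldsymbol{F})=\boldsymbol{\nabla}(\boldsymbol{\nabla}\cdot\boldsymbol{F})-\boldsymbol{\nabla}^{2}\boldsymbol{F}$ together with Gauss's laws (\ref{eq:68}). A sympy sketch verifying this identity (the helper \texttt{lap} is ours):
\begin{verbatim}
import sympy as sp
from sympy.vector import CoordSys3D, curl, divergence, gradient

N = CoordSys3D('N')
F = sum(sp.Function('F_' + c)(N.x, N.y, N.z)*b
        for c, b in zip('xyz', (N.i, N.j, N.k)))

def lap(V):  # component-wise vector Laplacian
    return sum((sp.diff(V.dot(b), N.x, 2) + sp.diff(V.dot(b), N.y, 2)
                + sp.diff(V.dot(b), N.z, 2))*b for b in (N.i, N.j, N.k))

# curl(curl F) = grad(div F) - lap F, the identity behind eqs. (112)-(113)
resid = curl(curl(F)) - (gradient(divergence(F)) - lap(F))
print([sp.simplify(resid.dot(b)) for b in (N.i, N.j, N.k)])   # [0, 0, 0]
\end{verbatim}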
Here, two types of wave propagation seem theoretically possible: the wave propagation of electrons and the wave propagation of magnetic monopoles, where the electron waves may be taken to be much faster than the monopole waves owing to the difference in their mass densities. In the following cases we discuss the electron plasma waves and the magnetic-monopole plasma waves for dyonic fluid propagation. \paragraph{Case-1 Langmuir-like wave propagation:} Suppose the magnetic monopoles are infinitely massive, so that they do not contribute to the fluid motion \cite{key-38}. In this situation the whole dynamics of the plasma fluid depends on the electron inertia. Thus, for this electron-wave or Langmuir-like wave propagation, we assume as initial condition an unmagnetized cold plasma fluid containing no magnetic-monopole sources. The equation of motion for the cold electron plasma fluid then becomes \begin{align} m^{e}n^{e}\left(\frac{\partial}{\partial t}+\boldsymbol{u}\cdot\boldsymbol{\nabla}\right)\boldsymbol{u}\,\,= & \,\,\boldsymbol{F}^{e}\,,\label{eq:117} \end{align} where $\boldsymbol{F}^{e}$ represents the Lorentz electric force due to the electrons. The electron continuity equation yields \begin{align} \boldsymbol{\nabla}\cdot\left(n^{e}\boldsymbol{u}\right)+\frac{1}{a_{0}^{2}}\frac{\partial(n^{e}h)}{\partial t} & \,\,=\,\,0\,.\label{eq:118} \end{align} Correspondingly, the cold electron plasma fluid also satisfies the following Maxwell equations \begin{align} \boldsymbol{\nabla\cdot E}\, & =\,\frac{\rho^{e}\,}{\epsilon},\nonumber \\ \boldsymbol{\nabla\cdot B}\, & =\,0\,,\nonumber \\ \boldsymbol{\nabla\times E} & \,=-\frac{\partial\boldsymbol{B}}{\partial t}\,,\nonumber \\ \boldsymbol{\nabla\times B}\, & =\,\frac{1}{a_{0}^{2}}\frac{\partial\boldsymbol{E}}{\partial t}+\mu\boldsymbol{J}^{e}\,.\label{eq:119} \end{align} As such, the hydro-electromagnetic wave equations for the electron-fluid plasma can be expressed as \begin{alignat}{1} \boldsymbol{\nabla}^{2}\boldsymbol{E}-\frac{1}{a_{0}^{2}}\frac{\partial^{2}\boldsymbol{E}}{\partial t^{2}}-\frac{1}{\epsilon}\left(\boldsymbol{\nabla}\rho^{e}\right)-\mu\frac{\partial\boldsymbol{J}^{e}}{\partial t}\,\, & =\,\,0\,,\label{eq:120}\\ \boldsymbol{\nabla}^{2}\boldsymbol{B}-\frac{1}{a_{0}^{2}}\frac{\partial^{2}\boldsymbol{B}}{\partial t^{2}}+\mu\left(\boldsymbol{\nabla}\times\boldsymbol{J}^{e}\right)\,\, & =\,\,0\,.\label{eq:121} \end{alignat} These equations are not invariant under the duality transformation because only the electron-fluid plasma is considered. In this case, the generalized hydro-electromagnetic wave propagation for the electron-fluid plasma becomes \begin{align} \boldsymbol{\nabla}^{2}\Psi-\frac{1}{a_{0}^{2}}\frac{\partial^{2}\Psi}{\partial t^{2}}-\frac{1}{\epsilon}\left(\boldsymbol{\nabla}\rho^{e}\right)-\mu\left[\frac{\partial\boldsymbol{J}^{e}}{\partial t}-\left(\boldsymbol{\nabla}\times\boldsymbol{J}^{e}\right)\right]\,\, & =\,\,0\,.\label{eq:122} \end{align} \paragraph{Case-2 \textquoteright t Hooft-Polyakov monopole-like wave propagation:} In the \textquoteright t Hooft-Polyakov model \cite{key-39,key-40}, after symmetry breaking one finds a $U(1)$ gauge theory which exhibits the characteristics of Maxwell\textquoteright s electromagnetic theory. Generally, the \textquoteright t Hooft-Polyakov magnetic monopole carries one Dirac unit of magnetic charge.
Let $m^{\mathfrak{m}}$ denote the mass of the \textquoteright t Hooft-Polyakov magnetic monopoles; for a pure magnetic-monopole fluid plasma we neglect the electron motion ($\rho^{e},\,\boldsymbol{J}^{e}\simeq0$). Then the equation of motion for the compressible magnetic-monopole fluid plasma becomes \begin{align} m^{\text{\ensuremath{\mathfrak{m}}}}n^{\mathfrak{m}}\left(\frac{\partial}{\partial t}+\boldsymbol{v}\cdot\boldsymbol{\nabla}\right)\boldsymbol{v}\,\,\,= & \,\,\boldsymbol{F}^{\text{\ensuremath{\mathfrak{m}}}}\,,\label{eq:123} \end{align} along with the continuity equation \begin{align} \boldsymbol{\nabla}\cdot\left(n^{\text{\ensuremath{\mathfrak{m}}}}\boldsymbol{v}\right)+\frac{1}{a_{0}^{2}}\frac{\partial(n^{\text{\ensuremath{\mathfrak{m}}}}k)}{\partial t} & \,\,=\,\,0\,,\label{eq:124} \end{align} where $\boldsymbol{F}^{\text{\ensuremath{\mathfrak{m}}}}$ is the Lorentz magnetic force. The magnetic-monopole fluid satisfies the following Maxwell equations \begin{align} \boldsymbol{\nabla\cdot E}\, & =\,0\,,\nonumber \\ \boldsymbol{\nabla\cdot B}\, & =\,\mu\rho^{\text{\ensuremath{\mathfrak{m}}}}\,,\nonumber \\ \boldsymbol{\nabla\times E} & \,=-\frac{\partial\boldsymbol{B}}{\partial t}-\frac{1}{\epsilon}\boldsymbol{J}^{\mathfrak{m}}\,,\nonumber \\ \boldsymbol{\nabla\times B}\, & =\,\frac{1}{a_{0}^{2}}\frac{\partial\boldsymbol{E}}{\partial t}\,.\label{eq:125} \end{align} Therefore the hydro-electromagnetic wave equations for the magnetic-monopole fluid plasma propagation can be written as \begin{alignat}{1} \boldsymbol{\nabla}^{2}\boldsymbol{E}-\frac{1}{a_{0}^{2}}\frac{\partial^{2}\boldsymbol{E}}{\partial t^{2}}-\frac{1}{\epsilon}\left(\boldsymbol{\nabla}\times\boldsymbol{J}^{\mathfrak{m}}\right)\,\, & =\,\,0\,,\label{eq:126}\\ \boldsymbol{\nabla}^{2}\boldsymbol{B}-\frac{1}{a_{0}^{2}}\frac{\partial^{2}\boldsymbol{B}}{\partial t^{2}}-\mu\left(\boldsymbol{\nabla}\rho^{\mathfrak{m}}\right)-\frac{1}{a_{0}^{2}\epsilon}\frac{\partial\boldsymbol{J}^{\mathfrak{m}}}{\partial t}\,\, & =\,\,0\,.\label{eq:127} \end{alignat} The generalized hydro-electromagnetic wave propagation of the magnetic-monopole fluid plasma becomes \begin{align} \boldsymbol{\nabla}^{2}\Psi-\frac{1}{a_{0}^{2}}\frac{\partial^{2}\Psi}{\partial t^{2}}-\mu\left(\boldsymbol{\nabla}\rho^{\mathfrak{m}}\right)-\frac{1}{\epsilon}\left[\frac{1}{a_{0}^{2}}\frac{\partial\boldsymbol{J}^{\mathfrak{m}}}{\partial t}+\left(\boldsymbol{\nabla}\times\boldsymbol{J}^{\mathfrak{m}}\right)\right]\,\, & =\,\,0\,.\label{eq:128} \end{align} Moreover, in the \textquoteright t Hooft-Polyakov field, the magnetic monopoles have a definite size, inside of which massive fields play a role in providing a smooth structure, while outside they rapidly vanish, leaving the field configuration identical to that of Dirac's monopole. A stable monopole solution satisfying the Bogomolny condition in the \textquoteright t Hooft-Polyakov field was introduced by Bogomolny, Prasad and Sommerfield (BPS) \cite{key-41}. \section{Conclusion} \noindent The Navier-Stokes equation is essentially a balance equation for the motion of a compressible fluid, based on Newton's second law. It plays an important role in MHD: the MHD equations are the combination of the Navier-Stokes equation of fluid dynamics with Maxwell's equations of electrodynamics. In this paper, we have discussed both the Navier-Stokes and Maxwell equations to give a complete formulation of the dual-MHD equations of the dyonic cold plasma fluid.
Dyons in a cold plasma (where the plasma temperature is assumed negligible) are highly energetic soliton-like particles consisting of electrons as well as magnetic monopoles. We have used the four-dimensional Hamilton (quaternion) algebra to analyze the dynamics of the dyonic cold plasma fluid. The benefit of the quaternionic algebra is that it describes both scalar and vector fields in a single frame, namely the four-vector formulation in Euclidean space-time. Accordingly, we have described the quaternionic four-velocities, the components of the generalized Lamb and vorticity fields, the four-current sources, etc., for the dyonic cold plasma. We have expressed the generalized quaternionic hydro-electromagnetic field, which unifies the analogous Lamb and vorticity fields of the dyonic cold plasma fluid. The scalar component of the quaternionic hydro-electromagnetic field has been identified with dual Lorentz-gauge-like conditions. We have derived the generalized quaternionic Dirac-Maxwell-like equations for the conducting electromagnetic fluid of the dyonic cold plasma. In section-5, the generalized Navier-Stokes equation for the dyonic cold plasma fluid has been discussed. We obtained the generalized quaternionic form of the conservation of energy for the hydro-electromagnetic field by equating the imaginary part of the quaternionic scalar coefficient; this energy-conservation equation correlates with Bernoulli's theorem for the dynamics of the dyonic plasma fluid. The real part of the quaternionic coefficient represents the generalized quaternionic Navier-Stokes-like equation for the dyonic cold plasma fluid, which gives the total force per unit volume acting on the hydro-electric and hydro-magnetic fields of the dyonic cold plasma. Equivalently, the generalized quaternionic Navier-Stokes equation can be read as the conservation of linear momentum in the field of the dyonic cold plasma; for the conducting plasma fluid this conservation law is represented by the generalized continuity equation (\ref{eq:101}). Therefore, the combination of the generalized Dirac-Maxwell equations and the Navier-Stokes equation provides a complete description of the quaternionic dual-MHD equations. In section-6, we discussed the wave propagation of dyons in the generalized hydro-electromagnetic fields of the cold plasma, and analyzed the conservation of electric and magnetic charges together with the dynamics of electrons and magnetic monopoles in the conducting cold plasma fluid. Equations (\ref{eq:112}) and (\ref{eq:113}) describe the generalized hydro-electric and hydro-magnetic wave equations for cold electrons and cold magnetic monopoles, respectively, moving in the dyonic plasma fluid. Interestingly, the quaternionic formalism for dyonic plasma waves shows that there are theoretically two types of wave propagation, namely waves carried by the electrons and waves carried by the magnetic monopoles; the electron waves propagate far more rapidly than the magnetic-monopole waves because of the difference in mass densities. Our theory therefore predicts the existence of an electron wave (Langmuir-like wave) and a magnetic-monopole wave (\textquoteright t Hooft-Polyakov-like wave) in the dynamics of the dyonic compressible plasma fluid. The generalized Langmuir and \textquoteright t Hooft-Polyakov wave equations for the electron and magnetic-monopole fluids are given by equations (\ref{eq:122}) and (\ref{eq:128}). \noindent From the experimental point of view, on the other hand, searches for magnetic monopoles (or dyons) fall into three categories, viz.
(a) accelerator searches, (b) direct searches, and (c) astrophysical bounds. In accelerator searches, magnetic monopoles should be produced in particle-accelerator experiments if the collision energy is sufficiently high, i.e., higher than $2Mc^{2}$. For GUT monopoles the required energy is at least 12 orders of magnitude higher than the energies available at the Large Hadron Collider (LHC), so it is unrealistic to expect that they could be produced in any foreseeable particle accelerator. Besides attempting to produce magnetic monopoles in an experiment, one can also look for monopoles that already exist in the universe: since monopoles are stable particles, monopoles created in the early universe should still be around. Because of the Dirac quantization condition, their magnetic field is strong and their behaviour is very different from that of other, electrically charged particles, which is what direct searches exploit. As for astrophysical bounds, magnetic monopoles would also have astrophysical effects, which can be used to look for them and to constrain their flux. Experimentally, therefore, it is very difficult to detect magnetic monopoles (or dyons) because of the huge energies involved. \begin{description} \item [{\textbf{Acknowledgments:}}] The authors would like to thank the anonymous reviewers for their helpful and constructive comments, which greatly contributed to improving the final version of the paper. \end{description}
\section{Introduction} The nature of the quantum vacuum is fundamental to the study of quantum information. By coupling the vacuum field to particle detectors, it is possible to gain understanding of the quantum vacuum and its response to the structure of spacetime. This is particularly interesting if topological features or horizons are present. These detectors are often taken to be simple two-level quantum systems (known as Unruh-DeWitt detectors) that interact with an underlying scalar field, a model that captures the essential features of the light-matter interaction \cite{MartinMartinez:2012th,Funai:2021jpc}. Such detectors have been employed to study the structure of spacetime \cite{Smith2016,Ng2017}, black holes \cite{Henderson2018,Tjoa2020}, and the thermality of de Sitter spacetime \cite{Steeg2009,Huang2017}. Consider a detector with uniform acceleration in flat spacetime. The detector will experience the Unruh effect, heating up to a temperature that increases in proportion to its acceleration \cite{Fulling1973,Davies1975,Unruh1976}. This effect arises because the vacuum with respect to one set of modes is not the vacuum with respect to a second set of modes. Though highly idealized in its original assumptions (such as that of an eternally uniformly accelerating detector), a model-independent derivation of the Unruh effect has been given in the context of axiomatic quantum field theory \cite{sewell1982quantum}, with the field temperature (given by the Kubo-Martin-Schwinger (KMS) condition \cite{Kubo1957,Martin1959,Haag1967}) and the temperature measured by the detector being the same. There have since been many demonstrations that detectors undergoing other forms of acceleration (non-uniform, circular) get hot \cite{Bell:1986ir,Costa:1987vv,Percoco:1991iv,Villalba:1999uj,Ostapchuk:2011ud,Hu:2012jr,Doukas2013}. In these more general situations the field temperature is positively correlated with the detector temperature, with the latter a monotonically increasing function of the former. In the last few years it has been shown that some physical situations exhibit the so-called \textit{anti}-Unruh effect instead, in which the temperature of the field is no longer positively correlated with that measured by the detector \cite{Brenna2016,Garay2016}. This anti-Unruh effect can be split into two cases: a weak anti-Unruh effect (as the temperature of the field increases, the detector clicks less often) and a strong anti-Unruh effect (the field temperature and detector temperature are inversely related) \cite{Garay2016}. When considering black holes, the Hawking effect is the analogue of the Unruh effect \cite{Birrell1982}. While in general detector temperatures are positively correlated with the field temperature outside a black hole \cite{Hodgkinson2012,Smith2014,Hodgkinson:2014iua,Ng:2014kha}, recently an anti-Hawking effect was shown to also exist, in which a static Unruh-DeWitt (UdW) detector exhibited both strong and weak versions of the phenomenon \cite{Henderson:2019uqo}. This was explicitly demonstrated for the (2+1)-dimensional static Banados-Teitelboim-Zanelli (BTZ) black hole. For sufficiently small black holes, the temperature measured by the detector would decrease as the KMS field temperature of the Hawking radiation increased.
The anti-Hawking effect has since been observed for a broader range of boundary conditions \cite{Campos:2020twd}, though its weak version is not observed for massless topological black holes in four spacetime dimensions \cite{Campos:2020lpt}. The effects of spacetime dragging due to rotation on these phenomena are much less understood. The quantum vacuum around a rotating black hole is known to exhibit features significantly different from its non-rotating counterpart \cite{Winstanley_2001,Casals_2008,Ottewill:2000yr}. Several investigations of the behaviour of quantum scalar fields in the background of a rotating BTZ black hole have been carried out \cite{Mann:1996ze,Singh:2011gd,Bussola:2017wki,Meitei:2018mgo}, and studies investigating the response of Unruh-DeWitt detectors in such spacetimes have also been undertaken \cite{Hodgkinson2012}. Recently it was shown that rotation has very significant effects on the entanglement harvesting abilities of UdW detectors, with the harvested entanglement being considerably amplified at intermediate distances (about 20-50 horizon radii) from the black hole \cite{Robbins2020}. Motivated by this, we study here the implications of rotation for the anti-Hawking effect. We find that rotation increases the intensity of the weak anti-Hawking effect, but has a negligible influence on its threshold critical temperature. However, for the strong anti-Hawking effect we find a strong dependence on the angular momentum, with the effect becoming stronger or weaker depending on the boundary conditions. The influence of AdS length on the strong and weak versions of the effect is likewise distinct: the weak anti-Hawking effect is independent of AdS length, whereas the strong version sees an increased temperature range. \section{Unruh-DeWitt Detectors} To model the interaction between the detectors and the field, we take the detectors to be two-level quantum systems with ground state $\ket{0}_D$ and excited state $\ket{1}_D$, separated by an energy gap $\Omega_D$. We shall assume that a detector follows a spacetime trajectory $x_D(\tau)$. The interaction Hamiltonian is \begin{align} H_D=\lambda\chi_D(\tau)\left(e^{i\Omega_D\tau}\sigma^+ + e^{-i\Omega_D\tau}\sigma^-\right)\otimes\phi[x_D(\tau)]\,, \end{align} where the switching function dictating the duration of the interaction between the detector and field is $\chi_D(\tau)$, $\lambda\ll1$ is the field-detector coupling constant, and the ladder operators that raise and lower the energy levels of the detector are $\sigma^+= \ket{1}_D\bra{0}_D$ and $\sigma^-= \ket{0}_D\bra{1}_D$, respectively. Let the initial state of the detector-field system be $\ket{\Psi_i}=\ket{0}_D\ket{0}$. The final state after a time $t$ is then $\ket{\Psi_f}=U(t,0)\ket{\Psi_i}$, where $U(t,0)=\mathcal{T}e^{-i\int dt\left[\frac{d\tau_D}{dt}H_D(\tau_D)\right]}$, with $\mathcal{T}$ being the time-ordering operator. With the reduced density operator $\rho_D=\text{Tr}_\phi\ket{\Psi_f}\bra{\Psi_f}$ being the state of the detector after tracing out the field's degrees of freedom, we have \cite{Smith2016,Smith2017} \begin{align} \rho_{D}=\begin{pmatrix} 1-P_D &0\\ 0&P_D \end{pmatrix} +\mathcal{O}(\lambda^4)\,, \end{align} where \begin{widetext} \begin{align} P_D=\lambda^2\int d\tau_Dd\tau_D'\chi_D(\tau_D)\chi_D(\tau_D')e^{-i\Omega_D(\tau_D-\tau_D')}W(x_D(\tau_D),x_D(\tau_D')) \label{eq: PD} \end{align} \end{widetext} is the detector's transition probability. We note that $P_D$ depends on the two-point correlation function, $W(x,x')=\braket{0|\phi(x)\phi(x')|0}$ (also called the Wightman function), of the vacuum.
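Given a concrete Wightman function, the transition probability \eqref{eq: PD} can be evaluated by direct two-dimensional quadrature. The sketch below is a minimal illustration of this step for a Gaussian switching function; the Wightman function is a user-supplied callable, and the smooth stand-in used in the demonstration is purely illustrative (it is \emph{not} the BTZ Wightman function constructed below).

\begin{verbatim}
import numpy as np

def transition_probability(W, Omega, sigma, lam=1.0, cut=6.0, n=400):
    # P_D = lam^2 int dtau dtau' chi(tau) chi(tau')
    #       * exp(-i Omega (tau - tau')) W(tau, tau'),
    # with Gaussian switching chi(tau) = exp(-tau^2 / (2 sigma^2)).
    tau = np.linspace(-cut * sigma, cut * sigma, n)
    d = tau[1] - tau[0]
    T, Tp = np.meshgrid(tau, tau, indexing="ij")
    chi = np.exp(-(T**2 + Tp**2) / (2 * sigma**2))
    val = chi * np.exp(-1j * Omega * (T - Tp)) * W(T, Tp)
    # P_D is real for a Hermitian W; discard numerical imaginary residue.
    return lam**2 * np.real(np.sum(val)) * d * d

# Smooth toy correlator, illustrative only (NOT a physical Wightman function).
W_demo = lambda t, tp: np.exp(-(t - tp)**2)

print(transition_probability(W_demo, Omega=1.0, sigma=2.0))
\end{verbatim}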
From the transition probability, we can then define the response function, \begin{align} \mathcal{F}=\frac{P_D}{\lambda^2\sigma}\,, \end{align} where $\sigma$ describes the timescale of interaction between the field and the detector. In this paper, we shall focus on a Gaussian switching function, $\chi_D(\tau)=e^{-\frac{\tau^2}{2\sigma^2}}$. We will consider fields whose Wightman functions obey the relation \begin{equation}\label{KMScond} W(\tau-i/T_{KMS},\tau')=W(\tau',\tau) \end{equation} known as the Kubo-Martin-Schwinger (KMS) condition \cite{Kubo1957,Martin1959,Haag1967}. The quantity $T_{KMS}$ in \eqref{KMScond} can be regarded as the temperature of the quantum field in the spacetime; it depends only on the nature of the quantum field and the spacetime background. Correspondingly, we can also define the detector's temperature from its excitation to de-excitation ratio (EDR). Let \begin{align} \mathcal{R}=\frac{\mathcal{F}(\Omega)}{\mathcal{F}(-\Omega)}\,, \end{align} such that there exists a temperature $T$ that obeys the same form of the KMS condition \cite{Fewster2016}, \begin{align} \mathcal{R}=e^{-\Omega/T}\,. \end{align} Labelling the temperature that obeys this condition by $T_{EDR}$, we have \begin{align} T_{EDR}=-\frac{\Omega}{\log\mathcal{R}}\,. \end{align} The quantity $T_{EDR}$ can be regarded as the temperature that the UdW detector registers in the spacetime. Normally we expect $T_{EDR}$ and $T_{KMS}$ to be positively correlated: as the black hole gets hotter, the field temperature increases and the temperature registered by the UdW detector likewise increases. This is indeed the case for most situations in black hole physics. As noted above, it was recently shown that this is not always the case \cite{Henderson:2019uqo}, and that sometimes the contrary situation, known as the anti-Hawking effect, occurs. As with the anti-Unruh effect \cite{Brenna2016,Garay2016}, we define \begin{align}\label{weakAH} &\frac{d\mathcal{F}(\Omega)}{dT_{KMS}}<0 \quad \textrm{weak}\\ & \frac{\partial T_{EDR}}{\partial T_{KMS}} < 0 \quad \textrm{strong} \label{strongAH} \end{align} for the weak and strong anti-Hawking effects respectively. \section{Rotating BTZ Black holes} We can write the action of our system as $S=S_{EH}+S_{\phi}$, where \begin{align} S_{EH}=\frac{1}{16\pi}\int \left(R+\frac{2}{\ell^2}\right)\sqrt{-g}d^3x \end{align} is the Einstein-Hilbert action with negative cosmological constant ($R$ is the Ricci scalar, $g$ is the determinant of the metric tensor $g_{\mu\nu}$, and $\ell$ is the AdS length) and \begin{align} S_{\phi}=-\int\left(\frac{1}{2}g^{\mu\nu}\partial_{\mu}\phi\partial_{\nu}\phi+\frac{1}{16}R\phi^2\right)\sqrt{-g}d^3x \end{align} is the action for a conformally-coupled scalar field $\phi$. We are interested in analyzing both the KMS temperature of the field and the EDR temperature of a detector near a rotating BTZ black hole, whose line element is \cite{Banados1992} \begin{equation}\label{btzmet} ds^2=-(N^\perp)^2dt^2+f^{-2}dr^2+r^2(d\phi+N^{\phi}dt)^2 \end{equation} where $N^\perp=f=\sqrt{-M+\frac{r^2}{\ell^2}+\frac{J^2}{4r^2}}$ and $N^\phi =-\frac{J}{2r^2}$, with $M=\frac{r_+^2+r_-^2}{\ell^2}$ the mass of the black hole and $J=\frac{2r_+ r_-}{\ell}$ its angular momentum. The Hawking temperature is \begin{align} T_H=\frac{1}{2\pi\ell^2}\left(\frac{r_+^2-r_-^2}{r_+}\right)\,, \end{align} where the inner and outer horizon radii are denoted by $r_-$ and $r_+$.
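Since $M=(r_+^2+r_-^2)/\ell^2$ and $J=2r_+r_-/\ell$, the horizon radii follow in closed form, $r_\pm^2=\frac{M\ell^2}{2}\left(1\pm\sqrt{1-(J/M\ell)^2}\right)$, from which $T_H$ is obtained directly. The short helper below (a minimal sketch; parameter values are illustrative) inverts these relations and also implements the EDR temperature $T_{EDR}=-\Omega/\log\mathcal{R}$ from a pair of response values.

\begin{verbatim}
import numpy as np

def btz_horizons(M, J, ell):
    # Invert M = (r+^2 + r-^2)/ell^2, J = 2 r+ r-/ell (needs |J| <= M ell).
    s = np.sqrt(1.0 - (J / (M * ell))**2)
    r_plus = np.sqrt(M * ell**2 * (1.0 + s) / 2.0)
    r_minus = np.sqrt(M * ell**2 * (1.0 - s) / 2.0)
    return r_plus, r_minus

def hawking_temperature(M, J, ell):
    r_plus, r_minus = btz_horizons(M, J, ell)
    return (r_plus**2 - r_minus**2) / (2.0 * np.pi * ell**2 * r_plus)

def T_EDR(F_plus, F_minus, Omega):
    # EDR temperature from R = F(Omega) / F(-Omega).
    return -Omega / np.log(F_plus / F_minus)

# Illustrative values in the units of the text (ell = 1):
print(hawking_temperature(M=0.001, J=0.0009, ell=1.0))
\end{verbatim}

At $J=M\ell$ the square root above vanishes and $r_+=r_-$, so $T_H\to0$, consistent with the extremality bound noted next.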
Note that $|J|\leq M\ell$, with extremality occurring when $r_+=r_-$ (i.e. $J=M\ell$). \begin{figure*}[t!] \centering \begin{minipage}{\textwidth} \centering \includegraphics[width=0.6\textwidth]{PDLegendConstantM.pdf} \end{minipage}\\ \begin{minipage}{0.3\textwidth} \centering \includegraphics[width=\textwidth]{{PD_M=0.1_Omega=0.1_Zeta=1_Inset_M=100}.pdf} \caption*{(a) $\zeta=1$} \end{minipage} \begin{minipage}{0.3\textwidth} \centering \includegraphics[width=\textwidth]{{PD_M=0.1_Omega=0.1_Zeta=0_Inset_M=100}.pdf} \caption*{(b) $\zeta=0$} \end{minipage} \begin{minipage}{0.3\textwidth} \centering \includegraphics[width=\textwidth]{{PD_M=0.1_Omega=0.1_Zeta=-1_Inset_M=100}.pdf} \caption*{(c) $\zeta=-1$} \end{minipage} \caption{Response functions for a black hole of mass $M=1/10$ for Dirichlet, transparent, and Neumann boundary conditions and an energy gap of $\Omega\ell=1/10$. The inset plots correspond to $M=100$. As expected, the rotation of the black hole has a smaller effect for larger masses. As the mass of the black hole increases, the weak anti-Hawking effect goes away for $\zeta=1$ and $\zeta=0$. Note that for $\zeta=-1$, the weak anti-Hawking effect is still present even for large-mass black holes, with the distinctions between the different rotation parameters so tiny that the curves effectively all overlap. } \label{fig: FJ_LargerMass} \end{figure*} \begin{figure*}[t!] \centering \begin{minipage}{\textwidth} \centering \includegraphics[width=0.6\textwidth]{PDLegendConstantM.pdf} \end{minipage}\\ \begin{minipage}{0.3\textwidth} \centering \includegraphics[width=\textwidth]{{PD_M=0.001_Omega=1_Zeta=1}.pdf} \caption*{(a) $\Omega\ell=1$} \end{minipage} \begin{minipage}{0.3\textwidth} \centering \includegraphics[width=\textwidth]{{PD_M=0.001_Omega=0.1_Zeta=1}.pdf} \caption*{(b) $\Omega\ell=1/10$} \end{minipage} \begin{minipage}{0.3\textwidth} \centering \includegraphics[width=\textwidth]{{PD_M=0.001_Omega=0.01_Zeta=1}.pdf} \caption*{(c) $\Omega\ell=1/100$} \end{minipage} \caption{Detector response for a rotating BTZ black hole with mass $M=1/1000$ and Dirichlet boundary conditions ($\zeta=1$). We note that for transparent and Neumann boundary conditions, qualitatively similar results are obtained. } \label{fig: FJ} \end{figure*} In the Hartle-Hawking vacuum, a conformally coupled scalar field has a Wightman function that can be written as the image sum over the Wightman functions for AdS${}_3$ \cite{Lifschytz1994,Carlip1998}, \begin{align} W_{BTZ}(x,x')=\sum_{n=-\infty}^\infty \eta^nW_{AdS_3}(x,\Gamma^nx') \end{align} where $\Gamma x'$ denotes the identification $(t,r,\phi)\to(t,r,\phi+2\pi)$; $\eta=1$ corresponds to an untwisted scalar field and $\eta=-1$ to a twisted scalar field.
This yields \cite{Hodgkinson2012,Smith2014} \begin{widetext} \begin{equation} \begin{aligned} W_{BTZ}&=\frac{1}{4\pi}\frac{1}{\sqrt{2}\,\ell}\sum_{n=-\infty}^{n=\infty}\eta^n\left(\frac{1}{\sqrt{\sigma_{\epsilon}(x,\Gamma^nx')}} -\frac{\zeta}{\sqrt{\sigma_{\epsilon}(x,\Gamma^nx')+2}}\right) \label{eq: sum} \end{aligned} \end{equation} where \begin{equation} \begin{aligned} \sigma_\epsilon(x,\Gamma^nx')=&-1+\sqrt{\alpha(r)\alpha(r')}\cosh\left[\frac{r_+}{\ell}(\Delta\phi-2\pi n)-\frac{r_-}{\ell^2}(t-t')\right]\\ &-\sqrt{(\alpha(r)-1)(\alpha(r')-1)}\cosh\left[\frac{r_+}{\ell^2}(t-t')-\frac{r_-}{\ell}(\Delta\phi-2\pi n)\right] \label{eq: sigma 0} \end{aligned} \end{equation} and \begin{align} \alpha(r)&=\frac{r^2-r_-^2}{r_+^2-r_-^2}\qquad \Delta\phi=\phi-\phi' \end{align} \end{widetext} with the boundary conditions labelled by $\zeta=1$ (Dirichlet), $\zeta=0$ (transparent), and $\zeta=-1$ (Neumann). As stated above, we take the Gaussian switching function $\chi_D(\tau_D)=e^{-\tau_D^2/2\sigma^2}$ and consider only untwisted scalar fields with $\eta=1$. To calculate the transition probabilities, we work in the co-rotating frame of the detector \cite{Hodgkinson2012}: \begin{align} t&=\frac{\ell r_+\tau}{\sqrt{r^2-r_+^2}\sqrt{r_+^2-r_-^2}} \label{eq: CRM t}\\ \phi&=\frac{r_-\tau}{\sqrt{r^2-r_+^2}\sqrt{r_+^2-r_-^2}} \label{eq: CRM phi}\ . \end{align} in which case \cite{Vagenas2002} \begin{align}\label{temprel} T_{KMS}=T_{H}/\gamma\,, \end{align} where \begin{align}\label{lorfac} \gamma=\frac{\sqrt{r^2-r_+^2}\sqrt{r_+^2-r_-^2}}{r_+} \end{align} is the Lorentz factor. Straightforward calculations show that we can rewrite equation \eqref{eq: PD} as $P_D=\sum_{n=-\infty}^\infty\eta^n\left\{I_n^--\zeta I_n^+\right\}$, where \begin{align} I_n^\pm =K_P\int_{-\infty}^{\infty}dz\frac{e^{-a\left(z-\frac{2\pi nr_-}{\ell}\right)^2}e^{-i\beta\left(z-\frac{2\pi nr_-}{\ell}\right)}}{\sqrt{\left(\cosh(\alpha_n^\pm)-\cosh\left(z\right)\right)}} \label{eq: In} \end{align} \begin{widetext} and \begin{align} K_P&=\frac{\lambda ^2 \sigma_D}{4 \sqrt{2 \pi }}\\ a&=\frac{1}{(4\pi T_{KMS}\sigma_D)^2} \qquad \beta =\frac{\Omega_D}{2\pi T_{KMS}}\\ \cosh(\alpha_n^\pm)&= \pm4\ell^2\pi^2T_{KMS}^2+(1+4\ell^2\pi^2T_{KMS}^2)\cosh\frac{2\pi n r_+}{\ell} \end{align} In the limit of an infinite interaction time (i.e. $\sigma\to\infty$), the $n=0$ term (corresponding to AdS spacetime) can be written analytically as \begin{align} \lim_{\sigma\to\infty}P_{D,n=0}=\lim_{\sigma\to\infty}I_0=\frac{\sqrt{\pi}}{4}\left[1-\tanh\frac{\Omega_D}{2T_{KMS}}\right]\left[1-\zeta P_{-1/2+i\beta}\left(\cosh\alpha_0^+\right)\right] \label{eq: I0} \end{align} \end{widetext} To investigate the influence of rotation (and AdS length) on the weak and strong anti-Hawking effects, we must determine the dependence of the response function and EDR temperature on the KMS temperature. To vary the latter, we locate the detector at $r=R_D$ in the co-rotating frame and solve \eqref{temprel} and \eqref{lorfac} for $R_D$ in terms of $T_{KMS}$ and the other parameters. The response $P_D$ in \eqref{eq: PD} and the EDR temperature are then functions of $T_{KMS}$. \section{Weak Anti-Hawking Effect for Rotating BTZ Black Holes} To set the context for our investigation, we first compare the situation for large and intermediate mass black holes with a detector energy gap of $\Omega_D\ell=1/10$ (with other energy gaps yielding qualitatively similar results).
This is shown in Figure~\ref{fig: FJ_LargerMass}, where we plot detector response as a function of $T_{KMS}$. The general trend is that for both masses and all boundary conditions the response is suppressed at small KMS temperatures and asymptotes to a constant value at large ones. Apart from these two general features, there is notably distinct behaviour as these parameters are varied. In the main figure, we depict the situation for an intermediate mass $M=1/10$, where we see that rotation has marginal impact at small and large KMS temperatures, but significantly amplifies detector response at intermediate KMS temperatures. An even larger influence is due to boundary conditions, where we observe that the weak anti-Hawking effect is absent for Dirichlet boundary conditions, but present for the other two. The negative slope to the right of the peak is steeper for Neumann boundary conditions, indicative of increasing strength of the weak effect as $\zeta$ decreases. In the inset of each subfigure, we consider the large mass $M=100$ case, where we see that the weak anti-Hawking effect disappears for $\zeta=1,0$ boundary conditions, yet remains for $\zeta=-1$, recovering earlier results \cite{Henderson:2019uqo} for large-mass black holes. We see that there is negligible dependence of the response on angular momentum in this large mass case for all KMS temperatures. \begin{figure*}[t!] \centering \begin{minipage}{\textwidth} \centering \includegraphics[width=0.6\textwidth]{PDLegendConstantM.pdf} \end{minipage}\\ \begin{minipage}{0.3\textwidth} \centering \includegraphics[width=\textwidth]{{PD_M=0.001_Omega=1_Zeta=1_Derivative}.pdf} \caption*{(a) $\Omega\ell=1$} \end{minipage} \begin{minipage}{0.3\textwidth} \centering \includegraphics[width=\textwidth]{{PD_M=0.001_Omega=0.1_Zeta=1_Derivative}.pdf} \caption*{(b) $\Omega\ell=1/10$} \end{minipage} \begin{minipage}{0.3\textwidth} \centering \includegraphics[width=\textwidth]{{PD_M=0.001_Omega=0.01_Zeta=1_Derivative}.pdf} \caption*{(c) $\Omega\ell=1/100$} \end{minipage} \caption{ Derivative of the response with respect to the KMS temperature \eqref{weakAH} of a rotating BTZ black hole with mass $M=1/1000$ and Dirichlet boundary conditions ($\zeta=1$). We note that for transparent and Neumann boundary conditions, qualitatively similar results are obtained.} \label{fig: FJ derivative} \end{figure*} It is clear from this that rotational effects are more pronounced for smaller mass black holes, and so in Figure~\ref{fig: FJ}, we plot the dependence of the response function on KMS temperature for $M=1/1000$. The weak effect is now clearly evident for Dirichlet boundary conditions, with the peak response enhanced by as much as sevenfold compared to the left diagram in Figure~\ref{fig: FJ_LargerMass}. Similar results hold for the other boundary conditions. As before, as rotation increases, the response is amplified for all values of $T_{KMS}$. For all values of rotation and all gaps, the response asymptotes to a value of $P_D=\frac{\sqrt\pi}{4}$, in accord with equation \eqref{eq: I0}, noting from equation \eqref{eq: In} that $I_n\to0$ for $n\neq0$. The strength of the weak anti-Hawking effect depends on the magnitude of the negative slope in equation \eqref{weakAH} beyond the peak. We see that this increases with decreasing gap, showing that a smaller gap enhances the weak anti-Hawking effect, which we illustrate for Dirichlet boundary conditions in Figure \ref{fig: FJ derivative}.
The slope peaks at $\frac{d\mathcal{F}}{d(T_{KMS}\ell)}\approx-0.15$ for the large energy gap and $\frac{d\mathcal{F}}{d(T_{KMS}\ell)}\approx-0.6$ for the small gap. For each gap, we also see that the weak anti-Hawking effect is amplified with increasing rotation, by as much as 50\% for near-extremal black holes, for all gaps in the figure. Furthermore, we find that the weak anti-Hawking effect occurs after a critical value of $T_{KMS}$ that depends on the detector's energy gap, but not on the rotation of the black hole, again evident from Figure \ref{fig: FJ derivative}. Though we have only illustrated results for Dirichlet boundary conditions $\zeta=1$, we emphasize that this critical value depends on $\zeta$, with the critical temperature becoming smaller as $\zeta\to-1$. Finally, we note that changing the AdS length will not change the strength of the weak effect. Physically, this is because the AdS length is the only length scale present (as $\sigma\to\infty$), and everything is calibrated against this length. \section{Strong Anti-Hawking Effect for Rotating BTZ Black Holes} Let us now turn our attention to the strong anti-Hawking effect. In Figures~\ref{fig: TEDRKMS} and~\ref{fig: strong anti-Hawking Omega=1/100}, we plot the relationship between the EDR and KMS temperatures for $M=1/1000$ and different boundary conditions, with $\Omega\sigma=1$ in the former and both $\Omega\sigma=1$ and $\Omega\sigma=1/100$ in the latter. We note several interesting features. \begin{figure*}[t!] \centering \begin{minipage}{\textwidth} \centering \includegraphics[width=0.6\textwidth]{PDLegendConstantM.pdf} \end{minipage}\\ \begin{minipage}{0.3\textwidth} \centering \includegraphics[width=\textwidth]{{TEDR_M=0.001_Omega=1_Zeta=1}.pdf} \caption*{(a) $\zeta=1$} \end{minipage} \begin{minipage}{0.3\textwidth} \centering \includegraphics[width=\textwidth]{{TEDR_M=0.001_Omega=1_Zeta=0}.pdf} \caption*{(b) $\zeta=0$} \end{minipage} \begin{minipage}{0.3\textwidth} \centering \includegraphics[width=\textwidth]{{TEDR_M=0.001_Omega=1_Zeta=-1}.pdf} \caption*{(c) $\zeta=-1$} \end{minipage} \caption{EDR temperature for a black hole of mass $M=1/1000$ and energy gap of $\Omega\sigma=1$. We plot KMS temperature down to $T_{KMS}\ell=10^{-5}$. The insets show the relation between the EDR temperature and KMS temperature for larger values of $T_{KMS}$.} \label{fig: TEDRKMS} \end{figure*} \begin{figure} \centering \includegraphics[width=\columnwidth]{PDLegendConstantM.pdf} \begin{minipage}{0.45\textwidth} \centering \includegraphics[width=\columnwidth]{{TEDRConstantM_M=0.001_Omega=1_Zeta=1}.pdf} \caption*{(a) $\Omega\sigma=1$} \end{minipage} \begin{minipage}{0.45\textwidth} \centering \includegraphics[width=\columnwidth]{{TEDRConstantM_M=0.001_Omega=0.01_Zeta=1}.pdf} \caption*{(b) $\Omega\sigma=1/100$} \end{minipage} \caption{EDR temperature for a black hole of mass $M=1/1000$. The results are similar for $\Omega\sigma=1/10$. The insets show the EDR temperature for our first three values of the angular momentum, plotted on a linear scale (rather than a log scale, as is the case for the main plots). } \label{fig: strong anti-Hawking Omega=1/100} \end{figure} First, it is evident from Figure~\ref{fig: TEDRKMS} that a strong anti-Hawking effect is present for all three boundary conditions. This is clear at low values of $T_{KMS}$, where we see a negative slope indicative of the strong effect. Eventually a minimum is reached and $T_{EDR}$ begins to increase with $T_{KMS}$.
The insets indicate the behaviour at large $T_{KMS}$, where we see that this quantity is indeed positively correlated with $T_{EDR}$, and there is minimal dependence on angular momentum, even for small $M$. Unlike the weak effect, we see that the maximum decreases with increasing values of angular momentum for all boundary conditions. Consequently the strength of the strong effect (the slope in equation \eqref{strongAH}) likewise diminishes; as extremality is approached, the strong effect essentially vanishes. The exception is for Dirichlet boundary conditions. In Figure~\ref{fig: strong anti-Hawking Omega=1/100}, we plot the EDR temperature for Dirichlet boundary conditions for $\Omega\sigma=1$ and $\Omega\sigma=1/100$. In the insets, we see that there is a small strong anti-Hawking effect for non-rotating black holes, similar to what was found in \cite{Henderson:2019uqo}. As the angular momentum increases, the peak in Figure~\ref{fig: strong anti-Hawking Omega=1/100} moves rightward, and the threshold value of $T_{KMS}$ at which the strong effect appears increases. This threshold value reaches a maximum for some $J/M\ell$, after which it decreases with increasing $J/M\ell$ as extremality is approached, as is clear from Figure~\ref{fig: strong anti-Hawking derivative}. We also see qualitatively that the magnitude of the slope becomes larger for larger angular momenta, indicating that the strong anti-Hawking effect becomes stronger. For smaller values of the energy gap, we see that the strong anti-Hawking effect increases in strength, with the $\Omega\sigma=1/10$ case being similar to $\Omega\sigma=1/100$. This can be seen by noting that $J/M\ell=0.9$ does not exhibit the strong anti-Hawking effect for $\Omega\sigma=1$, but there is a small effect present at this value of the angular momentum for $\Omega\sigma=1/100$. For transparent and Neumann boundary conditions, we find that the strong anti-Hawking effect is much more pronounced, as can be seen from Figure~\ref{fig: TEDRKMS}. Furthermore, as $\zeta$ increases from $-1$ to $1$ in Figure \ref{fig: TEDRKMS}, the range of $T_{KMS} \ell$ over which the strong effect is present also decreases, and is very small for Dirichlet boundary conditions. This range decreases with increasing angular momentum for transparent and Neumann boundary conditions. There is a minimal value of $T_{EDR}\ell$ as a function of $T_{KMS} \ell$, and this minimal value decreases as the angular momentum of the black hole increases. For Dirichlet boundary conditions and sufficiently small angular momentum, larger angular momenta also yield a decreasing range of $T_{KMS}$ temperatures for which the strong effect holds, up to a critical value of $J/M\ell$. Beyond this value, increasing angular momenta result in a greater range of $T_{KMS}$ temperatures for which the effect is present. As a result, the anti-Hawking effect nearly disappears for near-extremal black holes $J \geq 0.9999 M \ell$ for transparent and Neumann boundary conditions, yet is still present for Dirichlet boundary conditions. In Figure \ref{fig: strong anti-Hawking Omega=1/100}, we see that there is a strong anti-Hawking effect for a non-rotating black hole, but the effect disappears (or almost disappears, depending on the energy gap) as the angular momentum increases to $J/M\ell=0.9$. Beyond this, however, as we continue to approach extremality in the Dirichlet case, the strong effect emerges at lower KMS temperatures, as shown in Figure~\ref{fig: strong anti-Hawking derivative}.
Indeed, its range and strength both get larger as $J/M\ell$ gets very close to unity, as evidenced by the curve for $J/M\ell=0.9999$. It is quite remarkable that there is such a strong dependence on boundary conditions. Finally, inspection of Figures~\ref{fig: TEDRKMS} and \ref{fig: strong anti-Hawking Omega=1/100} indicates that the strong effect does not depend monotonically on $J/M\ell$. Indeed, we observe a `crossover' effect at small $T_{KMS}$, in which the values of $T_{EDR}$ decrease with increasing angular momentum as $T_{KMS} \to 0$, whereas at sufficiently large $T_{KMS}$ the rate of change of $T_{EDR}$ with respect to $T_{KMS}$ increases with increasing angular momentum, such that the higher-$J$ curves cross over the lower-$J$ curves. At large $T_{KMS}$, we see that $T_{EDR}\sim T_{KMS}$, with the largest $T_{EDR}$ corresponding to the largest $J$ for fixed $T_{KMS}$, and the smallest $T_{EDR}$ corresponding to the smallest $J$. This is clearly evident in Figure~\ref{fig: strong anti-Hawking Omega=1/100}. We have verified that this effect is also present for Neumann and transparent boundary conditions, though the `crossover' occurs at larger KMS temperatures than in the Dirichlet case. By comparing Figures \ref{fig: FJ}-\ref{fig: strong anti-Hawking Omega=1/100}, we see that there is no range of $T_{KMS}$ for which the strong anti-Hawking effect overlaps with the weak anti-Hawking effect: the weak anti-Hawking effect appears for $T_{KMS}\ell\gtrsim1$, while the strong anti-Hawking effect appears for $T_{KMS}\ell\lesssim0.1-0.5$. The exact temperature range depends on the boundary conditions, the energy gap, and, in the case of the strong anti-Hawking effect, the angular momentum. Furthermore, the critical KMS temperature at which the strong anti-Hawking effect disappears becomes smaller for larger angular momentum, in contrast to the weak anti-Hawking effect, where the critical temperature at which the effect appears has minimal dependence on angular momentum. In addition, we again note that the location of this critical temperature for the strong effect is highly dependent on the boundary conditions. \begin{figure} \centering \includegraphics[width=\columnwidth]{StrongAntiHawkingLegend.pdf} \begin{minipage}{0.45\textwidth} \centering \includegraphics[width=\columnwidth]{{StrongAntiHawkingDerivative1}.pdf} \caption*{(a) $\Omega\sigma=1$} \end{minipage} \begin{minipage}{0.45\textwidth} \centering \includegraphics[width=\columnwidth]{{StrongAntiHawkingDerivative0.01}.pdf} \caption*{(b) $\Omega\sigma=1/100$} \end{minipage} \caption{ Strong anti-Hawking effect for a near-extremal black hole of mass $M=1/1000$ and Dirichlet boundary conditions. The results are similar for $\Omega\sigma=1/10$. } \label{fig: strong anti-Hawking derivative} \end{figure} Our last consideration is the impact of changing the AdS length on the strong effect. Here the situation differs from the weak anti-Hawking effect, as there is now a second length scale present ($\sigma$, the width of the switching function). In Figure~\ref{fig: strong anti hawking changing AdS length}, we consider the effect of changing the AdS length for a non-rotating BTZ black hole, compared to a near-extremal BTZ black hole. In the non-rotating case, increasing the AdS length increases the range of $T_{KMS}$ temperatures where the strong anti-Hawking effect holds. However, the marginal effect of increasing $\ell$ is reduced for larger and larger values of the AdS length.
We also see that for small $T_{KMS}\ell$, a larger AdS length will broaden the initial peak. In the case of a near-extremal black hole, the situation is similar. As we saw in Figure \ref{fig: TEDRKMS}a, there was a tiny strong anti-Hawking effect present for near-extremal black holes. For larger AdS lengths, we similarly see that the temperature range of the strong anti-Hawking effect increases in size. However, we note that this effect is still relatively weak and only becomes noticeable for larger values of $\ell$. \begin{figure} \centering \includegraphics[scale=0.5]{AdSLegend.pdf} \begin{minipage}{0.45\textwidth} \centering \includegraphics[width=\columnwidth]{{M=0.001_J=0_Omega=0.1_Zeta=1_VaryingAdSLengths}.pdf} \caption*{(a) $J/M\ell=0$} \end{minipage} \begin{minipage}{0.45\textwidth} \centering \includegraphics[width=\columnwidth]{{M=0.001_J=0.9999_Omega=0.1_Zeta=1_VaryingAdSLengths}.pdf} \caption*{(b) $J/M\ell=0.9999$} \end{minipage} \caption{Changing AdS lengths for the strong anti-Hawking effect for a black hole of mass $M=1/1000$, Dirichlet boundary conditions, and energy gap of $\Omega\sigma=1/10$. We plot KMS temperature down to $T_{KMS}\ell=10^{-5}$. The insets show the effect of changing AdS length on the EDR temperature for larger values of $T_{KMS}$. } \label{fig: strong anti hawking changing AdS length} \end{figure} \section{Conclusion} As with entanglement harvesting \cite{Robbins2020}, rotation can have a significant impact on the anti-Hawking effect. For large-mass black holes with Dirichlet and transparent boundary conditions, the weak anti-Hawking effect vanishes, as expected; it is present for Neumann boundary conditions \cite{Henderson:2019uqo}. In all cases the effects of rotation are negligible. But as the mass of the black hole decreases, rotation significantly amplifies the weak version of the effect. The impact of rotation on the strong effect is somewhat inverted. We find that rotation tends to weaken the strength of the strong anti-Hawking effect for transparent and Neumann boundary conditions, with it nearly vanishing for near-extremal black holes. In contrast, for Dirichlet boundary conditions, larger angular momenta cause the strong anti-Hawking effect to be reduced before being amplified again. Furthermore, for the strong anti-Hawking effect, the relationship between angular momentum and detector temperature is non-monotonic for each boundary condition, leading to a `crossover' phenomenon that is most prominent for Dirichlet boundary conditions. For small $T_{KMS}\ell$, larger angular momenta yield smaller $T_{EDR}\ell$, whereas for larger values of $T_{KMS}\ell$, larger angular momenta yield \textit{larger} $T_{EDR}\ell$. More work is needed to better understand how this crossover effect comes about and its dependence on the boundary conditions. It would also be interesting to consider whether $3+1$-dimensional rotating black holes exhibit the same features as the rotating BTZ black hole. While the weak anti-Hawking effect is independent of the AdS length $\ell$, increasing the AdS length increases the range of $T_{KMS}$ temperatures where the strong anti-Hawking effect holds. A larger AdS length will also broaden the initial peak for small $T_{KMS}\ell$. However, as $\ell$ continues to increase, its impact on the strong effect becomes increasingly marginal.
In summary, our results indicate that the effects of spacetime dragging on the quantum vacuum can significantly modify detector response at small KMS field temperatures, as exemplified by the anti-Hawking effect(s). The role of boundary conditions is very important; indeed, it is surprising that there is such a strong dependence of the strong anti-Hawking effect on boundary conditions for small-mass rotating black holes. The origin of this effect merits further study. \bigskip {\it Acknowledgements} $\quad$ MR was funded by an Ontario Graduate Scholarship. This research was supported in part by the Natural Sciences and Engineering Research Council of Canada, Asian Office of Aerospace Research \& Development Grant FA2386-19-1-4077, and the Perimeter Institute for Theoretical Physics. Research at Perimeter Institute is supported in part by the Government of Canada through the Department of Innovation, Science and Economic Development Canada and by the Province of Ontario through the Ministry of Colleges and Universities. \bibliographystyle{unsrt}
\section{Introduction} The Onsager reciprocity relation indicates that, for the anomalous Hall effect in the linear-response regime, the chosen material must break time-reversal symmetry.\cite{PhysRevResearch.2.032066}\cite{PhysRevLett.124.087402}\cite{PhysRevLett.123.246602} In a time-reversal-symmetric system the Berry curvature is odd in momentum space, $\Omega_{a}(-k)=-\Omega_{a}(k)$, and since the Kramers pair of states at $k$ and $-k$ are equally occupied, the integral of the Berry curvature over momentum, weighted by the equilibrium Fermi distribution, vanishes. However, according to recent research, a nonlinear Hall conductivity can still survive in time-reversal-symmetric crystals; all that is needed is broken inversion symmetry. In this case, an energy gap emerges at each Dirac or Weyl node. More importantly, both experimental and theoretical studies have found that the Berry curvature dipole is responsible for the nonlinear Hall response in quantum transport. \cite{PhysRevLett.115.216806}\cite{PhysRevLett.105.026805}\cite{PhysRevB.100.195117}\cite{PhysRevLett.121.246403} Indeed, there are two types of materials that host a non-trivial Berry curvature dipole. The first kind comprises the topological crystalline insulator SnTe, which undergoes a ferroelectric distortion at low temperature~\cite{PhysRevLett.122.186801}, time-reversal-symmetric Weyl semimetals in the TaAs material class\cite{PhysRevLett.115.216806}, and the Rashba material BiTeI\cite{PhysRevLett.121.246403}. They all have strong spin-orbit coupling, which contributes to their tilted Dirac cones; the tilt does not change the Berry curvature itself but is crucial for a non-vanishing dipole term. The second type comprises two-dimensional Dirac materials without spin-orbit coupling, whose inversion-symmetry breaking is due to external fields and the substrate. Even more importantly, in these materials the appearance of a finite dipole can only be captured by explicitly taking into account the terms describing the warping of the Fermi surface\cite{PhysRevLett.123.196403}. These phenomena have already been studied for the quantum nonlinear Hall effect\cite{PhysRevLett.115.216806} and the thermal Hall effect\cite{PhysRevResearch.2.032066} using the semi-classical Boltzmann equation. Inspired by these two studies, I explore the nonlinear Nernst effect further. Dispensing with the conventional Boltzmann equation and its semi-classical approximation, I begin from a generalization of the quantum kinetic theory, which is more fundamental; with this theory in a temperature field, I investigate the nonlinear response in thermoelectric transport. In this work I study the quantum kinetic theory of the nonlinear Nernst effect (NNE) in thermoelectric transport. An earlier study addressed the transport properties of the nonlinear Nernst effect starting from the Boltzmann equation in the semi-classical approximation.\cite{PhysRevB.99.201410} I derive the expression for, and the equation of motion of, the density matrix from the basic quantum Liouville equation, and develop a theory of the nonlinear electronic transport induced by a temperature gradient in the presence of disorder. I also introduce a new type of dipole, the \textbf{thermoelectric Berry curvature dipole}, in place of the dipole used previously\cite{PhysRevLett.115.216806}; this new dipole plays an important role in thermoelectric transport. The theory is also relevant to experimentalists, since they can measure the electric current in the presence of a temperature gradient.
I here provide a theoretical prediction of the relationship between the thermoelectric conductivity and the chemical potential. We will also see that this thermoelectric current contains a term involving the Berry curvature itself, which is entirely absent in the electric Hall effect. This paper is organized as follows. In the second section, I briefly introduce the quantum kinetic equation for Bloch electrons in the presence of disorder and a temperature gradient, together with the solution of the density matrix to that equation. In the third part, I give the general expression for the density matrix by solving the quantum kinetic equation and derive the second-order response; I also explain why the thermoelectric Berry curvature dipole arises and how it influences transport. To show the adaptability and reliability of the generalized theory, I apply the quantum kinetic theory in the presence of an electric field and compare the results with earlier work. I also take disorder into account by applying scattering theory, and prove that the impurity terms related to the Berry curvature and the Berry curvature dipole have no effect on the conductivity. In the fourth section, I apply the theory to a specific model, a topological crystalline insulator, which presents a nontrivial thermoelectric Berry curvature dipole, and show numerically how its thermoelectric Berry curvature dipole and thermoelectric conductivity change with the chemical potential of the valley. Last but not least, I discuss the quantum kinetic theory in a more general setting: the non-static solution and its application to the derivation of the optical conductivity. This is still unfamiliar territory, since previous research on quantum kinetic theory has focused on the DC limit; the ambition is to obtain the optical current and optical conductivity at any frequency. I also check the theory against results obtained in the semi-classical approximation. \section{Quantum Kinetic Theory} Without external fields, the Hamiltonian of the system is $H=H_{0}+U$, where $U$ represents the disorder potential. The free Hamiltonian satisfies: \begin{equation} H_{0}|m,\textbf{k}\rangle=\epsilon_{k}^{m}|m,\textbf{k}\rangle \end{equation} Here $m$ is the band index and $\epsilon_{\textbf{k}}^{m}$ is the dispersion of the $m$-th band. In the presence of disorder, the quantum Liouville equation after integrating out the disorder coordinates is our starting point~\cite{Schmidt2001} \begin{equation} \frac{\partial\langle\rho\rangle}{\partial t}+\frac{i}{\hbar}[H_{0},\langle\rho\rangle]+K(\langle\rho\rangle)=0 \end{equation} where $\langle\rho\rangle$ is the disorder-averaged density matrix\cite{Schmidt2001,Liboff2003}: $\langle\rho\rangle=\frac{1}{V^{n}}\int dR_{1}...dR_{n}\rho(\textbf{r},R_{1},...,R_{n})$, with $R_{1},...,R_{n}$ the coordinates of the disorder. As a further step, the well-known Luttinger proposal\cite{PhysRev.135.A1505}\cite{PhysRevLett.114.196601} is introduced: to describe thermal transport in the material, I add a scalar potential $\psi$ which satisfies $\nabla\psi=\nabla T/T$.
Therefore, the thermal field and the thermal driving term take the forms: \begin{equation} \textbf{E}_{T}=-\frac{\partial\textbf{A}_{T}}{\partial t}\equiv-\frac{\nabla T}{T} \end{equation} \begin{equation} D_{T}(\langle\rho\rangle)=\frac{1}{2\hbar}\frac{\nabla T}{T}\frac{D(\{H_{0},\langle\rho\rangle\})}{D\textbf{k}} \end{equation} The covariant derivative is defined as\cite{PhysRevB.96.235134}: \begin{equation} \frac{DX}{D\textbf{k}}=\nabla_{\textbf{k}}X-i[\mathcal{R}_{k},X] \end{equation} where $X$ is a matrix and $\mathcal{R}$ is the Berry connection: $\mathcal{R}_{\textbf{k}a}^{mn}=i\langle u_{\textbf{k}}^{m}|\partial_{k_{a}}u_{\textbf{k}}^{n}\rangle$, $\mathcal{R}_{\textbf{k}}^{mn}=\sum_{a=1}^{3}\mathcal{R}_{\textbf{k}a}^{mn}e_{a}$. We can then construct the kinetic equation in the presence of disorder and the thermal field\cite{PhysRevB.101.155204}: \begin{equation} \frac{\partial\langle\rho\rangle}{\partial t}+\frac{i}{\hbar}[H_{0},\langle\rho\rangle]+K(\langle\rho\rangle)=D_{T}(\langle\rho\rangle) \end{equation} Here we give the explicit form of the scattering term: \begin{widetext} \begin{equation} K(\langle\rho\rangle)=\frac{1}{\hbar^{2}}\langle\int_{0}^{\infty}dt'[U,[e^{-iH_{0}t'/\hbar}Ue^{iH_{0}t'/\hbar},e^{-iH_{0}t/\hbar}\langle\rho\rangle e^{iH_{0}t/\hbar}]]\rangle \end{equation} This can be decomposed into two parts:\cite{PhysRevB.96.035106} \begin{equation} [I(\langle\rho\rangle)]_{\textbf{k}}^{mm}=\frac{2\pi}{\hbar}\sum_{m',\textbf{k}'}\langle U_{\textbf{k}\textbf{k}'}^{mm'}U_{\textbf{k}'\textbf{k}}^{m'm}\rangle(n_{\textbf{k}}^{m}-n_{\textbf{k}'}^{m'})\delta(\epsilon_{\textbf{k}}^{m}-\epsilon_{\textbf{k}'}^{m'}) \end{equation} \begin{equation} [J(\langle\rho\rangle)]_{\textbf{k}}^{mm''}=\frac{\pi}{\hbar}\sum_{m',\textbf{k}'}\langle U_{\textbf{k}\textbf{k}'}^{mm'}U_{\textbf{k}'\textbf{k}}^{m'm''}\rangle[(n_{\textbf{k}}^{m}-n_{\textbf{k}'}^{m'})\delta(\epsilon_{\textbf{k}}^{m}-\epsilon_{\textbf{k}'}^{m'})+(n_{\textbf{k}}^{m''}-n_{\textbf{k}'}^{m'})\delta(\epsilon_{\textbf{k}}^{m''}-\epsilon_{\textbf{k}'}^{m'})],~(m\neq m'') \end{equation} \end{widetext} In particular, owing to energy conservation, the main contribution comes only from the band-diagonal part $\langle n\rangle$. Therefore, the two disorder terms can be rewritten as $I(\langle n\rangle)$ and $J(\langle n\rangle)$. These results can also be found in \cite{Liboff2003}\cite{PhysRevB.96.235134}. To solve equation (6), we separate the density matrix into two parts: $\langle\rho\rangle=\langle\rho_{0}\rangle+\langle\rho_{T}\rangle$, where $\langle\rho_{0}\rangle=\sum_{m}f_{0}(\epsilon_{\textbf{k}}^{m})|m\rangle\langle m|$ represents the equilibrium distribution. In what follows we focus mainly on the nonequilibrium part $\langle\rho_{T}\rangle$ of the density matrix, induced by the temperature gradient.
The solution to this part yields: \begin{equation} \langle n_{T}\rangle_{\textbf{k}}^{m}=\tau_{k}^{m}\frac{\nabla T}{T}\cdot\textbf{v}_{\textbf{k}}^{m}(\epsilon_{\textbf{k}}^{m}-\mu)\frac{\partial f_{0}(\epsilon_{k}^{m})}{\partial\epsilon_{k}^{m}} \end{equation} \begin{equation} \langle S_{T}\rangle_{\textbf{k}}^{mm'}=-i\hbar\frac{[D_{T}(\langle\rho_{0}\rangle)]_{\textbf{k}}^{mm'}-[J(\langle n_{T}\rangle)]_{\textbf{k}}^{mm'}}{\epsilon_{\textbf{k}}^{m}-\epsilon_{\textbf{k}}^{m'}} \end{equation} Here $\textbf{v}_{\textbf{k}}^{m}=\frac{1}{\hbar}\nabla_{\textbf{k}}\epsilon_{\textbf{k}}^{m}$, and $\tau_{k}^{m}$ is the relaxation time, which takes the form $1/\tau_{k}^{m}=2\pi\sum_{m'}\langle U_{\textbf{kk}'}^{mm'}U_{\textbf{k'k}}^{m'm}\rangle\int\frac{d^{d}k}{(2\pi)^{d}}\delta(\epsilon_{\textbf{k}}^{m}-\epsilon_{\textbf{k}'}^{m'})$; it will be evaluated for the specific material below. Ignoring the impurity term, we can calculate the electric current induced by the temperature gradient with the intrinsic velocity operator \cite{PhysRevB.101.155204}: \[ J_{x}=\frac{1}{2}Tr[(-e)\{v_{x},\langle S_{T}\rangle\}] \] \begin{equation} =(-e)\frac{\partial_{y}T}{T}\sum_{m}\int\frac{d^{d}k}{(2\pi)^{d}}\times\Omega_{\textbf{k},z}^{m}(\epsilon_{\textbf{k}}^{m}-\mu)f_{0}(\epsilon_{\textbf{k}}^{m}) \end{equation} Here $\{,\}$ denotes the anticommutator, $\{A,B\}=AB+BA$; it is used to guarantee the hermiticity of the electric current. We pay attention only to the off-diagonal part of the density matrix because the diagonal part does not contribute: its integral is proven to vanish, being an odd function of momentum. In addition, we can see the current is directly connected to the Berry curvature of the bands: $\Omega_{\textbf{k},a}^{m}=i\epsilon_{abc}\langle\partial_{k_{b}}m|\partial_{k_{c}}m\rangle$. However, this current can be measured only in time-reversal-symmetry-breaking materials; for time-reversal-symmetric crystals, (12) contributes nothing, and we have to consider the nonlinear Nernst effect. \section{General theory of Nonlinear Nernst effect} According to the study by Sodemann and Fu \cite{PhysRevLett.115.216806}, the nonlinear Hall conductivity tensor appears in the second-harmonic response of materials preserving time-reversal symmetry. We will here confirm this within quantum kinetic theory and extend the theory to thermoelectric transport. Let us focus on the quantum Liouville equation first. Instead of the form (6), we give the general expression for the equation with a temperature gradient: \begin{equation} (\mathcal{L}-D_{T})\langle\rho\rangle_{F}=D_{T}\langle\rho_{0}\rangle \end{equation} Here we define an operator $\mathcal{L}=P+K$, where $P\langle\rho\rangle_{F}\equiv\frac{i}{\hbar}[H_{0},\langle\rho\rangle_{F}]$. This is consistent because $\mathcal{L}\langle\rho_{0}\rangle=0$, so we can give the direct solution: \begin{equation} \langle\rho\rangle_{F}=\sum_{N=1}^{\infty}(\mathcal{L}^{-1}D_{T})^{N}\langle\rho_{0}\rangle \end{equation} Equation (14) is a nontrivial result: the $N=2$ term is the response in the nonlinear regime, which turns out to be related to the Berry curvature dipole. The earlier result is just the simplest ($N=1$) approximation of (13); indeed, equation (14) is obtained by iteration, and in linear response theory we keep only the $N=1$ term. Now we turn to the quadratic term.
Similarly, we can calculate the off-diagonal term (without impurities): \begin{equation} \langle S_{T^{2}}\rangle=-i\hbar\frac{\partial_{y}T}{T}\sum_{nn'}\frac{\epsilon_{n'}\langle n_{T}\rangle_{\textbf{k}}^{n'}-\epsilon_{n}\langle n_{T}\rangle_{\textbf{k}}^{n}}{\epsilon_{n}-\epsilon_{n'}}\times|n\rangle\langle n|\partial_{k_{y}}n'\rangle\langle n'| \end{equation} Here $\epsilon_{m}=\epsilon_{k}^{m}-\mu$; in the following we abbreviate $\partial_{k_{a}}\rightarrow\partial_{a}$. We can also obtain the general expression of the thermoelectric current for the quadratic term; the detailed calculation is displayed in Appendix A. \begin{widetext} \begin{equation} J_{x}^{(2)}=\frac{1}{2}Tr[(-e)\{v_{x},\langle S_{T^{2}}\rangle\}]=J_{x1}^{(2)}+J_{x2}^{(2)} \end{equation} \begin{equation} J_{x1}^{(2)}=\frac{e}{2\hbar}(\frac{\partial_{y}T}{T})^{2}\sum_{m}\tau_{\textbf{k}}^{m}\epsilon_{m}^{2}f_{0}\partial_{y}\Omega_{\textbf{k}z}^{m}=e(\frac{\partial_{y}T}{T})^{2}D_{y} \end{equation} \begin{equation} J_{x2}^{(2)}=\frac{e}{\hbar}(\frac{\partial_{y}T}{T})^{2}\sum_{m}\tau_{\textbf{k}}^{m}\epsilon_{m}\partial_y\epsilon_{\textbf{k}}^{m}f_{0}\Omega_{\textbf{k}z}^{m} \end{equation} \end{widetext} We can see this thermoelectric current includes two terms, which are expanded below. For brevity, we write $\sum_{m}\int\frac{d^{d}k}{(2\pi)^{d}}\rightarrow\sum_{m}$. Here we define the thermoelectric Berry curvature dipole: $D_{y}=\frac{1}{2\hbar}\sum_{m}\int\frac{d^{d}k}{(2\pi)^{d}}\tau_{\textbf{k}}^{m}\epsilon_{m}^{2}f_{0}\partial_{y}\Omega_{\textbf{k}z}^{m}$. When the temperature is low enough and $\mu>0$, the contribution can be taken to come only from the conduction band. Since we consider the problem on the Fermi surface, the relaxation time can be replaced by $\tau_{k_{F}}^{+}$: only the electrons near the Fermi surface of the conduction band give rise to transport. The thermoelectric Berry curvature dipole can then be rewritten as \begin{equation} D_{y}=\frac{1}{2\hbar}\int\frac{d^{d}k}{(2\pi)^{d}}\epsilon_{+}^{2}f_{0}\partial_{y}\Omega_{\textbf{k}z}^{+} \end{equation} Here we have modified the dipole by removing the relaxation time from it. Another notable feature is that this current contains a term with the Berry curvature itself, which does not show up in the electric Hall effect. This differs from the dipole of \cite{PhysRevLett.115.216806}, since here we deal with a thermoelectric current rather than an electric current induced by an electric field. One question remains after this calculation: the kinetic theory developed here should be consistent with semi-classical wave-packet dynamics\cite{PhysRevLett.95.137204}, so it is crucial to check that it reproduces the earlier result for the nonlinear Hall effect. When the external field is an electric field, the density matrix takes the form: \begin{equation} \langle S_{E^2}\rangle =-ieE_{y}\sum_{n,n'}\frac{\langle n_{E}\rangle^{n'}_{\textbf{k}}-\langle n_{E}\rangle^{n}_{\textbf{k}}}{\epsilon_{n}-\epsilon_{n'}}|n\rangle\langle n|\partial_{y}n'\rangle\langle n'| \end{equation} \[ \langle n_{E}\rangle^{m}_{\textbf{k}}=\frac{e}{\hbar}\tau_{\textbf{k}}^{m}\,\textbf{E}\cdot\nabla_{\textbf{k}}f_{0}(\epsilon_{\textbf{k}}^{m}) \] Since we take the zero-temperature limit, only the conduction band contributes. With this approximation, repeating the same procedure yields the same result as in \cite{PhysRevLett.115.216806}.
\[ J_{x}=\frac{1}{2}Tr[(-e)\{v_{x},\langle S_{E^2}\rangle\}]=-\frac{1}{2\hbar}e^{3}E_{y}^{2}\sum_{n}\tau_{\textbf{k}}^{n}\frac{\partial f_{0}}{\partial k_{y}}\Omega_{\textbf{k}z}^{n} \] \begin{equation} =\frac{1}{2\hbar}e^{3}E_{y}^{2}\sum_{n}\tau_{k_{F}}^{n}\frac{\partial \Omega_{\textbf{k}z}^{n}}{\partial k_{y}}f_{0} \end{equation} \begin{equation} \chi=\frac{1}{2\hbar}e^{3}\tau_{k_{F}}^{+}\int\frac{d^{2}k}{(2\pi)^{2}}f_{0}\frac{\partial \Omega_{\textbf{k}z}^{+}}{\partial k_{y}} \end{equation} More details are presented in Appendix A. This confirms that our theory is consistent with the semi-classical approximation. The results (12) and (16)-(18) were derived without considering impurity scattering. With (11), we find that the impurity-induced current corresponding to the linear term takes the form: \begin{equation} J_{x1}^{i}=\pi e\frac{\partial_{y}T}{T}\sum_{m,m'}\langle U_{\textbf{k}\textbf{k'}}^{mm'}U_{\textbf{k'}\textbf{k}}^{m'm}\rangle(n_{k}^{m}-n_{k'}^{m'})\delta(\epsilon_{\textbf{k}}^{m}-\epsilon_{\textbf{k}'}^{m'})\Omega_{kz}^{m} \end{equation} where $n_{\textbf{k}}^{m}=\tau_{k_{F}}^{m}\epsilon_{m}f_{0}(\epsilon_{\textbf{k}}^{m})$; this is shown to vanish upon integration. The off-diagonal matrix elements of the quadratic term induced by disorder are given by: \begin{equation} \langle S_{T^{2}}'\rangle_{k}^{mm'}=i\hbar\frac{[J(\langle n_{T^{2}}\rangle)]_{\textbf{k}}^{mm'}}{\epsilon_{\textbf{k}}^{m}-\epsilon_{\textbf{k}}^{m'}} \end{equation} The current term connected to the dipole can then be written as: \begin{equation} J_{x2}^{i}=\pi e(\frac{\partial_{y}T}{T})^{2}\sum_{m,m'}\langle U_{\textbf{k}\textbf{k'}}^{mm'}U_{\textbf{k'}\textbf{k}}^{m'm}\rangle(N_{\textbf{k}}^{m}-N_{\textbf{k}'}^{m'})\delta(\epsilon_{\textbf{k}}^{m}-\epsilon_{\textbf{k}'}^{m'})\partial_{y}\Omega_{\textbf{k}z}^{m} \end{equation} where $N_{\textbf{k}}^{m}=(\tau_{k_{F}}^{m})^{2}\epsilon_{m}^{2}f_{0}(\epsilon_{\textbf{k}}^{m})$. This current is also proven to vanish upon integration, which tells us that the impurity-scattering current terms linked to the dipole contribute nothing to transport; more details are given in Appendix B. Note that we consider here only the impurity-scattering terms related to the Berry curvature and the dipole; these have no effect on transport. All of the above results are in the DC limit; the AC case is addressed in the discussion section. We have thus developed the quantum kinetic theory of the nonlinear Nernst effect in thermoelectric transport. As a further step, we apply the results to a specific system: a topological crystalline insulator. \section{Application} We first consider Dirac semimetal (DSM) materials. In many realistic DSMs the Dirac cones are more or less distorted, and such tilted Dirac cones are realized in a number of material families. To be specific, we focus on topological crystalline insulators such as SnTe: experiments show that there are tilted Dirac cones on its (001) surface. We can therefore calculate the thermoelectric conductance induced by the thermoelectric Berry curvature dipole. Firstly, the low-energy model of the (001) surface is given by \begin{equation} H=\xi w_{y}k_{y}\sigma_{0}+v_{x}k_{x}\sigma_{x}+\xi v_{y}k_{y}\sigma_{y}+\frac{\Delta}{2}\sigma_{z} \end{equation} The energy bands take the form $\epsilon_{\textbf{k}}^{\pm}=w_{y}k_{y}\pm\sqrt{(v_{x}k_{x})^{2}+(v_{y}k_{y})^{2}+(\frac{\Delta}{2})^{2}}$.
Due to the ferroelectric distortion, the Dirac cones become gapped. Meanwhile, the form of the energy bands is robust: even when the influence of disorder is taken into account, this form remains invariant. To properly account for such a dynamically generated kinetic term, we add a term $\lambda\omega\sigma_{x}$ to the free-fermion action. The corresponding dispersion is then corrected as\cite{PhysRevB.98.195123}: \begin{equation} \det\left[E-tv_{y}k_{y}-\lambda E\sigma_{x}-v_{x}k_{x}\sigma_{x}-v_{y}k_{y}\sigma_{y}\right]=0 \end{equation} Here we use the convention $tv_{y}=\xi w_{y}$. Solving this equation yields the effective energy bands \begin{equation} E_{\pm}=t^{eff}v_{y}^{eff}k_{y}\pm\sqrt{(v_{x}^{eff}k_{x})^{2}+(v_{y}^{eff}k_{y})^{2}} \end{equation} where \begin{equation} t^{eff}=\frac{t+\lambda}{1+t\lambda} \end{equation} \begin{equation} v_{y}^{eff}=\frac{1+t\lambda}{1-\lambda^{2}}v_{y} \end{equation} \begin{equation} v_{x}^{eff}=\frac{1}{\sqrt{1-\lambda^{2}}}v_{x} \end{equation} This method was used by Sikkenk and Fritz to study the disorder effect in 3D tilted Weyl semimetals (WSM)\cite{PhysRevB.96.155121}. Under the renormalization group (RG), this term is marginal and cannot simply be ignored. However, the perturbation does not change the form of the energy bands, since it only renormalizes the coefficients into the effective ones of (24). We begin with the topological band structure and the Berry curvature. Although we introduce the tilt parameter, the corresponding eigenvectors are still unchanged: \begin{equation} |\pm,\textbf{k}\rangle=\frac{1}{\sqrt{2}}\begin{pmatrix}\sqrt{1\pm\frac{\Delta}{2\epsilon_{\textbf{k}}}}\\ \pm e^{i\theta}\sqrt{1\mp\frac{\Delta}{2\epsilon_{\textbf{k}}}} \end{pmatrix} \end{equation} The angle $\theta$ is defined by $e^{i\theta}=\frac{v_{x}k_{x}+iv_{y}k_{y}}{k_{\bot}},k_{\bot}=\sqrt{v_{x}^{2}k_{x}^{2}+v_{y}^{2}k_{y}^{2}}$. In this way, we also obtain the Berry curvature, which coincides with that of a 2D WSM: \begin{equation} \Omega_{\textbf{k},z}^{\pm}=i\epsilon_{zbc}\langle\partial_{b}\pm|\partial_{c}\pm\rangle=\mp\frac{\xi\Delta v_{x}v_{y}}{4\epsilon_{\textbf{k}}^{3}} \end{equation} where $\epsilon_{\textbf{k}}=\sqrt{(v_{x}k_{x})^{2}+(v_{y}k_{y})^{2}+(\frac{\Delta}{2})^{2}}$. In the linear regime this contributes nothing to transport after summing over $\xi$; however, a finite nonlinear Hall conductivity is obtained once the nonlinear Hall effect is taken into account. After calculating the relaxation time, we give the form of the conductivity. Since the two valleys contribute equally, we calculate one valley and multiply by 2. Before approaching the final result, we make some basic assumptions: first, we consider the low-temperature limit; further, we assume that warping of the Fermi surface can be ignored when calculating the relaxation time.
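As a numerical sanity check of the Berry curvature obtained above, one can evaluate the gauge-invariant two-band (Kubo-type) expression; this is an independent cross-check, not the derivation used in the text. The Python sketch below reuses H, s0, sx, sy and the parameters from the previous sketch.
\begin{verbatim}
import numpy as np

def berry_curvature(kx, ky, xi=+1, band=1):
    """Omega_z of the chosen band from the two-band Kubo formula."""
    e, U = np.linalg.eigh(H(kx, ky, xi))
    dHx = vx*sx                        # dH/dkx
    dHy = xi*wy*s0 + xi*vy*sy          # dH/dky
    n, m = band, 1 - band
    z = (U[:, n].conj() @ dHx @ U[:, m]) * (U[:, m].conj() @ dHy @ U[:, n])
    return -2*np.imag(z)/(e[n] - e[m])**2

kx, ky = 0.004, 0.002
eps = np.sqrt((vx*kx)**2 + (vy*ky)**2 + (Delta/2)**2)
print(berry_curvature(kx, ky, band=1))  # upper band, xi = +1
print(-Delta*vx*vy/(4*eps**3))          # analytic -xi*Delta*vx*vy/(4 eps^3)
\end{verbatim}
The two printed values agree, confirming the $1/\epsilon_{\textbf{k}}^{3}$ scaling and the sign convention.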
Under these assumptions, the Berry curvature dipole and the conductivity take the form: \begin{equation} D_{y}=\frac{1}{2}\int\frac{d^{2}k}{(2\pi)^{2}}\epsilon_{+}^{2}f_{0}(\epsilon_{\textbf{k}}^{+})\partial_{y}\Omega_{\textbf{k}z}^{+} \end{equation} \begin{equation} \chi_1=\frac{4ev_{x}v_{y}}{n_{imp}U_{0}^{2}\mu(1+3\frac{\Delta^{2}}{4\mu^{2}})}D_{y} \end{equation} \begin{equation} \chi_2=\frac{4 ev_{x}v_{y}}{n_{imp}U_{0}^{2}\mu(1+3\frac{\Delta^{2}}{4\mu^{2}})}\int\frac{d^{d}k}{(2\pi)^{d}}\epsilon_{+}\frac{\partial\epsilon_{\textbf{k}}^+}{{\partial k_{y}}}f_{0}(\epsilon_{k}^{+})\Omega_{kz}^{+} \end{equation} The total conductance is then $\chi=\chi_1+\chi_2$. More details are displayed in Appendix C. With $\partial_{y}\Omega_{\textbf{k},z}^{\pm}=\pm\frac{3\xi v_{x}v_{y}^{3}\Delta k_{y}}{4\epsilon_{\textbf{k}}^{5}}$, we can also see that the integral vanishes if the Dirac cone is not tilted: although the untilted Dirac cone gives a finite Berry curvature, the Berry curvature dipole is zero since $\partial_{y}\Omega_{z}$ is an odd function over the Fermi surface. The parameter set is $v_{x}\approx v_{y}\approx2.6328eV\cdot\mathring{A},\Delta=20meV,w_{y}=0.026328eV\cdot\mathring{A}$. \begin{figure}[t] \centering \includegraphics[width=7.2 cm]{dipole.png}\\ \caption{Thermoelectric Berry curvature dipole (after rescaling) of SnTe as a function of $2\mu/\Delta$.} \label{fig1} \end{figure} \begin{figure}[t] \centering \includegraphics[width=7.2 cm]{conductance.png}\\ \caption{Thermoelectric conductivity (after rescaling) of SnTe as a function of $2\mu/\Delta$.} \label{fig2} \end{figure} \section{Discussion} In summary, we began with the quantum Liouville equation and its solution in the presence of disorder and a temperature gradient. We then developed the quantum kinetic theory of the nonlinear Nernst effect with the general solution of the density matrix, and proved by explicit calculation that the Nernst response persists in the nonlinear regime. This work also establishes a new concept, the thermoelectric Berry curvature dipole, which dominates the quadratic term of the electric current in TR-invariant systems; the Berry curvature responsible for the linear response does not contribute to the electric current unless time-reversal symmetry is broken. Meanwhile, we have also proved that the main impurity-scattering terms contribute nothing to thermoelectric transport. Finally, we applied our theory to SnTe, a time-reversal-invariant topological crystalline insulator that has been intensively studied in recent experiments, and gave numerical results for the thermoelectric Berry curvature dipole and the thermoelectric conductivity. However, this theory also leaves us with some open questions: if the external field is strong, can we still expand formula (13) as in (14)? When the external field is in the DC limit, how do we solve the kinetic equation (6)? Since we only care about the second-order response, we can solve equation (13) iteratively. In this way, $\langle\rho_{T}\rangle=\mathcal{L}^{-1}D_{T}\langle\rho_{0}\rangle$, $\langle\rho_{T^{2}}\rangle=\mathcal{L}^{-1}D_{T}\langle\rho_{T}\rangle=(\mathcal{L}^{-1}D_{T})^{2}\langle\rho_{0}\rangle$, and in general $\langle\rho_{T^{n}}\rangle=\mathcal{L}^{-1}D_{T}\langle\rho_{T^{n-1}}\rangle=(\mathcal{L}^{-1}D_{T})^{n}\langle\rho_{0}\rangle$. In this way, we can derive the response at any order iteratively; the nonlinear response can always be obtained from (14), however strong the external field is.
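To make the iteration concrete, the sketch below applies the composite operator $\mathcal{L}^{-1}D_{T}$ repeatedly to a one-dimensional toy distribution, with $\mathcal{L}^{-1}$ modelled by a constant relaxation time and $D_{T}$ by a schematic thermal drive. Everything here (band, parameters, operator forms) is an assumption for illustration; the point is only that successive orders are suppressed by powers of $\tau\,\partial_{y}T/T$.
\begin{verbatim}
import numpy as np

tau, dT_over_T, mu, T = 1.0, 0.05, 0.3, 0.02
ky = np.linspace(-2, 2, 4001)
dk = ky[1] - ky[0]
eps = np.sqrt(ky**2 + 0.1**2)             # toy band
f0 = 0.5*(1 - np.tanh((eps - mu)/(2*T)))  # equilibrium distribution

def D_T(f):
    # schematic thermal drive: (dT/T) * (eps - mu) * d f / d k_y
    return dT_over_T*(eps - mu)*np.gradient(f, dk)

rho = f0.copy()
for n in (1, 2, 3):          # <rho_{T^n}> = (L^{-1} D_T)^n <rho_0>
    rho = tau*D_T(rho)       # one application of L^{-1} D_T
    print(n, np.max(np.abs(rho)))
# each order shrinks by ~ tau*dT/T: the series converges for weak drives
\end{verbatim}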
In the AC limit, when the external field is replaced with an oscillating one, $E(t)=E_{0}e^{i\omega t}$, the solution of the density matrix is not immediately known. We can solve the kinetic equation by replacing the operator $\mathcal{L}$ with $\mathcal{M}=\mathcal{L}-i\omega=P+K-i\omega$, since we consider the distribution in frequency space instead of time. Then we can similarly derive the unsteady-state kinetic equation: $\langle\rho\rangle_{F}(\omega)=\sum_{N=1}^{\infty}(\mathcal{M}^{-1}D_{T})^{N}\langle\rho_{0}\rangle$. However, what does the unsteady state stand for? To address this question, we first write down the corresponding conductivity $\sigma_{\mu\nu}(\omega)=Tr[(-e)v^{\mu}\langle\rho\rangle_{F}(\omega)]/E_{0}^{\nu}$. This is the conductivity when the external field is oscillating; in other words, it corresponds to the optical conductivity measured in experiments. To confirm that our quantum kinetic theory extends to the AC limit, we calculate the second-order response in an oscillating external electric field and check it against the semiclassical approximation. We first ignore the scattering corrections to focus on the effect of $i\omega$, and write down the equation for the first-order response: \begin{equation} \frac{eE_{y}}{\hbar}\frac{\partial f_{0}(\epsilon_{\textbf{k}}^{m})}{\partial k_{y}}-i\omega\langle n\rangle_\textbf{k}^{m}=\frac{\langle n\rangle_\textbf{k}^{m}}{\tau_{k_F}^{m}} \end{equation} \begin{equation} \langle n\rangle_{k}^{m}=\frac{\frac{eE_{y}}{\hbar}\frac{\partial f_{0}(\epsilon_{\textbf{k}}^{m})}{\partial k_{y}}\tau_{k_F}^{m}}{1+i\omega\tau_{k_F}^{m}} \end{equation} This identifies the relationship between the diagonal part in the AC limit and that in the DC limit: $\langle n\rangle_{k}^{m}(\omega)=\frac{\langle n\rangle_{k}^{m}} {1+i\omega\tau_{k_F}^{m}}$. Hence we conclude that, by replacing the diagonal part in (18) with the generalized one, we obtain the nonlinear optical conductivity (we also assume $\mu>0$, so that $m=+$): \begin{equation} \chi=\frac{e^{3}\tau_{k_{F}}^{+}}{2\hbar(1+i\omega\tau_{k_{F}}^{+})}\int\frac{d^{2}k}{(2\pi)^{2}}f_{0}\frac{\partial \Omega_{\textbf{k}z}^{+}}{\partial k_{y}} \end{equation} This is consistent with the result in \cite{PhysRevLett.115.216806}.
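The frequency dependence implied by this replacement is just a Drude-type suppression of the DC nonlinear response. A short numeric sketch in Python, with placeholder values for $\chi_{\rm DC}$ and $\tau$:
\begin{verbatim}
import numpy as np

tau, chi_dc = 1.0, 1.0                 # placeholder values
for omega in (0.0, 0.5, 1.0, 2.0, 5.0):
    chi = chi_dc/(1 + 1j*omega*tau)    # chi(omega) from the DC value
    print(f"omega*tau={omega*tau:4.1f}  |chi|={abs(chi):.3f}"
          f"  phase={np.angle(chi):+.3f} rad")
# |chi| = chi_dc/sqrt(1+(omega*tau)^2): the second-order response is
# Drude-suppressed at frequencies beyond ~ 1/tau
\end{verbatim}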
\section{Acknowledgement} I acknowledge helpful discussions with Yonghao Gao at Fudan University and Prof. Gang Chen at Hong Kong University. This work is also supported by the Physics Department of the University of Science and Technology of China. I also thank my advisor, Prof. Shaolong Wan at USTC, and other members of the group for discussions on the detailed calculations. \begin{widetext} \begin{appendix} \section{Derivation of (16-18)} First, let us focus on the derivation of the off-diagonal part of the linear response $\langle S_{T}\rangle$. According to (14), the lowest order should be: \begin{equation} \langle\rho_{T}\rangle=\mathcal{L}^{-1}D_{T}(\langle\rho_{0}\rangle) \end{equation} Therefore, the matrix elements should be: \begin{equation} \langle n_{T}\rangle_{\textbf{k}}^{m}=\tau_{\textbf{k}}^{m}[D_{T}(\langle\rho_{0}\rangle)]_{\textbf{k}}^{mm} \end{equation} \begin{equation} \langle S_{T}\rangle_{\textbf{k}}^{nn'}=-i\hbar\frac{D_{T}(\langle\rho_{0}\rangle)_{\textbf{k}}^{nn'}-J(\langle n_{T}\rangle)_{\textbf{k}}^{nn'}}{\epsilon_{n}-\epsilon_{n'}} \end{equation} To be specific, the off-diagonal part can be written in another form: \begin{equation} \langle S_{T}\rangle=\sum_{nn'}\frac{\partial_{y}T}{T}\frac{\epsilon_{n'}f_{0n'}-\epsilon_{n}f_{0n}}{\epsilon_{n}-\epsilon_{n'}}|n\rangle\langle n|\partial_{y}n'\rangle\langle n'| \end{equation} Similarly, by comparison with (A3), we can derive both the off-diagonal part and the diagonal part of the quadratic term. \begin{equation} \langle S_{T^{2}}\rangle_{\textbf{k}}^{nn'}=-i\hbar\frac{D_{T}(\langle n_{T}\rangle)_{\textbf{k}}^{nn'}-J(\langle n_{T^{2}}\rangle)_{\textbf{k}}^{nn'}}{\epsilon_{n}-\epsilon_{n'}} \end{equation} \begin{equation} \langle n_{T^{2}}\rangle_{\textbf{k}}^{m}=\tau_{\textbf{k}}^{m}D_{T}(\langle n_{T}\rangle)_{\textbf{k}}^{m}=\tau_{\textbf{k}}^{m}\frac{1}{2\hbar}\frac{\partial_{y}T}{T}\frac{D(\{H_{0},\langle n_{T}\rangle\})}{Dk_{y}} \end{equation} These are the nonlinear responses in the presence of a temperature gradient. We defer the discussion of impurity-scattering effects to Appendix B; here we only consider the part induced by the temperature gradient. In this way, the off-diagonal term of (A5) can be obtained as \begin{equation} \langle S_{T^{2}}\rangle=-i\hbar\frac{\partial_{y}T}{T}\sum_{nn'}\frac{\epsilon_{n'}\langle n_{T}\rangle_{k}^{n'}-\epsilon_{n}\langle n_{T}\rangle_{k}^{n}}{\epsilon_{n}-\epsilon_{n'}}|n\rangle\langle n|\partial_{y}n'\rangle\langle n'| \end{equation} So the electric current induced by the thermal flow takes the form: \begin{equation} J_{x}^{(2)}=\frac{1}{2}Tr[(-e)\{v_{x},\langle S_{T^{2}}\rangle\}] \end{equation} This formula contains two parts, $\langle m|v_{x}\langle S_{T^{2}}\rangle|m\rangle$ and $\langle m|\langle S_{T^{2}}\rangle v_{x}|m\rangle$, which we calculate separately. In doing so we use the completeness identity $\sum_{m}|\partial_{y}m\rangle\langle m|+|m\rangle\langle\partial_{y}m|=\partial_{y}(\sum_{m}|m\rangle\langle m|)=\partial_{y}I=0$, which allows us to rewrite the intrinsic velocity operator, as done in (A9) below.
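Before proceeding, note that this completeness identity is easy to verify numerically. In the Python sketch below, an arbitrary toy two-band Hamiltonian is assumed, and the eigenvectors are gauge-fixed (first component made real in phase) so that finite differences are meaningful:
\begin{verbatim}
import numpy as np

def eigvecs(kx, ky):
    h = np.array([[0.1, kx - 1j*ky], [kx + 1j*ky, -0.1]])
    _, U = np.linalg.eigh(h)
    return U/np.exp(1j*np.angle(U[0, :]))  # smooth gauge

kx, ky, d = 0.3, 0.2, 1e-6
U0 = eigvecs(kx, ky)
dU = (eigvecs(kx, ky + d) - U0)/d          # columns are |d_y m>
S = dU @ U0.conj().T + U0 @ dU.conj().T    # sum_m |d_y m><m| + |m><d_y m|
print(np.max(np.abs(S)))                   # ~ 1e-6, i.e. zero up to O(d)
\end{verbatim}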
\begin{equation} v_{x}=\sum_{m'}(\epsilon_{m'}-\epsilon_{n'})\left[|\partial_{x}m'\rangle\langle m'|+|m'\rangle\langle\partial_{x}m'|\right]=\sum_{m'}(\epsilon_{m'}-\epsilon_{n})\left[|\partial_{x}m'\rangle\langle m'|+|m'\rangle\langle\partial_{x}m'|\right] \end{equation} \[ \langle m|(-e)v_{x}\langle S_{T^{2}}\rangle|m\rangle=ie\frac{\partial_{y}T}{T}\sum_{m'nn'}\frac{\epsilon_{n'}\langle n_{T}\rangle_{k}^{n'}-\epsilon_{n}\langle n_{T}\rangle_{k}^{n}}{\epsilon_{n}-\epsilon_{n'}}(\epsilon_{m'}-\epsilon_{n'})\langle n|\partial_{y}n'\rangle\langle m|[|\partial_{x}m'\rangle\langle m'|+|m'\rangle\langle\partial_{x}m'|]|n\rangle\langle n'|m\rangle \] \[ =-ie\frac{\partial_{y}T}{T}\sum_{m'nn'}[\epsilon_{n'}\langle n_{T}\rangle_{k}^{n'}-\epsilon_{n}\langle n_{T}\rangle_{k}^{n}]\langle n|\partial_{y}m\rangle\langle m|\partial_{x}n\rangle\delta_{m'n}\delta_{n'm} \] \[ =ie\frac{\partial_{y}T}{T}\sum_{n}[\epsilon_{m}\langle n_{T}\rangle_{k}^{m}-\epsilon_{n}\langle n_{T}\rangle_{k}^{n}]\langle\partial_{x}m|n\rangle\langle n|\partial_{y}m\rangle \] \begin{equation} =-ie\frac{\partial_{y}T}{T}[\epsilon_{m}\langle n_{T}\rangle_{k}^{m}\langle\partial_{x}m|\partial_{y}m\rangle-\sum_{n}\epsilon_{n}\langle n_{T}\rangle_{k}^{n}\langle\partial_{x}m|n\rangle\langle n|\partial_{y}m\rangle] \end{equation} \[ \langle m|(-e)\langle S_{T^{2}}\rangle v_{x}|m\rangle=ie\frac{\partial_{y}T}{T}\sum_{m'nn'}\frac{\epsilon_{n'}\langle n_{T}\rangle_{k}^{n'}-\epsilon_{n}\langle n_{T}\rangle_{k}^{n}}{\epsilon_{n}-\epsilon_{n'}}(\epsilon_{m'}-\epsilon_{n})\langle\partial_{y}n|n'\rangle\langle m|n\rangle\langle n'|[|\partial_{x}m'\rangle\langle m'|+|m'\rangle\langle\partial_{x}m'|]|m\rangle \] \begin{equation} =ie\frac{\partial_{y}T}{T}[\epsilon_{m}\langle n_{T}\rangle_{k}^{m}\langle\partial_{y}m|\partial_{x}m\rangle-\sum_{n}\epsilon_{n}\langle n_{T}\rangle_{k}^{n}\langle\partial_{y}m|n\rangle\langle n|\partial_{x}m\rangle] \end{equation} The first terms of (A10, A11) can be identified with the Berry curvature: \begin{equation} -ie\frac{\partial_{y}T}{T}\sum_{m}[\epsilon_{m}\langle n_{T}\rangle_{k}^{m}\langle\partial_{x}m|\partial_{y}m\rangle-\epsilon_{m}\langle n_{T}\rangle_{k}^{m}\langle\partial_{y}m|\partial_{x}m\rangle]=-e\frac{\partial_{y}T}{T}\sum_{m}\epsilon_{m}\langle n_{T}\rangle_{k}^{m}\Omega_{\textbf{k},z}^{m} \end{equation} Thanks to the sum over the index $m$, the second terms of (A10, A11) can be combined.
\[ ie\frac{\partial_{y}T}{T}\sum_{n,m}(\epsilon_{n}\langle n_{T}\rangle_{k}^{n}\langle\partial_{x}m|n\rangle\langle n|\partial_{y}m\rangle-\epsilon_{n}\langle n_{T}\rangle_{k}^{n}\langle\partial_{y}m|n\rangle\langle n|\partial_{x}m\rangle) \] \[ =ie\frac{\partial_{y}T}{T}\sum_{n,m}(\epsilon_{n}\langle n_{T}\rangle_{k}^{n}\langle\partial_{y}n|m\rangle\langle m|\partial_{x}n\rangle-\epsilon_{n}\langle n_{T}\rangle_{k}^{n}\langle\partial_{x}n|m\rangle\langle m|\partial_{y}n\rangle) \] \[ =ie\frac{\partial_{y}T}{T}\sum_{n}(\epsilon_{n}\langle n_{T}\rangle_{k}^{n}\langle\partial_{y}n|\partial_{x}n\rangle-\epsilon_{n}\langle n_{T}\rangle_{k}^{n}\langle\partial_{x}n|\partial_{y}n\rangle) \] \begin{equation} =-e\frac{\partial_{y}T}{T}\sum_{n}\epsilon_{n}\langle n_{T}\rangle_{k}^{n}\Omega_{\textbf{k},z}^{n} \end{equation} Correspondingly, the current can be written out: \begin{equation} J_{x}^{(2)}=-\frac{e}{2}(\frac{\partial_{y}T}{T})^{2}\sum_{m}\int\frac{d^{d}k}{(2\pi)^{d}}\epsilon_{m}^{2}\tau_{k}^{m}\frac{\partial f_{0}(\epsilon_{k}^{m})}{\partial k_{y}}\Omega_{kz}^{m} \end{equation} where $\Omega_{\textbf{k}z}^{m}=i(\langle\partial_{x}m|\partial_{y}m\rangle-\langle\partial_{y}m|\partial_{x}m\rangle)$ is the Berry curvature. This can be separated into two parts if we assume that only the conduction band contributes: \begin{equation} J_{x1}^{(2)}=\frac{e}{2}(\frac{\partial_{y}T}{T})^{2}\tau_{k_F}^{+}\sum_{m}\int\frac{d^{d}k}{(2\pi)^{d}}\epsilon_{m}^{2} f_{0}(\epsilon_{k}^{m})\frac{\partial\Omega_{kz}^{m}}{{\partial k_{y}}} \end{equation} \begin{equation} J_{x2}^{(2)}=e(\frac{\partial_{y}T}{T})^{2}\tau_{k_F}^{+}\sum_{m}\int\frac{d^{d}k}{(2\pi)^{d}}\epsilon_{m}\frac{\partial\epsilon_{\textbf{k}}^m}{{\partial k_{y}}} f_{0}(\epsilon_{k}^{m})\Omega_{kz}^{m} \end{equation} In this way, we obtain the thermoelectric conductivity from (A15, A16): \begin{equation} \chi_1=\frac{e}{2}\tau_{k_{F}}^{+}\int\frac{d^{d}k}{(2\pi)^{d}}\epsilon_{+}^{2}f_{0}(\epsilon_{\textbf{k}}^{+})\partial_{y}\Omega_{\textbf{k},z}^{+}=e\tau_{k_{F}}^{+}D_{y} \end{equation} \begin{equation} \chi_2=e\tau_{k_F}^{+}\sum_{m}\int\frac{d^{d}k}{(2\pi)^{d}}\epsilon_{m}\frac{\partial\epsilon_{\textbf{k}}^m}{{\partial k_{y}}}f_{0}(\epsilon_{k}^{m})\Omega_{kz}^{m} \end{equation} Here $D_{y}=\frac{1}{2}\int\frac{d^{d}k}{(2\pi)^{d}}\epsilon_{+}^{2}f_{0}(\epsilon_{\textbf{k}}^{+})\partial_{y}\Omega_{\textbf{k},z}^{+}$ is the new type of Berry curvature dipole. There are two terms in the thermoelectric conductivity: the first one is connected to the thermoelectric Berry curvature dipole, while the second one, which does not show up in the electric Hall effect, is directly linked to the Berry curvature. \section{Derivation of (23,25)} To begin, let us focus on the impurity scattering of the linear term. \begin{equation} J_{x1}^{i}=\frac{1}{2}Tr[(-e)\{v_{x},\langle S_{T}'\rangle\}] \end{equation} \begin{equation} \langle S_{T}'\rangle=i\hbar\sum_{nn'}\frac{[J(\langle n_{T}\rangle)]_{\textbf{k}}^{nn'}}{\epsilon_{n}-\epsilon_{n'}}|n\rangle\langle n'| \end{equation} With the intrinsic velocity (A9), one can also obtain the current density (B1).
We first consider the two relevant terms, $\langle m|v_{x}\langle S_{T}'\rangle|m\rangle$ and $\langle m|\langle S_{T}'\rangle v_{x}|m\rangle$: \[ \langle m|v_{x}\langle S_{T}'\rangle|m\rangle=i\sum_{m'nn'}(\epsilon_{m'}-\epsilon_{n'})\frac{[J(\langle n_{T}\rangle)]_{\textbf{k}}^{nn'}}{\epsilon_{n}-\epsilon_{n'}}\langle m|[|\partial_{x}m'\rangle\langle m'|+|m'\rangle\langle\partial_{x}m'|]|n\rangle\langle n'|m\rangle \] \begin{equation} =i\sum_{n}[J(\langle n_{T}\rangle)]_{\textbf{k}}^{nm}\langle m|\partial_{x}n\rangle \end{equation} \[ \langle m|\langle S_{T}'\rangle v_{x}|m\rangle=i\sum_{m'nn'}(\epsilon_{m'}-\epsilon_{n})\frac{[J(\langle n_{T}\rangle)]_{\textbf{k}}^{nn'}}{\epsilon_{n}-\epsilon_{n'}}\langle m|n\rangle\langle n'|[|\partial_{x}m'\rangle\langle m'|+|m'\rangle\langle\partial_{x}m'|]|m\rangle \] \begin{equation} =-i\sum_{n}[J(\langle n_{T}\rangle)]_{\textbf{k}}^{nm}\langle\partial_{x}n|m\rangle \end{equation} With (B3, B4) we can directly write down $J_{x1}^{i}$: \begin{equation} J_{x1}^{i}=e\sum_{n,m}\int\frac{d^{d}k}{(2\pi)^{d}}[J(\langle n_{T}\rangle)]_{\textbf{k}}^{nm}Im(\langle m|\partial_{x}n\rangle) \end{equation} If we take the zero-temperature approximation, the non-equilibrium distribution induced by the temperature gradient lives only on the conduction band, so this contributes nothing after summing over the band index. However, if we take the band-diagonal part into account, we find the term related to the Berry curvature. We assume the Born approximation, $\langle U(\textbf{r})U(\textbf{r'})\rangle=n_{imp}U_0^{2}\delta(\textbf{r}-\textbf{r'})$\cite{PhysRevB.97.201301}. Then we have: \begin{equation} \langle U_{\textbf{k}\textbf{k'}}^{mm'}U_{\textbf{k'}\textbf{k}}^{m'm}\rangle=n_{imp}U_{0}^{2}\langle u_{\textbf{k}}^{m}|u_{\textbf{k'}}^{m'}\rangle\langle u_{\textbf{k'}}^{m'}|u_{\textbf{k}}^{m}\rangle \end{equation} In general this matrix element is nontrivial, for instance in materials with strong SOC, so we cannot treat the general case directly. However, we can still make some approximations. First, the temperature is low enough that we can replace the relaxation time with its value on the Fermi surface, denoted by $\tau_{k_{F}}^{m}$. Further, even though the Fermi surface is partly distorted, we still assume that the diagonal parts $\langle n_{T}\rangle_{\textbf{k}}^{m}$ and $\langle n_{T^{2}}\rangle_{\textbf{k}}^{m}$ can be approximated as functions of $\epsilon_{\textbf{k}}^{m}$. \[ J_{x1}^{i}=-i\pi e\sum_{m,m'}\langle U_{\textbf{k}\textbf{k}'}^{mm'}U_{\textbf{k}'\textbf{k}}^{m'm}\rangle(\langle n_{T}\rangle_{\textbf{k}}^{m}-\langle n_{T}\rangle_{\textbf{k}'}^{m'})\delta(\epsilon_{\textbf{k}}^{m}-\epsilon_{\textbf{k}'}^{m'})(\langle m|\partial_{x}m'\rangle-\langle\partial_{x}m'|m\rangle) \] \begin{equation} =-i\pi e\sum_{m}\langle U_{\textbf{k}\textbf{k}'}^{mm}U_{\textbf{k}'\textbf{k}}^{mm}\rangle(\langle n_{T}\rangle_{\textbf{k}}^{m}-\langle n_{T}\rangle_{\textbf{k}'}^{m})\delta(\epsilon_{\textbf{k}}^{m}-\epsilon_{\textbf{k}'}^{m})(\langle m|\partial_{x}m\rangle-\langle\partial_{x}m|m\rangle) \end{equation} \begin{equation} \rightarrow\pi e\frac{\partial_{y}T}{T}\sum_{m}\tau_{k_{F}}^{m}(n_{\textbf{k}}^{m}-n_{\textbf{k'}}^{m'})\delta(\epsilon_{\textbf{k}}^{m}-\epsilon_{\textbf{k}'}^{m})\Omega_{\textbf{k},z}^{m} \end{equation} where $n_{\textbf{k}}^{m}=\tau_{k_{F}}^{m}\epsilon_{m}f_{0}(\epsilon_{\textbf{k}}^{m})$. This is exactly equation (21), which depends only on $\epsilon_{\textbf{k}}^{m}$.
Therefore, this current vanishes after integrating over $\textbf{k}'$. We can clearly see that the term connected with the Berry curvature has no effect on transport in the linear regime. The current from the remaining terms is unknown in the general case, and can only be evaluated in specific models. Let us turn to the band-diagonal part of the quadratic term. \begin{equation} \langle n_{T^{2}}\rangle_{\textbf{k}}^{m}=\frac{\tau_{\textbf{k}}^{m}}{\hbar}\frac{\partial_{y}T}{T}[\frac{D(H_{0}\langle n_{T}\rangle)}{Dk_{y}}]_{\textbf{k}}^{m}=\frac{(\tau_{k_{F}}^{m})^{2}}{\hbar}(\frac{\partial_{y}T}{T})^{2}\partial_{y}(\epsilon_{m}^{2}\partial_{y}f_{0}(\epsilon_{\textbf{k}}^{m})) \end{equation} In this way, the impurity-scattering part is derived as: \begin{equation} \langle S_{T^{2}}'\rangle_{k}^{mm'}=i\hbar\frac{[J(\langle n_{T^{2}}\rangle)]_{\textbf{k}}^{mm'}}{\epsilon_{\textbf{k}}^{m}-\epsilon_{\textbf{k}}^{m'}} \end{equation} \begin{equation} \langle S_{T^{2}}'\rangle=i\pi\sum_{mm'm''}\sum_{\textbf{k}'}\langle U_{\textbf{kk'}}^{mm'}U_{\textbf{k'k}}^{m'm}\rangle\frac{(n_{\textbf{k}}^{(2)m}-n_{\textbf{k}'}^{(2)m'})\delta(\epsilon_{\textbf{k}}^{m}-\epsilon_{\textbf{k}'}^{m'})+(n_{\textbf{k}}^{(2)m''}-n_{\textbf{k}'}^{(2)m'})\delta(\epsilon_{\textbf{k}}^{m''}-\epsilon_{\textbf{k}'}^{m'})}{\epsilon_{m}-\epsilon_{m''}}(|\partial_{y}m\rangle\langle m''|+|m\rangle\langle\partial_{y}m''|) \end{equation} Here $n_{\textbf{k}}^{(2)m}$ is not the diagonal distribution of the quadratic term; it is given by: \begin{equation} n_{\textbf{k}}^{(2)m}=\frac{(\tau_{k_{F}}^{m})^{2}}{\hbar}(\frac{\partial_{y}T}{T})^{2}\epsilon_{m}^{2}\partial_{y}f_{0}(\epsilon_{\textbf{k}}^{m}) \end{equation} So we can calculate each of its matrix elements (here we also use the simplest approximation): \begin{equation} \langle S_{T^{2}}'\rangle_{\textbf{k}}^{nn'}=i\pi \sum_{m,m',m''}\sum_{\textbf{k'}}\langle U_{\textbf{kk'}}^{mm'}U_{\textbf{k'k}}^{m'm}\rangle[\frac{g(m,m',n')}{\epsilon_{m}-\epsilon_{n'}}\langle n|\partial_{y}m\rangle\delta_{m''n'}+\frac{g(n,m',m'')}{\epsilon_{n}-\epsilon_{m''}}\langle\partial_{y}m''|n'\rangle\delta_{mn}] \end{equation} with $g(m,m',n')=(n_{k}^{(2)m}-n_{k'}^{(2)m'})\delta(\epsilon_{k}^{m}-\epsilon_{k'}^{m'})+(n_{k}^{(2)m''}-n_{k'}^{(2)m'})\delta(\epsilon_{k}^{m''}-\epsilon_{k'}^{m'})$. The intrinsic contribution to the velocity operator in the eigenstate basis is \begin{equation} v_{x}=\sum_{l'}(\epsilon_{l'}-\epsilon_{m''})[|\partial_{x}l'\rangle\langle l'|+|l'\rangle\langle\partial_{x}l'|] \end{equation} So the diagonal part is given by \[ \langle l|v_{x}\langle S_{T^{2}}'\rangle|l\rangle=i\pi \sum_{l'n'n}\sum_{m,m',m''}\langle U_{\textbf{kk'}}^{mm'}U_{\textbf{k'k}}^{m'm}\rangle[\frac{g(m,m',n')}{\epsilon_{m}-\epsilon_{n'}}\langle n|\partial_{y}m\rangle\delta_{m''n'}+\frac{g(n,m',m'')}{\epsilon_{n}-\epsilon_{m''}}\langle\partial_{y}m''|n'\rangle\delta_{mn}] \] \begin{equation} \times(\epsilon_{l'}-\epsilon_{m''})\langle l|[|\partial_{x}l'\rangle\langle l'|+|l'\rangle\langle\partial_{x}l'|]|n\rangle\langle n'|l\rangle \end{equation} After a tedious calculation, we obtain the only term seemingly connected to the Berry curvature dipole.
\begin{equation} i\pi\sum_{l,m,m',m''}\langle U_{\textbf{kk'}}^{mm'}U_{\textbf{k'k}}^{m'm}\rangle g(m,m',m'')\langle\partial_{y}m''|l\rangle\langle l|\partial_{x}m\rangle \end{equation} In this way, the current related to impurity scattering can be written out: \begin{equation} J_{x2}^{i}=\frac{1}{2}Tr[(-e)\{v_{x},\langle S_{T^{2}}'\rangle\}]=\pi e\sum_{m,m'}\langle U_{\textbf{kk'}}^{mm'}U_{\textbf{k'k}}^{m'm}\rangle(N_{\textbf{k}}^{m}-N_{\textbf{k'}}^{m'})\delta(\epsilon_{\textbf{k}}^{m}-\epsilon_{\textbf{k'}}^{m'})\partial_{y}\Omega_{\textbf{k},z}^{m} \end{equation} where $N_{\textbf{k}}^{m}=(\tau_{k_{F}}^{m})^{2}\epsilon_{m}^{2}f_{0}(\epsilon_{\textbf{k}}^{m})$, which depends only on $\epsilon_{\textbf{k}}^{m}$. Therefore, this current vanishes after integrating over $\textbf{k}'$. To conclude, we have proved that the current density induced by impurity scattering is zero in both the linear and the nonlinear regime. Similar results can be found in \cite{PhysRevB.96.235134,PhysRevB.101.155204}. However, we have only proved that the term related to the dipole contributes nothing to transport in the nonlinear regime; we do not consider all terms in the general case. \section{More details of the calculation on the topological crystalline insulator} After constructing the effective surface Hamiltonian above, we can calculate the eigenstates $H|u_{\textbf{k}}^{\pm}\rangle=\pm\epsilon_{\textbf{k}}|u_{\textbf{k}}^{\pm}\rangle$ and the Berry curvature of the model. \begin{equation} |u_{\textbf{k}}^{\pm}\rangle=\frac{1}{\sqrt{2}}\begin{pmatrix}\sqrt{1\pm\frac{\Delta}{2\epsilon_{\textbf{k}}}}\\ \pm e^{i\theta}\sqrt{1\mp\frac{\Delta}{2\epsilon_{\textbf{k}}}} \end{pmatrix} \end{equation} where the angle $\theta$ and $\epsilon_{\textbf{k}}$ are defined by $e^{i\theta}=\frac{A_{1}+iA_{2}}{\sqrt{A_{1}^{2}+A_2^{2}}}$ and $\epsilon_{\textbf{k}}=\sqrt{A_{1}^{2}+A_2^{2}+m^{2}}$, with $m=\Delta/2$. \begin{equation} \partial_{a}|u_{\textbf{k}}^{\pm}\rangle=\frac{1}{\sqrt{2}}\begin{pmatrix}\mp\frac{1}{2}\frac{1}{\sqrt{1\pm m/\epsilon_{k}}}\frac{\Delta}{2\epsilon_{\textbf{k}}^{2}}\frac{\partial\epsilon_{\textbf{k}}}{\partial k_{a}}\\ \frac{1}{2}e^{i\theta}\frac{1}{\sqrt{1\mp m/\epsilon_{k}}}\frac{\Delta}{2\epsilon_{\textbf{k}}^{2}}\frac{\partial\epsilon_{\textbf{k}}}{\partial k_{a}}\pm\frac{i}{2}e^{i\theta}\frac{\partial\theta}{\partial k_{a}}\sqrt{1\mp\frac{\Delta}{2\epsilon_{\textbf{k}}}} \end{pmatrix} \end{equation} \[ \Omega_{\textbf{k}z}^{\pm}=i(\langle\partial_{x}u_{\textbf{k}}^{\pm}|\partial_{y}u_{\textbf{k}}^{\pm}\rangle-\langle\partial_{y}u_{\textbf{k}}^{\pm}|\partial_{x}u_{\textbf{k}}^{\pm}\rangle) \] \begin{equation} =\mp\frac{\Delta}{4\epsilon_{\textbf{k}}^{3}}(\frac{\partial A_{1}}{\partial k_{x}}\frac{\partial A_{2}}{\partial k_{y}}-\frac{\partial A_{1}}{\partial k_{y}}\frac{\partial A_{2}}{\partial k_{x}}) \end{equation} With $A_{1}=v_{x}k_{x}$ and $A_{2}=v_{y}k_{y}$, one obtains (28, 29) directly. Next, we focus on the relaxation time. \begin{equation} \frac{1}{\tau_{\textbf{k}}^{m}}=\frac{2\pi }{\hbar}\int\frac{dk_{x}'dk_{y}'}{(2\pi)^{2}}\sum_{m,m'}\langle U_{\textbf{k}\textbf{k'}}^{mm'}U_{\textbf{k'}\textbf{k}}^{m'm}\rangle\delta(\epsilon_{\textbf{k}}^{m}-\epsilon_{\textbf{k}'}^{m'}) \end{equation} We take $m=m'=+$, since there is no band crossing near each valley and we assume $\mu>0$ for simplicity; in this way, only electrons from the conduction band contribute.
So the relaxation time is: \begin{equation} \frac{1}{\tau_{\textbf{k}}^{+}}=\frac{2\pi }{\hbar}\int\frac{dk_{x}'dk_{y}'}{(2\pi)^{2}}\langle U_{\textbf{k}\textbf{k'}}^{++}U_{\textbf{k'}\textbf{k}}^{++}\rangle\delta(\epsilon_{\textbf{k}}^{+}-\epsilon_{\textbf{k}'}^{+}) \end{equation} With (C1), we have the Born approximation in Bloch space\cite{PhysRevB.97.201301}: \begin{equation} U_{\textbf{k}\textbf{k'}}^{++}=U\langle u_{k}^{+}|u_{k'}^{+}\rangle=\frac{U_{0}}{2}[\sqrt{(1+\frac{\Delta}{2\epsilon_{\textbf{k}}})(1+\frac{\Delta}{2\epsilon_{\textbf{k'}}})}+e^{i(\theta'-\theta)}\sqrt{(1-\frac{\Delta}{2\epsilon_{\textbf{k}}})(1-\frac{\Delta}{2\epsilon_{\textbf{k'}}})}] \end{equation} \begin{equation} \langle U_{\textbf{k}\textbf{k'}}^{++}U_{\textbf{k'}\textbf{k}}^{++}\rangle=\frac{n_{imp}U_{0}^{2}}{2}[1+\frac{\Delta^{2}}{4\epsilon_{\textbf{k}}\epsilon_{\textbf{k'}}}+\cos(\theta'-\theta)\frac{\sqrt{v_x^{2}k_x^{2}+v_y^{2}k_y^{2}}\sqrt{v_x^{2}k_x'^{2}+v_y^{2}k_y'^{2}}}{\epsilon_{\textbf{k}}\epsilon_{\textbf{k'}}}] \end{equation} By neglecting the warping effect, we obtain the relaxation time: \begin{equation} \frac{1}{\tau_{\textbf{k}}^{+}}=\frac{n_{imp}U_{0}^{2}\mu}{4v_{x}v_{y}}(1+3\frac{\Delta^{2}}{4\mu^{2}}) \end{equation} In SnTe the energy bands are $E=w_{y}k_{y}\pm\sqrt{v_{x}^{2}k_{x}^{2}+v_{y}^{2}k_{y}^{2}+(\frac{\Delta}{2})^{2}}$, which can be treated analytically. The constant-energy surface is given by: \begin{equation} \frac{(k_{y}+k_{0})^{2}}{s_{y}^{2}}+\frac{k_{x}^{2}}{s_{x}^{2}}=1 \end{equation} Here $s_{y}^{2}=(\frac{v_{y}^{2}}{v_{y}^{2}-w_{y}^{2}}E^{2}-(\frac{\Delta}{2})^{2})/(v_{y}^{2}-w_{y}^{2}),s_{x}^{2}=(\frac{v_{y}^{2}}{v_{y}^{2}-w_{y}^{2}}E^{2}-(\frac{\Delta}{2})^{2})/v_{x}^{2},k_{0}=\frac{w_{y}E}{v_{y}^{2}-w_{y}^{2}}$. With $k_{y}=-k_{0}+s_{y}\sin\theta,k_{x}=s_{x}\cos\theta$, \begin{equation} dk_{x}dk_{y}=\begin{vmatrix}\frac{\partial k_{x}}{\partial E} & \frac{\partial k_{y}}{\partial E}\\ \frac{\partial k_{x}}{\partial\theta} & \frac{\partial k_{y}}{\partial\theta} \end{vmatrix}dEd\theta=[\frac{v_{y}^{2}}{v_{x}(v_{y}^{2}-w_{y}^{2})^{3/2}}E-\frac{w_{y}}{v_{x}(v_{y}^{2}-w_{y}^{2})}\sqrt{\frac{v_{y}^{2}}{v_{y}^{2}-w_{y}^{2}}E^{2}-(\frac{\Delta}{2})^{2}}\sin\theta]dEd\theta \end{equation} With these preparations, we focus on the conductivity and the Berry curvature dipole. According to (19), we first derive $D_{y}$: \[ D_{y}=\frac{1}{2}\int\frac{d^{2}k}{(2\pi)^{2}}\epsilon_{+}^{2}f_{0}(\epsilon_{\textbf{k}}^{+})\partial_{y}\Omega_{\textbf{k}z}^{+} \] \[ =\frac{1}{2}\int_{E<\mu}\frac{dEd\theta}{(2\pi)^{2}}[\frac{v_{y}^{2}}{v_{x}(v_{y}^{2}-w_{y}^{2})^{3/2}}E-\frac{w_{y}}{v_{x}(v_{y}^{2}-w_{y}^{2})}\sqrt{\frac{v_{y}^{2}}{v_{y}^{2}-w_{y}^{2}}E^{2}-(\frac{\Delta}{2})^{2}}\sin\theta](E-\mu)^{2}\frac{3\Delta v_{x}v_{y}^{3}(-k_{0}+s_{y}\sin\theta)}{4(E+w_{y}k_{0}-w_{y}s_{y}\sin\theta)^{5}} \] Numerical evaluation gives the result shown in Fig.~1.
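As a cross-check of the closed form for $1/\tau_{\textbf{k}}^{+}$ obtained above, the angular integral over the unwarped Fermi surface can be done numerically. The Python sketch below is not the paper's derivation: it assumes the standard transport weighting $(1-\cos\varphi)$ in the angular average, which is what reproduces the quoted $1+3\Delta^{2}/4\mu^{2}$ factor, and sets $\hbar=n_{imp}=U_{0}=1$ for simplicity.
\begin{verbatim}
import numpy as np
from scipy.integrate import quad

vx = vy = 1.0
Delta, mu = 0.2, 0.5
rho2 = mu**2 - (Delta/2)**2   # k_perp^2 on the Fermi surface

def integrand(phi):
    # angular correlator from the Born approximation above, on shell
    UU = 0.5*(1 + Delta**2/(4*mu**2) + np.cos(phi)*rho2/mu**2)
    return (1 - np.cos(phi))*UU            # transport weighting (assumed)

angular, _ = quad(integrand, 0, 2*np.pi)
inv_tau_num = 2*np.pi*(mu/(vx*vy))*angular/(2*np.pi)**2
inv_tau_ana = mu/(4*vx*vy)*(1 + 3*Delta**2/(4*mu**2))  # closed form above
print(inv_tau_num, inv_tau_ana)            # the two values agree
\end{verbatim}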
Meanwhile, the conductivity follows from the relaxation time: \begin{equation} \chi_1=e\tau_{k_{F}}^{+}D_{y}=\frac{4 ev_{x}v_{y}}{n_{imp}U_{0}^{2}\mu(1+3\frac{\Delta^{2}}{4\mu^{2}})}D_{y} \end{equation} \[ \chi_2=e\tau_{k_F}^{+}\int\frac{d^{d}k}{(2\pi)^{d}}\epsilon_{+}\frac{\partial\epsilon_{\textbf{k}}^+}{{\partial k_{y}}}f_{0}(\epsilon_{k}^{+})\Omega_{kz}^{+} \] \begin{equation} =\frac{4 ev_{x}v_{y}}{n_{imp}U_{0}^{2}\mu(1+3\frac{\Delta^{2}}{4\mu^{2}})}\int\frac{d^{d}k}{(2\pi)^{d}}\epsilon_{+}\frac{\partial\epsilon_{\textbf{k}}^+}{{\partial k_{y}}}f_{0}(\epsilon_{k}^{+})\Omega_{kz}^{+} \end{equation} The numerical result is depicted in Fig.~2. \end{appendix} \end{widetext}
\section*{Introduction} Plasmonic oligomers consisting of nanoparticles of noble metals (e.g. silver and gold) are a cornerstone of modern nanophotonics due to a sharp resonant scattering effect originating from destructive interference between super-radiant and sub-radiant modes~\cite{Hentschel2010, Mirin2009, Hillenbrand_NL2011, Halas_NL_2012, Dorpe2012, Koenderink_PRL_2012, Martin2013}, which can be described in terms of \textit{Fano resonances}~\cite{Lukyanchuk2010, Miroshnichenko2010}. In addition to a strong local field enhancement, the asymmetric profile of the Fano resonance in such structures makes it possible to control the radiative damping of the localized surface plasmon resonance. This superior feature is very useful for nanophotonic applications, although such plasmonic nanostructures suffer from high dissipative losses in the visible range~\cite{Boltasseva2011}. Recently, all-dielectric oligomers based on high-index dielectric and semiconductor nanoparticles (e.g. silicon) have been proposed theoretically~\cite{Miroshnichenko_2012, Hopkins2013, Hopkins2013a, Hopkins2015}, and realized experimentally~\cite{Chong2014, Filonov2014, Shcherbakov2015} as a more efficient counterpart to the plasmonic ones. It has been shown that all-dielectric oligomers can exhibit not only an electric type of Fano resonance, but also a \textit{magnetic one}, which is associated with the optically induced magnetic dipole mode of the individual high-index nanoparticles~\cite{Evlyukhin:NL:2012, Kuznetsov:SR:2012, Kuznetsov2016}. The existence of the resonant magnetic response in such structures makes it possible to control the electric and magnetic responses individually. It is worth noting that plasmonic oligomers can also provide a resonant magnetic response~\cite{Monticone_2013, Haran_2013, Sun_2016}, including more complicated metal-insulator-metal structures~\cite{Hong_2012, Verre2015}, where the insulator has a low refractive index (SiO$_2$). However, such resonant plasmonic structures suffer from the dissipative losses inherent to metals in the visible range. \begin{figure*} \centering \includegraphics[width=0.99\textwidth]{Fig1} \caption{Sketch of the hybrid oligomers composed of asymmetric hybrid (Au/Si)~dimer nanoparticles with different shapes of the Au components, which correspond to different stages of laser reshaping: (a)~nanodiscs, (b)~nanocups, and (c)~nanospheres.}\label{artistic} \end{figure*} Currently, both all-dielectric and more sophisticated plasmonic oligomers are used to achieve near-field enhancement~\cite{Toma_2015} and associated nonlinear optical effects~\cite{Bragas_2014, Martin_2015, Shorohov2016}, biosensing~\cite{Jin2012, Deng2015}, surface-enhanced Raman scattering~\cite{Dorpe2012}, graphene electronics~\cite{Fang2012}, and strong optical activity~\cite{Fang_2016}, and can potentially be applied to quantum optics~\cite{Talebi2012} as well. In terms of these practical applications, it is necessary to be able to fine-tune the spectral features of the Fano resonances in the \textit{fabricated} nanoparticle oligomer structures. The recently proposed approaches to tune the Fano resonances in clusters of metallic nanoparticles are based on changing their geometry during the fabrication process~\cite{GiessenACS2011, Chong2014, Sun2014, King2015} or the electromagnetic properties of their environment~\cite{Lassiter2010, Park_2012}.
Moreover, the near-field distribution and absorption properties of oligomers with rotational symmetry can be tuned via the polarisation of the incident light, leaving the scattering properties unchanged~\cite{Rahmani2013}. Although these methods show significant performance, they cannot be applied to already fabricated oligomers for fine-tuning of their modes and scattering properties. The purpose of this paper is twofold. First, we propose to combine the two paradigms of plasmonic and all-dielectric oligomers and to form hybrid metal-dielectric clusters that enjoy the benefits and advantages of both. Recently, the unique properties of asymmetric hybrid nanoparticles have made them a very promising platform for nanophotonics~\cite{Henzie2006, Halas_NL2012, Jiang2014, Li2014, Zhu2015, Wang2015, Narasimhan2015}. However, oligomers based on resonant plasmonic and dielectric nanoparticles have not been studied yet. Here, we suggest and implement a novel type of oligomer consisting of resonant asymmetric metal-dielectric (Au/Si) nanoparticles, realizing the concept of hybrid oligomers. We show that the proposed oligomers exhibit a sharp Fano resonance in the visible range. Based on the multipole expansion analysis (for the method of multipole expansion in vector spherical harmonics see the \textit{Supplementary Information}), we demonstrate that the Fano resonance has a predominantly magnetic origin owing to the magnetic Mie-type modes of the Si nanoparticles. Second, inspired by our recent experimental work~\cite{Zuev_2016} on a new technique for the fabrication of asymmetric hybrid (Au/Si) nanoparticles, we propose and realize an original approach for tuning the magnetic Fano resonance in the oligomers. The approach is based on fs-laser-induced melting of the Au part of the hybrid dimer nanoparticles at the nanometer scale (as schematically shown in Figure~\ref{artistic}). We show that the Fano resonance wavelength can be changed by fs-laser reshaping very precisely, accompanied by a reconfiguration of its profile. \begin{figure*} \centering \includegraphics[width=0.99\textwidth]{Fig2} \caption{The scattering cross sections (black curves) and results of the multipole expansion for (a)~a single Si nanocone, and hybrid Au/Si nanoparticles: Si nanocone with (b)~Au nanodisc, (c)~Au nanocup, and (d) Au nanosphere. The diameters of the lower base of the Si nanocone and of the Au nanodisc are equal to 190~nm. The spectra are normalised identically. (A--H)~The electric field profiles (in terms of field amplitude) at the corresponding resonances; the corresponding points are marked on the spectra. The incident wave propagates along the axis of symmetry of the nanoparticles.}\label{unit} \end{figure*} \section*{Results} We start our analysis by comparing the scattering cross sections of a single silicon nanocone and of the hybrid nanoparticles (see Figure~\ref{unit}). We assume that the incident plane wave propagates along the axis of symmetry of the nanoparticles. Figure~\ref{unit}(a) shows the light scattering spectrum of the single Si truncated nanocone. The geometric parameters of the cone are taken from Ref.~\cite{Zuev_2016}, namely, the diameter of the upper base is $a = 60$~nm, and the cone height is $h = 200$~nm. The diameter of the lower base of the Si nanocone is $b=190$~nm; the results for Si nanocones of other sizes are presented in the \textit{Supplementary Information}. The scattering cross section of the single Si nanocone has two distinct resonances, around the wavelengths of 550~nm and 650~nm (Figure~\ref{unit}(a), points A and B).
By using the multipole expansion~\cite{Jackson1999, Grahn2012}, we reveal that these resonances are of the electric dipole type at $550$~nm (ED, red dashed curve) and of the magnetic dipole type at $650$~nm (MD, blue dashed curve). Higher-order multipoles give a negligible contribution for the given parameters. The electric near-field distribution profiles at these resonances are presented in Figure~\ref{unit}(a)A,B. It is known that the magnetic dipole Mie resonance condition for a dielectric (such as silicon) nanoparticle depends on its size. For the conical particle under investigation we obtain the following estimate for the magnetic resonance wavelength: $\lambda_{\rm res}\approx 0.9 b n_{\rm d}$, where $\lambda_{\rm res}$ is the resonant wavelength and $n_{\rm d}$ is the refractive index of the silicon nanoparticle~\cite{Vuye_1993}. Previous experimental articles have shown that the refractive index of bulk crystalline silicon works well for nanoparticles of such sizes~\cite{Evlyukhin:NL:2012, Kuznetsov:SR:2012, Dmitriev2016, Dmitriev_NL_2016}. The last relation is a good approximation in the domain close to the selected geometric parameters. Thus, the reduction of $b$ from 190~nm to 150~nm leads to a blue-shift of the magnetic resonance from $\lambda_{\rm res}=640$~nm to $\lambda_{\rm res}=570$~nm (see the \textit{Supplementary Information}). This feature was recently used for controlling the wavelength of the Fano resonance in all-dielectric oligomers at the manufacturing stage~\cite{Chong2014}. Now we consider the scattering properties of a single hybrid nanoparticle consisting of a Si nanocone and an Au nanodisc (see inset in Figure~\ref{unit}(b)). We assume that the diameter of the Au nanodisc is equal to the diameter of the lower base of the Si nanocone, which is dictated by the lithography process~\cite{Zuev_2016}. We take the thickness of the Au nanodisc equal to $d = 20$~nm. By adding the gold nanodisc on the upper base of the Si nanocone, an additional resonance appears in the scattering spectrum of the resulting hybrid nanoparticle, as shown in Figure~\ref{unit}(b), where the resonance is depicted by point $C$. This resonance has a plasmonic nature and manifests itself in a strong local electric field enhancement around the nanodisc. Moreover, the modes of the Si nanocone and the Au nanodisc begin to hybridize. The hybridization of the Mie and plasmonic modes causes their mutual perturbation (see the multipole expansion for this particle in Figure~\ref{unit}(b)). The magnetic Mie resonance still exhibits resonant behaviour (Figure~\ref{unit}(b), point $D$). The electric near-field distribution at the wavelength of the plasmonic resonance ($\lambda=800$~nm) is presented in Figure~\ref{unit}(b)C. The existence of the Au nanodisc perturbs the electric near-field of the nanocone at its magnetic resonance (Figure~\ref{unit}(b)D) due to their effective coupling. The scattering properties of the hybrid nanoparticles in the optical frequency range and the electric near-field distributions hereinafter were numerically calculated using CST~Microwave~Studio. A nonuniform mesh was used to improve the accuracy in the vicinity of the Au nanoparticle, where the field concentration is significant. The dispersion models for the Au and Si materials were taken from the literature~\cite{Palik_1985, Meyer_2006, Christy_1972, Vuye_1993}.
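Returning to the size scaling quoted above, the estimate $\lambda_{\rm res}\approx 0.9\,b\,n_{\rm d}$ can be evaluated in two lines of Python; here $n_{\rm d}\approx 3.75$ is an assumed representative value of the silicon refractive index in this spectral range, not a value taken from the simulations.
\begin{verbatim}
n_d = 3.75                 # assumed Si refractive index near 600 nm
for b in (190, 150):       # lower-base diameter, nm
    print(f"b = {b} nm -> lambda_res ~ {0.9*b*n_d:.0f} nm")
# b = 190 nm -> ~641 nm; b = 150 nm -> ~506 nm. The text quotes
# 640 -> 570 nm because n_d itself grows at shorter wavelengths,
# so the estimate should be solved self-consistently with n_d(lambda).
\end{verbatim}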
The plasmon resonance of Au nanoparticles arises from the excitation of localized surface plasmon modes, which depend strongly on the geometrical shape of the nanoparticle~\cite{Giannini2011, Zhu2015, Viarbitskaya2015, Makarov2016}. It has been shown that under irradiation of an Au nanoparticle by a femtosecond laser pulse with an energy density of 40--50~mJ/cm$^2$ (depending on the Au particle size), the Au nanoparticle changes its shape from a disc to a cup~\cite{Halas_2008, HalasNL_2011, Zuev_2016}. At lower intensities, there is no detectable shape deformation. We emphasise that it is necessary to use a truncated nanocone to properly change the Au nanoparticle's shape. At the same time, the Si nanocone is not affected by the fs-laser radiation due to its higher melting temperature and enthalpy of fusion (about 1687~K and 50.21 kJ/mol for crystalline silicon, in contrast to 1337~K and 12.55 kJ/mol for gold). The plasmon resonance of the deformed nanoparticle [see the scattering spectra in Figure~\ref{unit}(c)] shifts to shorter wavelengths (from 800~nm to 690~nm, in our case). Now it is difficult to separate the response of the whole hybrid nanoparticle into the responses of its dielectric and metallic parts. This results in a dramatic change of the near-field distribution of the hybrid nanoparticle [see Figure~\ref{unit}(c)E] and in the appearance of hot spots of locally enhanced electric field at the edges of the nanocup, where $E/E_{0}$ reaches 8 ($E_{0}$ is the exciting field strength). Upon the Au nanoparticle reshaping, the wavelength of the magnetic resonance of the Si nanocone shifts to 630~nm (see Figure~\ref{unit}(c)F). Moreover, in this case we observe a notable contribution of the electric quadrupole mode (EQ) to the total scattering (see Figure~\ref{unit}(c), green dotted curve). By increasing the energy density of the laser radiation up to 70--80~mJ/cm$^2$, the nanocup transforms into a nanosphere (in our case the radius of the resulting sphere is 51~nm). The scattering cross section of this hybrid nanoparticle as well as the results of the multipole expansion are presented in Figure~\ref{unit}(d). The scattering cross section is similar to that of the single Si nanocone, because the Au nanosphere scatters much less light energy than the Si nanocone. Thus, the position of the Au nanoparticle plasmon resonance as well as the response of the whole hybrid nanoparticle can be controlled via fs-laser-induced reshaping. \begin{figure*} \centering \includegraphics[width=0.99\textwidth]{Fig3} \caption{(a)~Scattering cross sections of (i)~the all-dielectric hexamer based on Si nanocones, (ii)~the single Si nanocone with smaller lower base, and (iii)~the all-dielectric heptamer. The gaps between the central cone and the boundary ones are 30~nm. The dashed line shows the position of the Fano resonance. (b)~The electric field profiles (in terms of field amplitude) in the vertical and top cross-sections calculated at the scattering intensity dip of the Fano resonance ($\lambda=588$~nm) and outside of the resonance ($\lambda=560$~nm).}\label{cones1} \end{figure*} Let us consider an all-dielectric oligomer consisting of Si nanocones and having a 6-fold rotational axis ($C_6$). To demonstrate that the oligomer has a Fano resonance, we calculate the scattering spectra of the hexamer and of a single Si nanocone separately, as well as the scattering spectrum of the whole oligomer (see Figure~\ref{cones1}). The hexamer structure is based on nanocones with a lower-base diameter of $b=190$~nm.
The gap between the nanocones (the distance between the neighboring lower bases) is 10~nm, which leads to their effective interaction, resulting in the appearance of low-Q collective modes. The scattering spectra of these modes overlap, forming a non-resonant scattering channel [see Figure~\ref{cones1}(a)i]. To obtain a heptamer, we place a Si nanocone with a lower-base diameter of 150~nm and a relatively narrow magnetic resonance (see Figure~\ref{cones1}(a)ii) in the center of the hexamer. The gap between the central Si nanocone and the hexamer's ones in the resulting structure is 30~nm. Figure~\ref{cones1}(a)iii shows the scattering spectrum of the resulting heptamer. This spectrum has a resonant dip at the wavelength of the magnetic Mie resonance of the central nanocone (around $\lambda=590$~nm) with a pronounced asymmetric profile. In Refs.~\cite{Miroshnichenko_2012, Chong2014} it has been shown that this dip is associated with the \textit{magnetic Fano resonance}, which is caused by the interference of the waves scattered by two modes -- the spectrally narrow magnetic dipole Mie mode of the central nanocone and the broadband collective magnetic modes of the hexamer. The Fano resonance dip at 588~nm is caused by the antiphase oscillation of the magnetic dipoles of the hexamer and the magnetic dipole of the central nanocone (dark mode). Outside of this resonance ($\lambda=560$~nm) these modes oscillate in phase (bright mode) (see Figure~\ref{cones1}(b)). We also note that, due to the rotational symmetry of the all-dielectric oligomer, the scattering cross section does not depend on the incident wave polarization~\cite{Hopkins2013, Fuller_1994, Miroshnichenko_ACSN_2013}. The shape of the Fano resonance depends on the distances between the nanocones; the results of a detailed study of this effect are presented in the \textit{Supplementary Information}. \begin{figure*} \centering \includegraphics[width=0.99\textwidth]{Fig4} \caption{(a)~Scattering cross sections of (i)~the hybrid Au/Si hexamer, (ii)~the single hybrid nanoparticle with smaller lower base, and (iii)~the heptamer, with Au nanoparticles in the form of nanocups. (b)~The electric field profiles (in terms of field amplitude) in the vertical and top cross-sections calculated at the scattering intensity dip of the Fano resonance ($\lambda=584$~nm) and outside of the resonance ($\lambda=566$~nm).}\label{caps} \end{figure*} Our goal now is to use the melting of the Au nanoparticles placed on the Si nanocones to tune the magnetic Fano resonance in the hybrid oligomer. For this purpose we show that the hybrid oligomer has a pronounced Fano resonance even in the presence of Au nanocups, i.e. when the Au nanoparticle is resonant in the vicinity of the Fano resonance wavelength. In other words, we show that the Au nanoparticles perturb the Fano resonance but do not destroy it. We demonstrate this in the same manner as for the all-dielectric oligomers. Namely, the scattering spectrum of the hybrid Au/Si hexamer with Au nanocups has a broad and nonresonant wing of collective modes (see Figure~\ref{caps}(a)i). The interaction of these modes with the narrow resonance of the single Au/Si nanoparticle (see Figure~\ref{caps}(a)ii) results in the appearance of an asymmetric dip in the scattering spectrum (see Figure~\ref{caps}(a)iii). The electric field distribution profiles in the side and top views, calculated at the scattering intensity dip of the Fano resonance ($\lambda=584$~nm) and outside of the resonance ($\lambda=566$~nm), are presented in Figure~\ref{caps}(b). At the Fano resonance wavelength ($\lambda=584$~nm) the modes of the central particle and of the hexamer oscillate in opposite phase, forming a dark mode of the whole hybrid oligomer.
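The asymmetric dips described above are well captured by the standard Fano lineshape $\sigma(\varepsilon)=(\varepsilon+q)^{2}/(1+\varepsilon^{2})$ with $\varepsilon=2(\lambda-\lambda_{0})/\Gamma$. The short Python sketch below is purely illustrative; the position, width, and asymmetry values are placeholders, not a fit to the simulated spectra.
\begin{verbatim}
import numpy as np

w = np.linspace(540, 640, 11)       # wavelength, nm
w0, Gamma, q = 588.0, 12.0, -0.8    # position, width, asymmetry (assumed)
e = 2*(w - w0)/Gamma
sigma = (e + q)**2/(1 + e**2)       # Fano profile
for wi, si in zip(w, sigma):
    print(f"{wi:5.0f} nm : {si:5.2f}")
# the zero of sigma at e = -q, adjacent to the resonance peak, produces
# the characteristic asymmetric dip-and-peak profile of the spectra
\end{verbatim}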
Now we consider the optical properties of hybrid oligomers composed of Au/Si nanoparticles at different stages of reshaping (see Figure~\ref{hybrid}). We study hybrid oligomers in which the diameters of the lower bases of the hexamer's Si nanocones and of the central nanocone are $190$~nm and 150~nm, respectively. The gap between the central Si nanocone and the hexamer's ones is 30~nm. We first consider the Fano resonance of the oligomer composed of hybrid nanoparticles with Au nanodiscs, which appears at 580~nm (see Figure~\ref{hybrid}(a), blue curve). The resonance is caused mainly by the responses of the Si nanocones, and its wavelength corresponds to the wavelength of the Fano resonance of the all-dielectric oligomer (see Figure~\ref{cones1}(a)iii) because of the weak coupling between the Au nanodiscs and the Si nanocones. In this case the Fano resonance is less pronounced than in the all-dielectric counterpart (see Figure~\ref{cones1}(a)). Next, we numerically show that the profile and the spectral position of this resonance can be changed by fs-laser-induced reshaping of the Au nanodiscs. When the Au nanoparticles take the form of nanocups, the minimum of the Fano resonance shifts to $\lambda=585$~nm, accompanied by a narrowing of its profile (see Figure~\ref{hybrid}(a), green curve). It has been shown above (see Figure~\ref{caps}(a)) that this very pronounced dip in the scattering spectrum of hybrid oligomers with Au nanocups corresponds to the Fano resonance. Upon further reshaping of the Au nanoparticles into nanospheres, the Fano resonance becomes broader again and its minimum shifts to $\lambda=595$~nm (see Figure~\ref{hybrid}(a), red curve). Thus, laser reshaping of the Au nanoparticles can be applied for fine-tuning of the Fano resonance of the hybrid oligomers. \begin{figure*} \centering \includegraphics[width=0.99\textwidth]{Fig5} \caption{(a)~Calculated scattering cross section spectra and (b)~experimentally measured dark-field scattering signals of the hybrid Au/Si heptamer with Au nanodiscs (blue curve), Au nanocups (green curve), and Au nanospheres (red curve). The spectral region with a strong Fano resonance response is highlighted by the yellow stripe. (c)--(e)~SEM images (viewing angle 45$^\circ$) of typical fabricated hybrid oligomers with Au nanodiscs~(c), Au nanocups~(d), and Au nanospheres~(e); the scale bar is 200~nm.}\label{hybrid} \end{figure*} In order to prove the concept of fine-tuning of the magnetic Fano resonance, we performed a series of dark-field scattering spectra measurements on hybrid oligomers with different degrees of Au nanoparticle reshaping. First, we fabricated the hybrid oligomers based on gold nanodiscs placed on the upper base of the truncated silicon nanocones by means of a combination of e-beam lithography (25~kV), metal evaporation, a lift-off procedure, and gas-phase chemical etching. Recently, it has been shown that during the electron-beam processing step with 25~kV acceleration voltage the amorphous silicon acquires a nanocrystalline structure~\cite{Baranov_ACSPh_2016}. This method of hybrid nanostructure fabrication was developed in Ref.~\cite{Zuev_2016}.
In the first step, an a-Si:H layer with a thickness of $\approx$200~nm was deposited on a properly cleaned fused silica substrate by plasma-enhanced chemical vapor deposition from SiH$_4$ gas. Then, arrays of metal nanodiscs consisting of Cr/Au layers with thicknesses of $\approx$ 1~nm/20~nm were produced by means of e-beam lithography, metal deposition, and lift-off procedures. After that, the silicon layer was etched through the fabricated metal mask using radio-frequency inductively coupled plasma technology in the presence of SF$_6$ and O$_2$ gases. The etching was carried out at a temperature of 265~K to fabricate Si nanostructures in the shape of nanocones. Typical SEM images of the fabricated hybrid oligomers with Au nanodiscs, Au nanocups, and Au nanospheres are presented in Figure~\ref{hybrid}(c)--(e); the scale bar is 200~nm. For fs-laser melting, a commercial femtosecond laser system (Ytterbium-doped Femtosecond Solid-State Laser TeMa 150, Avesta Project) was used, providing laser pulses at a central wavelength of 1050~nm, with a maximum pulse energy of 85~nJ and a pulse duration of 150~fs at a repetition rate of 80~MHz. The laser energy was varied and controlled by an optical attenuator (Avesta Project) and a power meter (FieldMax~II, Coherent), respectively. Laser pulses were focused on the fabricated sample by an objective (Mitutoyo~M~Plan~Apo~NIR~10X) with a numerical aperture (NA) of 0.26. To provide adhesion between the Au nanodisc and the Si nanocone, a thin Cr layer is used. According to the results of molecular dynamics simulations~\cite{Zuev_2016}, the Cr layer (with a thickness of 1-2~nm) provides the desired shape of the Au nanoparticle during laser reshaping without affecting the electromagnetic properties of the hybrid nanoparticle. Moreover, this Cr layer prevents the formation of an Au-Si alloy. Measurements of the scattering spectra were carried out in a dark-field scheme, where the arrays were irradiated by p-polarized light from a halogen lamp (HL-2000-FHSA) at an angle of incidence of 70$^\circ$ with respect to the surface normal. The scattered signal was collected by means of a Mitutoyo~M~Plan~APO~NIR objective (NA = 0.7), which directed the light to a commercial spectrometer (Horiba~LabRam~HR) with a CCD camera (Andor~DU~420A-OE~325). The confocal optical scheme was optimized for signal collection from individual nanoparticles. A sketch of the experimental setup for the polarization-resolved dark-field spectroscopy is presented in the \textit{Supplementary Information}. The results of the fine-tuning of the Fano resonance in the fabricated hybrid oligomers are summarized in Figure~\ref{hybrid}(b). Our experimental results clearly show a spectral shift of the Fano resonance minimum from $\lambda=650$~nm to $\lambda=660$~nm with increasing power of the external laser field from 0~mW (blue curve) to 40~mW (green curve). According to our previous results, in this regime of reshaping the Au nanoparticles take the form of nanocups~\cite{Zuev_2016}. Upon a further increase of the laser power up to 90~mW, the Au nanocups reshape into nanospheres, with the Fano resonance dip shifting to $\lambda=665$~nm (see Figure~\ref{hybrid}(b), red curve). The measured damage threshold of the Au nanoparticles is about 130~mW; at this power the Fano resonance of the hybrid oligomer disappears. The slight mismatch between the numerical and experimental results is explained by the presence of the SiO$_2$ substrate and the finite accuracy of the nanostructure fabrication.
\section*{Conclusion} In summary, we have proposed and implemented a novel type of hybrid oligomer consisting of resonant asymmetric metal-dielectric (Au/Si) nanoparticles and exhibiting a sharp Fano resonance in the visible range, which has a predominantly magnetic nature. We have demonstrated, numerically and experimentally, that such oligomers allow irreversible fine-tuning of the Fano resonance via fs-laser melting of the Au nanoparticles at the nanometer scale. We have shown that the Fano resonance wavelength can be changed by fs-laser reshaping very precisely (within 15~nm), accompanied by a reconfiguration of its profile. We believe that our results pave the way to the realization of nanophotonic elements that can be adjusted after their manufacturing. \section*{Acknowledgements} This work was financially supported by the Russian Science Foundation (Grant 15-19-30023). The authors declare no competing financial interest.
\section{Introduction} Entanglement of multi-qubit systems is a central subject in Quantum Information Theory. It plays an important role in applications in the field of quantum information such as quantum cryptography, quantum computation, and quantum teleportation \cite{HHHH}. Recently entanglement has been involved in surprising theoretical bridges, like the correspondence between entanglement measures and string-theoretic formulae for black hole entropy, leading to what is now known as the black-hole/qubit correspondence \cite{BDD,BDDER2,BDL,Levay}. The question of understanding the entanglement patterns of multipartite systems has been investigated by various authors in the past decade \cite{Dur,VDMV,My,chen, LLS, BDD, BDDER,HLT}. The case of three qubits, the first nontrivial one -- denoted here as the $2\times 2\times 2$ system -- was solved by D\"ur {\em et al.}\cite{Dur} more than ten years ago, and is equivalent to the classification of binary trilinear forms given by Le Paige\cite{LePai} in 1881. Even though this classification is completely established, the interpretation of entanglement for three qubits is still under scrutiny\cite{BDDER,BDFMR,Levay1,Levay}. The mixed tripartite configurations $2\times 2\times n$ have been classified by Miyake {\em et al.}\cite{My,My2}, and in a previous article\cite{HLT} we obtained geometric descriptions of the $2\times 2\times n$ and $2\times 3\times 3$ quantum systems. In all of these classifications one finds only a finite number of nonequivalent entangled states, and they can be explicitly identified. Compared to the 3-qubit case, the classification of entangled states of four qubits is a much more difficult problem. The Hilbert space of four qubits, $\mathcal{H}=\mathbb{C}^2\otimes\mathbb{C}^2\otimes \mathbb{C}^2\otimes \mathbb{C}^2$, contains infinitely many orbits under the action of the group $G=GL_2(\mathbb{C})\times GL_2(\mathbb{C})\times GL_2(\mathbb{C})\times GL_2(\mathbb{C})$ of Stochastic Local Operations and Classical Communication (SLOCC). Therefore there is no hope of giving a comprehensive classification as in the finite case. In terms of normal forms, a classification leads to forms depending on parameters, such as the ones of Verstraete {\em et al.}\cite{VDMV}, corrected by Chterental and Djokovi\'c\cite{CD}. Another perspective is to describe a complete set of invariant and covariant polynomials to separate non-equivalent orbits. This was achieved by Briand and the last two authors of this paper\cite{BLT,LT}. We may notice that the algebras of invariant and covariant polynomials are quite large (4 invariant polynomials and 170 covariant polynomials are needed to generate the two algebras), compared to the 9 normal forms given by Verstraete {\em et al.} Moreover, even though the Verstraete {\em et al.} classification allows us to assign any 4-qubit state to one of the 9 families, it also implies that states with different entanglement patterns can belong to the same family. The geometric study of four-qubit states as $G$-invariant algebraic varieties will provide finer descriptions and make the connection between the normal-form and invariant-theory approaches. This is the purpose of this paper. The paper is organized as follows. In Section \ref{tools} we introduce the tools, from classical invariant theory and algebraic geometry, which will be used throughout the paper. We recall what is known in terms of invariant and covariant polynomials and describe the method that will be used in our investigation.
We recall some of the algebraic geometry techniques that we already used\cite{HLT}, as well as some recent results by Buczy\'nski and Landsberg\cite{Lan2} which will guide us in the process of identifying the algebraic varieties. In Section \ref{nulcone} we describe the set of nilpotent $4$-qubit states. This set, called the nullcone, is the algebraic variety defined as the zero set of all invariant polynomials. We construct the $G$-subvarieties of the nullcone from the set of separable states and provide an algorithm to identify a given nilpotent $4$-qubit state as a point of one of those varieties. In Section \ref{3sct} we describe subvarieties of the third secant variety, which is the direct generalization of the GHZ-state for four qubits. The third secant variety already contains an infinite number of orbits, and this will be the first example where our algorithmic method will have to be modified by some geometrical insights. This last step will allow us to explicitly describe a {\em geometric atlas} of the third secant variety (including the nullcone) made of $47$ non-equivalent $G$-varieties. Up to permutation of the qubits, this yields $15$ non-equivalent types of entanglement within the third secant variety. We conclude the paper with general remarks and perspectives for further investigation of the geometry of 4-qubit states outside of the third secant variety. Partial results in this direction will be presented in a forthcoming paper\cite{HLT2}. \subsection*{Notations} Let $\{|0\rangle,|1\rangle\}$ be a basis of $\mathbb{C}^2$. A standard basis of the Hilbert space $\mathcal{H}=\mathbb{C}^2\otimes\mathbb{C}^2\otimes\mathbb{C}^2\otimes\mathbb{C}^2$ is given by $|j_1\rangle\otimes|j_2\rangle\otimes |j_3\rangle\otimes |j_4\rangle$, with $0\leq j_i\leq 1$. This basis notation will be shortened to $|j_1j_2j_3j_4\rangle$, and a $4$-qubit state will be denoted by \[|\Psi\rangle=\sum_{0\leq j_1,j_2,j_3,j_4\leq 1} A_{j_1j_2j_3j_4}|j_1j_2j_3j_4\rangle\text{ with } A_{j_1j_2j_3j_4}\in \mathbb{C}.\] Multiplication by a nonzero scalar does not affect a state $|\Psi\rangle$ of the Hilbert space $\mathcal{H}$; therefore we will consider quantum states as points of the projective space $\mathbb{P}^{15}=\mathbb{P}(\mathcal{H})$. The set of separable states corresponds to tensors which can be factorized, i.e. $|\Psi\rangle=v_1\otimes v_2\otimes v_3\otimes v_4$ with $v_i=\alpha_i|0\rangle+\beta_i|1\rangle\in \mathbb{C}^2$. The projectivization of that set is an algebraic variety, called the Segre embedding of the product of four projective lines. It is the image of the following map: \[\begin{array}{cccc} \phi: & \mathbb{P}(\mathbb{C}^2)\times\mathbb{P}(\mathbb{C}^{2})\times\mathbb{P}(\mathbb{C}^2)\times\mathbb{P}(\mathbb{C}^{2}) & \to & \mathbb{P}(\mathbb{C}^{2}\otimes\mathbb{C}^2\otimes\mathbb{C}^2\otimes\mathbb{C}^{2})\\ & ([v_1],[v_2],[v_3],[v_4]) & \mapsto & [v_1\otimes v_2 \otimes v_3 \otimes v_4] \end{array}\] The Segre variety $X=\phi(\mathbb{P}(\mathbb{C}^{2})\times\mathbb{P}(\mathbb{C}^{2})\times\mathbb{P}(\mathbb{C}^2)\times\mathbb{P}(\mathbb{C}^{2}))$ will be denoted later on by $X=\mathbb{P}^{1}\times\mathbb{P}^1\times\mathbb{P}^1\times\mathbb{P}^{1}\subset \mathbb{P}(\mathcal{H})$.
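To make the coordinate conventions of the computations below concrete, here is a minimal {\tt sympy} illustration (a sketch of ours, not part of the original text) of the Segre map $\phi$. In this sketch the amplitude $A_{j_1j_2j_3j_4}$ of $v_1\otimes v_2\otimes v_3\otimes v_4$ sits at row $8j_1+4j_2+2j_3+j_4$ of the Kronecker product, and every one-qubit flattening of a separable state has rank one:
\begin{verbatim}
import sympy as sp
from sympy.physics.quantum import TensorProduct

alpha1, beta1 = sp.symbols('alpha_1 beta_1')
v1 = sp.Matrix([alpha1, beta1])      # a point of P^1 in homogeneous coordinates
k0 = sp.Matrix([1, 0])               # |0>
psi = TensorProduct(v1, k0, k0, k0)  # phi([v1],[|0>],[|0>],[|0>]): 16 amplitudes
# A_{j1 j2 j3 j4} sits at row 8*j1 + 4*j2 + 2*j3 + j4 in this convention
M1 = sp.Matrix(2, 8, lambda r, c: psi[8*r + c])  # flattening qubit 1 vs the rest
print(M1.rank())                     # 1: the hallmark of a separable state
\end{verbatim}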
When we work over $\mathbb{P}(\mathcal{H})$, the group SLOCC can equivalently be replaced by $G=SL_2(\mathbb{C})\times SL_2(\mathbb{C}) \times SL_2(\mathbb{C}) \times SL_2(\mathbb{C})$ (with no risk of confusion, $G$ will always denote the group SLOCC, which is the product of $GL_2(\mathbb{C})$ when we consider $\mathcal{H}$ and the product of $SL_2(\mathbb{C})$ when we consider $\mathbb{P}(\mathcal{H})$). The variety $\mathbb{P}^1\times\mathbb{P}^1\times\mathbb{P}^1\times\mathbb{P}^1$ is homogeneous for the semi-simple Lie group $G$ and it corresponds to the orbit of the highest weight vector\cite{F-H,HLT} (which can be chosen to be $v=|0000\rangle$). More precisely, the variety $X=\mathbb{P}^1\times\mathbb{P}^1\times\mathbb{P}^1\times\mathbb{P}^1=\mathbb{P}(G|0000\rangle)$ is the unique homogeneous variety for the group $G$, in the sense that for any $x,y\in X$ there exists $g\in G$ such that $y=g.x$. A variety $Y\subset \mathbb{P}^{15}$ will be called a $G$-variety if for all $y\in Y$ and all $g\in G$ we have $g.y\in Y$. A variety $Z$ will be called quasi-homogeneous if there exists an open dense orbit, i.e. there exists $z\in Z$ such that $Z=\mathbb{P}(\overline{G.z})$. We work throughout with algebraic varieties over the field $\mathbb{C}$ of complex numbers. In particular we denote by $V$ a complex vector space of dimension $N+1$, and $X^n\subset \mathbb{P}(V)=\mathbb{P}^{N}$ is a complex projective nondegenerate variety (i.e. not contained in a hyperplane) of dimension $n$. Given a smooth point $x$ of $X$, we denote by $T_x X$ the intrinsic tangent space and by $\tilde{T}_x X$ the embedded tangent space\cite{Lan} of $X$ at $x$. The notation $\hat{X}\subset V$ (resp. $\widehat{T}_x X$) will denote the cone over $X$ (resp. over $\tilde{T}_x X$) and $[v]\in \mathbb{P}(V)$ will denote the projectivization of a vector $v\in V$. The dimension of the variety, $\text{dim}(X)$, is the dimension of the tangent space at a smooth point. When we say that $x\in X$ is a {\em general point} of $X$, this is always meant in the sense of the Zariski topology. \section{Toolbox: invariant theory and algebraic geometry}\label{tools} \subsection{Invariant Theory} \newtheorem{meth}{Method}[section] In a more general setting, a (pure) $k$-qudit system is an element of the Hilbert space $\mathcal H=V_1\otimes\cdots\otimes V_k$ with $V_i=\mathbb{C}^{n_i}$; equivalently, it can be regarded as a multilinear form \[A=\sum_{0\leq i_1\leq n_1-1}\cdots \sum_{0\leq i_k\leq n_k-1}a_{i_1,\dots,i_k}x^{(1)}_{i_1}\cdots x^{(k)}_{i_k}.\] Two qudit systems are equivalent if they belong to the same orbit of the group $\mathrm{SLOCC}=GL_{n_1}(\mathbb C)\times\cdots\times GL_{n_k}(\mathbb C)$. The classification of multilinear forms is an old and difficult problem, generally treated by using classical (and more recently geometrical) invariant theory. The principle is the following: one describes polynomials (in the coefficients of the forms) which are invariant under the action of $\mathrm{SLOCC}$. Hence, if we have sufficiently many polynomials, we can decide if two forms are equivalent by comparing their evaluations on these polynomials. In general, invariants are not sufficient to describe the orbits completely, and we need more general polynomials, called concomitants.
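As a quick illustration of the $G$-variety property of $X$ (a {\tt sympy} sketch of ours, with generically parametrised $SL_2$ matrices), one can act on $|0000\rangle$ by a generic element of $G$ and verify that the image is still a product state, e.g. by checking that a one-qubit flattening keeps rank one:
\begin{verbatim}
import sympy as sp
from sympy.physics.quantum import TensorProduct

def sl2(name):
    p, q, r = sp.symbols(f'{name}p {name}q {name}r')
    return sp.Matrix([[p, q], [r, (1 + q*r)/p]])   # determinant 1 by construction

g = TensorProduct(sl2('g1'), sl2('g2'), sl2('g3'), sl2('g4'))   # 16 x 16
psi = (g * sp.Matrix([1] + [0]*15)).expand()       # a generic point of G.|0000>
M1 = sp.Matrix(2, 8, lambda r, c: psi[8*r + c])    # qubit-1 flattening
minors = {sp.expand(M1[0, i]*M1[1, j] - M1[0, j]*M1[1, i])
          for i in range(8) for j in range(i + 1, 8)}
print(minors)   # {0}: the image of a product state is again a product state
\end{verbatim}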
The set of invariant polynomials is obviously an algebra, but its description in terms of generators and syzygies (and even the calculation of its Hilbert series) is out of reach of any computer algebra system in the general case. For our purposes, we will only deal with the $k$-qubit systems (that is, $n_i=2$ for each $1\leq i\leq k$). In the case of multilinear binary forms, the knowledge of the covariant polynomials is sufficient. Let us recall briefly the main definitions. The sets of all invariants and covariants of a multilinear form of a given size are algebras $$\mathrm{Inv}:=S(\mathcal H)^{\mathrm{SLOCC}}\subsetneq \mathrm{Cov}:=[S(\mathcal H)\otimes S(V_1^*\oplus\cdots\oplus V_k^*)]^{\mathrm{SLOCC}}.$$ In the case of binary forms we have $V_i=\mathbb{C}^2$, and the covariants are polynomials in the coefficients $\mathbf a=\{a_{i_1,\dots,i_k}: 0\leq i_1,\dots,i_k\leq 1\}$ of the form and in $k$ auxiliary binary variables $\mathbf x^{(j)}=\left(x_0^{(j)},x_1^{(j)}\right)$. Hence, it is a multigraded space $\mathrm{Cov}=\oplus_{d,d_1,\dots,d_k} \mathrm{Cov}_{d,d_1,\dots,d_k}$, where $\mathrm{Cov}_{d,d_1,\dots,d_k}$ is the space of multihomogeneous polynomials of degree $d$ in $\mathbf a$ and of degree $d_i$ in $\mathbf x^{(i)}$. The subspace consisting of polynomials of degree $0$ in each binary variable $\mathbf x^{(i)}$ is the graded sub-algebra $\mathrm{Inv}=\bigoplus_d\mathrm{Inv}_d$ of $\mathrm{Cov}$.\\ The simplest covariant is the ground form $A$ itself, and we will obtain all our covariants by using the Cayley Omega process. The Omega process is an algorithm based on a set of binary operators called transvectants. The transvection of two multibinary forms $B$ and $C$ is defined by \[ (B,C)^{i_1,\dots,i_k}:=\mathrm{tr}\,\Omega_{\mathbf x^{(1)}}^{i_1}\dots \Omega_{\mathbf x^{(k)}}^{i_k} B(\mathbf{x'}^{(1)},\dots,\mathbf{x'}^{(k)})C(\mathbf{x''}^{(1)},\dots,\mathbf{x''}^{(k)}) \] where $\Omega$ is the Cayley operator \[ \Omega_x=\left|\begin{array}{cc}\frac\partial{\partial x_0'}&\frac\partial{\partial x_0''}\\ \frac\partial{\partial x_1'}&\frac\partial{\partial x_1''}\end{array}\right|, \] and $\mathrm{tr}$ sends each $\mathbf x'$ and $\mathbf x''$ to $\mathbf x$ (erases ${}'$ and ${}''$).\\ In principle, for multilinear binary forms, one can compute a basis of $\mathrm{Cov}$ from the ground form $A$ by using only the operations $f\rightarrow (f,A)^{i_1,\dots,i_k}$.\\ In fact, the polynomials considered here are relative invariants for the action of $GL_2(\mathbb{C})^k$, in the following sense: for $F\in\mathrm{Cov}$ and $g_1,\dots,g_k\in GL_2(\mathbb{C})$ we have $(g_1,\dots,g_k).F=(\det g_1)^{\ell_1}\cdots (\det g_k)^{\ell_k}F$, \emph{i.e.} $F$ is invariant under $GL_2(\mathbb{C})^k$ up to the action of a global coefficient.\\ Let $B$ be a covariant and let $$\alpha=\sum_{0\leq i_1,\dots,i_k\leq 1}\alpha_{i_1,\dots,i_k}x^{(1)}_{i_1}\cdots x^{(k)}_{i_k}$$ be a form; the evaluation $B(\alpha)$ is the polynomial obtained by substituting $\alpha_{i_1,\dots,i_k}$ for $a_{i_1,\dots,i_k}$ in $B$. We will set $B[\alpha]=1$ if $B(\alpha)\neq 0$ and $B[\alpha]=0$ otherwise.\\ Assume that we know a basis $\mathcal B$ of $\mathrm{Cov}$ and let $\mathcal B[\alpha]=\left(B[\alpha]\right)_{B\in\mathcal B}$. Note that if $\mathcal B[\alpha]\neq\mathcal B[\alpha']$ then $\alpha$ and $\alpha'$ do not belong to the same orbit. We define the equivalence relation $\alpha\sim\alpha'$ if and only if $\mathcal B[\alpha]=\mathcal B[\alpha']$, which partitions $\mathcal H$ into equivalence classes.
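Since the Omega process is entirely mechanical, it is easy to experiment with. The following {\tt sympy} sketch (ours; the function and symbol names are not from the literature) implements the transvection $(B,C)^{i_1,\dots,i_k}$ exactly as defined above, and checks it on a single binary quadratic, where the full transvectant recovers the discriminant up to a scalar:
\begin{verbatim}
import sympy as sp

def transvectant(B, C, orders, pairs):
    """(B, C)^{i_1,...,i_k} for k pairs of binary variables.

    pairs: list of (x0, x1) symbol pairs; orders: (i_1, ..., i_k)."""
    k = len(pairs)
    xp  = [sp.symbols(f'xA{j}_0 xA{j}_1') for j in range(k)]   # x'
    xpp = [sp.symbols(f'xB{j}_0 xB{j}_1') for j in range(k)]   # x''
    to_p  = {pairs[j][a]: xp[j][a]  for j in range(k) for a in (0, 1)}
    to_pp = {pairs[j][a]: xpp[j][a] for j in range(k) for a in (0, 1)}
    expr = sp.expand(B.subs(to_p) * C.subs(to_pp))
    for j, ij in enumerate(orders):     # apply Omega_{x^(j)} exactly i_j times
        for _ in range(ij):
            expr = (sp.diff(expr, xp[j][0], xpp[j][1])
                    - sp.diff(expr, xp[j][1], xpp[j][0]))
    erase = {s: pairs[j][a]             # tr: send x' and x'' back to x
             for j in range(k) for a in (0, 1)
             for s in (xp[j][a], xpp[j][a])}
    return sp.expand(expr.subs(erase))

# sanity check with k = 1: the full transvectant of a binary quadratic
# with itself is the discriminant up to a scalar, here 8*(a*c - b**2)
x0, x1, a, b, c = sp.symbols('x0 x1 a b c')
f = a*x0**2 + 2*b*x0*x1 + c*x1**2
print(transvectant(f, f, (2,), [(x0, x1)]))
\end{verbatim}
Iterating $f\rightarrow (f,A)^{i_1,\dots,i_k}$ with this routine on the $4$-qubit ground form is then a direct, if inefficient, way to generate covariants.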
If $\mathbb A\in\mathcal H/_\sim$, we define $\mathcal B[\mathbb A]=\mathcal B[\alpha]$ for any $\alpha\in\mathbb A$, and we denote by $\tilde\alpha$ the class of $\alpha$. More precisely, we define the partial order $\preceq$ on $\mathcal H/_\sim$: $\mathbb A\preceq\mathbb A'$ if and only if for each $B\in\mathcal B$, $B[\mathbb A']=0$ implies $B[\mathbb A]=0$. Let $\alpha\in\mathcal H$ be a form; the set $\alpha^{\preceq}=\bigcup_{\mathbb A\preceq \tilde\alpha}\mathbb A$ is the Zariski closure of a union of orbits. We will use the following method, which is not an algorithm but rather a heuristic strategy: \begin{meth}\label{Meth} \begin{enumerate} \item Compute a basis $\mathcal B$ of $\mathrm{Cov}$ (if possible; otherwise compute a sufficiently large set $\mathcal B$ of linearly independent covariants). \item Consider a finite set of forms $\mathcal F$. The set $\mathcal F$ is assumed sufficiently large to contain representatives of the interesting orbits. \item\label{m3} Compute the set $\{\alpha^{\preceq}:\alpha\in \mathcal F\}$. \item Find the geometric interpretation of the classes $\alpha^{\preceq}$. \item If the geometric investigation involves new classes then modify $\mathcal F$ and $\mathcal B$ and go back to (\ref{m3}). \item Compute the inclusion graph. \end{enumerate} \end{meth} This method will be used as a starting point for our geometric interpretation. Furthermore, the set of covariants $\mathcal B$ will provide an algorithm allowing one to identify the orbit of a given form. We have already applied this method to investigate the geometry of systems of $3$ particles\cite{HLT}. In each of these cases, we have found, for each orbit, a representative with coefficients in $\{0,1\}$. This suggests that we may start with $$\mathcal F\subseteq\mathcal E:=\left\{\sum_{0\leq i_1,i_2,i_3,i_4\leq 1}\alpha_{i_1,i_2,i_3,i_4}x^{(1)}_{i_1}x^{(2)}_{i_2}x^{(3)}_{i_3}x^{(4)}_{i_4}: \alpha_{i_1,i_2,i_3,i_4}\in\{0,1\}\right\}.$$ For simplicity, we will denote the form $\alpha\in\mathcal F$ by the number $$\sum_{0\leq i_1,i_2,i_3,i_4\leq 1}\alpha_{i_1,i_2,i_3,i_4}2^{i_1+2i_2+4i_3+8i_4}.$$ This set is certainly not sufficient to describe the orbits of $4$-qubit systems, since there is no dense orbit and the normal forms have $4$ parameters \cite{BLT}, but we will see that if we restrict to the nullcone or the third secant variety, our method does yield interesting results.\\ In a previous paper \cite{BLT}, we have computed a complete generating set of covariants. In appendix \ref{AppCov} we propose another generating set $\mathcal B$ which has more symmetries. Whilst the algebra of covariant polynomials seems very difficult to describe, the subalgebra of invariant polynomials is quite simple: it is a free algebra on four generators. \begin{enumerate} \item One of degree 2: \[B=B_{0000}=a_{0000}a_{1111}-a_{1000}a_{0111}-a_{0100}a_{1011}+a_{1100}a_{0011}-a_{0010}a_{1101} \]\[+a_{1010}a_{0101}+a_{0110}a_{1001}-a_{1110}a_{0001}\] \item Two of degree 4: \[L:=\left|\begin{array}{cccc} a_{0000}&a_{0010}&a_{0001}&a_{0011}\\ a_{1000}&a_{1010}&a_{1001}&a_{1011}\\ a_{0100}&a_{0110}&a_{0101}&a_{0111}\\ a_{1100}&a_{1110}&a_{1101}&a_{1111} \end{array}\right|\mbox{ and } M:=\left|\begin{array}{cccc} a_{0000}&a_{0001}&a_{0100}&a_{0101}\\ a_{1000}&a_{1001}&a_{1100}&a_{1101}\\ a_{0010}&a_{0011}&a_{0110}&a_{0111}\\ a_{1010}&a_{1011}&a_{1110}&a_{1111} \end{array}\right| \] \item One of degree 6: Set $b_{xy}:=\det\left(\dfrac{\partial^2 f}{\partial z_i\partial t_j}\right)$, where $f$ denotes the ground form written in the four pairs of binary variables $\mathbf x$, $\mathbf y$, $\mathbf z$ and $\mathbf t$.
This quadratic form is interpreted as a bilinear form on the three-dimensional space $S^2(\mathbb{C}^2)$, so we can find a $3\times 3$ matrix $B_{xy}$ verifying \[ b_{xy}=[x_0^2,x_0x_1,x_1^2]B_{xy}\left[\begin{array}{c}y_0^2\\y_0y_1\\y_1^2 \end{array}\right]. \] The generator of degree $6$ is $D_{xy}:=\det(B_{xy})$. \end{enumerate} Note that we can alternatively replace $L$ or $M$ by \[N:=-L-M=\left|\begin{array}{cccc} a_{0000}&a_{1000}&a_{0001}&a_{1001}\\ a_{0100}&a_{1100}&a_{0101}&a_{1101}\\ a_{0010}&a_{1010}&a_{0011}&a_{1011}\\ a_{0110}&a_{1110}&a_{0111}&a_{1111} \end{array}\right| \] and $D_{xy}$ by $D_{xz},\dots,D_{zt}$ defined in a similar way with respect to the variables $xz,\dots,zt$.\\ There is also another invariant polynomial which is of great interest in the context of geometry: the hyperdeterminant in the sense of Gelfand-Kapranov-Zelevinsky \cite{GKZ}. Let us recall how to compute it in the case of $4$ qubits. First consider the quartic form: \[ R(t):=\det\left(\dfrac{\partial^2b_{xt}}{\partial x_i\partial x_j}\right) \] and compute the apolar $S$ of $R$ with itself and its catalecticant $T$. More precisely, setting \[ R(t)=\sum \binom4ic_it_0^{4-i}t_1^i, \] we compute \[ S:=c_0c_4-4c_1c_3+3c_2^2\mbox{ and }T:=c_0c_2c_4-c_0c_3^2+2c_1c_2c_3-c_1^2c_4-c_2^3. \] The hyperdeterminant is the discriminant of $R(t)$ and is given by \[ \Delta:=S^3-27T^2. \] Alternatively, $\Delta$ can be constructed from the sextic forms $L_{6000}$, $L_{0600}$, $L_{0060}$ and $L_{0006}$. First choose one of the forms, for instance \[ L_{6000}=\sum\binom6id_ix_0^{6-i}x_1^i, \] and compute the degree $2$ invariant of the sextic (see \cite{Gordan}): \[ I_2:=d_0d_6-6d_1d_5+15d_2d_4-10d_3^2. \] We remark that $\Delta$ equals $I_2$ up to a factor: \begin{equation}\label{Delta2I_2} {\Delta}=\dfrac{3}{2^{19}5^2} I_2. \end{equation} \begin{rem}\rm The algebra of invariant polynomials of four-qubit states has already been used\cite{VES} to refine existing classifications\cite{LLS} and provide tests to distinguish certain types of entanglement. However, as will be clear from the case treated in Section \ref{nulcone} (the nullcone), the algebra of invariants is not sufficient, in particular when we focus on states which annihilate some generators of this algebra (Sections \ref{nulcone} and \ref{3sct}). \end{rem}
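For concreteness, the four generators can be written down directly in a computer algebra system. The following {\tt sympy} sketch (ours, not taken from \cite{BLT}; all helper names are ours) transcribes $B$, $L$, $M$ and $D_{xy}$ as printed above, the ground form being written in the four pairs of binary variables $\mathbf x,\mathbf y,\mathbf z,\mathbf t$:
\begin{verbatim}
import sympy as sp
from itertools import product

idx = list(product((0, 1), repeat=4))
a = {i: sp.Symbol('a' + ''.join(map(str, i))) for i in idx}
x0, x1, y0, y1, z0, z1, t0, t1 = sp.symbols('x0 x1 y0 y1 z0 z1 t0 t1')

# the ground form written in the binary variables x, y, z, t
f = sum(a[i] * [x0, x1][i[0]] * [y0, y1][i[1]] * [z0, z1][i[2]] * [t0, t1][i[3]]
        for i in idx)

B = (a[0,0,0,0]*a[1,1,1,1] - a[1,0,0,0]*a[0,1,1,1] - a[0,1,0,0]*a[1,0,1,1]
     + a[1,1,0,0]*a[0,0,1,1] - a[0,0,1,0]*a[1,1,0,1] + a[1,0,1,0]*a[0,1,0,1]
     + a[0,1,1,0]*a[1,0,0,1] - a[1,1,1,0]*a[0,0,0,1])

L = sp.Matrix([[a[0,0,0,0], a[0,0,1,0], a[0,0,0,1], a[0,0,1,1]],
               [a[1,0,0,0], a[1,0,1,0], a[1,0,0,1], a[1,0,1,1]],
               [a[0,1,0,0], a[0,1,1,0], a[0,1,0,1], a[0,1,1,1]],
               [a[1,1,0,0], a[1,1,1,0], a[1,1,0,1], a[1,1,1,1]]]).det()

M = sp.Matrix([[a[0,0,0,0], a[0,0,0,1], a[0,1,0,0], a[0,1,0,1]],
               [a[1,0,0,0], a[1,0,0,1], a[1,1,0,0], a[1,1,0,1]],
               [a[0,0,1,0], a[0,0,1,1], a[0,1,1,0], a[0,1,1,1]],
               [a[1,0,1,0], a[1,0,1,1], a[1,1,1,0], a[1,1,1,1]]]).det()

# b_xy = det of the Hessian of f w.r.t. the pair (z, t), then D_xy = det(B_xy)
b_xy = sp.expand(sp.Matrix(2, 2, lambda i, j:
                           sp.diff(f, [z0, z1][i], [t0, t1][j])).det())
Bxy = sp.Matrix(3, 3, lambda m, n: b_xy.coeff(x0, 2 - m).coeff(x1, m)
                                       .coeff(y0, 2 - n).coeff(y1, n))
D_xy = Bxy.det()
\end{verbatim}
A similar transcription gives the hyperdeterminant; as a consistency check, {\tt sympy}'s built-in discriminant of the (dehomogenised) quartic $R(t)$ agrees with $S^3-27T^2$ up to a constant factor:
\begin{verbatim}
t_, c0, c1, c2, c3, c4 = sp.symbols('t_ c0 c1 c2 c3 c4')
R = c0 + 4*c1*t_ + 6*c2*t_**2 + 4*c3*t_**3 + c4*t_**4   # R(t) with t0 = 1
S = c0*c4 - 4*c1*c3 + 3*c2**2
T = c0*c2*c4 - c0*c3**2 + 2*c1*c2*c3 - c1**2*c4 - c2**3
# prints a nonzero numerical constant (256), i.e. Delta = S^3 - 27 T^2 up to scale
print(sp.simplify(sp.discriminant(R, t_) / (S**3 - 27*T**2)))
\end{verbatim}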
\subsection{Geometry} The geometric interpretations which will be given in Sections \ref{nulcone} and \ref{3sct} are based on the construction of auxiliary varieties from the variety of separable states $X=\mathbb{P}^1\times\mathbb{P}^1\times\mathbb{P}^1\times \mathbb{P}^1$. There will be two types of constructions. The first type consists of building varieties from $X$ by taking first and second order derivatives of curves of $X$. Those constructions are mostly inspired by a recent paper of Buczy\'nski and Landsberg\cite{Lan2}, where the authors provide a precise analysis of the normal forms of tensors which are limiting points of rank three tensors, i.e. points of the third secant variety (see below). We will use their terminology to name the new varieties. In the second type of construction, entangled states will be obtained as linear combinations of two states. This is the construction by joins and secants, which we already explored in our previous article\cite{HLT}. One of the most important features of quantum mechanics is the superposition principle, whose geometric counterpart is provided precisely by the auxiliary varieties discussed in this section. Indeed, it was first noticed by Brody {\em et al.}\cite{brody1} that for $\hat{x},\hat{y}\in \mathcal{H}$, the projective line $\mathbb{P}^1_{xy}\subset \mathbb{P}(\mathcal{H})$ -- the unique line in $\mathbb{P}(\mathcal{H})$ containing $[\hat{x}]$ and $[\hat{y}]$ -- represents all possible superpositions of the states $\hat{x}$, $\hat{y}\in \mathcal{H}$. It is clear that the second construction, by joins and secants, corresponds to superposition of states. But the first construction is also linked to the superposition principle because, as will be emphasized, tangent lines are limits of secant lines, i.e. states built from first and second order information arise as limits of superpositions. \subsubsection{Tangential varieties, or how to build new states from first and second order information} For $X$ a smooth projective variety, recall the definition of the tangential variety: \[\tau(X)=\bigcup_{x\in X} \tilde{T}_x X.\] The tangential variety contains the first order information in the sense that it can be recovered by taking first derivatives of curves in $X$. Consider a smooth curve $x(t)\subset X$ with $x(0)=x$; then $\widehat{x}'(0)\in \widehat{T}_x X$, i.e. $x'(0) \in \tau(X)$. On the other hand, any $v\in \tau(X)$ belongs to a tangent space $\tilde{T}_x X$ and we can take a smooth curve $x(t)$ such that $x(0)=x$ and $x'(0)=v$. This observation leads to the following alternative definition of $\tau(X)$: \[\tau(X)=\overline{\{x'(0), \text{ where } x(t)\subset X \text{ is a curve}\}}.\] \begin{ex}\rm\label{deriv} In the context of four qubits, we have $X=\mathbb{P}^1\times\mathbb{P}^1\times\mathbb{P}^1\times\mathbb{P}^1$ and a smooth curve of $X$ will be $x(t)=[e_1(t)\otimes e_2(t)\otimes e_3(t)\otimes e_4(t)]$ with $e_i(t)\in \mathbb{C}^2$. Without loss of generality we may assume, up to a change of basis, that $e_i(0)=|0\rangle$ and $e_i'(0)=|1\rangle$ (here we suppose the curve is general and the vectors $e_i$ and $e_i'$ are not collinear at $t=0$). Let us calculate $x'(t)$. The Leibniz rule gives \[\widehat{x}'(t)=e_1'(t)\otimes e_2(t)\otimes e_3(t)\otimes e_4(t)+e_1(t)\otimes e_2'(t)\otimes e_3(t)\otimes e_4(t)+e_1(t)\otimes e_2(t)\otimes e_3'(t)\otimes e_4(t)\] \[ +e_1(t)\otimes e_2(t)\otimes e_3(t)\otimes e_4'(t). \] That is, $\widehat{x}'(0)=|1000 \rangle+ |0100 \rangle+ |0010 \rangle+ |0001 \rangle$. The orbit closure $\mathbb{P}(\overline{G.\widehat{x}'(0)})$ is the tangential variety, whose smooth points are W-states.
The calculation of $x'(0)$ provides a description of the affine tangent space to $\widehat{X}$ at $\widehat{x}(0)=|0000\rangle$: \[\hat{T}_{|0000 \rangle}X=\underbrace{\mathbb{C}^2\otimes |000 \rangle}_{V_1}+ \underbrace{|0 \rangle\otimes \mathbb{C}^2\otimes |00 \rangle}_{V_2}+ \underbrace{|00 \rangle\otimes \mathbb{C}^2\otimes |0\rangle}_{V_3}+\underbrace{|000 \rangle\otimes\mathbb{C}^2}_{V_4}\] \end{ex} Following Buczy\'nski and Landsberg\cite{Lan2} we can go further and consider the variety built from second order information \[\text{Osc}(X)=\overline{\{ x'(0)+x''(0), \text{ where } x(t)\subset X \text{ is a curve}\}}\] \begin{ex}\rm\label{deriv2} In example \ref{deriv}, let us differentiate $\widehat{x}'(t)$ to obtain a general point of $\text{Osc}(\mathbb{P}^1\times\mathbb{P}^1\times\mathbb{P}^1\times\mathbb{P}^1)$, \[\hat{x}'(0)+\hat{x}''(0)=|1000 \rangle+ |0100 \rangle+ |0010 \rangle+ |0001 \rangle+|1100 \rangle+ |1010 \rangle+ |1001 \rangle+ |0110 \rangle+|0101 \rangle+ |0011 \rangle\] \end{ex} The calculation of example \ref{deriv2} allows us to determine the affine second osculating space of $X$ at $|0000\rangle$ \[\widehat{T}^{(2)}_{|0000\rangle}X=\underbrace{\mathbb{C}^2\otimes \mathbb{C}^2\otimes |00\rangle}_{W_1}+\underbrace{\mathbb{C}^2\otimes|0\rangle\otimes\mathbb{C}^2\otimes |0\rangle}_{W_2} +\underbrace{\mathbb{C}^2\otimes|00\rangle\otimes\mathbb{C}^2}_{W_3}\]\[+ \underbrace{|0\rangle\otimes\mathbb{C}^2\otimes\mathbb{C}^2\otimes |0\rangle}_{W_4}+ \underbrace{|0\rangle\otimes\mathbb{C}^2\otimes|0\rangle\otimes\mathbb{C}^2}_{W_5}+\underbrace{|00\rangle\otimes\mathbb{C}^2\otimes\mathbb{C}^2}_{W_6} \] This decomposition of the second osculating space is $P$-invariant, where $P$ is the subgroup of $G$ which stabilizes $|0000\rangle$, i.e. $P=\begin{pmatrix} * & *\\ 0 & * \end{pmatrix}\times\begin{pmatrix} * & *\\ 0 & * \end{pmatrix}\times\begin{pmatrix} * & *\\ 0 & * \end{pmatrix}\times\begin{pmatrix} * & *\\ 0 & * \end{pmatrix}$. Therefore we can define the following $G$-subvarieties: \begin{enumerate} \item $\text{Osc}_{ij..}(X)=\overline{\{ x'(0)+x''(0), \text{ where } x(t)\subset X \text{ is a curve and } \widehat{x}''(0)\in W_i+W_j+\dots\}}$, with the trivial inclusion $\text{Osc}_{J_1}(X)\subset \text{Osc}_{J_2}(X)\subset \text{Osc}(X)$ for $J_1\subset J_2$. \item $\text{Osc}'(X)=\overline{\{x''(0),\text{ where }x(t)\subset X \text{ is a curve and } \widehat{x}''(0)\in \widehat{T}^{(2)} X\}}$ with the inclusion $\text{Osc}'(X)\subset \text{Osc}(X)$. \end{enumerate} In section \ref{nulcone} we will need to consider the $6$ subvarieties $\text{Osc}_{i}(X)$ and $4$ subvarieties $\text{Osc}_{ijk}(X)$. Representatives of $\text{Osc}_J(X)$ are easily determined. For instance a representative of $\text{Osc}_1(X)$ is the state $\underbrace{|1000 \rangle+ |0100 \rangle+ |0010 \rangle+ |0001\rangle}_{\in \widehat{T}_{|0000\rangle}}+\underbrace{|1100 \rangle}_{\in W_1}$ and a representative of $\text{Osc}_{123}(X)$ is the state $\underbrace{|1000 \rangle+ |0100 \rangle+ |0010 \rangle+ |0001 \rangle}_{\in \widehat{T}_x X}+\underbrace{|1100 \rangle+ |1010 \rangle+ |1001 \rangle}_{\in W_1+W_2+W_3}$. 
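Examples \ref{deriv} and \ref{deriv2} can be reproduced verbatim in {\tt sympy} (a sketch of ours). Note that the literal second derivative carries a factor $2$ on the weight-two part; since a general curve produces arbitrary coefficients there, this factor is immaterial:
\begin{verbatim}
import sympy as sp
from sympy.physics.quantum import TensorProduct

t = sp.symbols('t')
e = sp.Matrix([1, t])               # e(0) = |0>,  e'(0) = |1>
x = TensorProduct(e, e, e, e)       # the curve x(t), 16 amplitudes
xp  = x.diff(t).subs(t, 0)          # |1000>+|0100>+|0010>+|0001>: a W state
xpp = x.diff(t).diff(t).subs(t, 0)  # twice the sum of the weight-two kets
print((xp + xpp/2).T)               # the second-order representative above
\end{verbatim}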
In Section \ref{3sct} we will meet the variety $\text{Osc}'(X)$, whose representative is, according to its definition, the state \[|1100 \rangle+ |1010 \rangle+ |1001 \rangle+ |0110 \rangle+|0101 \rangle+ |0011 \rangle\] \begin{lemma}\label{osc} For $\sharp J\leq 3$ the variety $\text{Osc}_J(X)$ is quasihomogeneous. \end{lemma} \proof Let $P=\begin{pmatrix} * & *\\ 0 & * \end{pmatrix}\times\begin{pmatrix} * & *\\ 0 & * \end{pmatrix}\times\begin{pmatrix} * & *\\ 0 & * \end{pmatrix}\times\begin{pmatrix} * & *\\ 0 & * \end{pmatrix}\subset G$ be the stabilizer of $\widehat{x}=|0000\rangle$. Let us show that $P$ acts transitively on the generic element of $\widehat{T}_xX+W_i+\dots+W_j$ if $\sharp J\leq 3$. Let us consider the case $\sharp J=3$ and assume without loss of generality that $J=\{1,2,4\}$. This case is the most generic one, as $W_1\cap W_2\cap W_4=\{|0000\rangle\}$. Then for any $\widehat{x}'+\widehat{x}''\in \widehat{T}_xX+W_1+W_2+W_4$ we have \[\widehat{x}'+\widehat{x}''=\lambda_1|0000\rangle+\lambda_2|1000\rangle+\lambda_3|0100\rangle+\lambda_4|0010\rangle +\lambda_5|0001\rangle +\lambda_6|1100\rangle+\lambda_7|1010\rangle+\lambda_8|0110\rangle\] i.e., it depends on $8$ parameters (less than $8$ if $\sharp J<3$). But $\text{dim}(P)=8$ and we check by direct calculation that $P$ acts transitively on the generic elements of $\widehat{T}_xX+W_1+W_2+W_4$. Let $z=y'+y''$ be a general point of $\text{Osc}_J(X)$; then by homogeneity of $X$ we can assume that $y'+y''\in \widehat{T}_{|0000\rangle} X+W_1+W_2+W_4$, and by the transitive action of $P$ any general point $z\in \text{Osc}_J(X)$ is in the $G$-orbit of $|1000 \rangle+ |0100 \rangle+ |0010 \rangle+ |0001 \rangle+|1100 \rangle+ |1010 \rangle+ |0110 \rangle$.$\Box$ \begin{rem}\rm A consequence of Lemma \ref{osc} is that the varieties $\text{Osc}_J(X)$ are irreducible for $\sharp J\leq 3$. In Section \ref{3sct} those varieties will be identified with more classical ones. \end{rem} \begin{rem}\label{rem_dim_osc}\rm The varieties $\text{Osc}_{ijk}(X)$ satisfying the genericity condition $W_i\cap W_j\cap W_k=\{|0000\rangle\}$ will be of maximal dimension among the varieties $\text{Osc}_J(X)$ with $\sharp J=3$. For instance it can be checked from the calculation of the tangent space that $\text{dim}(\text{Osc}_{123}(X))<\text{dim}(\text{Osc}_{124}(X))$. \end{rem} Another type of variety constructed from first order information by Buczy\'nski and Landsberg\cite{Lan2} is the variety $Z(X)$. This variety is defined as follows: \[Z(X)=\{ x'(0)+y'(0), x(t), y(t)\subset X \text{ two curves such that } \mathbb{P}^1_{x(0)y(0)}\subset X\}\] It is proved\cite{Lan2} that $Z(X)$ is a closed, not necessarily irreducible, variety. \begin{ex}\rm For the 4-qubit system, 4 components of $Z(X)$ will have to be considered. Indeed, $X=\mathbb{P}^1\times\mathbb{P}^1\times\mathbb{P}^1\times\mathbb{P}^1$ and through any point $x\in X$ there are four lines contained in $X$. Assume $\widehat{x}=|0000\rangle$; then $\mathbb{P}(\mathbb{C}^2\otimes |000\rangle)$, $\mathbb{P}(|0\rangle\otimes \mathbb{C}^2\otimes |00\rangle)$, $\mathbb{P}(|00\rangle \otimes \mathbb{C}^2\otimes |0\rangle)$ and $\mathbb{P}(|000\rangle\otimes\mathbb{C}^2)$ correspond to the four lines contained in $X$ and passing through $x$.
Let us define $Z_1(X)$ as the component of $Z(X)$ obtained from $\widehat{x}(t)=e_1(t)\otimes e_2(t)\otimes e_3(t)\otimes e_4(t)$ and $\widehat{y}(t)=f_1(t)\otimes f_2(t)\otimes f_3(t)\otimes f_4(t)$ with $e_1(0)=|0\rangle$, $f_1(0)=|1\rangle$ and $e_2(0)=e_3(0)=e_4(0)=f_2(0)=f_3(0)=f_4(0)=|0\rangle$. A representative of $Z_1(X)$ is \[\underbrace{|0100\rangle+|0001\rangle}_{\in\widehat{T}_{|0000\rangle} X} +\underbrace{|1100\rangle+|1010\rangle}_{\in \widehat{T}_{|1000\rangle} X} \] and similarly representatives for $Z_2(X), Z_3(X)$ and $Z_4(X)$ will be respectively \begin{itemize} \item $\underbrace{|1000\rangle+|0010\rangle}_{\in\widehat{T}_{|0000\rangle} X} +\underbrace{|1100\rangle+|0101\rangle}_{\in \widehat{T}_{|0100\rangle} X}$ \item $\underbrace{|1000\rangle+|0100\rangle}_{\in\widehat{T}_{|0000\rangle} X} +\underbrace{|0110\rangle+|0011\rangle}_{\in \widehat{T}_{|0010\rangle} X}$ \item $\underbrace{|1000\rangle+|0100\rangle}_{\in\widehat{T}_{|0000\rangle} X} +\underbrace{|0101\rangle+|0011\rangle}_{\in \widehat{T}_{|0001\rangle} X}$ \end{itemize} \end{ex} \begin{rem}\rm More generally\cite{Lan2} a representative of $Z_1(X)$ will be of type \[v+w=\underbrace{|1000\rangle+|0100\rangle+|0010\rangle+|0001\rangle}_{=v\in\widehat{T}_{|0000\rangle} X}+\underbrace{|1100\rangle+\alpha|1010\rangle+\beta|1001\rangle}_{w\in \widehat{T}_{|1000\rangle} X}\] where the parameters $\alpha$ and $\beta$ are introduced to avoid combinations between components of $v$ and $w$ (we can take any $\alpha$, $\beta$ such that $\alpha\neq \beta$ and $\alpha,\beta\neq 1$). One may notice in particular that $w\in W_1+W_2+W_3$. This description leads to the following observation: \[Z_1(X)=\text{Osc}_{123}(X).\] It tells us that $Z_1(X)$ will contain the varieties $\text{Osc}_1(X)$, $\text{Osc}_2(X)$ and $\text{Osc}_3(X)$, which will be confirmed in Section \ref{nulcone} when we compute the adherence graph of the nullcone. According to Remark \ref{rem_dim_osc} it also tells us that $Z_1(X)$ will be of dimension smaller than $\text{Osc}_{124}(X)$ because $W_1\cap W_2\cap W_3=\text{span}\{|0000\rangle,|1000\rangle\}$ does not satisfy the genericity condition. Similarly one obtains: \begin{itemize} \item $Z_2(X)=\text{Osc}_{145}(X)$, \item $Z_3(X)=\text{Osc}_{246}(X)$, \item $Z_4(X)=\text{Osc}_{356}(X)$. \end{itemize} Lemma \ref{osc} implies that the varieties $Z_i(X)$ are quasihomogeneous. \end{rem} \subsubsection{Join and secant varieties or how to build entangled states from the superposition of two states} The join of two varieties $X$ and $Y$ is the (Zariski) closure of the union of the secant lines with $x\in X$ and $y\in Y$: \[J(X,Y)=\overline{\bigcup_{x\in X, y\in Y, x\neq y} \mathbb{P}^{1}_{xy}}\] In particular if $Y=X$ the join $J(X,X)$ is called the secant variety of $X$ and will be denoted by $\sigma_2(X)$. The secant variety of $X$ is the closure of the set of secant lines of $X$. \begin{rem}\rm For $X=\mathbb{P}^1\times\mathbb{P}^1\times\mathbb{P}^1\times \mathbb{P}^1$, a generic point $x\in \sigma_2(X)$ will be a sum $x=y+z$ of a generic pair of points of $X$. Under the action of $G$ we can choose this generic pair $(\widehat{y},\widehat{z})$ to be $(|0000\rangle,|1111\rangle)$. In other words the orbit $\mathbb{P}(G.(|0000\rangle+|1111\rangle))$ is a dense open orbit in $\sigma_2(X)$. It is clear from the representative $\widehat{x}=|0000\rangle+|1111\rangle$ that this open set is the set of GHZ entangled states. \end{rem}
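The statement that tangent lines are limits of secant lines can also be checked directly (a {\tt sympy} sketch of ours): renormalised chords of $X$ through $|0000\rangle$ converge to the W state of Example \ref{deriv}:
\begin{verbatim}
import sympy as sp
from sympy.physics.quantum import TensorProduct

t = sp.symbols('t')
e = sp.Matrix([1, t])
x = TensorProduct(e, e, e, e)                  # x(t) in X, x(0) = |0000>
chord = (x - x.subs(t, 0)) / t                 # spans the secant through x(0), x(t)
print(chord.applyfunc(lambda u: sp.limit(u, t, 0)).T)   # the W state again
\end{verbatim}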
Going further we can consider the join of $X$ and $\sigma_2(X)$, and inductively we obtain the definition of the $s$-secant variety of $X$ as the join of $X$ and $\sigma_{s-1}(X)$. It is not difficult to check that the variety $\sigma_s(X)$ is indeed the closure of the union of the linear spans of $s$-tuples of points of $X$: \[\sigma_s(X)=J(X,\sigma_{s-1}(X))=\overline{\bigcup_{x_1,\dots,x_s\in X} \mathbb{P}^{s-1}_{x_1\dots x_s}}\] where $\mathbb{P}^{s-1}_{x_1\dots x_s}$ is a projective space of dimension $s-1$ passing through $x_1,\dots, x_s$. In the case of Segre products there is a definition of {\em subsecant varieties}, first introduced in our previous article\cite{HLT}: \begin{definition}\label{jpair} Let $Y_i\subset\mathbb{P}^{n_i}$, with $1\leq i\leq m$, be $m$ nondegenerate varieties and let us consider $X=Y_1\times Y_2\times\dots\times Y_m\subset \mathbb{P}^{(n_1+1)(n_2+1)\dots(n_m+1)-1}$ the corresponding Segre product. For $J=\{j_1,\dots,j_k\}\subset \{1,\dots,m\}$, a $J$-pair of points of $X$ will be a pair $(x,y)\in X\times X$ such that $x=[v_1\otimes v_2\otimes\dots\otimes{v_{j_1}}\otimes v_{j_1+1}\otimes\dots\otimes{v_{j_2}}\otimes\dots\otimes{v_{j_k}}\otimes\dots\otimes v_{m}]$ and $y=[w_1\otimes w_2\otimes\dots\otimes {v_{j_1}}\otimes w_{j_1+1}\otimes\dots\otimes {v_{j_2}}\otimes\dots\otimes{v_{j_k}}\otimes\dots\otimes w_{m}]$, i.e. the tensors $\widehat{x}$ and $\widehat{y}$ have the same components for the indices in $J$. The $J$-subsecant variety of $\sigma_2(X)$, denoted by $\sigma_2(Y_1\times\dots \times\underline{Y}_{j_1}\times \dots\times \underline{Y}_{j_k}\times \dots\times Y_m)\times Y_{j_1}\times Y_{j_2}\times\dots\times Y_{j_k}$, is the closure of the union of lines $\mathbb{P}^1_{xy}$ with $(x,y)$ a $J$-pair of points: \[\sigma_2(Y_1\times\dots \times\underline{Y}_{j_1}\times \dots\times \underline{Y}_{j_k}\times \dots\times Y_m)\times Y_{j_1}\times Y_{j_2}\times\dots\times Y_{j_k} =\overline{\displaystyle\bigcup_{(x,y)\in X\times X, (x,y) J-\text{pair of points}} \mathbb{P}^1_{xy}}\] \end{definition} \begin{rem}\rm The underlined varieties in the notation of the $J$-subsecant varieties correspond to the common components of the points which define a $J$-pair. Roughly speaking those components are the ``common factor'' of $x$ and $y$ in the decomposition of $z=x+y\in \sigma_2(Y_1\times\dots \times\underline{Y}_{j_1}\times \dots\times \underline{Y}_{j_k}\times \dots\times Y_m)\times Y_{j_1}\times Y_{j_2}\times\dots\times Y_{j_k}$. For instance when we consider the $\{1\}$-subsecant (respectively the $\{m\}$-subsecant) variety we can indeed factorize the first (respectively the last) component and we have the equality $\sigma_2(\underline{Y}_1\times Y_2\times\dots \times Y_m)\times Y_1=Y_1\times \sigma_2(Y_2\times\dots\times Y_m)$. \end{rem} \begin{rem}\rm For $J=\emptyset$, the $J$-subsecant variety is $\sigma_2(X)$. \end{rem} \begin{rem}\rm As we will see, the subsecant varieties will correspond to partially entangled states. \end{rem} Suppose $Y\subset X$. A general point of $J(Y,X)$ will be a sum of two states $z=[\widehat{x}+\widehat{y}]$ with $x\in X$ and $y\in Y$. However, other points may belong to $J(X,Y)$. Let us assume $x(t)\subset X$, $y(t)\subset Y$ and $x(t),y(t)\to y_0\in Y$. The projective line $\mathbb{P}^1_{x(t)y(t)}$ is a secant line for all $t\neq 0$ and the limiting line $\mathbb{P}^1 _*=\lim_{t\to 0} \mathbb{P}^1_{x(t)y(t)}$ is in $J(Y,X)$.
The union of the $\mathbb{P}^1 _*$ is called the projective tangent star to $X$ with respect to $Y$ at the point $y_0$, and is denoted by $T^\star _{X,Y,y_0}$. The union of the tangent stars to $X$ with respect to $Y$ is an algebraic variety, called the variety of relative tangent stars\cite{Z2} of $X$ with respect to $Y$: \[T(Y,X)=\bigcup_{y\in Y} T^\star _{X,Y,y}\] In particular for $Y=X$ we have $T(X,X)=\tau(X)$, the tangential variety. The expected dimension of $J(Y,X)$ is equal to $\text{dim} (X)+\text{dim}(Y)+1$ (${\text{dim}(X)}$ degrees of freedom to choose $x\in X$, ${\text{dim}(Y)}$ degrees of freedom to choose $y\in Y$ and $1$ degree of freedom to choose $z\in \mathbb{P}^1_{xy}$). An important consequence of the Fulton-Hansen connectedness theorem, proved by Zak\cite{Z2}, ensures that if $J(X,Y)$ has the expected dimension then $T(Y,X)$ is of dimension $\text{dim}(J(Y,X))-1$, and if $J(X,Y)$ is of dimension less than expected then $J(Y,X)=T(Y,X)$. \begin{ex}\label{WGHZ}\rm For any multipartite system $(n_1+1)\times (n_2+1)\times\dots\times (n_k+1)$ with $k\geq 3$, we have $\text{dim}(\sigma_2(X))=1+2\sum_{i=1} ^k n_i$, i.e. the secant variety is of the expected dimension and therefore $\tau(X)\subsetneq \sigma_2(X)$. It means that for tripartite and higher qudit systems the GHZ and W types always exist and the states belonging to the W type are on limiting lines of states of type GHZ. Moreover the rank of the tensor corresponding to the W state is greater than the rank of the tensor of the GHZ state. This is another way to say that multipartite systems with $k\geq 3$ always contain exceptional states\cite{ST}. \end{ex} The relation between tangential and join varieties will be illustrated in Section \ref{3sct}. \section{The nullcone}\label{nulcone} In this section we investigate the geometry of the nullcone. The nullcone $\widehat{\mathcal{N}}\subset \mathcal H$ is defined as the set of states which annihilate all invariant polynomials. The ring of invariant polynomials being finitely generated, we have \[\widehat{\mathcal N}=\{ |\Psi\rangle\in \mathcal{H}, B(|\Psi\rangle)=L(|\Psi\rangle)=M(|\Psi\rangle)=D_{xy}(|\Psi\rangle)=0\}\] As usual $\mathcal{N}\subset\mathbb{P}(\mathcal H)$ will denote the projectivization of the nullcone. We apply Method \ref{Meth} by first running our algorithm and then establishing the geometric interpretations. \subsection{Computing the adherence graph}\label{nulcone_inv} We consider the set $\mathcal F=\{\alpha\in \mathcal E: B[\alpha]=L[\alpha]=M[\alpha]=D_{xy}[\alpha]=0\}.$ Now the sets $\mathcal F$ and $\mathcal B$ being chosen, we let a computer algebra system find the classes, a representative for each class, and the inclusion graph. Applied to the $11662$ forms of $\mathcal F$, we find $31$ classes whose representatives are $$\begin{array}{l}\{0,65535, 65520, 65484, 65450, 64764, 64250, 61166, 64160, 61064, 64704, 59624 , 59520, 65530,\\ 65518, 65532, 65278, 64700, 65041, 65075, 61158, 65109, 64218, 65508, 64762, 65506, 65482,\\65511, 65218, 65271, 65247\},\end{array}$$ and obtain the inclusion graph of Fig. \ref{FOrbNull}.
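These counts can be reproduced from the four invariant generators alone. Continuing the {\tt sympy} session of Section \ref{tools} (a sketch of ours; expect a minute or two of pure-Python computation), filtering the $2^{16}$ forms of $\mathcal E$ by $B=L=M=D_{xy}=0$ should recover the $11662$ nilpotent forms, while separating them into the $31$ classes additionally requires the covariant basis $\mathcal B$ of appendix \ref{AppCov}:
\begin{verbatim}
# continuing the session where a, idx, B, L, M, D_xy are defined
import sympy as sp

subs_order = [a[i] for i in idx]
invs = [sp.lambdify(subs_order, P, 'math') for P in (B, L, M, D_xy)]

def amplitudes(n):
    # decimal encoding of Section 2.1: alpha_{i1 i2 i3 i4} is bit i1+2*i2+4*i3+8*i4
    return [(n >> (i1 + 2*i2 + 4*i3 + 8*i4)) & 1
            for (i1, i2, i3, i4) in idx]

# sanity checks: the W state is form 278 and is nilpotent; GHZ is form 32769
print(all(f(*amplitudes(278)) == 0 for f in invs))    # True
print([f(*amplitudes(32769)) for f in invs])          # [1, 0, 0, 0]: B != 0

nilpotent = [n for n in range(2**16)
             if all(f(*amplitudes(n)) == 0 for f in invs)]
print(len(nilpotent))       # expected: 11662
print(59520 in nilpotent)   # True: the tau(X) representative
\end{verbatim}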
In fact, we do not need all the covariants, and we can summarize the results by evaluating the following covariants on the representatives (see appendix \ref{NulCT}): \[T:= \begin{array}{|c|} \hline A\\ \hline B_{2200},B_{2020},B_{2002},B_{0220},B_{0202},B_{0022}\\\hline C_{3111},C_{1311} ,C_{1131}, C_{1113}\\\hline D_{4000},D_{0400},D_{0040},D_{0004}\\\hline D_{2200},D_{2020},D_{2002},D_{0220},D_{0202},D_{0022}\\\hline F_{2220}^1,F_{2202}^1,F_{2022}^1,F_{0222}^1\\\hline L_{6000},L_{0600},L_{0060},L_{0006}\\\hline \end{array} \] Fig. \ref{FOrbNull} is directly deduced from appendix \ref{NulCT}. It is interesting to remark that in this case we obtain the classification without the help of the geometry (we will see in subsection \ref{GeoNull} why the classification is complete). The results contained in appendix \ref{NulCT} thus provide an algorithm to decide to which orbit a given form belongs. \\ \begin{algo}\label{AlgoNulCone} Compute the orbit of a nilpotent form $|\Psi\rangle$.\\ {\tt Input:} A state $|\Psi\rangle$\\ {\tt Output:} The name of the orbit (according to Fig. \ref{FOrbNull}) or {\tt FAIL} if $|\Psi\rangle$ is not nilpotent.\\ {\tt If } $|\Psi\rangle$ is nilpotent {\tt then}\\ {\color{white}......} compute $T\left[|\Psi\rangle\right]$ and compare it to Appendix B\\ {\tt else} {\tt FAIL} \end{algo} Observing Fig. \ref{FOrbNull}, we can define $9$ strata of representatives whose elements have symmetric behaviours \emph{w.r.t.} their evaluations:\\ $Gr_0:=\{0\}$,\\ $Gr_1:=\{65535\}$,\\ $Gr_2:=\{65520,65484,65450,64764,64250,61166\}$,\\ $Gr_3:=\{64160,61064,64704,59624\}$,\\ $Gr_4:=\{65530,65518,65532,65278\}$,\\ $Gr_5:=\{59520\}$,\\ $Gr_6:=\{64700,65041,65075,61158,65109,64218\}$,\\ $Gr_7:=\{65508,64762,65506,65482\}$,\\ $Gr_8:=\{65511,65218,65271,65247\}$.\\ Consider the following polynomials:\\ $P_B:=B_{2200}+B_{2020}+B_{2002}+B_{0220}+B_{0202}+B_{0022},$\\ $P_C^1:=C_{3111}+C_{1311}+C_{1131}+C_{1113}$,\\ $P_C^2:=C_{3111}C_{1311}C_{1131}C_{1113}$,\\ $P_D^1:=D_{4000}+D_{0400}+D_{0040}+D_{0004}$,\\ $P_D^2:=D_{2200}+D_{2020}+D_{2002}+D_{0220}+D_{0202}+D_{0022}$,\\ $P_F:=F_{2220}^1+F_{2202}^1+F_{2022}^1+F_{0222}^1$,\\ $P_L:=L_{6000}+L_{0600}+L_{0060}+L_{0006}$.\\ We can decide to which stratum a given form belongs by evaluating the vector: \[ V:=[A,P_B,P_C^1,P_C^2,P_D^1,P_D^2,P_F,P_L]. \] The results are summarized in Table \ref{TNullStra}, where $1$ means that the covariant does not vanish.
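In practice this is a straightforward table lookup. The following Python skeleton (ours; the covariant combinations $A,P_B,\dots,P_L$ themselves have to be built from the generators of appendix \ref{AppCov}, which we do not reproduce here) encodes Table \ref{TNullStra}:
\begin{verbatim}
# the strata table above as a lookup; covariants are assumed given as callables
STRATA = {
    (0, 0, 0, 0, 0, 0, 0, 0): 'Gr_0', (1, 0, 0, 0, 0, 0, 0, 0): 'Gr_1',
    (1, 1, 0, 0, 0, 0, 0, 0): 'Gr_2', (1, 1, 1, 0, 0, 0, 0, 0): 'Gr_3',
    (1, 1, 1, 0, 1, 0, 0, 0): 'Gr_4', (1, 1, 1, 1, 0, 0, 0, 0): 'Gr_5',
    (1, 1, 1, 1, 1, 1, 0, 0): 'Gr_6', (1, 1, 1, 1, 1, 1, 1, 0): 'Gr_7',
    (1, 1, 1, 1, 1, 1, 1, 1): 'Gr_8',
}

def stratum(psi, covariants):
    """covariants: callables [A, P_B, P_C1, P_C2, P_D1, P_D2, P_F, P_L]."""
    V = tuple(int(cov(psi) != 0) for cov in covariants)
    return STRATA.get(V)   # None means psi is not a point of the nullcone
\end{verbatim}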
\begin{table}[h] \[ \begin{array}{|c|c|} \hline Gr_0&[0, 0, 0, 0, 0, 0, 0, 0]\\ Gr_1&[1, 0, 0, 0, 0, 0, 0, 0]\\ Gr_2&[1, 1, 0, 0, 0, 0, 0, 0]\\ Gr_3&[1, 1, 1, 0, 0, 0, 0, 0]\\ Gr_4&[1, 1, 1, 0, 1, 0, 0, 0]\\ Gr_5&[1, 1, 1, 1, 0, 0, 0, 0]\\ Gr_6&[1, 1, 1, 1, 1, 1, 0, 0]\\ Gr_7&[1, 1, 1, 1, 1, 1, 1, 0]\\ Gr_8&[1, 1, 1, 1, 1, 1, 1, 1] \\\hline \end{array} \] \caption{The values of $V[\alpha]$ on each strata of the nullcone \label{TNullStra}} \end{table} \begin{figure}[h] \begin{tikzpicture} \matrix (mat) [matrix of nodes,ampersand replacement=\&, row sep=35pt,column sep=20pt, left delimiter={.}, right delimiter={.}, nodes={minimum height=0.5cm,minimum width=1.5cm }]{ \tiny \bf Genuine entanglement \& 65511 \& 65218 \& \& 65271 \& 65247\\ \& 65508 \& 64762 \& \& 65506 \& 65482\\ 64700 \& 65041 \& 65075 \& \& 61158 \& 65109 \& 64218\\ \\ \& \& \& 59520 \& \& \& \\\hline \tiny \bf Partial entanglement \& 65530 \& 65518 \& \& 65532 \& 65278\\ \& 64160 \& 61064 \& \& 64704 \& 59624\\ 65520 \& 65484 \& 65450 \& \& 64764 \& 64250 \& 61166\\\hline \tiny\bf Unentangled states \& \& \& 65535 \& \& \& \\ \& \& \& 0 \& \& \& \\}; \node (0) [fill=gray,opacity=0.1,rectangle,rounded corners, inner sep=0pt, fit= (mat-10-4) (mat-10-4)] {}; \node (65535) [fill=gray,opacity=0.1,rectangle,rounded corners, inner sep=0pt, fit= (mat-9-4) (mat-9-4)] {}; \node (65520) [rectangle,rounded corners, inner sep=0pt, fit= (mat-8-1) (mat-8-1)] {}; \node (65484) [rectangle,rounded corners, inner sep=0pt, fit= (mat-8-2) (mat-8-2)] {}; \node (65450) [rectangle,rounded corners, inner sep=0pt, fit= (mat-8-3) (mat-8-3)] {}; \node (64764) [rectangle,rounded corners, inner sep=0pt, fit= (mat-8-5) (mat-8-5)] {}; \node (64250) [rectangle,rounded corners, inner sep=0pt, fit= (mat-8-6) (mat-8-6)] {}; \node (61160) [rectangle,rounded corners, inner sep=0pt, fit= (mat-8-7) (mat-8-7)] {}; \node (59520)[rectangle,rounded corners, fill=gray,opacity=0.1,inner sep=0pt, fit= (mat-5-4) (mat-5-4)] {}; \node (64160) [rectangle,rounded corners, inner sep=0pt, fit= (mat-7-2) (mat-7-2)] {}; \node (61064) [rectangle,rounded corners, inner sep=0pt, fit= (mat-7-3) (mat-7-3)] {}; \node (64704) [rectangle,rounded corners, inner sep=0pt, fit= (mat-7-5) (mat-7-5)] {}; \node (59624) [rectangle,rounded corners, inner sep=0pt, fit= (mat-7-6) (mat-7-6)] {}; \node (65530) [rectangle,rounded corners, inner sep=0pt, fit= (mat-6-2) (mat-6-2)] {}; \node (65518) [rectangle,rounded corners, inner sep=0pt, fit= (mat-6-3) (mat-6-3)] {}; \node (65532) [rectangle,rounded corners, inner sep=0pt, fit= (mat-6-5) (mat-6-5)] {}; \node (65278) [rectangle,rounded corners, inner sep=0pt, fit= (mat-6-6) (mat-6-6)] {}; \node (64700) [rectangle,rounded corners, inner sep=0pt, fit= (mat-3-1) (mat-3-1)] {}; \node (65041) [rectangle,rounded corners, inner sep=0pt, fit= (mat-3-2) (mat-3-2)] {}; \node (65075) [rectangle,rounded corners, inner sep=0pt, fit= (mat-3-3) (mat-3-3)] {}; \node (61158) [rectangle,rounded corners, inner sep=0pt, fit= (mat-3-5) (mat-3-5)] {}; \node (65109) [rectangle,rounded corners, inner sep=0pt, fit= (mat-3-6) (mat-3-6)] {}; \node (64218) [rectangle,rounded corners, inner sep=0pt, fit= (mat-3-7) (mat-3-7)] {}; \node (65508) [rectangle,rounded corners, inner sep=0pt, fit= (mat-2-2) (mat-2-2)] {}; \node (64720) [rectangle,rounded corners, inner sep=0pt, fit= (mat-2-3) (mat-2-3)] {}; \node (65506) [rectangle,rounded corners, inner sep=0pt, fit= (mat-2-5) (mat-2-5)] {}; \node (65482) [rectangle,rounded corners, inner sep=0pt, fit= (mat-2-6) (mat-2-6)] {}; \node 
(65511) [rectangle,rounded corners, inner sep=0pt, fit= (mat-1-2) (mat-1-2)] {}; \node (65218) [rectangle,rounded corners, inner sep=0pt, fit= (mat-1-3) (mat-1-3)] {}; \node (65271) [rectangle,rounded corners, inner sep=0pt, fit= (mat-1-5) (mat-1-5)] {}; \node (65247) [rectangle,rounded corners, inner sep=0pt, fit= (mat-1-6) (mat-1-6)] {}; \node (rect M2) [rectangle,rounded corners,fill=gray,opacity=0.1, inner sep=0pt, fit= (mat-8-1) (mat-8-7)] {}; \node (rect M3) [rectangle,rounded corners,fill=gray,opacity=0.1, inner sep=0pt, fit= (mat-7-2) (mat-7-6)] {}; \node (rect M5) [rectangle,rounded corners,fill=gray,opacity=0.1, inner sep=0pt, fit= (mat-6-2) (mat-6-6)] {}; \node (rect M6) [rectangle,rounded corners,fill=gray,opacity=0.1, inner sep=0pt, fit= (mat-3-1) (mat-3-7)] {}; \node (rect M7) [rectangle,rounded corners,fill=gray,opacity=0.1, inner sep=0pt, fit= (mat-2-2) (mat-2-6)] {}; \node (rect M8) [rectangle,rounded corners,fill=gray,opacity=0.1, inner sep=0pt, fit= (mat-1-2) (mat-1-6)] {}; \draw (0) -- (65535); \draw (65535)--(65520); \draw (65535)--(65484); \draw (65535)--(65450); \draw (65535)--(64764); \draw (65535)--(64250); \draw (65535)--(61160); \draw (65520) -- (64160); \draw (65520) -- (64704); \draw (65484) -- (61064); \draw (65484) -- (64704); \draw (65450) -- (61064); \draw (65450) -- (64160); \draw (64764) -- (64704); \draw (64764) -- (59624); \draw (61160) -- (59624); \draw (61160) -- (61064); \draw (64250) -- (59624); \draw (64250) -- (64160); \draw (64160) -- (59520); \draw (61064) -- (59520); \draw (64704) -- (59520); \draw (59624) -- (59520); \draw (64160) -- (65530); \draw (61064) -- (65518); \draw (64704) -- (65532); \draw (59624) -- (65278); \draw (65530)--(64700); \draw (65530)--(65075); \draw (65530)--(61158); \draw (65518)--(64700); \draw (65518)--(65041); \draw (65518)--(64218); \draw (59520)--(64700); \draw (59520)--(65041); \draw (59520)--(65075); \draw (59520)--(61158); \draw (59520)--(65109); \draw (59520)--(64218); \draw (65532)--(61158); \draw (65532)--(65109); \draw (65532)--(64218); \draw (65278)--(65041); \draw (65278)--(65075); \draw (65278)--(65109); \draw (64700)--(65508); \draw (64700)--(64720); \draw (65041)--(65508); \draw (65041)--(65506); \draw (65075)--(65508); \draw (65075)--(65482); \draw (61158)--(64720); \draw (61158)--(65482); \draw (65109)--(65506); \draw (65109)--(65482); \draw (64218)--(64720); \draw (64218)--(65506); \draw (65508)--(65511); \draw (65508)--(65218);\draw (65508)--(65271);\draw (65508)--(65247); \draw (64720)--(65511); \draw (64720)--(65218);\draw (64720)--(65271);\draw (64720)--(65247); \draw (65506)--(65511); \draw (65506)--(65218);\draw (65506)--(65271);\draw (65506)--(65247); \draw (65482)--(65511); \draw (65482)--(65218);\draw (65482)--(65271);\draw (65482)--(65247); \node [left=20pt] at (0) {$Gr_0$}; \node [left=20pt] at (65535) {$Gr_1$}; \node [left=20pt] at (65520) {$Gr_2$}; \node [left=20pt] at (64160) {$Gr_3$}; \node [left=20pt] at (59520) {$Gr_5$}; \node [left=20pt] at (65530) {$Gr_4$}; \node [left=20pt] at (64700) {$Gr_6$}; \node [left=20pt] at (65508) {$Gr_7$}; \node [left=20pt] at (65511) {$Gr_8$}; \end{tikzpicture} \caption{Varieties of the nullcone \label{FOrbNull}} \end{figure} \subsection{The geometry of the nullcone\label{GeoNull}} We now translate the previous calculations in a geometric description of the $G$-orbits of the projectivized nullcone $\mathcal{N}\subset \mathbb{P}^{15}$. 
\begin{theorem}\label{thnulcone} The nullcone $\mathcal{N}^{11}\subset \mathbb{P}^{15}$ is the union of 4 irreducible algebraic varieties of dimension $11$ and contains $30$ non-equivalent classes of (entangled) states.\footnote{The set of separable states as well as sets of partially entangled states are part of the orbits.} All of those classes are algebraic varieties which can be built up by geometric constructions from the set of separable states $X=\mathbb{P}^1\times\mathbb{P}^1\times\mathbb{P}^1\times\mathbb{P}^1\subset \mathbb{P}^{15}$. The identifications of those algebraic varieties are given in Table \ref{table_nul_cone1} (genuine entangled states) and Table \ref{table_nul_cone} (partially entangled states). The stratification of the nullcone by those $G$-varieties is the adherence graph of Figure \ref{FOrbNull} (without the trivial orbit) and it is sketched in geometric terms in Figure \ref{figure_nul_cone}. \end{theorem} \proof The action of $G=SL_2\times SL_2\times SL_2\times SL_2$ on $\mathbb{P}(\mathcal{H})$ has infinitely many orbits, but it is known from Kac's work\cite{Kac} that $G$ acts with finitely many orbits on $\mathcal{N}$. More precisely, according to the Kostant-Sekiguchi theorem\cite{DLS} there are 30 $G$-orbits in $\mathcal{N}$ ($31$ in $\widehat{\mathcal{N}}$ when adding the trivial orbit). Therefore the orbits identified by the calculation of Section \ref{nulcone_inv} exhaust all orbits of the nullcone. To identify the orbit closures calculated in Section \ref{nulcone_inv} with the varieties of Tables \ref{table_nul_cone1} and \ref{table_nul_cone}, we first check with the algorithm that the representative of a variety is a point of the corresponding orbit, so that the $G$-orbit of the representative is contained in the given orbit. But all varieties involved in Tables \ref{table_nul_cone1} and \ref{table_nul_cone} are quasihomogeneous (this is proved for $\text{Osc}_{ijk}(X)$ in Section \ref{tools} and it is well known for the other orbits\cite{HLT}). Then comparing dimensions proves the equality between the varieties and the orbit closures. $\Box$ \begin{figure}[!h] \[\xymatrix{& & \text{Osc}_{124}(X)\ar@{^{}-}[d]&& 4 \text{ orbits } (Gr_8) \\ && Z_3(X)\ar@{^{}-}[d] & &4 \text{ orbits } (Gr_7) \\ & & \text{Osc}_{3}(X)\ar@{^{}-}[dr]\ar@{^{}-}[dl]& &6 \text{ orbits } (Gr_6)\\ 4 \text{ orbits } (Gr_4) & \mathbb{P}^1\times \mathbb{P}^7 \ar@{^{}-}[dr] & & \tau(X)\ar@{^{}-}[dl]& 1 \text{ orbit } (Gr_5) \\ & & \mathbb{P}^1\times \tau(\mathbb{P}^1\times\mathbb{P}^1\times \mathbb{P}^1)\ar@{^{}-}[d]& &4 \text{ orbits } (Gr_3) \\ & & \mathbb{P}^1\times \mathbb{P}^1\times \mathbb{P}^3 & &6 \text{ orbits } (Gr_2)\\ & & X \ar@{^{}-}[u] & &1 \text{ orbit } (Gr_1) }\] \caption{Sketch of the stratification of the nullcone by $G$-algebraic varieties.
Only one variety is given for each group $Gr_i$, other representatives of each group correspond to permutations.}\label{figure_nul_cone} \end{figure} \begin{table}[!h] \begin{center} \begin{tabular}{|c|c|c|c|} \hline Name & Variety (orbit closure) & Normal form & dim \\ \hline $65247$ & $\text{Osc}_{135}(X)$&$|0001\rangle+|0010\rangle+|0100\rangle+|1000\rangle+|1100\rangle+|1001\rangle+|0101\rangle$ & $11$\\ $65271$ & $\text{Osc}_{236}(X)$&$|0001\rangle+|0010\rangle+|0100\rangle+|1000\rangle+|1010\rangle+|1001\rangle+|0011\rangle$ &$11$ \\ $65218$ & $\text{Osc}_{456}(X)$&$|0001\rangle+|0010\rangle+|0100\rangle+|1000\rangle+|0110\rangle+|0101\rangle+|0011\rangle$ &$11$ \\ $65511$ & $\text{Osc}_{124}(X)$&$|0001\rangle+|0010\rangle+|0100\rangle+|1000\rangle+|1100\rangle+|1010\rangle+|0110\rangle$ & $11$\\ \hline $65482$ & $Z_3(X)$ & $|1000\rangle+|0100\rangle+|0110\rangle+|0011\rangle$ & $10$\\ $65506$ & $Z_2(X)$ & $|1000\rangle+|0010\rangle+|1100\rangle+|0101\rangle$ & $10$\\ $64762$ & $Z_4(X)$ &$|1000\rangle+|0100\rangle+|0101\rangle+|0011\rangle$ & $10$\\ $65508$ & $Z_1(X)$ & $|0100\rangle+|0001\rangle+|1100\rangle+|1010\rangle$ & $10$\\ \hline $64218$ & $\text{Osc}_{5}(X)$ & $|0000\rangle+|1000\rangle+|0100\rangle+|0010\rangle+|0001\rangle+|0101\rangle$ & $9$\\ $65109$ & $\text{Osc}_{4}(X)$ & $|0000\rangle+|1000\rangle+|0100\rangle+|0010\rangle+|0001\rangle+|0110\rangle$ & $9$\\ $61158$ & $\text{Osc}_{6}(X)$ & $|0000\rangle+|1000\rangle+|0100\rangle+|0010\rangle+|0001\rangle+|0011\rangle$ & $9$\\ $65075$ & $\text{Osc}_{2}(X)$ & $|0000\rangle+|1000\rangle+|0100\rangle+|0010\rangle+|0001\rangle+|1010\rangle$ & $9$\\ $65041$ & $\text{Osc}_{1}(X)$ & $|0000\rangle+|1000\rangle+|0100\rangle+|0010\rangle+|0001\rangle+|1100\rangle$& $9$\\ $64700$ & $\text{Osc}_{3}(X)$ & $|0000\rangle+|1000\rangle+|0100\rangle+|0010\rangle+|0001\rangle+|1001\rangle$ & $9$\\ \hline $59520$ & $\tau(\mathbb{P}^1\times\mathbb{P}^1\times\mathbb{P}^1\times\mathbb{P}^1)$ & $|0001\rangle+|0010\rangle+|0100\rangle+|1000\rangle$ & $8$\\ \hline \end{tabular} \caption{Genuine entangled states ($G$-orbits) of the nullcone, their geometric identifications (varieties), their representatives and the dimensions of the varieties.}\label{table_nul_cone1} \end{center} \end{table} \begin{table} \begin{center} \begin{tabular}{|c|c|c|c|} \hline Name & Variety (orbit closure) & Normal form & dim \\ \hline $65278$ & $\mathbb{P}^7\times\mathbb{P}^1$ & $|0000\rangle+|1110\rangle$ & $8$\\ $65532$ & $\mathbb{P}^1\times\mathbb{P}^7$ & $|0000\rangle+|0111\rangle$ & $8$\\ $65518$ & $\sigma(\mathbb{P}^1\times\mathbb{P}^1\times\underline{\mathbb{P}^1}\times\mathbb{P}^1)\times\mathbb{P}^1$&$|0000\rangle+|1101\rangle$ & $8$\\ $65530$ & $\sigma(\mathbb{P}^1\times\underline{\mathbb{P}^1}\times\mathbb{P}^1\times\mathbb{P}^1)\times\mathbb{P}^1$&$|0000\rangle+|1011\rangle$ & $8$\\ \hline $59624$ & $\tau(\mathbb{P}^1\times\mathbb{P}^1\times\mathbb{P}^1)\times\mathbb{P}^1$ & $|0110\rangle+|1010\rangle+|1100\rangle$ & $7$\\ $64704$ & $\mathbb{P}^1\times\tau(\mathbb{P}^1\times\mathbb{P}^1\times\mathbb{P}^1)$ & $|0011\rangle+|0101\rangle+|0110\rangle$ & $7$\\ $61064$ & $\tau(\mathbb{P}^1\times\mathbb{P}^1\times\underline{\mathbb{P}^1}\times\mathbb{P}^1)\times\mathbb{P}^1$ & $|0101\rangle+|1001\rangle+|1100\rangle$ & $7$\\ $64160$ & $\tau(\mathbb{P}^1\times\underline{\mathbb{P}^1}\times\mathbb{P}^1\times\mathbb{P}^1)\times\mathbb{P}^1$ & $|0011\rangle+|1001\rangle+|1010\rangle$ & $7$\\ \hline $61166$ &
$\sigma(\mathbb{P}^1\times\mathbb{P}^1)\times\mathbb{P}^1\times\mathbb{P}^1$ & $|0000\rangle+|1100\rangle$ & $5$\\ $64250$ & $\sigma(\mathbb{P}^1\times\underline{\mathbb{P}^1}\times\mathbb{P}^1\times\underline{\mathbb{P}^1})\times\mathbb{P}^1\times\mathbb{P}^1$ & $|0000\rangle+|1010\rangle$ & $5$\\ $64764$ & $\mathbb{P}^1\times\sigma(\mathbb{P}^1\times\mathbb{P}^1)\times\mathbb{P}^1$ & $|0000\rangle+|0110\rangle$ & $5$\\ $65450$ & $\sigma(\mathbb{P}^1\times\underline{\mathbb{P}^1}\times\underline{\mathbb{P}^1}\times\mathbb{P}^1)\times\mathbb{P}^1\times\mathbb{P}^1$ & $|0000\rangle+|1001\rangle$ & $5$\\ $65484$ & $\mathbb{P}^1\times \sigma(\mathbb{P}^1\times\underline{\mathbb{P}^1}\times\mathbb{P}^1)\times\mathbb{P}^1$ & $|0000\rangle+|0101\rangle$ & $5$\\ $65520$ & $\mathbb{P}^1\times\mathbb{P}^1\times \sigma(\mathbb{P}^1\times\mathbb{P}^1)$ & $|0000\rangle+|0011\rangle$ & $5$\\ \hline $65535$ & $\mathbb{P}^1\times\mathbb{P}^1\times\mathbb{P}^1\times \mathbb{P}^1$ &$|0000\rangle$& $4$\\ \hline \end{tabular} \caption{Partially entangled states ($G$-orbits) of the nullcone, their geometric identifications (varieties), their representatives and the dimensions of the varieties.}\label{table_nul_cone} \end{center} \end{table} \section{The third secant variety}\label{3sct} In this section we look at four-qubit systems which belong to $\sigma_3(\mathbb{P}^1\times\mathbb{P}^1\times\mathbb{P}^1\times\mathbb{P}^1)$, the third secant variety of the set of separable states. The third secant variety of the Segre product of four projective lines is an algebraic variety of dimension\cite{CGG} $13$ defined by the vanishing of two invariant polynomials\cite{CD}: \[\sigma_3(\mathbb{P}^1\times\mathbb{P}^1\times\mathbb{P}^1\times\mathbb{P}^1)=\{[|\Psi\rangle]\in\mathbb{P}^{15}, L(|\Psi\rangle)=M(|\Psi\rangle)=0\}\] It is also the projectivized set of tensors which are limits of tensors of rank three\cite{Lan3}. \subsection{Computing the adherence graph}\label{comput_sec} In this section the results obtained by our method depend on the choice of the invariant polynomials. The fact that $L$ and $M$ carry geometrical information (they define the third secant variety) plays a particular role which will facilitate the geometric identifications. Looking with a computer algebra system for the classes of the set $\mathcal F=\{\alpha\in \mathcal E: L(\alpha)=M(\alpha)=0\}$, we find $17$ new orbits. This example illustrates the fact that our method is not really an algorithm, since the interest of the results depends on the choice of the covariants. Here, it is better to use the invariants $L$ and $M$ instead of the invariants $D^1_{0000}$ and $D^2_{0000}$ of appendix \ref{AppCov}.
The invariant of degree $6$, $F_{0000}$, will also be advantageously replaced by $D_ {xy}$ \begin{figure}[h] \begin{center} \begin{tikzpicture} \matrix (mat) [matrix of nodes,ampersand replacement=\&, row sep=50pt,column sep=1pt, left delimiter={.}, right delimiter={.}, nodes={minimum height=0.5cm,minimum width=1.5cm }]{ \& $L=M=0$ \& \& \&65257 \& \& \& \& $B_{0000}=0$ \\ \& \& \& \& \& \& \& \& 59777\\ \& \ \ \ \ \ \ \ \ \ \& 65259 \& 65261 \& \& 65513 \& 65273\\ 59510\\ \&65267 \& 65509 \& 65507 \& \& 65269 \& 65510 \& 65231\& {\bf Null-cone}\\ \& \& 65529 \& \& 65515 \& \& 65517\\ \& $D_{xy}=0$ \& \& \& 65534 \& \& \& \& \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \\ }; \node (65257) [rectangle,rounded corners, inner sep=0pt, fit= (mat-1-5) (mat-1-5)] {}; \node (59510) [rectangle,rounded corners, inner sep=0pt, fit= (mat-4-1) (mat-4-1)] {}; \node (59777) [rectangle,rounded corners, inner sep=0pt, fit= (mat-2-9) (mat-2-9)] {}; \node (65259) [rectangle,rounded corners, inner sep=0pt, fit= (mat-3-3) (mat-3-3)] {}; \node (65261) [rectangle,rounded corners, inner sep=0pt, fit= (mat-3-4) (mat-3-4)] {}; \node (65513) [rectangle,rounded corners, inner sep=0pt, fit= (mat-3-6) (mat-3-6)] {}; \node (65273) [rectangle,rounded corners, inner sep=0pt, fit= (mat-3-7) (mat-3-7)] {}; \node (65267) [rectangle,rounded corners, inner sep=0pt, fit= (mat-5-2) (mat-5-2)] {}; \node (65509) [rectangle,rounded corners, inner sep=0pt, fit= (mat-5-3) (mat-5-3)] {}; \node (65507) [rectangle,rounded corners, inner sep=0pt, fit= (mat-5-4) (mat-5-4)] {}; \node (65269) [rectangle,rounded corners, inner sep=0pt, fit= (mat-5-6) (mat-5-6)] {}; \node (65510) [rectangle,rounded corners, inner sep=0pt, fit= (mat-5-7) (mat-5-7)] {}; \node (65231) [rectangle,rounded corners, inner sep=0pt, fit= (mat-5-8) (mat-5-8)] {}; \node (65529) [rectangle,rounded corners, inner sep=0pt, fit= (mat-6-3) (mat-6-3)] {}; \node (65515) [rectangle,rounded corners, inner sep=0pt, fit= (mat-6-5) (mat-6-5)] {}; \node (65517) [rectangle,rounded corners, inner sep=0pt, fit= (mat-6-7) (mat-6-7)] {}; \node (65534) [rectangle,rounded corners, inner sep=0pt, fit= (mat-7-5) (mat-7-5)] {}; \draw[thick,rounded corners=8pt](65257)--(59510); \draw[thick,rounded corners=8pt](65257)--(59777); \draw[thick,rounded corners=8pt](65257)--(65261); \draw[thick,rounded corners=8pt](65257)--(65259); \draw[thick,rounded corners=8pt](65257)--(65513); \draw[thick,rounded corners=8pt](65257)--(65273); \draw (65267)--(65259);\draw (65267)--(65261);\draw (65267)--(65513);\draw (65267)--(65273); \draw (65509)--(65259);\draw (65509)--(65261);\draw (65509)--(65513);\draw (65509)--(65273); \draw (65507)--(65259);\draw (65507)--(65261);\draw (65507)--(65513);\draw (65507)--(65273); \draw (65269)--(65259);\draw (65269)--(65261);\draw (65269)--(65513);\draw (65269)--(65273); \draw (65510)--(65259);\draw (65510)--(65261);\draw (65510)--(65513);\draw (65510)--(65273); \draw (65231)--(65259);\draw (65231)--(65261);\draw (65231)--(65513);\draw (65231)--(65273); \draw[dotted] (65231)--(59510); \draw[dotted] (65510)--(59510); \draw[dotted] (65269)--(59510); \draw[dotted] (65507)--(59510); \draw[dotted] (65509)--(59510); \draw[dotted] (65267)--(59510); \draw (65529)--(65267); \draw (65529)--(65509); \draw (65529)--(65507); \draw (65529)--(65269); \draw (65515)--(65267); \draw (65515)--(65509);\draw (65515)--(65510);\draw (65515)--(65231); \draw (65517)--(65269); \draw (65517)--(65507);\draw (65517)--(65510);\draw (65517)--(65231); \draw (65534)--(65529);\draw (65534)--(65515);\draw 
(65534)--(65517); \node (rect M1) [rectangle,rounded corners,fill=gray,opacity=0.1, inner sep=0pt, fit= (mat-3-2) (mat-7-9)] {}; \node (rect M2) [rectangle,rounded corners,fill=gray,opacity=0.1, inner sep=0pt, fit= (mat-1-9) (mat-7-9)] {}; \end{tikzpicture} \end{center} \caption{Inclusion diagram of the varieties in the third secant variety \label{OrbSec}} \end{figure}
From these modifications, our algorithm finds $17$ new orbits:
\[\begin{array}{l} \{65257, 59777, 59510, 65259, 65261, 65513, 65273, 65267, 65509, 65507, 65269, 65510,\\ 65231, 65529, 65515, 65517, 65534 \}, \end{array} \]
together with the inclusion graph illustrated in Fig. \ref{OrbSec}.
\begin{rem}\label{rem5910}\rm The dotted edges are obtained by the calculation of the adherence graph but do not mean that there are inclusions between the varieties which contain the orbits. Indeed, the algorithm detects a specific orbit of the third secant which does not belong to the varieties defined by $B=0$ or $D_{xy}=0$. This variety will be defined by a polynomial in $B$ and $D_{xy}$ which we should add to the algorithm to compute the adherence graph. In fact, this missing invariant will be determined geometrically in Section \ref{3sctrev}. \end{rem}
We find three kinds of orbits:
\begin{enumerate} \item $P_1:=\{65257,59510\}$ for which $B_{0000}\neq 0$ and $D_{xy}\neq 0$, \item $P_2:=\{59777\}$ for which $B_{0000}= 0$ and $D_{xy}\neq 0$, \item $P_3:=\{65259,\dots,65534\}$ for which $B_{0000}\neq 0$ and $D_{xy}= 0$. \end{enumerate}
We separate the orbits of $P_1$ by testing the nullity of the components of the vector $V'=[L_{6000},\ L_{0600},\ L_{0060},\ L_{0006}]$ (see Table \ref{V'}).
\begin{table}[h] \[\begin{array}{|c|c|c|}\hline \mbox{ form }& V'&\\\hline 65257&[1111]&Gr'_2\\\hline 59510&[0000]&Gr'_1\\\hline \end{array} \] \caption{Values of $V'$ \label{V'}} \end{table}
Now, let us show how to separate the orbits of $P_3$. First, we define the six covariants ${\bf F}_{\star\star00}:=F_{4200}+F_{2400}$, ${\bf F}_{\star0\star0}:=F_{4020}+F_{2040},\dots,\ {\bf F}_{00\star\star}:=F_{0042}+F_{0024}$. We compute the orbit of a given form in $P_3$ by evaluating the vector
\[ V''=[\mathbf F_{\star\star00},\mathbf F_{\star0\star0},\mathbf F_{\star00\star},\mathbf F_{0\star\star0}, \mathbf F_{0\star0\star},\mathbf F_{00\star\star},L_{6000},L_{0600},L_{0060},L_{0006}]. \]
The results are summarized in Table \ref{V''}, where $1$ means that the covariant does not vanish.
\begin{table}[h] \[\begin{array}{|c|c|c|}\hline \mbox{ form }& V''&\\\hline 65259&[1111111000]& \\ 65261&[1111110100]& Gr''_4\\ 65513&[1111110001]&\\ 65273&[1111110010]&\\\hline 65267&[1111010000]&\\ 65509&[1011110000]&\\ 65507&[1110110000]&Gr''_3\\ 65269&[1101110000]&\\ 65510&[0111110000]&\\ 65231&[1111100000]&\\\hline 65529&[1000010000]&\\ 65515&[0011000000]&Gr''_2\\ 65517&[0100100000]&\\\hline 65534&[0000000000]&Gr''_1\\\hline \end{array} \] \caption{Evaluation of $V''[\alpha]$ \label{V''}} \end{table}
It is worth noticing here that two forms $\alpha$, $\alpha'$ may annihilate the same invariants and take the same values on the vectors $V'$ and $V''$ and still not be in the same orbit. Recall that the third secant variety does not contain any dense orbit; it depends on two parameters. However, the vanishing of covariants defines $G$-algebraic subvarieties of the third secant, and two forms which annihilate the same invariants and covariants will be points of the same $G$-algebraic variety.
If the corresponding variety is quasihomogeneous, then the two forms are in the same orbit.\\
Thus we have described an algorithm which can recognize points of $17+30$ subvarieties of the third secant variety.
\begin{algo}\label{AlgThirdSec} Compute the orbit of a state in the third secant\\ {\tt Input}: a state $|\Psi\rangle$\\ {\tt Output}: the name of the orbit of $|\Psi\rangle$ according to Figs. \ref{FOrbNull} and \ref{OrbSec}, or {\tt FAIL} if $|\Psi\rangle$ does not belong to the third secant.\\ \\ {\tt If } $L(|\Psi\rangle)=M(|\Psi\rangle)=0$ {\tt then}\\ {\color{white} ......}{\tt If } $B_{0000}(|\Psi\rangle)=0$ {\tt then}\\ {\color{white} ......}{\color{white} ......}{\tt If } $D_{xy}(|\Psi\rangle)=0$ {\tt then}\\ {\color{white} ......}{\color{white} ......}{\color{white} ......} use Algo \ref{AlgoNulCone}\\ {\color{white} ......}{\color{white} ......}{\tt else} {\tt return} $59777$\\ {\color{white} ......}{\tt else}\\ {\color{white} ......}{\color{white} ......}{\tt If} $D_{xy}(|\Psi\rangle)=0$ {\tt then}\\ {\color{white} ......}{\color{white} ......} {\color{white} ......} Evaluate $V''$ on $|\Psi\rangle$ and compare to Table \ref{V''}\\ {\color{white} ......}{\color{white} ......}{\tt else} Evaluate $V'$ on $|\Psi\rangle$ and compare to Table \ref{V'}\\ {\tt else} {\tt FAIL} \end{algo}
The geometric identification of those varieties will be carried out in the next section. Set
\begin{eqnarray*} \mathbf F_{42}&=&\mathbf F_{\star\star00}+\cdots+\mathbf F_{00\star\star}\\ \overline{\mathbf F_{\star\star00}}&=&\mathbf F_{42}-\mathbf F_{\star\star00}-\mathbf F_{00\star\star}\\ \overline{\mathbf F_{\star0\star0}}&=&\mathbf F_{42}-\mathbf F_{\star0\star0}-\mathbf F_{0\star0\star}\\ \overline{\mathbf F_{\star00\star}}&=&\mathbf F_{42}-\mathbf F_{\star00\star}-\mathbf F_{0\star\star0} \end{eqnarray*}
The evaluation of the vector
\[W=[\mathbf F_{42},\overline{\mathbf F_{\star\star00}}\cdot\overline{\mathbf F_{\star0\star0}}\cdot\overline{\mathbf F_{\star00\star}}, \mathbf F_{\star\star00}\mathbf F_{\star0\star0}\cdots\mathbf F_{00\star\star}] \]
allows us to determine the stratum of a given form, as shown by Table \ref{W}.
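The decision tree of Algorithm \ref{AlgThirdSec} can be summarized by the following Python sketch. The evaluators of the invariants $L$, $M$, $B_{0000}$, $D_{xy}$, of the covariants entering $V'$ and $V''$, and of the nullcone classifier (Algorithm \ref{AlgoNulCone}) are left abstract; in practice they would be generated with a computer algebra system, so the sketch only fixes the control flow and the lookup tables. The vanishing patterns are transcribed from Tables \ref{V'} and \ref{V''}, with $1$ meaning that the corresponding covariant does not vanish.
\begin{verbatim}
# Vanishing patterns of V' and V'' (1 = covariant does not vanish).
TABLE_V1 = {(1, 1, 1, 1): "65257", (0, 0, 0, 0): "59510"}
TABLE_V2 = {
    (1, 1, 1, 1, 1, 1, 1, 0, 0, 0): "65259",
    (1, 1, 1, 1, 1, 1, 0, 1, 0, 0): "65261",
    (1, 1, 1, 1, 1, 1, 0, 0, 0, 1): "65513",
    (1, 1, 1, 1, 1, 1, 0, 0, 1, 0): "65273",
    (1, 1, 1, 1, 0, 1, 0, 0, 0, 0): "65267",
    (1, 0, 1, 1, 1, 1, 0, 0, 0, 0): "65509",
    (1, 1, 1, 0, 1, 1, 0, 0, 0, 0): "65507",
    (1, 1, 0, 1, 1, 1, 0, 0, 0, 0): "65269",
    (0, 1, 1, 1, 1, 1, 0, 0, 0, 0): "65510",
    (1, 1, 1, 1, 1, 0, 0, 0, 0, 0): "65231",
    (1, 0, 0, 0, 0, 1, 0, 0, 0, 0): "65529",
    (0, 0, 1, 1, 0, 0, 0, 0, 0, 0): "65515",
    (0, 1, 0, 0, 1, 0, 0, 0, 0, 0): "65517",
    (0, 0, 0, 0, 0, 0, 0, 0, 0, 0): "65534",
}

def classify_third_secant(psi, inv, cov_v1, cov_v2, nullcone_algo,
                          tol=1e-12):
    """inv maps 'L', 'M', 'B0000', 'Dxy' to callables psi -> value;
    cov_v1 / cov_v2 are the ordered covariant evaluators of V' / V''."""
    vanishes = lambda f: abs(f(psi)) < tol
    if not (vanishes(inv["L"]) and vanishes(inv["M"])):
        return "FAIL"              # psi is not in the third secant
    if vanishes(inv["B0000"]):
        # B_0000 = 0: either the nullcone or the orbit 59777
        return nullcone_algo(psi) if vanishes(inv["Dxy"]) else "59777"
    if vanishes(inv["Dxy"]):
        # compare the vanishing pattern of V'' with Table V''
        return TABLE_V2[tuple(int(not vanishes(c)) for c in cov_v2)]
    # compare the vanishing pattern of V' with Table V'
    return TABLE_V1[tuple(int(not vanishes(c)) for c in cov_v1)]
\end{verbatim}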
\begin{table}[h]\[ \begin{array}{|c|c|}\hline \mathrm{Strata}&W\\\hline Gr''_4& [111]\\ Gr''_3& [110]\\ Gr''_2& [100]\\ Gr''_1& [000]\\\hline \end{array} \] \caption{Evaluation of $W[\alpha]$ on strata \label{W}} \end{table}
The evaluation of the covariants allows us to describe the inclusion diagram between the strata $Gr''_i$ and $Gr_j$ (see Figs.~\ref{GR''4}, \ref{GR''3}, \ref{GR''2}, and \ref{GR''1}).
\begin{figure}[h] \begin{center} \begin{tikzpicture} \matrix (mat) [matrix of nodes,ampersand replacement=\&, row sep=50pt,column sep=10pt, left delimiter={.}, right delimiter={.}, nodes={minimum height=0.5cm,minimum width=1.5cm }]{ 65259\& 65261\& 65513\& 65273\&\ \ \\ \& \& \& \& 59777\\ 65511\& 65218\& 65271\& 65247\\}; \node (p65259) [rectangle,rounded corners, inner sep=0pt, fit= (mat-1-1) (mat-1-1)] {}; \node (p65261) [rectangle,rounded corners, inner sep=0pt, fit= (mat-1-2) (mat-1-2)] {}; \node (p65513) [rectangle,rounded corners, inner sep=0pt, fit= (mat-1-3) (mat-1-3)] {}; \node (p65273) [rectangle,rounded corners, inner sep=0pt, fit= (mat-1-4) (mat-1-4)] {}; \node (p59777) [rectangle,rounded corners, inner sep=0pt, fit= (mat-2-5) (mat-2-5)] {}; \node (p65511) [rectangle,rounded corners, inner sep=0pt, fit= (mat-3-1) (mat-3-1)] {}; \node (p64762) [rectangle,rounded corners, inner sep=0pt, fit= (mat-3-2) (mat-3-2)] {}; \node (p65506) [rectangle,rounded corners, inner sep=0pt, fit= (mat-3-3) (mat-3-3)] {}; \node (p65482) [rectangle,rounded corners, inner sep=0pt, fit= (mat-3-4) (mat-3-4)] {}; \draw (p65511)--(p65513);\draw (p65511)--(p59777); \draw (p64762)--(p65259);\draw (p64762)--(p59777); \draw (p65506)--(p65273);\draw (p65506)--(p59777); \draw (p65482)--(p65261);\draw (p65482)--(p59777); \node (Gr''4) [rectangle,rounded corners,fill=gray,opacity=0.1, inner sep=0pt, fit= (mat-1-1) (mat-1-4)] {}; \node (Gr8) [rectangle,rounded corners,fill=gray,opacity=0.1, inner sep=0pt, fit= (mat-3-1) (mat-3-4)] {}; \node [left=20pt] at (p65259) {$Gr''_4$}; \node [left=20pt] at (p65511) {$Gr_8$}; \end{tikzpicture} \end{center} \caption{$Gr''_4$, $59777$ and $Gr_8$\label{GR''4}} \end{figure} \begin{figure}[h] \begin{center} \begin{tikzpicture} \matrix (mat) [matrix of nodes,ampersand replacement=\&, row sep=50pt,column sep=10pt, left delimiter={.}, right delimiter={.}, nodes={minimum height=0.5cm,minimum width=1.5cm }]{ 65267\& 65509\& 65507\& 65269\& 65510\& 65231\& \\ \& \& \& \& \& \& 59510\\ \& 65508\& 64762\& 65506\& 65482\&\\}; \node (65267) [rectangle,rounded corners, inner sep=0pt, fit= (mat-1-1) (mat-1-1)] {}; \node (65509) [rectangle,rounded corners, inner sep=0pt, fit= (mat-1-2) (mat-1-2)] {}; \node (65507) [rectangle,rounded corners, inner sep=0pt, fit= (mat-1-3) (mat-1-3)] {}; \node (65269) [rectangle,rounded corners, inner sep=0pt, fit= (mat-1-4) (mat-1-4)] {}; \node (65510) [rectangle,rounded corners, inner sep=0pt, fit= (mat-1-5) (mat-1-5)] {}; \node (65231) [rectangle,rounded corners, inner sep=0pt, fit= (mat-1-6) (mat-1-6)] {}; \node (65508) [rectangle,rounded corners, inner sep=0pt, fit= (mat-3-2) (mat-3-2)] {}; \node (64762) [rectangle,rounded corners, inner sep=0pt, fit= (mat-3-3) (mat-3-3)] {}; \node (65506) [rectangle,rounded corners, inner sep=0pt, fit= (mat-3-4) (mat-3-4)] {}; \node (65482) [rectangle,rounded corners, inner sep=0pt, fit= (mat-3-5) (mat-3-5)] {}; \node (59510) [rectangle,rounded corners, inner sep=0pt, fit= (mat-2-7) (mat-2-7)] {}; \draw (65508)--(65509);\draw (65508)--(65269);\draw (65508)--(65510);\draw (65508)--(59510); \draw (64762)--(65269);\draw
(64762)--(65267);\draw (64762)--(65231);\draw (64762)--(59510); \draw (65506)--(65510);\draw (65506)--(65267);\draw (65506)--(65507);\draw (65506)--(59510); \draw (65482)--(65509);\draw (65482)--(65231);\draw (65482)--(65507);\draw (65482)--(59510); \node (Gr''3) [rectangle,rounded corners,fill=gray,opacity=0.1, inner sep=0pt, fit= (mat-1-1) (mat-1-6)] {}; \node (Gr7) [rectangle,rounded corners,fill=gray,opacity=0.1, inner sep=0pt, fit= (mat-3-2) (mat-3-5)] {}; \node [left=20pt] at (65267) {$Gr''_3$}; \node [left=20pt] at (65508) {$Gr_7$}; \end{tikzpicture} \end{center} \caption{$Gr''_3$, $59510$ and $Gr_7$\label{GR''3}} \end{figure} \begin{figure}[h] \begin{center} \begin{tikzpicture} \matrix (mat) [matrix of nodes,ampersand replacement=\&, row sep=50pt,column sep=10pt, left delimiter={.}, right delimiter={.}, nodes={minimum height=0.5cm,minimum width=1.5cm }]{ 65267\& 65509\& 65507\& 65269\& 65510\& 65231 \\ \& 65529 \& \& 65515 \& \& 65517 \\ 64700\& 65041\& 65075\& 61158\& 65109\& 64218\\}; \node (65267) [rectangle,rounded corners, inner sep=0pt, fit= (mat-1-1) (mat-1-1)] {}; \node (65509) [rectangle,rounded corners, inner sep=0pt, fit= (mat-1-2) (mat-1-2)] {}; \node (65507) [rectangle,rounded corners, inner sep=0pt, fit= (mat-1-3) (mat-1-3)] {}; \node (65269) [rectangle,rounded corners, inner sep=0pt, fit= (mat-1-4) (mat-1-4)] {}; \node (65510) [rectangle,rounded corners, inner sep=0pt, fit= (mat-1-5) (mat-1-5)] {}; \node (65231) [rectangle,rounded corners, inner sep=0pt, fit= (mat-1-6) (mat-1-6)] {}; \node (65529) [rectangle,rounded corners, inner sep=0pt, fit= (mat-2-2) (mat-2-2)] {}; \node (65515) [rectangle,rounded corners, inner sep=0pt, fit= (mat-2-4) (mat-2-4)] {}; \node (65517) [rectangle,rounded corners, inner sep=0pt, fit= (mat-2-6) (mat-2-6)] {}; \node (64700) [rectangle,rounded corners, inner sep=0pt, fit= (mat-3-1) (mat-3-1)] {}; \node (65041) [rectangle,rounded corners, inner sep=0pt, fit= (mat-3-2) (mat-3-2)] {}; \node (65075) [rectangle,rounded corners, inner sep=0pt, fit= (mat-3-3) (mat-3-3)] {}; \node (61158) [rectangle,rounded corners, inner sep=0pt, fit= (mat-3-4) (mat-3-4)] {}; \node (65109) [rectangle,rounded corners, inner sep=0pt, fit= (mat-3-5) (mat-3-5)] {}; \node (64218) [rectangle,rounded corners, inner sep=0pt, fit= (mat-3-6) (mat-3-6)] {}; \draw (64700)--(65515);\draw (64700)--(65269); \draw (65041)--(65529);\draw (65041)--(65510); \draw (65075)--(65517);\draw (65075)--(65509); \draw (61158)--(65529);\draw (61158)--(65231); \draw (65109)--(65515);\draw (65109)--(65507); \draw (64218)--(65517);\draw (64218)--(65267); \node (Gr''3) [rectangle,rounded corners,fill=gray,opacity=0.1, inner sep=0pt, fit= (mat-1-1) (mat-1-6)] {}; \node (Gr''2) [rectangle,rounded corners,fill=gray,opacity=0.1, inner sep=0pt, fit= (mat-2-2) (mat-2-6)] {}; \node (Gr6) [rectangle,rounded corners,fill=gray,opacity=0.1, inner sep=0pt, fit= (mat-3-1) (mat-3-6)] {}; \node [left=20pt] at (65267) {$Gr''_3$}; \node [left=20pt] at (65529) {$Gr''_2$}; \node [left=20pt] at (64700) {$Gr_6$}; \end{tikzpicture} \end{center} \caption{$Gr''_3$, $Gr''_2$ and $Gr_6$\label{GR''2}} \end{figure} \begin{figure}[h] \begin{center} \begin{tikzpicture} \matrix (mat) [matrix of nodes,ampersand replacement=\&, row sep=50pt,column sep=10pt, left delimiter={.}, right delimiter={.}, nodes={minimum height=0.5cm,minimum width=1.5cm }]{ 65534\& \\ \& \& \& \& 59520\\ 65530\& 65518\& 65532\& 65278\&\\}; \node (65534) [rectangle,rounded corners, inner sep=0pt, fit= (mat-1-1) (mat-1-1)] {}; \node (59520) 
[rectangle,rounded corners, inner sep=0pt, fit= (mat-2-5) (mat-2-5)] {}; \node (65530) [rectangle,rounded corners, inner sep=0pt, fit= (mat-3-1) (mat-3-1)] {}; \node (65518) [rectangle,rounded corners, inner sep=0pt, fit= (mat-3-2) (mat-3-2)] {}; \node (65532) [rectangle,rounded corners, inner sep=0pt, fit= (mat-3-3) (mat-3-3)] {}; \node (65278) [rectangle,rounded corners, inner sep=0pt, fit= (mat-3-4) (mat-3-4)] {}; \draw (65534)--(59520); \draw (65534)--(65278); \draw (65534)--(65530); \draw (65534)--(65518); \draw (65534)--(65532); \node (Gr''1) [rectangle,rounded corners,fill=gray,opacity=0.1, inner sep=0pt, fit= (mat-1-1) (mat-1-1)] {}; \node (Gr5) [rectangle,rounded corners,fill=gray,opacity=0.1, inner sep=0pt, fit= (mat-2-5) (mat-2-5)] {}; \node (Gr4) [rectangle,rounded corners,fill=gray,opacity=0.1, inner sep=0pt, fit= (mat-3-1) (mat-3-4)] {}; \node [left=20pt] at (65534) {$Gr''_1$}; \node [left=20pt] at (59520) {$Gr_5$}; \node [left=20pt] at (65530) {$Gr_4$}; \end{tikzpicture} \end{center} \caption{$Gr''_1$, $Gr_4$ and $Gr_5$\label{GR''1}} \end{figure}
\begin{rem}\rm Those inclusions between the groups $Gr$ and $Gr''$ will be discussed in the next section and in Appendix \ref{inclusion} from the geometric perspective. \end{rem}
\subsection{Geometric interpretations}
Let us now find out which varieties are identified by the previous calculation. As mentioned in Section \ref{tools}, Buczy\'nski and Landsberg\cite{Lan2} provide a detailed and precise analysis of the normal forms of points in $\sigma_3(X)$ for a certain class of homogeneous varieties including the Segre products of projective spaces. In particular, they proved:
\begin{theorem}(Theorem 1.2 in\cite{Lan2})\label{lan} Assume $n\geq 3$ and let $X=Seg(\mathbb{P}(A_1)\times\dots\times\mathbb{P}(A_n))$. Let $p=[v]\in \sigma_3(X)\setminus\sigma(X)$. Then $v$ has one of the following normal forms: \begin{enumerate} \item $v=x+y+z$ with $[x],[y],[z]\in X$. \item $v=x+x'+y$ with $[x],[y]\in X$ and $x'\in \widehat{T}_{[x]}X$. \item $v=x+x'+x''$ where $[x(t)]\subset X$ is a curve and $x'=x'(0)$, $x''=x''(0)$. \item $v=x'+y'$ where $[x]$, $[y]\in X$ are distinct points that lie on a line contained in $X$, $x'\in \widehat{T}_{[x]} X$ and $y'\in \widehat{T}_{[y]} X$. \end{enumerate} \end{theorem}
This theorem will serve as a guide to detect varieties among the orbits distinguished in Section \ref{comput_sec}. It says that any entangled state of the third secant variety can be written as a normal form of type 1, 2, 3, or 4. Before stating our Theorem regarding the entangled states of $\sigma_3(\mathbb{P}^1\times\mathbb{P}^1\times\mathbb{P}^1\times\mathbb{P}^1)$, let us make a few observations and describe some of the varieties contained in $\sigma_3(\mathbb{P}^1\times\mathbb{P}^1\times\mathbb{P}^1\times\mathbb{P}^1)$. As already stated, in the case of $X=\mathbb{P}^1\times\mathbb{P}^1\times\mathbb{P}^1\times\mathbb{P}^1$, the third secant variety is of dimension $13$ and defined by $L=M=0$. This is an example of a defective higher secant variety\cite{CGG,Lan3} (the expected dimension is $14$). But $\sigma_3(X)\supset J(X,\tau(X))$ and the variety $J(X,\tau(X))$ is of dimension $13$, as can be proved by Terracini's Lemma\cite{HLT}. Thus by irreducibility of the varieties we have $\sigma_3(X)=J(X,\tau(X))$. It is clear\cite{Lan2} from the normal forms that points of type $2$ in Theorem \ref{lan} belong to $J(X,\tau(X))$.
In other words, in the particular situation of the Segre embedding of 4 projective lines, points of type 1 and 2 are the same. In our Theorem, other states of type 2 will appear by considering varieties $J(X,Y)$ with $Y\subset \tau(X)$; the subvarieties of $\tau(X)$ are known from Theorem \ref{thnulcone}. States of type 2 will also be obtained by taking specific joins of varieties of the group $Gr_2$. An example of such a join is $J(\mathbb{P}^3\times\mathbb{P}^1\times\mathbb{P}^1,\mathbb{P}^1\times\mathbb{P}^3\times\mathbb{P}^1)$, and a general element of the cone of this variety is $|\Psi\rangle=\underbrace{|1010\rangle+|0110\rangle}_{\in \mathbb{P}^3\times\mathbb{P}^1\times\mathbb{P}^1}+\underbrace{|0000\rangle+|1001\rangle}_{\in\mathbb{P}^1\times\mathbb{P}^3\times\mathbb{P}^1}$. The state $|\Psi\rangle$ is a point of type 2, as can be seen from the following decomposition:
\[|\Psi\rangle=\underbrace{|0000\rangle+|1010\rangle+|0110\rangle}_{\in T_{|0010\rangle} X}+\underbrace{|1001\rangle}_{\in X}\]
Other points of type 2 can be obtained by looking for forms $x'+y$ where $\widehat{x}'$ and $\widehat{y}$ are isotropic vectors for the quadratic form defined by $B_{0000}$. Geometrically, it is the same as intersecting a variety of points of type 2 with the projective quadric hypersurface $\mathbb{Q}^{14}=\{B_{0000}=0\}\subset\mathbb{P}^{15}$. For instance, we have already noted that points of type 2 form a dense open subset of $\sigma_3(X)$. We can then consider
\[\sigma_3^{(1)}(X)=\sigma_3(X)\cap\mathbb{Q}^{14}\]
which is by construction a $G$-invariant subvariety of codimension one in $\sigma_3(X)$. An example of a normal form for $\sigma_3^{(1)}(X)$ is
\[|1111\rangle+|1000\rangle+|0100\rangle+|0010\rangle+|0001\rangle\]
Similarly one can consider $J^{(1)}(X,\tau(\mathbb{P}^1\times\mathbb{P}^1\times\mathbb{P}^1)\times\mathbb{P}^1)=J(X,\tau(\mathbb{P}^1\times\mathbb{P}^1\times\mathbb{P}^1)\times\mathbb{P}^1)\cap \mathbb{Q}^{14}$ with normal form
\[|1111\rangle+|1000\rangle+|0100\rangle+|0010\rangle\]
as well as the following varieties
\begin{center} \begin{tabular}{|c|c|} \hline $J^{(1)}(X,\tau(\mathbb{P}^1\times\mathbb{P}^1\times\underline{\mathbb{P}^1}\times\mathbb{P}^1)\times\mathbb{P}^1)$ &$|1111\rangle+|1000\rangle+|0100\rangle+|0001\rangle$ \\ \hline $J^{(1)}(X,\tau(\mathbb{P}^1\times\underline{\mathbb{P}^1}\times\mathbb{P}^1\times\mathbb{P}^1)\times\mathbb{P}^1)$ &$|1111\rangle+|1000\rangle+|0010\rangle+|0001\rangle$ \\ \hline $J^{(1)}(X,\mathbb{P}^1\times\tau(\mathbb{P}^1\times\mathbb{P}^1\times\mathbb{P}^1))$ &$|1111\rangle+|0100\rangle+|0010\rangle+|0001\rangle$ \\ \hline \end{tabular} \end{center}
Regarding states with normal forms of type 3, we have the following lemma.
\begin{lemma}\label{osc-tgt} Let $X=\mathbb{P}^1\times\mathbb{P}^1\times\mathbb{P}^1\times\mathbb{P}^1$; then $\text{\em Osc}(X)=T(X,\tau(X))$. \end{lemma}
\proof The proof is based on the same techniques as the ones developed in the paper by Buczy\'nski and Landsberg\cite{Lan2}. First, let us notice that $\text{Osc}(X)$ is of dimension $12$ because it is strictly contained in $\sigma_3(X)$ and it strictly contains the nullcone. The variety $J(X,\tau(X))$ is of dimension $13$, i.e., the expected one. Thus by Zak's corollary of the Fulton-Hansen Theorem one knows there exists an irreducible subvariety $T(X,\tau(X))$ of dimension $12$ whose points are on limiting secants of $J(X,\tau(X))$. The points of $T(X,\tau(X))$ can also be seen as points on limiting $3$-planes because $J(X,\tau(X))=\sigma_3(X)$.
Let us consider three curves $x(t), y(t), z(t)$ of $X$ such that $x(0)=y(0)=z(0)=x_0\in X$ and such that $\mathbb{P}^1_*=\lim_{t\to 0}\mathbb{P}^1_{yz}\subset \tau(X)$. That last assumption implies that $y'(0)=z'(0)=x_1\in T_{x_0} X$. Expanding the three curves into Taylor series we get
\[\begin{array}{lll} x(t)& = & x_0+\dots\\ y(t) & = & x_0 +t x_1+\dots\\ z(t) & = & x_0+tx_1+t^2 x_2+\dots \end{array}\]
with $x_2=z''(0)\in T_{x_0} ^{(2)} X$. The $3$-plane passing through $x(t)$, $y(t)$, and $z(t)$ can be encoded as the point $x(t)\wedge y(t)\wedge z(t)\in \bigwedge^3 V$, and we have $x(t)\wedge y(t)\wedge z(t)=x(t)\wedge(y(t)-x_0)\wedge (z(t)-x_0-tx_1)$. Thus, taking the limit,
\[\lim_{t\to 0} \frac{1}{t^3} x(t)\wedge y(t)\wedge z(t)=x_0\wedge x_1\wedge x_2\]
The limit plane is spanned by $\tilde{x}+\tilde{x}'+\tilde{x}''$ for a curve $\tilde{x}$ such that $\tilde{x}(0)=x_0$, $\tilde{x}'(0)=x_1$ and $\tilde{x}''(0)=x_2$. This proves $T(X,\tau(X))\subset \text{Osc}(X)$, and the equality follows by a dimension argument.$\Box$
\begin{rem}\rm The same reasoning allows us to conclude to the following identifications
\[\begin{array}{|l|} \hline \text{Osc}_{124}(X)=T(X,\tau(\mathbb{P}^1\times\mathbb{P}^1\times\mathbb{P}^1)\times\mathbb{P}^1)\\ \text{Osc}_{456}(X)=T(X,\mathbb{P}^1\times\tau(\mathbb{P}^1\times\mathbb{P}^1\times\mathbb{P}^1))\\ \text{Osc}_{135}(X)=T(X,\tau(\mathbb{P}^1\times\mathbb{P}^1\times\underline{\mathbb{P}^1}\times\mathbb{P}^1)\times\mathbb{P}^1)\\ \text{Osc}_{236}(X)=T(X,\tau(\mathbb{P}^1\times\underline{\mathbb{P}^1}\times\mathbb{P}^1\times\mathbb{P}^1)\times\mathbb{P}^1)\\ \hline \text{Osc}_{1}(X)=T(X,\mathbb{P}^3\times\mathbb{P}^1\times\mathbb{P}^1) \\ \text{Osc}_{2}(X)=T(X,\sigma(\mathbb{P}^1\times\underline{\mathbb{P}^1}\times\mathbb{P}^1\times\underline{\mathbb{P}^1})\times\mathbb{P}^1\times\mathbb{P}^1) \\ \text{Osc}_{3}(X)=T(X,\sigma(\mathbb{P}^1\times\underline{\mathbb{P}^1}\times\underline{\mathbb{P}^1}\times\mathbb{P}^1)\times\mathbb{P}^1\times\mathbb{P}^1) \\ \text{Osc}_{4}(X)=T(X,\mathbb{P}^1\times\mathbb{P}^3\times\mathbb{P}^1) \\ \text{Osc}_{5}(X)=T(X, \sigma(\underline{\mathbb{P}^1}\times\mathbb{P}^1\times\underline{\mathbb{P}^1}\times\mathbb{P}^1)\times\mathbb{P}^1\times\mathbb{P}^1) \\ \text{Osc}_{6}(X)=T(X,\mathbb{P}^1\times\mathbb{P}^1\times \mathbb{P}^3) \\ \hline \end{array}\]
\end{rem}
Finally, points of type 4 have already been discussed as they correspond to points on the varieties $Z_i(X)$. We can now state our second Theorem.
\begin{theorem}\label{thm_3_sct} Algorithm \ref{AlgThirdSec} allows us to identify $17$ different classes of entangled states in $\sigma_3(X)\setminus\mathcal{N}$. Those classes are $G$-algebraic varieties built by superposition of states from the set of separable states. The states, the corresponding varieties, and the normal forms are given in Table \ref{table_3rd_sec2}; the inclusion among the varieties is given by Figure \ref{OrbSec} and sketched with their geometric interpretations in Figure \ref{figure_3rd_sec}. \end{theorem}
\proof We proceed similarly to our proof of Theorem \ref{thnulcone}. For each variety of Table \ref{table_3rd_sec2} we define normal forms from its geometric description, as explained in Section \ref{tools} and in the discussion above. Then, as a consequence of Terracini's lemma\cite{HLT}, all the varieties have the expected dimension (except $\sigma_3(X)$, as already discussed).
Finally, once we have the normal forms and the dimensions, we identify the varieties given by Algorithm \ref{AlgThirdSec} with a polynomial test on the normal forms. $\Box$
\begin{table} \footnotesize \begin{tabular}{|c|c|c|c|} \hline Name & Variety & Normal form& Dimension\\ \hline $65257$ & $\sigma_3(X)$& \tiny$|1111\rangle+ |0000\rangle+|1000\rangle$ \tiny$+|0100\rangle+|0010\rangle+|0001\rangle$ & $13$\\ \hline $59777$ & $\sigma_3^{(1)}(X)$ &\tiny$|1111\rangle+|1000 \rangle+ |0100 \rangle+ |0010 \rangle+ |0001 \rangle$ & $12$\\ \hline $65513$ & $J(X,\tau(\mathbb{P}^1\times\mathbb{P}^1\times\mathbb{P}^1)\times\mathbb{P}^1)$ &\tiny$|1111\rangle+|0000\rangle+|1000\rangle+|0100\rangle+|0010\rangle$ & $12$\\ $65261$ & $J(X,\tau(\mathbb{P}^1\times\mathbb{P}^1\times\underline{\mathbb{P}^1}\times\mathbb{P}^1)\times\mathbb{P}^1)$ &\tiny$|1111\rangle+|0000\rangle+|1000\rangle+|0100\rangle+|0001\rangle$ & $12$\\ $65273$ & $J(X,\tau(\mathbb{P}^1\times\underline{\mathbb{P}^1}\times\mathbb{P}^1\times\mathbb{P}^1)\times\mathbb{P}^1)$ &\tiny$|1111\rangle+|0000\rangle+|1000\rangle+|0010\rangle+|0001\rangle$ & $12$\\ $65259$ & $J(X,\mathbb{P}^1\times\tau(\mathbb{P}^1\times\mathbb{P}^1\times\mathbb{P}^1))$ &\tiny$|1111\rangle+|0000\rangle+|0100\rangle+|0010\rangle+|0001\rangle$ & $12$\\ \hline $59510$ & $\text{Osc}'(X)$ &\tiny$|1100\rangle+|1010\rangle+|1001\rangle+|0110\rangle+|0101\rangle+|0011\rangle$ & $12$\\ \hline $65507$ & $J(\mathbb{P}^3\times\mathbb{P}^1\times\mathbb{P}^1,\sigma(\underline{\mathbb{P}^1}\times\mathbb{P}^1\times\underline{\mathbb{P}^1}\times\mathbb{P}^1)\times\mathbb{P}^1\times\mathbb{P}^1)$ &\tiny $|0000\rangle+|0011\rangle+|0101\rangle+|1111\rangle$& $11$\\ $65509$ & $J(\mathbb{P}^3\times\mathbb{P}^1\times\mathbb{P}^1,\sigma(\mathbb{P}^1\times\underline{\mathbb{P}^1}\times\underline{\mathbb{P}^1}\times\mathbb{P}^1)\times\mathbb{P}^1\times\mathbb{P}^1)$ & \tiny$|0000\rangle+|1010\rangle+|0110\rangle+|1001\rangle$& $11$ \\ $65510$ &\tiny$J(\sigma(\mathbb{P}^1\times\underline{\mathbb{P}^1}\times\mathbb{P}^1\times\underline{\mathbb{P}^1})\times\mathbb{P}^1\times\mathbb{P}^1, \sigma(\mathbb{P}^1\times\underline{\mathbb{P}^1}\times\underline{\mathbb{P}^1}\times\mathbb{P}^1)\times\mathbb{P}^1\times\mathbb{P}^1)$&\tiny$|0000\rangle+|1010\rangle+|0110\rangle+|1111\rangle$ & $11$ \\ $65231$ & $J(\mathbb{P}^1\times\mathbb{P}^3\times\mathbb{P}^1,\sigma(\mathbb{P}^1\times\underline{\mathbb{P}^1}\times\mathbb{P}^1\times\underline{\mathbb{P}^1})\times\mathbb{P}^1\times\mathbb{P}^1)$ &\tiny$|0000\rangle+|0110\rangle+|0101\rangle+|1111\rangle$ & $11$ \\ $65267$ & $J(\mathbb{P}^3\times\mathbb{P}^1\times\mathbb{P}^1,\mathbb{P}^1\times\mathbb{P}^3\times\mathbb{P}^1)$ &\tiny$|0000\rangle+|0110\rangle+|1001\rangle+|0101\rangle$ & $11$ \\ $65269$ & $J(\mathbb{P}^1\times\mathbb{P}^1\times\mathbb{P}^3,\sigma(\underline{\mathbb{P}^1}\times\mathbb{P}^1\times\underline{\mathbb{P}^1}\times\mathbb{P}^1)\times\mathbb{P}^1\times\mathbb{P}^1)$ &\tiny $|0000\rangle+|1010\rangle+|1001\rangle+|0101\rangle$ & $11$ \\ \hline $65529$ & $J(X,\mathbb{P}^3\times\mathbb{P}^1\times\mathbb{P}^1)$ &\tiny $|1111\rangle+ |0000\rangle+|1000\rangle+|0100\rangle$ & $10$\\ $65517$ & $J(X,\sigma(\underline{\mathbb{P}^1}\times\mathbb{P}^1\times\underline{\mathbb{P}^1}\times\mathbb{P}^1)\times\mathbb{P}^1\times\mathbb{P}^1)$ &\tiny$|1111\rangle+|0000\rangle+|0100\rangle+|0001\rangle$ & $10$\\ $65515$ & $J(X,\mathbb{P}^1\times\mathbb{P}^3\times\mathbb{P}^1)$ &\tiny$|1111\rangle+|0000\rangle+|0100\rangle+|0010\rangle$ &$10$\\ \hline
$65534$ & $\sigma(X)$ &\tiny $|0000\rangle+|1111\rangle$ & $9$\\ \hline \end{tabular} \caption{Non-nilpotent entangled states of the third secant variety}\label{table_3rd_sec2} \end{table}
\begin{rem}\rm Those geometric interpretations allow us to get a deeper understanding of the inclusions of Figures \ref{GR''4}, \ref{GR''3}, \ref{GR''2} and \ref{GR''1}. For instance, the inclusions of Figure \ref{GR''4} are obvious once we identify the varieties of $Gr_8$ with intersections of join varieties with $\mathbb{Q}^{14}$. The inclusions of Figure \ref{GR''1} are just the natural inclusions of the tangential and the subsecant varieties to the second secant variety (Figure \ref{GR''1bis}), while the inclusions of Figure \ref{GR''2} between varieties of type $Gr_2''$ and $Gr_6$ are a consequence of Zak's corollary of the Fulton-Hansen Theorem (see Section \ref{tools}). More precisely, the variety $J(X,\mathbb{P}^3\times\mathbb{P}^1\times\mathbb{P}^1)$ is of the expected dimension and therefore contains a subvariety of codimension one which is $T(X,\mathbb{P}^3\times\mathbb{P}^1\times\mathbb{P}^1)$. But the variety $J(X,\mathbb{P}^3\times\mathbb{P}^1\times\mathbb{P}^1)$ is identical to $J(X,\mathbb{P}^1\times\mathbb{P}^1\times\mathbb{P}^3)$ (Figure \ref{GR''2bis}). Finally, the inclusions of Figure \ref{GR''3} correspond to inclusions of tangential varieties with their respective joins. Those tangential varieties are also sub-tangential varieties of $T(X,\tau(X))$ (see Figure \ref{GR''4bis}). We gather Figures \ref{GR''4bis}, \ref{GR''2bis} and \ref{GR''1bis} with more explanations in Appendix \ref{inclusion}. \end{rem}
\subsection{The third secant variety atlas revisited}\label{3sctrev}
It should be noticed here that the variety $\text{Osc}(X)$ does not appear in Theorem \ref{thm_3_sct}, although this variety played a crucial role in identifying subvarieties of the nullcone and subvarieties of the third secant variety. Thus, in this example, our algorithm does not see a specific stratum whose geometric features are important to understand entanglement --- $\text{Osc}(X)$ can be interpreted as the next dimensional generalization of the W-states. Let us sketch in Figure \ref{figure_3rd_sec} our stratification of $\sigma_3(X)$ by algebraic varieties. We mark by dotted edges the inclusions of the variety $\text{Osc}(X)$ and we give only one variety of each group $Gr_i$ and $Gr_i''$ (the others being obtained by permutations).
\begin{figure}[!h] \[\xymatrix { \sigma_3(X) \ar@{^{}-}[d]\ar@{^{}-}[dr]\ar@{^{}.}[drr]\\ J(X,\mathbb{P}^1\times\tau(\mathbb{P}^1\times\mathbb{P}^1\times\mathbb{P}^1))\ar@{^{}-}[d] & \sigma^{(1)} _3(X) & T(X,\tau(X))=\text{Osc}(X) \\ J(\mathbb{P}^3\times\mathbb{P}^1\times\mathbb{P}^1,\mathbb{P}^1\times\mathbb{P}^3\times\mathbb{P}^1)\ar@{^{}-}[d] & T(X,\mathbb{P}^1\times\tau(\mathbb{P}^1\times\mathbb{P}^1\times\mathbb{P}^1))\ar@{^{}-}[u]\ar@{^{}.}[ur]\ar@{^{}-}[ul] & \text{Osc}'(X)\ar@{^{}.}[u]\\ J(X,\mathbb{P}^1\times\mathbb{P}^1\times\mathbb{P}^3)\ar@{^{}-}[d] & Z_1(X) \ar@{^{}-}[u]\ar@{^{}-}[ur]\ar@{^{}-}[ul] & \\ \sigma(X) & T(X,\mathbb{P}^1\times\mathbb{P}^1\times\mathbb{P}^3)\ar@{^{}-}[ul]\ar@{^{}-}[u]& \\ \mathbb{P}^1\times \mathbb{P}^7\ar@{^{}-}[ur] \ar@{^{}-}[u] & \tau(X)\ar@{^{}-}[u]\ar@{^{}-}[ul] & & \\ & \mathbb{P}^1\times\tau(\mathbb{P}^1\times\mathbb{P}^1\times\mathbb{P}^1) \ar@{^{}-}[u]\ar@{^{}-}[ul] & & \\ & \mathbb{P}^1\times\mathbb{P}^1\times\mathbb{P}^3\ar@{^{}-}[u] & & \\ & X\ar@{^{}-}[u] & & } \] \caption{Extended atlas of $\sigma_3(X)$}\label{figure_3rd_sec} \end{figure}
Returning to our Method \ref{Meth}, geometry suggests introducing a new invariant to identify $\text{Osc}(X)$. To obtain by calculation the extended decomposition of the third secant variety, one needs to introduce the new invariant
\[Z=D_{xy}-\dfrac{1}{27}B^3\]
and modify our original algorithm. We can also use the polynomial $\Delta$ (the hyperdeterminant, see Section \ref{tools}) to slightly refine the inclusion graph. We remark first that for non-nilpotent forms satisfying $L=M=0$, $D_{xy}=0$ implies $\Delta=0$, $B_{0000}=0$ implies $\Delta\neq 0$, and, from Eq.~(\ref{Delta2I_2}), $L_{6000}=0$ implies $\Delta=0$. So the condition $\Delta=0$ allows us to find only one more subvariety of $65257$, whose representative is $6014$. When $L=M=0$ and $D_{xy}\neq 0$, the polynomials $\Delta$ and $Z$ play the same role because $\Delta_{|L=M=0}=6912D_{xy}Z$. We will explore this type of description of $\Delta$ in a forthcoming paper\cite{HLT2}. As a consequence, we obtain a new atlas, and a new algorithm with $Z$, for entanglement types within the third secant variety (see Fig. \ref{OrbSec2}, the computational analogue of Fig. \ref{figure_3rd_sec}), including a new variety whose representative is $6014$. Note also that the variety represented by $59510$ has only nilpotent strict subvarieties in the graph, as suspected in Remark \ref{rem5910}. It is a pleasant surprise to find out here that the invariant missing to detect $\text{Osc}(X)$ is a factor of the restriction to $\sigma_3(X)$ of the hyperdeterminant $\Delta$. The hypersurface given by $\Delta=0$, which is the dual variety of $X=\mathbb{P}^1\times\mathbb{P}^1\times\mathbb{P}^1\times\mathbb{P}^1$, should play a central role in understanding the entanglement of four qubits, as already pointed out by Miyake\cite{My}.
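To illustrate the modification, the sketch below (with the same abstract invariant evaluators as the sketch following Algorithm \ref{AlgThirdSec}) adds the test on $Z$ to the branch $B_{0000}\neq 0$, $D_{xy}\neq 0$: on that branch, by the factorization $\Delta_{|L=M=0}=6912D_{xy}Z$, the conditions $\Delta=0$ and $Z=0$ coincide. The orbit labels follow Fig.~\ref{OrbSec2}; this is an illustrative reading of the refined graph, not a full reimplementation of the algorithm.
\begin{verbatim}
def refine_p1_branch(psi, inv, tol=1e-12):
    """Refine the branch B_0000 != 0, D_xy != 0 with the new
    invariant Z = D_xy - B^3/27 (orbit labels as in the new
    inclusion diagram)."""
    B = inv["B0000"](psi)
    Dxy = inv["Dxy"](psi)
    Z = Dxy - B**3 / 27.0        # the new invariant
    if abs(Z) >= tol:
        return "65257"           # generic point of sigma_3(X)
    # Z = 0 (equivalently Delta = 0 on this branch): the new
    # subvariety represented by 6014; the orbit 59510 below it
    # is still recognized by the vanishing pattern V' = [0000].
    return "6014"
\end{verbatim}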
\begin{figure}[h] \begin{center} \begin{tikzpicture} \matrix (mat) [matrix of nodes,ampersand replacement=\&, row sep=50pt,column sep=10pt, left delimiter={.}, right delimiter={.}, nodes={minimum height=0.5cm,minimum width=1.5cm }]{ \& \& \& \& {\color{gray} 65257} \& \& \& \& \\ \bf 6014 \& \ $\Delta=0$ \& \color{gray} 65259 \&\color{gray} 65261 \ \& \& \color{gray}65513 \& \color{gray}65273 \& \&\color{gray} 59777\\ \color{gray} 59510 \& \& \& \& \& \& \& \& \\ \& \& \& \& \& \& \& \& \\ $Z=0$ \&\color{gray}65267 \& \color{gray}65509 \& \color{gray}65507 \& \& \color{gray}65269 \& \color{gray}65510 \& \color{gray}65231\& \\ }; \node (6014) [rectangle,rounded corners, inner sep=0pt, fit= (mat-2-1) (mat-2-1)] {}; \node (65257) [rectangle,rounded corners, inner sep=0pt, fit= (mat-1-5) (mat-1-5)] {}; \node (59510) [rectangle,rounded corners, inner sep=0pt, fit= (mat-3-1) (mat-3-1)] {}; \node (59777) [rectangle,rounded corners, inner sep=0pt, fit= (mat-2-9) (mat-2-9)] {}; \node (65259) [rectangle,rounded corners, inner sep=0pt, fit= (mat-2-3) (mat-2-3)] {}; \node (65261) [rectangle,rounded corners, inner sep=0pt, fit= (mat-2-4) (mat-2-4)] {}; \node (65513) [rectangle,rounded corners, inner sep=0pt, fit= (mat-2-6) (mat-2-6)] {}; \node (65273) [rectangle,rounded corners, inner sep=0pt, fit= (mat-2-7) (mat-2-7)] {}; \node (65267) [rectangle,rounded corners, inner sep=0pt, fit= (mat-5-2) (mat-5-2)] {}; \node (65509) [rectangle,rounded corners, inner sep=0pt, fit= (mat-5-3) (mat-5-3)] {}; \node (65507) [rectangle,rounded corners, inner sep=0pt, fit= (mat-5-4) (mat-5-4)] {}; \node (65269) [rectangle,rounded corners, inner sep=0pt, fit= (mat-5-6) (mat-5-6)] {}; \node (65510) [rectangle,rounded corners, inner sep=0pt, fit= (mat-5-7) (mat-5-7)] {}; \node (65231) [rectangle,rounded corners, inner sep=0pt, fit= (mat-5-8) (mat-5-8)] {}; \draw[dotted](65257)--(59777); \draw[thick,rounded corners=8pt](65257)--(6014); \draw[dotted](65257)--(65261); \draw[dotted](65257)--(65259); \draw[dotted](65257)--(65513); \draw[dotted](65257)--(65273); \draw[dotted] (65267)--(65259);\draw[dotted] (65267)--(65261); \draw[dotted] (65267)--(65513);\draw[dotted] (65267)--(65273); \draw[dotted] (65509)--(65259);\draw[dotted] (65509)--(65261);\draw[dotted] (65509)--(65513); \draw[dotted] (65509)--(65273); \draw[dotted] (65507)--(65259);\draw[dotted] (65507)--(65261);\draw[dotted] (65507)--(65513); \draw[dotted] (65507)--(65273); \draw[dotted] (65269)--(65259);\draw[dotted] (65269)--(65261);\draw[dotted] (65269)--(65513); \draw[dotted] (65269)--(65273); \draw[dotted] (65510)--(65259);\draw[dotted] (65510)--(65261);\draw[dotted] (65510)--(65513); \draw[dotted] (65510)--(65273); \draw[dotted] (65231)--(65259);\draw[dotted] (65231)--(65261);\draw[dotted] (65231)--(65513); \draw[dotted] (65231)--(65273); \draw[thick,rounded corners=8pt](6014)--(59510);% \node (Gr4) [rectangle,rounded corners,fill=gray,opacity=0.1, inner sep=0pt, fit= (mat-2-1) (mat-5-1)] {}; \node (Gr5) [rectangle,rounded corners,fill=gray,opacity=0.1, inner sep=0pt, fit= (mat-2-1) (mat-5-8)] {}; \end{tikzpicture} \end{center} \caption{New inclusion diagram of the varieties in the third secant variety \label{OrbSec2}} \end{figure} \section{Conclusion} We have investigated the geometry of four qubit systems using the knowledge of the algebra of covariant polynomials and the algebraic geometry of auxiliary varieties. 
Imposing conditions on the generators of the algebra of invariant polynomials, we have been able to describe precisely the so-called nullcone variety and the third secant variety. Our descriptions are geometric --- each entanglement pattern appears as an open subset of an algebraic variety --- and algorithmic --- each stratum is identified by the vanishing of specific covariants. Our motivations for studying the third secant variety of the four qubit systems were explained in our previous paper\cite{HLT}. The third secant variety is the next higher dimensional generalization of the GHZ-state. It is also a case of higher secant variety investigated by geometers\cite{Lan2,CGG}. The method (Method \ref{Meth}) applied in this paper gives a complete description of the nullcone as well as of specific strata of the third secant variety, given by the algorithm and completed by geometric analysis. This geometric atlas of the third secant variety is made of $18$ classes of non-nilpotent entangled states ($17$ found by the algorithm and $1$ by geometric considerations, corresponding to the intersection of the third secant variety with the zero locus of the hyperdeterminant) and $29$ nilpotent entangled states (the orbits of the nullcone without the set of separable states). The new classes of entanglement are all built by superposition of states --- starting from the set of separable states --- or by taking derivatives of curves lying on a variety defining a class of entanglement --- starting again from the set of separable states. This second construction amounts to building new states as limits of superpositions (Section \ref{tools}) and corresponds to exceptional states\cite{ST}. In our atlas, the subvarieties of the nullcone are all obtained by this second construction. For higher dimensional varieties, our method fails to be exhaustive. The representation $\mathcal{H}=\mathbb{C}^2\otimes\mathbb{C}^2\otimes\mathbb{C}^2\otimes \mathbb{C}^2$ of $G$ is {\em tame}, which means in particular that $G$ acts with finitely many orbits on the nullcone, but the number of $G$-orbits in $\mathcal{H}$ is infinite. This is already the case for the third secant variety. In this case our initial algorithm was not able to see the variety $\text{Osc}(X)$ (again, this variety generalizes the W-states). We had to modify our algorithm by geometric arguments to detect this variety. This suggests how to further investigate the geometry of the four qubit systems. If we relax the conditions on the vanishing of the invariants --- the conditions which here defined the nullcone and the third secant --- our algorithm will detect many more orbits, but may also miss some important geometric objects. For example, the study of the entanglement of the states satisfying $L=0$ will require a better geometric understanding of the $G$-varieties of $\mathbb{P}(\mathcal{H})$ in order to select the invariants for our algorithm. The choice of the conditions needs to be discussed. In particular, we point out the role of the invariants of the quartic form $$x_0^4-2B_{0000}x_0^3x_1+(B_{0000}^2+2L+4M)x_0^2x_1^2-4(B_{0000}(M+\frac12L)-D_{xy})x_0x_1^3+L^2x_1^4.$$ This polynomial has two interesting properties: first, when one evaluates it on the $G_{abcd}$ state \cite{VDMV}, its roots are $a^2,\,b^2,\,c^2$ and $d^2$; furthermore, its discriminant is $\Delta$. This work will be presented in a forthcoming paper\cite{HLT2}.\\ \noindent{\bf Acknowledgments}: This paper is partially supported by the PEPS-ICQ project COGIT (COmbinatoire et G\'eom\'etrie pour l'InTrication) of the CNRS.
\section{Introduction} Molecular clouds play a key role in star formation processes. Stars are born in the dense interiors of molecular clouds that form out of the atomic phase of the highly turbulent interstellar medium (ISM) \citep{1981MNRAS.194..809L,2012MNRAS.424.2599C,2014prpl.conf....3D,2014ApJ...790...10S,2016SAAS...43...85K}. Molecular clouds consist mainly of molecular hydrogen \citep{2003RPPh...66.1651L,2004RvMP...76..125M,2007ARA&A..45..565M,2014prpl.conf....3D}. However, the cloud formation process out of the diffuse atomic phase is still not well constrained. According to the standard photodissociation region (PDR) model, layers of cold atomic hydrogen can effectively shield the cloud from photo-dissociating UV radiation at sufficiently high densities, allowing a more complete conversion of \ion{H}{i} to its molecular form. The cold neutral medium (CNM) with temperatures of $\leq 300\rm\,K$ and volume densities of $10-100\rm\,cm^{-3}$ \citep{1977ApJ...218..148M,2003ApJ...586.1067H,2003ApJ...587..278W,2009ARA&A..47...27K} is thought, due to its relatively high density, to be a key component in the conversion process from diffuse atomic hydrogen to its molecular phase. Constraining the physical and dynamical properties of the CNM is therefore crucial to understand early cloud formation processes. The CNM is a major constituent of the ISM \citep[see e.g.,][]{2001RvMP...73.1031F,2003ApJ...586.1067H}. Even though the observation of the \ion{H}{i} 21cm line allows one to study the properties of atomic hydrogen in general, it is difficult to attribute certain properties to different components of \ion{H}{i}. In pressure equilibrium, atomic hydrogen can exist in different phases \citep[e.g.,][]{1977ApJ...218..148M,2003ApJ...587..278W}. Observations of \ion{H}{i} 21cm line emission are generally attributed to both warm neutral medium (WNM) and CNM. To separate the WNM from the CNM, we make use of the presence of \ion{H}{i} self-absorption \citep[HISA; see e.g.,][]{1972A&A....18...55R,1974AJ.....79..527K,1988A&A...201..311V,1993A&A...276..531F,2000ApJ...540..851G,2005ApJ...626..195G,2005ApJ...626..214G,2003ApJ...598.1048K,2018MNRAS.479.1465D,2020A&A...634A.139W} to trace the cold atomic phase. \ion{H}{i} self-absorption is found throughout the Milky Way in various environments. Since its first detection in 1954 \citep{1954AJ.....59..324H,1955ApJ...121..569H}, many studies have focused on HISA toward known sources, but statistical treatments of the kinematic properties and densities of the CNM in large-scale high-resolution maps are still rare. For HISA to be detected, sufficient background emission of warmer gas along the line of sight is required. Since the warm component of atomic hydrogen is more diffuse, it fills up a larger volume than the cold component \citep{1977ApJ...218..148M,2005fost.book.....S,2009ARA&A..47...27K}. \ion{H}{i} self-absorption occurs when a cold \ion{H}{i} cloud is located in front of a warmer \ion{H}{i} emitting cloud. Self-absorption can occur within the same cloud but can also be induced by an emitting cloud in the far background that has the same velocity as the absorbing medium with respect to the local standard of rest $v_{\rm LSR}$. Therefore, the clouds do not have to be spatially associated for HISA to be observable.
While absorption against strong continuum sources does yield a direct measurement of the optical depth, the discreteness of the sources only delivers an incomplete grid of optical depth measurements \citep[e.g.,][]{2020A&A...634A..83W}. The interpolation of optical depths across an entire \ion{H}{i} cloud is challenging. Therefore, the great advantage of HISA is that larger areas of cold atomic hydrogen can be mapped. Large filamentary gas structures, also known as Giant Molecular Filaments (GMFs), are suitable to study the CNM on large scales. These objects are the largest coherent structures found in the Milky Way and are the subject of many studies probing the physical properties of the Galactic ISM \citep{2010ApJ...719L.185J,2014ApJ...797...53G,2014A&A...568A..73R,2015ApJ...815...23Z,2018ApJ...864..153Z,2016A&A...590A.131A}. We study the hydrogen content of the giant molecular filament GMF20.0-17.9 \citep{2014A&A...568A..73R} by means of HISA as well as atomic and molecular line emission. We address the physical processes driving the kinematics of the CNM and the properties that lead to molecular cloud formation. GMF20.0-17.9 was already identified in part by \citet{2013A&A...550A.116T}. Furthermore, \citet{2015ApJ...815...23Z,2018ApJ...864..153Z} define a subsection of this filament as a ``bone'' of the Scutum-Centaurus (SC) spiral arm. GMF20.0-17.9 is characterized by grouping several infrared dark clouds (IRDCs) into a single structure that is velocity-coherent as traced by \element[ ][13]{CO} emission. Figure~\ref{fig:HI_moment0_with_regions} shows an overview of GMF20.0-17.9. Prominent IRDC features along the \element[][13]{CO} emission are visible in the Spitzer $8\rm\,\mu m$ image, in particular toward the western part of the filament. It furthermore shows features of stellar activity. GMF20.0-17.9 extends from $20.2^{\circ}$ to $17.6^{\circ}$ in Galactic longitude and \mbox{$+0.3^{\circ}$} to \mbox{$-0.7^{\circ}$} in Galactic latitude. At the computed kinematic near distance of 3.3--$3.7\rm\,kpc$, this corresponds to a projected length of $\sim$170$\rm\,pc$. \citet{2014A&A...568A..73R} associate the velocity range of $37-50\rm\,km\,s^{-1}$ with GMF20.0-17.9. The filament is near the midplane of the Galaxy, and the velocity of the lower longitude part at $\sim$18$^{\circ}$ agrees fairly well with that of the near SC spiral arm \citep{2008AJ....135.1301V,2014ApJ...783..130R,2019ApJ...885..131R}. However, the sense of the velocity gradient of GMF20.0-17.9 as defined by \citet{2014A&A...568A..73R} goes against the trend of the spiral arm structure. \citet{2015ApJ...815...23Z} argue that the bone at $19.2^{\circ}\gtrsim\ell\gtrsim 18.6^{\circ}$, $b\approx -0.1^{\circ}$ traces the spine of the SC spiral arm well. This discrepancy is attributed to the different methodologies for defining filaments, and the two pictures can be brought into agreement if only the lower longitude section of GMF20.0-17.9 is considered. The ATLASGAL survey \citep{2009A&A...504..415S} reveals several high-density clumps within GMF20.0-17.9, particularly in the western part of the filament. \citet{2019A&A...622A..52Z} identified young stellar object (YSO) populations within all currently known GMFs and derived a star formation rate of $\mathrm{SFR}=1.2\cdot10^3\rm\,M_{\sun}\,Myr^{-1}$ and a star formation efficiency of $\mathrm{SFE}=0.01$ for GMF20.0-17.9, which is consistent with SFEs of nearby star-forming regions \citep[see][and references therein]{2019A&A...622A..52Z}.
\section{Observations and methods} \subsection{\ion{H}{i} 21 cm line and continuum}\label{sec:methods_and_observation} The following analysis employed the \ion{H}{i} and 1.4 GHz continuum data from the THOR survey \citep[The \ion{H}{i}/OH Recombination line survey of the inner Milky Way;][]{2016A&A...595A..32B,2020A&A...634A..83W}. The \ion{H}{i} and 1.4$\rm\,GHz$ continuum data include observations from the Karl G. Jansky Very Large Array (VLA) in both C- and D-configuration as well as single-dish observations from the Green Bank Telescope (GBT) and Effelsberg, respectively, to recover missing flux on short $uv$ spacings. Depending on the purpose of the analysis, different data products were utilized. For the analysis of \ion{H}{i} emission and the subsequent identification of HISA features, the combined THOR \ion{H}{i} data (VLA C+D + GBT) without continuum were used. The final data have been smoothed to an angular resolution of $\Delta\Theta=40\arcsec$ for better brightness sensitivity that is required especially for studying HISA. The rms noise in emission-free channels is $\sim$5$\rm\,K$. The spectral resolution is $\Delta v=1.5\rm\,km\,s^{-1}$. The final THOR 1.4 GHz continuum emission data (VLA C+D + Effelsberg) have an angular resolution of $\Delta\Theta=25\arcsec$. Additionally, optical depths were derived from \ion{H}{i} absorption against strong continuum sources. For that purpose, THOR-only data that comprise \ion{H}{i} emission with continuum were used. THOR C-array-only data have a higher angular resolution, making them suitable to study absorption against discrete continuum sources. Since these data consist of observations from the VLA in C-array configuration only, large-scale \ion{H}{i} emission is effectively filtered out. The THOR-only data have an angular resolution of $\Delta\Theta\sim 20\arcsec$, depending slightly on Galactic longitude. For more details about the THOR data, we refer to the two data release papers by \citet{2016A&A...595A..32B} and \citet{2020A&A...634A..83W}. We used the Galactic Ring Survey \element[ ][13]{CO}(1--0) data \citep[GRS;][]{2006ApJS..163..145J} to investigate the kinematic properties of the molecular gas and estimate the \element[ ][13]{CO} and $\rm H_2$ column density. The GRS \element[ ][13]{CO} data have an angular and spectral resolution of $\Delta\Theta=46\arcsec$ and $\Delta v=0.21\rm\,km\,s^{-1}$, respectively. \subsection{\ion{H}{i} self-absorption (HISA) extraction}\label{sec:poly_fitting} In the following section, different methods to identify and extract HISA spectra from the \ion{H}{i} emission are discussed. Several approaches have been tested as the accurate extraction of HISA spectra poses a challenging task. \begin{figure*}[!htbp] \centering \includegraphics[width=1.0\textwidth]{Spitzer8mu_HI_moment0_HISA_paper.pdf} \caption[]{GMF20.0-17.9 overview. \textit{Top panel:} Spitzer GLIMPSE $8\rm\,\mu m$ image of GMF20.0-17.9 \citep{2009PASP..121..213C}. The color scale is chosen to bring IRDC features to prominence. \textit{Bottom panel:} \ion{H}{i} integrated emission for a small velocity interval from 44.5--$47.5\rm\,km\,s^{-1}$. The yellow and black contours show the integrated \element[ ][13]{CO} emission from $42$ to $57\,\rm km\,s^{-1}$ at the levels of $10.5\,\rm K\,km\,s^{-1}$ and $15\,\rm K\,km\,s^{-1}$, respectively. 
The yellow circles in the bottom panel mark the regions of the spectra shown in Fig.~\ref{fig:baseline_fits}.} \label{fig:HI_moment0_with_regions} \end{figure*} The random motion of individual \ion{H}{i} clouds, superposed on the Galactic rotation, contributes significantly to the broadening of the observed 21cm emission and creates multiple emission peaks as seen in Fig.~\ref{fig:whole_HI_spectrum}. \ion{H}{i} spectra show many features and \element[ ][13]{CO} line emission has to be inspected to search for HISA. Since we assume that the CNM is associated with cold molecular gas, we take local \element[ ][13]{CO} emission peaks as a reference point. We thus identify HISA by constraining these features kinematically. The \element[ ][13]{CO} emission peaks at different velocities are not associated with GMF20.0-17.9 as their velocities are attributed to neighboring spiral arm structures \citep[e.g.,][]{2008AJ....135.1301V,2014ApJ...783..130R,2019ApJ...885..131R}. For the analysis of the physical properties of HISA, we followed the derivation by \citet{2000ApJ...540..851G} and \citet{2020A&A...634A.139W}. A comprehensive discussion of the radiative transfer of HISA clouds is given in \citet{2000ApJ...540..851G}, \citet{2003ApJ...598.1048K}, and \citet{2003ApJ...585..823L}. Adopting the geometric model from \citet{2000ApJ...540..851G}, we identify four different cloud components when looking toward a HISA cloud, which we describe below. According to this model \citep[see Fig.~2 in][]{2020A&A...634A.139W}, we observe emitting foreground and background clouds that have spin temperatures of $T_{\mathrm{fg}}$ and $T_{\mathrm{bg}}$, respectively. Between these clouds a cold absorbing HISA cloud can be located, with a spin temperature of $T_{\mathrm{HISA}}$. Diffuse continuum emission, $T_{\mathrm{cont}}$, is assumed to be in the background. Strong continuum point sources will be neglected as they contaminate the absorption features that are caused by HISA. By comparing an ``$\mathrm{on}$'' spectrum, where a HISA cloud is located along the line of sight, with the ``$\mathrm{off}$'' spectrum that we would observe in the absence of the HISA cloud, we can derive the optical depth of the HISA component \citep[see e.g., Eq.~(6) in][]{2000ApJ...540..851G}, defined as \begin{equation} \tau_{\mathrm{HISA}} = -\mathrm{ln}\left(1-\frac{T_{\mathrm{on}}-T_{\mathrm{off}}}{T_{\mathrm{HISA}} - pT_{\mathrm{off}} - T_{\mathrm{cont}}}\right) \: , \label{equ:T_ON-T_OFF} \end{equation}{} \noindent with the dimensionless parameter $p\equiv T_{\mathrm{bg}}\,\left(1-e^{-\tau_{\mathrm{bg}}}\right)/T_{\mathrm{off}}$ describing the fraction of background emission in the optically thin limit \citep{1993A&A...276..531F}. Assuming a HISA spin temperature $T_s$ ($=T_{\rm HISA}$), we can then calculate the \ion{H}{i} column density of the cold \ion{H}{i} gas using the general form \citep{2013tra..book.....W} \begin{equation} \frac{N_{\mathrm{H}}}{\rm cm^{-2}} = 1.8224\times 10^{18}\,\, \frac{T_s}{\rm K}\,\int\tau\left(T_s,v\right)\,\left( \frac{\mathrm{d}v}{\rm km\,s^{-1}}\right) \: , \label{equ:HI_column_density} \end{equation}{} \noindent where $T_s$ is the spin temperature of atomic hydrogen and $\tau\left(T_s,v\right)$ describes the optical depth. \begin{figure}[!htbp] \centering \resizebox{\hsize}{!}{\includegraphics{HI_whole_spectrum_stepsmid_region12_paper.pdf}} \caption{\ion{H}{i}, HISA, and \element[][13]{CO} spectra. 
The black curve shows an example spectrum of \ion{H}{i} emission ($T_{\mathrm{on}}$) from $-$80$\,\rm km\,s^{-1}$ to $160\,\rm km\,s^{-1}$ averaged over an area of $180\arcsec\times180\arcsec$ centered at $\ell=19.9^{\circ}, b=-0.5^{\circ}$. The dashed red curve is a second-order polynomial fit ($T_{\mathrm{off}}$) to the absorption-free channels of the \ion{H}{i} spectrum at 33.5--$43.0\rm\,km\,s^{-1}$ and 56.0--$65.5\rm\,km\,s^{-1}$ (see Sect.~\ref{sec:poly_fitting}). We then estimated the HISA spectrum by subtracting the \ion{H}{i} spectrum from the fitted background emission. The GRS \element[ ][13]{CO} spectrum \citep{2006ApJS..163..145J} covering velocities from $-$5$\,\rm km\,s^{-1}$ to $135\,\rm km\,s^{-1}$ is shown in blue and has been multiplied by a factor of ten for better visibility.} \label{fig:whole_HI_spectrum} \end{figure}{} To reliably identify HISA features, it is crucial to know the emission in the absence of a HISA cloud. Many methods have been tested to estimate $T_{\mathrm{off}}$ in Eq.~\eqref{equ:T_ON-T_OFF} \citep[e.g.,][]{2000ApJ...540..851G,2003ApJ...598.1048K,2003ApJ...585..823L,2008ApJ...689..276K,2020A&A...634A.139W}. \citet{2020A&A...634A.139W} tested estimating the background spectrum $T_{\mathrm{off}}$ by measuring several $\mathrm{off}$ positions offset from apparent absorption features at slightly shifted lines of sight. Their spectra partly show large variations depending on the line of sight, so the assumption that the \ion{H}{i} background emission stays spatially constant does not really hold. Therefore, we refrain from selecting actual $\mathrm{off}$ positions to estimate $T_{\mathrm{off}}$. Instead, we estimate $T_{\mathrm{off}}$ for each line of sight by fitting the baselines of absorption features with polynomial functions. The fits reconstruct an $\mathrm{off}$ spectrum as if there were no absorption present. Several studies have successfully applied polynomial fitting procedures \citep{2003ApJ...598.1048K,2003ApJ...585..823L,2020A&A...634A.139W}. We extensively tested various methods to find an independent and systematic fitting procedure. Fitting the baselines with first and second order polynomial functions yielded the most robust results as these functions are not sensitive to small-scale fluctuations along the spectral axis. We therefore rebinned the spectral axis of \ion{H}{i} emission by a factor of two, which gave the best results for reconstructing $T_{\mathrm{off}}$, independent of the chosen velocities at which the baselines were fitted. Higher-order polynomial functions are prone to either over- or underestimating the background spectrum. As outlined below, we utilized a combination of first and second order polynomials in order to fit the baselines of HISA spectra. For the baseline fitting, we furthermore smoothed the \ion{H}{i} emission maps spatially to an angular resolution of $\Delta\Theta=80\arcsec$ to enhance the brightness sensitivity. Irrespective of the actual presence of \element[][13]{CO} emission at individual positions, every pixel spectrum is searched for HISA and fitted at the velocities $33.5-43.0\rm\,km\,s^{-1}$ and $56.0-65.5\rm\,km\,s^{-1}$, omitting the velocity range associated with GMF20.0-17.9. In the first cycle of the fitting procedure, all spectra are fitted with second order polynomial functions ($f(x)=ax^2+bx+c$). Spectra that are contaminated by continuum emission produce bad second order polynomial fits, with $a>0$. For those spectra, we used first order fits instead.
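For concreteness, the following NumPy sketch illustrates this baseline reconstruction and the subsequent application of Eqs.~\eqref{equ:T_ON-T_OFF} and \eqref{equ:HI_column_density}. The fitting windows and the fallback criterion $a>0$ are the ones described above, whereas the values of $T_{\mathrm{HISA}}$ ($=T_s$), $p$, and $T_{\mathrm{cont}}$ are illustrative placeholders that must be chosen per sightline and are not the values adopted in this work.
\begin{verbatim}
import numpy as np

def fit_baseline(v, T_on, windows=((33.5, 43.0), (56.0, 65.5))):
    """Reconstruct T_off by fitting the absorption-free channels
    (v in km/s, T_on in K). Falls back to a first-order polynomial
    if the quadratic coefficient is positive (continuum-contaminated
    sightline, see text)."""
    mask = np.zeros(v.shape, dtype=bool)
    for v_lo, v_hi in windows:
        mask |= (v >= v_lo) & (v <= v_hi)
    coeff = np.polyfit(v[mask], T_on[mask], deg=2)
    if coeff[0] > 0:                       # bad second-order fit
        coeff = np.polyfit(v[mask], T_on[mask], deg=1)
    return np.polyval(coeff, v)            # estimated T_off

def hisa_optical_depth(T_on, T_off, T_hisa=40.0, p=0.9, T_cont=0.0):
    """Eq. (1); channels with a non-positive log argument yield NaN."""
    return -np.log(1.0 - (T_on - T_off) / (T_hisa - p * T_off - T_cont))

def hisa_column_density(tau, T_s=40.0, dv=1.5):
    """Eq. (2): N_H in cm^-2 for spin temperature T_s (K) and
    channel width dv (km/s), summed over the valid channels."""
    return 1.8224e18 * T_s * np.nansum(tau) * dv
\end{verbatim}
The extracted HISA spectrum itself is then simply $T_{\mathrm{off}}-T_{\mathrm{on}}$ evaluated with the fitted baseline.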
Figure~\ref{fig:poly12_fit} presents a comparison between first and second order polynomial fits toward a position that is contaminated by diffuse continuum emission, which can contribute to the broadening of the absorption profile. \begin{figure}[!htbp] \centering \resizebox{\hsize}{!}{\includegraphics{HI_spectrum_stepsmid_region6_paper.pdf}} \caption{Comparison of baseline fits toward continuum emission. The black curve shows an example spectrum of \ion{H}{i} emission centered at $\ell=18.95^{\circ}, b=-0.03^{\circ}$ that is contaminated by continuum emission. The dashed magenta and green curves show a second- and a first-order polynomial fit, respectively, to the velocity channels of the \ion{H}{i} spectrum at 33.5--$43.0\rm\,km\,s^{-1}$ and 56.0--$65.5\rm\,km\,s^{-1}$. Due to the continuum contamination, the second order polynomial yields a bad fit to the HISA baseline.} \label{fig:poly12_fit} \end{figure}{} Figure~\ref{fig:baseline_fits} shows our baseline fitting procedure and extracted HISA spectra from example regions of GMF20.0-17.9. The example regions have been selected based on the visual inspection of the \ion{H}{i} and \element[][13]{CO} emission maps. The spectra show the cases of HISA with strong, weak, and no molecular counterparts, as well as no HISA at all. The final HISA maps were inferred by subtracting the native THOR \ion{H}{i} emission with a spatial and spectral resolution of 40\arcsec and $1.5\rm\,km\,s^{-1}$, respectively, from the fitted baselines. The rms noise of the extracted HISA spectra is $\sim$8$\rm\,K$ and arises from the noise of the observations and the uncertainty of the fitting procedure. We discuss these uncertainties in Appendix~\ref{sec:discussion_extraction_method}. Using this approach, we are biased in the search for HISA since we utilize \element[ ][13]{CO} velocities to constrain the velocities of extracted HISA features. We lack a systematic approach to find HISA independently of molecular line emission. At the spectral resolution of $1.5\rm\,km\,s^{-1}$, we are not able to detect narrow self-absorption features \citep[HINSA;][]{2003ApJ...585..823L,2008ApJ...689..276K} that can be identified through line profile characteristics, such as the line width and the second derivative of the absorption feature. \ion{H}{i} self-absorption features with line widths $\geq 1\rm\,km\,s^{-1}$ are difficult to differentiate from emission troughs. The kinematic information of molecular line emission is therefore crucial in our analysis. \begin{figure}[!htbp] \centering \resizebox{\hsize}{!}{\includegraphics{HISA_region_all_paper.pdf}} \caption[]{\ion{H}{i} ($T_{\mathrm{on}}$) and extracted HISA spectra ($T_{\mathrm{off}}-T_{\mathrm{on}}$) toward the regions marked by the yellow circles in the bottom panel of Fig.~\ref{fig:HI_moment0_with_regions}. The colors are the same as described in Fig.~\ref{fig:whole_HI_spectrum}.} \label{fig:baseline_fits} \end{figure} \section{Results} \subsection{\ion{H}{i} self-absorption} In order to compare the kinematics in a statistical sense, we regridded the HISA data to the same pixel scale as the \element[][13]{CO} GRS data. The properties and kinematics of the CNM were analyzed by fitting single Gaussian components to the HISA spectra. Due to the limited velocity resolution, it is not feasible to resolve multiple HISA components between $43$ and $56\rm\,km\,s^{-1}$. Fits that have a peak intensity of $>25\rm\, K$ ($\sim$3$\sigma$) and a line width between $1.5$ and $20\rm\,km\,s^{-1}$ (FWHM) are considered good.
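A minimal sketch of this single-component fitting, including the quality cuts just quoted, is given below; the initial guesses are a simple heuristic choice and not necessarily the ones used in our pipeline.
\begin{verbatim}
import numpy as np
from scipy.optimize import curve_fit

SIGMA_PER_FWHM = 1.0 / (2.0 * np.sqrt(2.0 * np.log(2.0)))

def gaussian(v, amp, v0, sigma):
    return amp * np.exp(-0.5 * ((v - v0) / sigma) ** 2)

def fit_single_component(v, T, peak_min=25.0, fwhm_lim=(1.5, 20.0)):
    """Fit one Gaussian to a HISA spectrum; keep the fit only if the
    peak exceeds peak_min (~3 sigma) and the FWHM lies in fwhm_lim."""
    guess = (np.nanmax(T), v[np.nanargmax(T)], 3.0 * SIGMA_PER_FWHM)
    try:
        popt, _ = curve_fit(gaussian, v, T, p0=guess)
    except RuntimeError:          # fit did not converge
        return None
    amp, v0, sigma = popt
    fwhm = abs(sigma) / SIGMA_PER_FWHM
    if amp > peak_min and fwhm_lim[0] < fwhm < fwhm_lim[1]:
        return amp, v0, fwhm
    return None
\end{verbatim}
The same routine applies to the \element[ ][13]{CO} spectra with the corresponding threshold, {\tt peak\_min=1.25} (see Sect.~3.2).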
The fitted peak values of the extracted HISA spectra are shown in Fig.~\ref{fig:HISA_gauss_amp}. The derived HISA peaks have intensities between $\sim$30$\rm\,K$ and $\sim$70$\rm\,K$. By comparing the inferred HISA features with the molecular gas emission, the filament can be separated into two subregions (see Fig.~\ref{fig:HISA_gauss_amp}). The western part of the filament ($19.3^{\circ}\gtrsim\ell\gtrsim 17.9^{\circ}$) shows good spatial correlation between HISA and \element[ ][13]{CO} as the cold atomic gas is expected to be closely associated with its molecular counterpart. We assess the spatial correlation quantitatively in Sect.~\ref{sec:HOG} to confirm this finding. However, the eastern part of the filament ($20.5^{\circ}\gtrsim\ell\gtrsim 19.5^{\circ}$) shows significant HISA that does not spatially overlap with the \element[ ][13]{CO} emission at the velocities around $\sim$45$\rm\,km\,s^{-1}$. On the eastern side of the cloud, the CNM as traced by HISA appears to envelop the denser molecular filament. The extracted features indicate the presence of a cold \ion{H}{i} cloud as the velocities generally agree with the molecular gas (Fig.~\ref{fig:peak_velocity_map}). Furthermore, optical depth measurements against bright continuum sources reveal high optical depths in the same velocity regime (Sect.~\ref{sec:HI_emission_optical_depth}). This underlines the robustness of the extraction method. We examine the two subregions separately in the following analysis. \begin{figure*}[!htbp] \centering \includegraphics[width=1.0\textwidth]{HISA_gauss_amp_37-52_paper.pdf} \caption[]{Peak values of single-component Gaussian fits applied to the extracted HISA spectra. The black contours represent the integrated \element[ ][13]{CO} emission from 42 to $57\rm\,km\,s^{-1}$ at levels of $10.5$ and $15\,\rm K\,km\,s^{-1}$. The red dashed polygons define the eastern and western part of the filament. The yellow circles mark the regions of the spectra shown in Fig.~\ref{fig:baseline_fits}.} \label{fig:HISA_gauss_amp} \end{figure*}
\subsection{Kinematics}
We smoothed the \element[ ][13]{CO} spectra to a spectral resolution of $1.5\rm\,km\,s^{-1}$ and applied single-component Gaussian fitting to be consistent in our analysis. Emission features with a peak intensity of $>1.25\rm\,K$ ($\sim$5$\sigma$) and a line width $1.5\rm\,km\,s^{-1}<\mathrm{FWHM}<20\rm\,km\,s^{-1}$ are considered to be good fits. The peak velocity maps of HISA and \element[ ][13]{CO} are presented in Fig.~\ref{fig:peak_velocity_map}. The peak velocities of HISA in the eastern part of the filament show values of $\sim$44--46$\rm\,km\,s^{-1}$. The western part reveals slightly higher peak velocities from $\sim$45 to $\sim$49$\rm\,km\,s^{-1}$. The peak velocities of \element[ ][13]{CO} show a coherent distribution along the filament \citep{2014A&A...568A..73R}. \begin{figure*}[!htbp] \centering \includegraphics[width=1.0\textwidth]{HISA_gauss_velocity_HISA_CO_37-52_paper.pdf} \caption[]{Peak velocity maps. The upper panel presents the HISA peak velocities inferred from Gaussian fits. The lower panel shows the fitted \element[ ][13]{CO} peak velocities. The black contours indicate the integrated \element[ ][13]{CO} emission at levels of $10.5$ and $15\rm\,K\,km\,s^{-1}$ for reference.
The red dashed polygons mark the eastern and western part of the filament that were used for a separate analysis.} \label{fig:peak_velocity_map} \end{figure*}
Although there are slight systematic differences in peak velocity at some positions, the medians of the peak velocity histograms reveal good agreement between \ion{H}{i} and \element[][13]{CO} emission in both the eastern and western regions (Fig.~\ref{fig:histogram_peak_velocity}). The similar velocities are a confirmation that the extracted HISA structures are trustworthy, even though HISA and \element[ ][13]{CO} show a lower degree of line-of-sight correlation in the eastern part of the filament. \begin{figure*}[!htbp] \centering \includegraphics[width=1.0\textwidth]{histogram_peak_velocities_paper.pdf} \caption[]{Histogram of peak velocities. The histograms show the peak velocities of HISA and \element[ ][13]{CO} in black and blue, respectively. The middle and right panels show the velocity distribution in the regions marked by the left and right polygon in Fig.~\ref{fig:peak_velocity_map}, respectively.} \label{fig:histogram_peak_velocity} \end{figure*}
The HISA structures in the northern part of the eastern region reveal large line widths of $\sim$8--$10\rm\,km\,s^{-1}$ (Fig.~\ref{fig:HISA_gauss_linewidths}). The bulk of HISA south of the \element[ ][13]{CO} contours shows line widths of $3$--$6\rm\,km\,s^{-1}$. Possible implications of this line width enhancement are discussed in Sect.~\ref{sec:discussion_kinematics}. \begin{figure*}[!htbp] \centering \includegraphics[width=1.0\textwidth]{HISA_gauss_linewidth_HISA_CO_37-52_paper.pdf} \caption[]{Line width maps. The upper panel presents the HISA line widths inferred from Gaussian fits. The lower panel shows the fitted \element[ ][13]{CO} line widths. The black contours indicate the integrated \element[ ][13]{CO} emission at levels of $10.5$ and $15\rm\,K\,km\,s^{-1}$ for reference. The red dashed polygons mark the eastern and western part of the filament that were used for a separate analysis.} \label{fig:HISA_gauss_linewidths} \end{figure*}
The \element[ ][13]{CO} line widths are $\sim$2--$3\rm\,km\,s^{-1}$ in the western part and slightly higher in the eastern part (Fig.~\ref{fig:histogram_linewidths}). Assuming a kinetic temperature, we can estimate the expected thermal line width. In local thermodynamic equilibrium (LTE), the thermal line width (FWHM) is given by $\Delta v_{\mathrm{th}} = \sqrt{8\ln 2\,k_BT_k/(\mu m_{\rm H})}$, where $k_B$, $T_k$, and $\mu$ are the Boltzmann constant, the kinetic temperature, and the mean molecular weight of the observed species (\ion{H}{i} or the CO molecule) in units of the mass of a hydrogen atom $m_{\rm H}$, respectively. If different line broadening effects are uncorrelated, the total observed line width will be \begin{equation} \Delta v_{\mathrm{obs}} = \sqrt{\Delta v_{\mathrm{th}}^2 + \Delta v_{\mathrm{nth}}^2 + \Delta v_{\mathrm{res}}^2} \: , \end{equation}
\noindent where $\Delta v_{\mathrm{nth}}$ is the line width due to nonthermal effects and $\Delta v_{\mathrm{res}}$ is the line width introduced by our spectral resolution and is equal to $1.5\rm\,km\,s^{-1}$. Even at the lower end of the distribution, at $2$--$3\rm\,km\,s^{-1}$, the observed \element[ ][13]{CO} line widths cannot be explained by thermal line broadening. Effects such as turbulent motions are most likely the dominant driver for the broadening of the \element[ ][13]{CO} line. More than 70\% of the observed HISA line widths are $\geq 3\rm\,km\,s^{-1}$.
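As a worked example of this decomposition (and of the Mach number relation introduced in the following paragraphs), consider the sketch below. The temperatures and mean molecular weights are the values adopted in this work, while the observed HISA line width of $4\rm\,km\,s^{-1}$ is purely illustrative.
\begin{verbatim}
import numpy as np

k_B, m_H = 1.380649e-23, 1.6735575e-27   # J/K, kg
eight_ln2 = 8.0 * np.log(2.0)

def dv_thermal(T_k, mu):
    """Thermal FWHM in m/s."""
    return np.sqrt(eight_ln2 * k_B * T_k / (mu * m_H))

dv_res = 1.5e3                            # spectral resolution (m/s)

# 13CO (mu = 29): thermal FWHM at T_k = 20 K is only ~0.18 km/s,
# so the observed 2-3 km/s line widths must be mostly nonthermal.
print(dv_thermal(20.0, 29.0) / 1e3)

# HISA: illustrative observed FWHM of 4 km/s at T_k = 40 K, mu = 1.27
dv_obs = 4.0e3
dv_th = dv_thermal(40.0, 1.27)            # ~1.2 km/s
dv_nth = np.sqrt(dv_obs**2 - dv_th**2 - dv_res**2)
sigma_nth = dv_nth / np.sqrt(eight_ln2)   # 1D velocity dispersion
c_s = np.sqrt(k_B * 40.0 / (1.27 * m_H))  # isothermal sound speed
print(np.sqrt(3.0) * sigma_nth / c_s)     # Mach number, ~5 here
\end{verbatim}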
\begin{figure*}[!htbp] \centering \includegraphics[width=1.0\textwidth]{histogram_linewidths_paper.pdf} \caption[]{Histogram of line widths. The histograms show the line widths of HISA and \element[ ][13]{CO} in black and blue, respectively. The middle and right panels show the line width distribution in the regions marked by the left and right polygon in Fig.~\ref{fig:HISA_gauss_linewidths}, respectively.} \label{fig:histogram_linewidths} \end{figure*}
We can investigate the three-dimensional Mach number of the filament by assuming isotropic turbulence, $\mathcal{M}=\sqrt{3}\,\sigma_{\mathrm{nth}}/c_s$, where $\sigma_{\mathrm{nth}}$ is the nonthermal one-dimensional velocity dispersion that is related to the nonthermal line width via $\Delta v_{\mathrm{nth}}=\sqrt{8\ln 2}\,\sigma_{\mathrm{nth}}$. The sound speed $c_s$ is estimated using a mean molecular weight $\mu=2.34$ for the molecular gas and $\mu=1.27$ for the cold \ion{H}{i} phase \citep{1973asqu.book.....A,2000asqu.book.....C}. To calculate the thermal component of the velocity dispersion, we assumed the spin temperature of the cold atomic hydrogen to be close to the kinetic temperature and set $T_k=T_{\mathrm{HISA}}=40\rm\,K$. As we find \element[][13]{CO} excitation temperatures as high as $\sim$25$\rm\,K$ where the line is becoming optically thick (see Sect.~\ref{sec:CO_column_dens}), we assumed that the actual kinetic temperature of \element[][13]{CO} is close to the excitation temperature in those regions, meaning these regions are dense and in LTE. We therefore set a uniform kinetic temperature of $T_k=20\rm\,K$ for the Mach number estimates of \element[][13]{CO}. \begin{figure}[!htbp] \centering \resizebox{\hsize}{!}{\includegraphics{histogram_mach_number_HISA_13CO_paper.pdf}} \caption[]{Distribution of the turbulent Mach number across the whole filament. The cold atomic hydrogen traced by HISA is shown in black. The Mach numbers of the molecular gas seen in \element[][13]{CO} emission are shown in blue. The green and gray dashed distributions show the \element[][13]{CO} Mach numbers if the \element[][13]{CO} line width is on average overestimated by 30\% or by a factor of $\sqrt{3}$, respectively.} \label{fig:mach_number} \end{figure}
Figure~\ref{fig:mach_number} shows that the Mach number of the CNM traced by HISA peaks at $\sim$3, indicating that a significant fraction of the CNM has transonic and supersonic velocities. Furthermore, there is an indication of a shoulder at $\mathcal{M}\sim6$. The Mach numbers of \element[][13]{CO} show a broad distribution and are dominated by supersonic motions. The distribution is slightly skewed toward higher Mach numbers as we observe multiple \element[][13]{CO} components between 43 and $56\rm\,km\,s^{-1}$ in some regions. Consequently, fitting single Gaussian components results in overly broad line widths where we observe multiple velocity components. Hence, the nonthermal velocity dispersion and therefore the Mach number are systematically overestimated. If we utilize a single Gaussian component fit on the GRS spectra at the full spectral resolution of $0.21\rm\,km\,s^{-1}$, the \element[][13]{CO} Mach number distribution does not significantly change. Therefore, the spectral smoothing has a negligible effect on the derivation of the Mach numbers if we fit single components. To address the uncertainty of possible component blending, we assumed that the \element[][13]{CO} line width is on average systematically overestimated by 30\% due to the single-component fitting.
The Mach number distribution is then shifted to lower values with a more pronounced peak (see Fig.~\ref{fig:mach_number}). Furthermore, if the filament lacks spatial isotropy, we could be overestimating the Mach number by a factor as high as $\sqrt{3}$, which would lead to a distribution with a median of $\mathcal{M}\sim6$ (Fig.~\ref{fig:mach_number}).
\subsection{Column density and mass}
\subsubsection{CNM column density traced by HISA}
We calculated the column density of the CNM traced by HISA following Eqs.~\eqref{equ:T_ON-T_OFF} and \eqref{equ:HI_column_density}. We therefore have to assume either an optical depth or a spin temperature. As we know that HISA traces the coldest component of atomic hydrogen, we assumed a constant spin temperature of $T_S=40\rm\,K$ for the whole cloud to calculate the column density. This is a typical spin temperature of cold self-absorbing \ion{H}{i} clouds \citep[e.g.,][]{1974AJ.....79..527K,2000ApJ...540..851G,2003ApJ...586.1067H}. We emphasize that a constant spin temperature is an approximation that might not hold for every region of the cloud. However, the maximum spin temperature is constrained in Appendix~\ref{sec:discussion_max_spin_temperature} and the actual temperature variation should be moderate. Different spin temperatures (if constant over the whole cloud) will not change the structure of the column density distribution in the cloud but only change the normalization factor. Furthermore, we have to assume the fraction of background emission parameterized by the factor $p$ (Eq.~\ref{equ:T_ON-T_OFF}). Although we cannot measure this parameter directly, we can constrain $p$ by its effect on the spin temperature and the location of the cloud. Because of the cloud's location toward the inner Galactic plane ($\ell\sim 19^{\circ}$) and its distance of $\sim$3.5$\rm\,kpc$ \citep{2014A&A...568A..73R}, it is unlikely that most of the \ion{H}{i} emission originates in the foreground. The fraction of background emission should therefore be at least $p\gtrsim 0.5$. Since self-absorption can also be induced by \ion{H}{i} emission from the far side of the Galaxy due to the kinematic distance ambiguity, the background fraction $p$ should be systematically higher than the foreground emission fraction. Therefore, we assumed a background fraction of $p=0.9$ and discuss its uncertainties in Appendix~\ref{sec:discussion_background_fraction}. \citet{2020A&A...634A.139W} assumed the same background fraction for their HISA analysis of the giant molecular filament GMF38.1-32.4a \citep{2014A&A...568A..73R}. Furthermore, \citet{2000ApJ...540..851G} argue that the HISA detection is biased toward higher $p$ values since a high background fraction is more efficient in producing prominent HISA features.
\subsubsection{Molecular gas column density traced by \element[ ][13]{CO}}\label{sec:CO_column_dens}
In the optically thin limit, the \element[ ][13]{CO} column density is computed by \citep{2013tra..book.....W} \begin{equation} N(\element[ ][13]{CO}) = 3.0\times 10^{14}\,\frac{\int T_B(v)\,\mathrm{d}v}{1-\exp(-5.3/T_{\mathrm{ex}})} \: , \label{equ:N_CO} \end{equation}
\noindent where $N(\element[ ][13]{CO})$ is the column density of \element[][13]{CO} molecules in $\rm cm^{-2}$, $\mathrm{d}v$ is in units of $\rm km\,s^{-1}$, $T_B$ and $T_{\mathrm{ex}}$ are the brightness temperature and excitation temperature of the \element[ ][13]{CO} line in units of Kelvin, respectively.
By assuming that the excitation temperatures of \element[ ][12]{CO} and \element[ ][13]{CO} are the same in LTE, we derived the excitation temperature from \element[ ][12]{CO}(1-0) emission data of the FOREST Unbiased Galactic plane Imaging survey with the Nobeyama 45m telescope \citep[FUGIN;][]{2017PASJ...69...78U}, using \citep{2013tra..book.....W} \begin{equation} T_{\mathrm{ex}} = 5.5\cdot\left[\ln\left(1+\frac{5.5}{T_B^{12}+0.82}\right)\right]^{-1} \: , \end{equation}
\noindent where $T_B^{12}$ is the brightness temperature of the \element[][12]{CO} line in units of Kelvin. The FUGIN \element[][12]{CO} data have an angular and spectral resolution of $\Delta\Theta=20\arcsec$ and $\Delta v=1.3\rm\,km\,s^{-1}$, respectively. To calculate the excitation temperature, we reprojected the data cube on the same spatial and spectral grid as the GRS \element[][13]{CO} data of GMF20.0-17.9. We find a lower limit to the excitation temperature of $5\rm\,K$ for regions where the \element[ ][12]{CO} brightness temperature reaches the $5\sigma$ level ($2\rm\,K$). We can then derive the optical depth of the \element[][13]{CO} line from the excitation and brightness temperature, using \citep[see e.g.,][]{2013tra..book.....W,2016A&A...587A..74S} \begin{equation} \tau = -\ln\left[ 1- \frac{T_B}{5.3}\cdot\left(\left[\exp\left(\frac{5.3}{T_{\mathrm{ex}}}\right)-1\right]^{-1}-0.16\right)^{-1}\right] \: . \end{equation}
\noindent We estimate a lower limit of the optical depth of $\tau\sim 0.06$ for \element[ ][13]{CO} brightness temperatures above $1.25\rm\,K$ ($\sim$5$\sigma$) and the highest excitation temperatures we find ($\sim$25$\rm\,K$). Hence, we set the optical depth to $\tau=0.06$ in regions where $\tau<0.06$. Only a few positions show optical depths as high as $\tau\sim 2$. We employ a correction factor to compensate for high optical depth effects by replacing the integral in Eq.~\eqref{equ:N_CO} with \citep{1982ApJ...262..590F,1999ApJ...517..209G} \begin{equation} \int T_B(v)\,\mathrm{d}v \rightarrow \frac{\tau}{1-e^{-\tau}}\,\int T_B(v)\,\mathrm{d}v \: . \end{equation}
\noindent This correction factor is accurate to 15\% for $\tau<2$. To translate the \element[ ][13]{CO} column density into a column density of molecular hydrogen, we first estimated the relative abundance of \element[ ][12]{CO} with respect to \element[ ][13]{CO}. \citet{2005ApJ...634.1126M} and \citet{2014A&A...570A..65G} derived relative abundance relations based on different CO isotopologs and metallicities. At the Galactocentric radius of $D_{GC}=5.0\,\rm kpc$ \citep{2014A&A...568A..73R}, these relations give [\element[ ][12]{CO}]/[\element[ ][13]{CO}] abundances between 40 and 56. Given the large uncertainty of these numbers, we chose a canonical conversion factor of 45. The relative abundance of the main isotopolog \element[ ][12]{CO} compared to molecular hydrogen is given in \citet{2012MNRAS.423.2342F} who derive an $\rm H_2$ abundance with respect to \element[ ][12]{CO} of $X_{\element[ ][12]{CO}}^{-1}=7500$. Therefore, we adopted a conversion factor of $[\rm H_2]/[\element[ ][13]{CO}]=3.4\times 10^5$. The derived $\rm H_2$ column densities have uncertainties of at least a factor of two due to the large uncertainties in these relations. Furthermore, CO might not always be a good tracer of $\rm H_2$ as "CO-dark $\rm H_2$" could account for a significant fraction of the total $\rm H_2$ \citep{2008ApJ...679..481P,2009ApJ...692...91G,2013A&A...554A.103P,2014MNRAS.441.1628S}.
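The chain from brightness temperatures to $N$(\element[][13]{CO}) and $N(\rm H_2)$ can be summarized by the following schematic (array names and cube orientation are placeholders; the expressions are the ones given above):
\begin{verbatim}
import numpy as np

def excitation_temperature(T_B12):
    """T_ex (K) from the 12CO(1-0) peak brightness temperature."""
    return 5.5 / np.log(1.0 + 5.5 / (T_B12 + 0.82))

def tau_13co(T_B13, T_ex):
    """13CO optical depth, floored at 0.06 as described in the text."""
    tau = -np.log(1.0 - (T_B13 / 5.3)
                  / (1.0 / (np.exp(5.3 / T_ex) - 1.0) - 0.16))
    return np.maximum(tau, 0.06)

def n_13co(T_B13_cube, dv, T_ex, tau):
    """N(13CO) in cm^-2; spectral axis first, dv in km/s."""
    integral = np.sum(T_B13_cube, axis=0) * dv      # K km/s
    integral *= tau / (1.0 - np.exp(-tau))          # opacity correction
    return 3.0e14 * integral / (1.0 - np.exp(-5.3 / T_ex))

# N(H2) = 3.4e5 * N(13CO), from [12CO]/[13CO] = 45 and X(12CO)^-1 = 7500
\end{verbatim}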
For the derivation of the column densities we integrated over the whole velocity range between $43$ and $56\rm\,km\,s^{-1}$ where we find HISA. The \ion{H}{i} (CNM) and $\rm H_2$ column densities derived from HISA and \element[ ][13]{CO}, respectively, are presented in the middle and bottom panels of Fig.~\ref{fig:column_density_map}. The column densities are partly correlated in the western part of the filament, but the strongest cold \ion{H}{i} column density peaks in the eastern part ($\ell\sim 20^{\circ}$, $b\sim +0.2^{\circ}$) do not show an $\rm H_2$ column density counterpart. The strongest $\rm H_2$ column density peak in the western part ($\ell\sim 18.1^{\circ}$, $b\sim -0.3^{\circ}$) reveals little \ion{H}{i} column density but coincides with continuum emission. Continuum emission contaminates self-absorption features and hence makes it difficult to measure HISA. Most locations that are associated with continuum emission do not show HISA counterparts. However, we can measure the optical depth toward strong continuum emission sources and thus constrain the spin temperature of the HISA cloud. This is addressed in the following subsection. \begin{figure*}[!htbp] \centering \includegraphics[width=1.0\textwidth]{column_density_map_HI_HISA_H2_overview_paper.pdf} \caption[]{\textit{Top panel:} \ion{H}{i} column densities of the combined WNM and CNM seen in \ion{H}{i} emission. The column densities were corrected for optical depth, weak diffuse continuum emission, and the kinematic distance ambiguity. The optical depth is measured toward the continuum source G19.075-0.287 and is applied to correct the column densities throughout the whole cloud (see Sect.~\ref{sec:HI_emission_optical_depth}). The position of G19.075-0.287 is indicated by the green cross. \textit{Middle panel:} The \ion{H}{i} column densities of the CNM inferred from HISA features assuming a spin temperature of $T_{\mathrm{HISA}}=40\rm\,K$ and a background fraction of $p=0.9$. \textit{Bottom panel:} $\rm H_2$ column densities inferred from \element[ ][13]{CO} as a tracer. We assumed an $\rm H_2/\element[][13]{CO}$ ratio of $3.4\times 10^5$. The red dashed polygons in each panel indicate the eastern and western part of the filament that are analyzed separately. The white and black contours indicate the column density thresholds that were used for the derivation of the column density probability density functions (see Sect.~\ref{sec:N-PDFs}).} \label{fig:column_density_map} \end{figure*}
\subsubsection{Atomic gas column density seen in \ion{H}{i} emission}\label{sec:HI_emission_optical_depth}
In addition to HISA, we investigated the properties of atomic hydrogen (WNM+CNM) by measuring the column density from \ion{H}{i} emission and correcting for optical depth effects and diffuse continuum. We can utilize the measurement of the optical depth to constrain the spin temperature of the cold atomic hydrogen (see Appendix~\ref{sec:discussion_max_spin_temperature}). Further details of optical depth and column density corrections are given in \citet{2015A&A...580A.112B} and \citet{2020A&A...634A.139W}. As the optically thin assumption might not hold for some regions, we can utilize strong continuum emission sources to directly measure the optical depth. \ion{H}{i} continuum absorption (HICA) is a classical method to derive the properties of the CNM \citep[e.g.,][]{2004ApJ...603..560S,2003ApJ...586.1067H}.
This method employs strong continuum sources, such as Galactic \ion{H}{ii} regions or active galactic nuclei (AGNs), to measure the optical depth of \ion{H}{i}. As these sources have brightness temperatures that are larger than typical spin temperatures of cold \ion{H}{i} clouds ($T_s\sim 100\rm\,K$), we observe the \ion{H}{i} cloud in absorption. The absorption feature is furthermore dominated by the CNM since the absorption is proportional to $T_s^{-1}$. By measuring $\mathrm{on}$ and $\mathrm{off}$ positions, we can directly compute the optical depth of \ion{H}{i}. The optical depth is given by \citep[see][]{2015A&A...580A.112B,2020A&A...634A.139W} \begin{equation} \tau = -\ln\left(\frac{T_{\mathrm{on}}-T_{\mathrm{off}}}{T_{\mathrm{cont}}}\right) \: , \label{equ:HICA_tau} \end{equation}
\noindent where $T_{\mathrm{on}}$ and $T_{\mathrm{off}}$ are the \ion{H}{i} brightness temperatures toward a strong continuum background source and offset from the source, respectively. The brightness temperature $T_{\mathrm{cont}}$ describes the continuum level of the background source that is not affected by \ion{H}{i} absorption. The advantage of this method is the direct measurement of the optical depth. However, the HICA method requires strong continuum emission sources. As most strong continuum sources are discrete point sources, this method results in an incomplete census of optical depth measurements \citep[see][for a compilation of all optical depth measurements in the THOR survey]{2020A&A...634A..83W}. Consequently, the intrinsic structure of individual \ion{H}{i} clouds cannot be determined. Some continuum emission sources also show extended structures. Finding reliable $\mathrm{off}$ positions can therefore be difficult. As we exploited THOR-only (VLA C-configuration) data for this measurement, most large-scale \ion{H}{i} emission is filtered out. The THOR-only data reveal \ion{H}{i} emission of less than $30\rm\,K$, often just within the noise. Therefore, we can neglect the emission of the \ion{H}{i} cloud in Eq.~\eqref{equ:HICA_tau} and set $T_{\mathrm{off}}=0$. We can then calculate the optical depth without measuring an $\mathrm{off}$ position by \begin{equation} \tau_{\mathrm{simplified}} = -\ln\left(\frac{T_{\mathrm{on}}}{T_{\mathrm{cont}}}\right) \: . \label{equ:HICA_tau_simplified} \end{equation}
\noindent Depending on the brightness of the continuum source and the \ion{H}{i} optical depth, the absorption spectrum can approach zero. Due to the noise, the spectra can exhibit brightness temperatures below zero, which is not physically meaningful. We therefore report a lower limit for the optical depth where the absorption $T_{\mathrm{on}}$ becomes smaller than $5\sigma$. Besides strong continuum sources we observe weak continuum emission throughout the Galactic plane. This component has brightness temperatures between $10$ and $50\rm\,K$. For the derivation of the \ion{H}{i} column densities we employed the combined THOR data as in the case of HISA. The continuum emission has been subtracted during data reduction as described in Sect.~\ref{sec:methods_and_observation}. However, even weak continuum emission can still influence the observed brightness temperature. If we neglect weak continuum emission, the measured \ion{H}{i} emission will be underestimated as weak continuum emission can suppress a significant fraction of \ion{H}{i} emission \citep[e.g.,][]{2015A&A...580A.112B}.
Consequently, the derived \ion{H}{i} column densities will be underestimated. We took this effect into account when computing the \ion{H}{i} column density \citep[see][Eq.~9]{2015A&A...580A.112B}. In contrast to \citet{2020A&A...634A..83W}, who used a $6\sigma$ threshold to select continuum sources, we measured the optical depth of atomic hydrogen toward the brightest continuum sources with brightness temperatures $T_{\rm cont}>200\rm\,K$ to avoid low saturation limits, since we expect the optical depth to be high. Four sources were identified above this threshold. The measured optical depths of these sources vary between $0.5$ and $2.5$ (lower limit). We selected the continuum source G19.075-0.287 \citep{2018A&A...619A.124W} as a representative source for the optical depth as it does not reach the $5\sigma$ saturation limit at most velocities between 43 and $56\rm\,km\,s^{-1}$, which gives a mean optical depth of $\tau\sim 0.9$ (Fig.~\ref{fig:T_spin_tau}). This is a reasonable approximation as the optical depth map derived by \citet{2020A&A...634A..83W} gives a mean optical depth of $\sim$1.0 when averaged over the whole filament. However, this optical depth measurement is a lower limit as G19.075-0.287 is a Galactic \ion{H}{ii} region located in the Galactic plane. As no strong extragalactic continuum sources are identified toward the position of GMF20.0-17.9, the optical depth measurement has limitations in the current investigation. For the emission data, we have to take into account an opacity contribution from \ion{H}{i} on the far side, beyond the location of the \ion{H}{ii} region. To a first approximation, we assumed that the optical depth from the background is similar to that of the measured foreground. We therefore adopted $2\times\tau(v_{\rm LSR})$ for the whole map and corrected the \ion{H}{i} column density for the optical depth per velocity channel. Given the corrected mean optical depth of $2\times\tau\sim1.8$, the opacity correction factor $\tau/(1-e^{-\tau})$ increases the mean column density by a factor of $\sim$2. The derived column densities are a result of the \ion{H}{i} emission stemming from both the kinematic far ($12.0\rm\,kpc$) and near ($3.5\rm\,kpc$) side of the Milky Way. The kinematic distances have been obtained using the Kinematic Distance Utilities\footnote{\url{https://github.com/tvwenger/kd}} \citep{2018ApJ...856...52W} and employing the Galactic rotation model from \citet{2019ApJ...885..131R}. Since the distribution of the atomic gas in the Galactic plane is approximately axisymmetric with respect to the Galactic center \citep{2008A&A...487..951K}, we can assume that the atomic gas density distribution in the vertical direction is similar for a given Galactocentric radius. Using the average vertical density profile from \citet{1990ARA&A..28..215D}, we can estimate the gas fraction at the kinematic near and far side for each line of sight. Since most \ion{H}{i} emission is observed close to the Galactic midplane, the foreground gas fraction is $\sim$50\%. Therefore, due to the kinematic distance ambiguity, half of the \ion{H}{i} emission is attributed to the background, which is not associated with GMF20.0-17.9. We derived the \ion{H}{i} column density map shown in the top panel of Fig.~\ref{fig:column_density_map} taking into account only the near side gas at $3.5\rm\,kpc$.
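A compact sketch of this optical depth measurement, including the saturation handling and the far-side assumption described above (illustrative names only, not the THOR pipeline):
\begin{verbatim}
import numpy as np

def tau_hica(T_on, T_cont, sigma_rms):
    """Channel-wise optical depth toward a strong continuum source.
    Channels with T_on < 5*sigma_rms are saturated: only the lower
    limit -ln(5*sigma_rms / T_cont) can be reported there."""
    floor = 5.0 * sigma_rms
    saturated = T_on < floor
    tau = -np.log(np.maximum(T_on, floor) / T_cont)
    return tau, saturated

# Far-side opacity: the emission data are corrected with 2*tau(v),
# i.e. a mean factor tau/(1 - exp(-tau)) ~ 2 at 2*tau ~ 1.8.
\end{verbatim}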
\begin{figure}[!htbp] \centering \resizebox{\hsize}{!}{\includegraphics{overview_tau_T_spin_spectrum_paper.pdf}} \caption[]{Optical depth measurement toward the \ion{H}{ii} region \mbox{G19.075-0.287} \citep{2018A&A...619A.124W}. The plot shows the optical depth as a function of $\mathrm{LSR}$ velocity and was computed using Eq.~\eqref{equ:HICA_tau_simplified}. For some channels, the absorption spectrum saturates and the measured optical depth is a lower limit of $\tau=1.6$, which is indicated by the horizontal dotted line. The gray shaded area indicates the velocity range between 43 and $56\rm\,km\,s^{-1}$, where HISA features have been extracted.} \label{fig:T_spin_tau} \end{figure}
\subsubsection{Masses}
As we have determined the column densities and know the distance of GMF20.0-17.9 \citep[$\sim$3.5$\rm\,kpc$,][]{2014A&A...568A..73R}, we can directly estimate the atomic and molecular mass of each part of the filament (see Table \ref{tab:masses}). All masses were calculated from the column densities integrated over 43\,--\,$56\rm\,km\,s^{-1}$. The molecular hydrogen mass of the whole filament as marked by the two red polygons in Fig.~\ref{fig:column_density_map} is 3.5$\times 10^5\rm\,M_{\odot}$. Inside the polygon regions, the mass of the total atomic hydrogen, accounting for WNM and CNM measured from \ion{H}{i} emission, corresponds to $\sim$75\% of the $\rm H_2$ mass ($2.6\times 10^5\rm\,M_{\odot}$) for the whole filament after correcting for optical depth effects, weak continuum emission, and the kinematic distance ambiguity. However, if we take into account all diffuse \ion{H}{i} emission beyond the polygon regions, arising from the region between $20.6>\ell>17.6^{\circ}$ and $-1.25<b<+0.5^{\circ}$, the mass of the total \ion{H}{i} component rises by 75\% to $\sim$4.6$\times 10^5\rm\,M_{\odot}$. The molecular filament is therefore embedded in a large gas reservoir of atomic hydrogen. The CNM mass traced by HISA corresponds to 1--5\% of the molecular mass, depending on the region and assumed spin temperature. The uncertainty of the column density directly translates into an uncertainty in mass. If we assume a spin temperature of $20\rm\,K$, instead of our canonical value of $40\rm\,K$, the mass traced by HISA decreases by a factor of $\sim$3. Hence, the largest uncertainty arises from the assumption of a spin temperature. We are able to constrain an upper limit on the spin temperature for the column density derivation by assuming an optically thick ($\tau\rightarrow\infty$) cloud, as we show in Appendix~\ref{sec:discussion_max_spin_temperature}. The atomic mass fraction generally increases toward the eastern part of the filament, agreeing with our findings in the column density distributions (see Sect.~\ref{sec:N-PDFs} for details).
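For reference, the mass estimate from a column density map at the adopted distance can be sketched as follows; the pixel size is an assumed example value, and the factor \texttt{mu} distinguishes maps of $N(\rm H)$ from maps of $N(\rm H_2)$ (no helium correction applied):
\begin{verbatim}
import numpy as np

M_SUN, M_H, PC = 1.989e33, 1.6736e-24, 3.0857e18   # cgs units

def mass_from_column(N_map, pix_arcsec, d_pc, mu=1.0):
    """Mass (M_sun) from a column density map (cm^-2) at distance d.
    mu = 1 for HI/HISA maps; mu = 2 for N(H2) maps."""
    pix_pc = d_pc * pix_arcsec / 206265.0           # pixel size in pc
    pix_cm2 = (pix_pc * PC) ** 2                    # pixel area in cm^2
    return mu * M_H * pix_cm2 * np.nansum(N_map) / M_SUN

# e.g. mass_from_column(N_HISA, 22.0, 3500.0) on a 22" pixel grid
\end{verbatim}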
\begin{table*}[!htbp] \caption{Derived masses of the giant molecular filament GMF20.0-17.9.} \renewcommand*{\arraystretch}{1.3} \centering \begin{tabular}{ c c c c c c c c } \hline\hline Region & $M$($\rm H_2$) & $M$(\ion{H}{i}) & $M$(HISA) & $M$(HISA) & $f_{\rm HISA}$ & $f_{\rm HISA}$ & $f_{\ion{H}{i}}$ \\ & & & ($T_s=20\rm\,K$) & ($T_s=40\rm\,K$) & ($T_s=20\rm\,K$) & ($T_s=40\rm\,K$) & \\ & [$\rm M_{\odot}$] & [$\rm M_{\odot}$] & [$\rm M_{\odot}$] & [$\rm M_{\odot}$] & & & \\\hline Total & $3.5\times 10^{5}$ & $2.6\times 10^{5}$ ($4.6\times 10^{5}$)\tablefootmark{(a)} & $4.6\times 10^{3}$ & $1.3\times 10^{4}$ & 1\% & 4\% & 75\% (130\%)\tablefootmark{(a)} \\ East & $1.1\times 10^{5}$ & $1.2\times 10^{5}$ & $1.9\times 10^{3}$ & $5.7\times 10^{3}$ & 2\% & 5\% & 110\% \\ West & $2.3\times 10^{5}$ & $1.5\times 10^{5}$ & $2.6\times 10^{3}$ & $7.5\times 10^{3}$ & 1\% & 3\% & 65\% \\\hline \end{tabular} \tablefoot{ The masses were calculated for each part of the filament as well as the whole filament marked by the red polygons in Fig.~\ref{fig:column_density_map}. The second column gives the molecular hydrogen mass as traced by \element[ ][13]{CO} emission. The third column shows the total atomic hydrogen mass inferred from the optical depth and continuum corrected \ion{H}{i} emission. The fourth and fifth column present the mass of the cold atomic hydrogen traced by HISA with an assumed spin temperature of 20 and $40\rm\,K$, respectively. The last three columns give the corresponding mass fractions with respect to the $\rm H_2$ mass.\\ \tablefoottext{a}{This mass was calculated using the corrected \ion{H}{i} emission between 43 and $56\rm\,km\,s^{-1}$, $20.6>\ell>17.6^{\circ}$ and $-1.25<b<+0.5^{\circ}$.} } \label{tab:masses} \end{table*}
As discussed in Appendix~\ref{sec:discussion_extraction_method}, we estimated a column density uncertainty of $\sim$40\% to account for systematic differences and noise in our baseline extraction method. Depending on the background fraction $p$, the column density further varies by a factor of $\sim$2 between $p=0.7$ and $p=0.9$ (see Appendix~\ref{sec:discussion_background_fraction}). It is difficult to quantify exactly the uncertainty of the cold \ion{H}{i} column density and mass traced by HISA. Considering the estimated uncertainties due to the extraction method, background fraction, and spin temperature, the HISA-traced column density and mass have an uncertainty of a factor of 2\,--\,4. As we study the CNM through HISA, we might miss a significant fraction of it. Since we can only trace gas that is cold enough to be observed in absorption against a warmer background, we are limited in our HISA detection by the requirement of sufficient background emission. The CNM has temperatures $\lesssim 300\rm\,K$ \citep{1977ApJ...218..148M,2003ApJ...587..278W}. For future investigations, simulations could help to quantify the fraction of CNM that is invisible to our HISA method. The computed $\rm H_2$ column density and mass have an uncertainty of at least a factor of two due to the uncertainties in the value for the CO-to-$\rm H_2$ conversion \citep{2005ApJ...634.1126M,2012MNRAS.423.2342F,2014A&A...570A..65G}. Furthermore, we could miss a significant fraction of CO-dark $\rm H_2$ column density \citep{2008ApJ...679..481P,2009ApJ...692...91G,2013A&A...554A.103P}. Simulations suggest that the fraction of CO-dark $\rm H_2$ could even be as high as $\sim$50\% at conditions typical of the Milky Way disk \citep{2014MNRAS.441.1628S,2016MNRAS.458.3667D}.
However, this fraction should be moderate in low-temperature environments \citep{2016MNRAS.462.3011G}.
\section{Discussion}
\subsection{Kinematics}\label{sec:discussion_kinematics}
The histograms of line-of-sight peak velocities derived from HISA and \element[ ][13]{CO} generally agree for both parts of the filament. The results of the Gaussian fits to the spectra might not always reflect the actual kinematic structure as \element[ ][13]{CO} emission exhibits multiple velocity components in some regions between 37 and $50\rm\,km\,s^{-1}$. However, the \element[ ][13]{CO} velocities generally agree with the HISA features in a statistical sense, even in the eastern part of the filament where we do not observe a spatial correlation along the line of sight. We take this as a confirmation that our extracted HISA features are in fact due to self-absorption. The line widths of HISA show a broad distribution of 2\,--\,$10\rm\,km\,s^{-1}$. The eastern part reveals enhanced HISA line widths, which are 3\,--\,$4\rm\,km\,s^{-1}$ higher toward the north of the molecular filament. Although speculative, this could be a signature of the compression of \ion{H}{i} gas passing through the spiral arm potential and triggering $\rm H_2$ formation \citep{2004ApJ...612..921B}. As the gas is leaving the spiral arm structure, this could inject turbulence on the downstream side that enhances the line widths. Although observationally difficult to distinguish, simulations of the galactic dynamics of the ISM suggest that there are systematic differences in velocity dispersion between molecular clouds within the spiral arm potential and inter-arm clouds \citep{2015MNRAS.447.3390D,2016MNRAS.458.3667D,2017MNRAS.470.4261D}. The morphological and kinematic differences in each part of the filament could therefore be related to its position with respect to the spiral arm potential. However, in order to differentiate between different scenarios, we need to investigate synthetic \ion{H}{i} observations, which is beyond the scope of our current analysis. We note that the broadened HISA lines toward some positions of the cloud might be subject to resolution effects and could be the superposition of multiple lines. Spectrum~5 in Fig.~\ref{fig:baseline_fits} clearly shows multiple \element[][13]{CO} components where we detect an enhanced HISA line width. The lack of spatial correlation between HISA and \element[][13]{CO}, particularly in the eastern region, makes it difficult to assess if multiple \element[][13]{CO} components are preferentially associated with enhanced HISA line widths. Since the velocity dispersion is mostly due to turbulence in both tracers, we conclude that the agreement in velocities is robust.
\subsection{Column density probability density functions (N-PDFs)}\label{sec:N-PDFs}
The column density maps derived in Fig.~\ref{fig:column_density_map} can be further evaluated by determining their probability density function (PDF). Column or volume density PDFs are commonly used as a measure of the density structure and physical processes acting within the cloud \citep[e.g.,][]{2014Sci...344..183K}. A log-normal shape of the N-PDF is usually attributed to turbulent motions dominating the early diffuse phase of a cloud's evolution. Furthermore, the width of the log-normal distribution reflects the amplitude of turbulence and can be associated with the Mach number \citep[e.g.,][]{1997MNRAS.288..145P,1998PhRvE..58.4501P,2007ApJ...665..416K,2008PhST..132a4025F}.
In later evolutionary stages, molecular clouds can develop high-density regions due to the increasing effect of self-gravity, producing a power-law tail in their N-PDF. Molecular cloud complexes that show star-forming activity favor this scenario as they reveal such power-law tails \citep{2009A&A...508L..35K,2013ApJ...766L..17S,2016A&A...587A..74S}. The shape of the resulting N-PDF is also sensitive to the regions where column densities are taken into account, especially in the low column density regime \citep{2015A&A...576L...1L}, and it is sensitive to the treatment of zero spacing information in interferometric data \citep{2016A&A...590A.104O}. We derived each N-PDF from the regions marked by the red polygons in Fig.~\ref{fig:column_density_map}. However, the low column density contours partly extend beyond the regions enclosed by the polygons; restricting the N-PDFs to the polygons alone would therefore miss a significant fraction of the low column densities, and the shape of the N-PDF would not recover the structure at the lower end well. We therefore chose to derive the N-PDFs from column densities approximately within the last closed contours that are still within the selected polygon regions. In order to compare the \ion{H}{i} column densities with those of molecular hydrogen, we converted $N(\rm H_2)$ to $N(\rm H)$ to construct the $\rm H_2$ N-PDFs. For the N-PDFs, we chose closed contours of $1.9\times 10^{20}$ and $3.2\times 10^{21}\rm\,cm^{-2}$ for HISA and $\rm H_2$, respectively. Including the parts of the selected closed contours that extend beyond the polygon regions does not significantly change the shape of the N-PDFs. We set the column density threshold for \ion{H}{i} emission to $2.7\times 10^{21}\rm\,cm^{-2}$. It is difficult to define a last closed contour for \ion{H}{i} emission. However, this boundary bias has a negligible impact on the shape of the \ion{H}{i} N-PDF as we observe a small range of column densities due to the diffuse nature of \ion{H}{i} emission. For turbulence-dominated gas, last closed contours are not essential to sample the N-PDF properly \citep{2019MNRAS.482.5233K}. We furthermore normalized each N-PDF by the mean column density. \begin{figure*}[!htbp] \centering \includegraphics[width=1.0\textwidth]{PDF_HI_HISA_Ts_40K_H2_paper.pdf} \caption[]{\textit{Top panels:} N-PDFs traced by \ion{H}{i} emission. The distributions are derived from the \ion{H}{i} column densities that have been corrected for optical depth and continuum emission (top panel of Fig.~\ref{fig:column_density_map}). \textit{Middle panels:} The N-PDFs of the gas traced by HISA. \textit{Bottom panels:} $\rm H_2$ N-PDFs traced by \element[ ][13]{CO}. The left panels show the derived N-PDFs of the whole filament (east+west). The middle and right panels show the N-PDFs of the eastern and western part of the filament, respectively. The blue curves indicate the log-normal fits to the distribution. The red vertical dashed and solid lines mark the column density threshold (last closed contour) and the mean column density, respectively. The red solid lines in the lower panels indicate the fit to the power-law tail.} \label{fig:column_density_PDFs} \end{figure*}
Figure~\ref{fig:column_density_PDFs} presents the N-PDFs of \ion{H}{i} emission, HISA, and $\rm H_2$ column densities for each part of the filament (east/west) as well as the whole filament (east+west).
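The construction and fitting of an N-PDF as used in this section can be sketched as follows. The mock column density field merely stands in for a real map; the log-normal fit is performed in $\eta=\ln(N/\langle N\rangle)$, and the power-law tail is fitted with the python package \textit{powerlaw} \citep{10.1371/journal.pone.0085777}, which selects the lower cutoff by minimizing the Kolmogorov-Smirnov distance.
\begin{verbatim}
import numpy as np
from scipy.optimize import curve_fit
import powerlaw

def lognormal(eta, sigma, mu):
    """Log-normal N-PDF in eta = ln(N / <N>)."""
    return (np.exp(-0.5 * ((eta - mu) / sigma) ** 2)
            / (sigma * np.sqrt(2.0 * np.pi)))

# Mock column density field standing in for a real map (cm^-2)
rng = np.random.default_rng(1)
N_map = np.exp(rng.normal(np.log(3.0e20), 0.3, 100000))
N_threshold = 1.9e20                      # "last closed contour"

N = N_map[N_map > N_threshold]
eta = np.log(N / N.mean())                # normalize by the mean
hist, edges = np.histogram(eta, bins=40, density=True)
centers = 0.5 * (edges[1:] + edges[:-1])
(sigma, mu), _ = curve_fit(lognormal, centers, hist, p0=[0.5, 0.0])

# Power-law tail: xmin set by the minimal Kolmogorov-Smirnov distance
fit = powerlaw.Fit(N)
print(sigma, fit.power_law.alpha, fit.xmin)
\end{verbatim}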
\begin{table}[!htbp] \caption{Results of the fits to the N-PDFs.} \renewcommand*{\arraystretch}{1.3} \centering \begin{tabular}{c c c c} \hline\hline Component & $\langle N_{\mathrm{H}}\rangle$ [$\rm cm^{-2}$] & Width $\sigma$ & PL index $\alpha$ \\\hline \ion{H}{i} (WNM+CNM) & & & \\ \hline Whole filament & $3.2\times 10^{21}$ & 0.11 & - \\ East & $3.3\times 10^{21}$ & 0.12 & - \\ West & $3.2\times 10^{21}$ & 0.09 & - \\\hline HISA (CNM) & & & \\ \hline Whole filament & $3.0\times 10^{20}$ & 0.29 & - \\ East & $3.0\times 10^{20}$ & 0.30 & - \\ West & $3.0\times 10^{20}$ & 0.28 & - \\\hline $\rm H_2$ & & & \\ \hline Whole filament & $7.6\times 10^{21}$ & 0.57 & 2.99 \\ East & $7.3\times 10^{21}$ & 0.54 & 4.34 \\ West & $7.7\times 10^{21}$ & 0.58 & 2.81 \\\hline All gas & & & \\ \hline Whole filament & $8.3\times 10^{21}$ & - & 3.53 \\\hline \end{tabular} \tablefoot{ The second column shows the mean column density of each component designated in the first column. The third column presents the widths of the log-normal function fitted to the N-PDFs. The last column shows the index of the power-law (PL) function fitted to the tail of the $\rm H_2$ and All gas N-PDFs. } \label{tab:PDF_fitting} \end{table}
As expected from the column density maps in Fig.~\ref{fig:column_density_map}, the N-PDF of the CNM as traced by HISA peaks at lower column densities than molecular hydrogen. The HISA N-PDFs are well represented by a log-normal function. The results of the log-normal fits are shown in Table~\ref{tab:PDF_fitting}. The log-normal shape implies that turbulent motions might be dominant and gravitational collapse leading to high column density peaks is not visible in HISA within the whole filament. There is no significant difference between the subregions defined in the eastern and western part of the filament. The widths of the HISA N-PDFs are nearly identical for both regions. The mean column density derived from HISA is $\sim$3$\times 10^{20}\rm\,cm^{-2}$. The mean column densities and widths of the N-PDFs agree well with those found by \citet{2020A&A...634A.139W} for \mbox{GMF38.1-32.4a.} To investigate how observational uncertainties affect the width of the N-PDF, \citet{2020A&A...634A.139W} created model images of \ion{H}{i} and continuum emission with similar properties and noise as the real THOR data. They introduced artificial \ion{H}{i} absorption features from known column density distributions and added them to the model data. They extracted the HISA features using a similar method and showed that the widths of the recovered N-PDFs do not increase significantly due to observational uncertainties or the HISA extraction method. They therefore conclude that the derived N-PDF widths are robust against broadening introduced by observational noise and the fitting approach. The mean column densities of molecular hydrogen are about an order of magnitude higher than the column densities of HISA. In contrast to the HISA N-PDFs, the N-PDFs of molecular hydrogen are poorly represented by a log-normal function. Even though the eastern region does not show similarly high column density peaks as the western region, a power-law behavior is evident in both column density distributions. Therefore, power-law functions ($p(x)\propto x^{-\alpha}$) were additionally fitted to the high column density tail of the $\rm H_2$ N-PDFs. The best minimal column density for the power-law fit is obtained by minimizing the Kolmogorov-Smirnov distance between the fit and the N-PDF.
The fits were performed using the python package \mbox{\textit{powerlaw}} \citep{10.1371/journal.pone.0085777}. The fitted parameters of the power-law functions are also listed in Table \ref{tab:PDF_fitting}. Power-law tails can be a sign of gravitational collapse, which creates high column density peaks \citep{2000ApJ...535..869K,2008PhST..132a4025F,2009A&A...508L..35K,2016A&A...587A..74S}. In agreement with observations, simulations of self-gravitating, turbulent molecular clouds show that star-forming activity produces strong deviations from the log-normal shape in the form of power-law tails toward high column densities \citep{2011ApJ...727L..20K}. The slope of the power-law tails can then be associated with the evolutionary stage of the cloud, with shallower slopes indicative of an increasing degree of star formation efficiency \citep{2013ApJ...763...51F}. In general, theoretical studies and simulations of molecular clouds can reproduce N-PDFs of different forms, depending on the degree of turbulence, star-forming activity, and magnetic field support \citep{1994ApJ...423..681V,2010A&A...512A..81F,2015ApJ...808...48B}. Both subregions miss a small fraction of the low column densities just above the closed contour threshold. However, the shape of the $\rm H_2$ N-PDFs does not change significantly if we take into account all closed contours beyond the polygon regions. The N-PDFs derived from the \ion{H}{i} emission that traces a combination of CNM and WNM show a narrow log-normal shape with widths of $\sigma=0.09$--$0.12$. Observations toward well-known molecular cloud complexes also show N-PDFs of \ion{H}{i} emission with narrow log-normal shapes \citep{2015ApJ...811L..28B,2016ApJ...829..102I,2017MNRAS.472.1685R}. We might overestimate the column densities as the optical depth derived from absorption (see Sect.~\ref{sec:HI_emission_optical_depth}) is mostly due to cold atomic gas acting as the absorbing medium. However, we used this optical depth measurement to correct for \ion{H}{i} emission that might also be attributed to warm and optically thin gas. \citet{2015A&A...580A.112B} assess this effect and investigate the overestimation by comparing the corrected total \ion{H}{i} column densities with the actual column densities for known spin temperatures and optical depths. They show that this overestimate is $<10\%$ for measured CNM optical depths $\tau<1.5$. This effect is therefore negligible compared to the uncertainty of the optical depth measurement itself. Furthermore, this systematic effect does not significantly affect the shape of the N-PDF. The mean column densities inferred from \ion{H}{i} emission are $\langle N_{\rm HI}\rangle\sim 3\,\times 10^{21}\rm\,cm^{-2}$. The \ion{H}{i} column densities show a narrow log-normal distribution driven by turbulent motions whereas the N-PDFs of molecular hydrogen show a broad distribution with a power-law behavior toward high column densities that might be subject to gravitational collapse. The column densities traced by \ion{H}{i} emission are an order of magnitude higher than those traced by self-absorption. The narrow width of the N-PDF represents the diffuse nature of \ion{H}{i} emission while the broader column density distribution traced by HISA indicates a clumpier structure of the CNM. We examined the column density distribution of the entire hydrogen content of the filament, that is, both the atomic and molecular phase of GMF20.0-17.9.
We derived an ``All gas'' N-PDF in Fig.~\ref{fig:all_gas_PDF} by adding together the column densities of all three tracers. \begin{figure}[!htbp] \centering \resizebox{\hsize}{!}{\includegraphics{PDF_all_gas_whole_filament_paper.pdf}} \caption[]{All gas N-PDF of GMF20.0-17.9. The PDF is derived by adding the column densities of \ion{H}{i}, HISA, and $\rm H_2$. The plot shows the derived N-PDF of the whole filament marked by both the red polygons in Fig.~\ref{fig:column_density_map}. The red vertical dashed and solid lines mark the column density threshold (last closed contour) at $4.5\times10^{21}\rm\,cm^{-2}$ and the mean column density, respectively. The red solid line indicates the power-law fit to the high column density tail of the distribution.} \label{fig:all_gas_PDF} \end{figure}
We fitted the high column density tail of the distribution with a power-law function, which describes the N-PDF very well. The western part shows higher $\rm H_2$ column density peaks and a shallower power-law tail in the N-PDF. The $\rm H_2$ column densities are generally lower in the eastern part of the filament. The ATLASGAL survey \citep{2009A&A...504..415S} reveals several high-density clumps in the western part of the filament and only a few clumps in the eastern part within the \element[][13]{CO} velocity range. The N-PDFs traced by HISA do not show significant differences between the two parts of the filament. The mass of HISA compared to its molecular counterpart therefore increases toward the eastern part of the filament. The maximum spin temperature of our extracted HISA features is also lower in the eastern subregion (Fig.~\ref{fig:HISA_max_spin_temperature}). This might be an indication that the eastern subfilament is a young, cold \ion{H}{i} cloud while the western region exhibits a more evolved structure and star-forming activity. To further test this hypothesis, we would need to extend our analysis to a larger sample of GMFs to deduce statistical evidence. This will be addressed in a future analysis. Simulations of cloud formation could also give constraints on signatures of kinematics and column densities in atomic and molecular line tracers. However, this is beyond the scope of this current investigation.
\subsection{Signatures of phase transition}
The conversion of atomic to molecular gas (\ion{H}{i}-to-$\rm H_2$) is fundamental for molecular cloud formation processes. For a single \ion{H}{i}-to-$\rm H_2$ transition, theoretical models predict a mass surface density threshold of $\Sigma_{\rm HI}\sim5$\,--\,10$\rm\,M_{\odot}\,pc^{-2}$ for solar metallicity \citep{2008ApJ...689..865K,2009ApJ...693..216K,2014ApJ...790...10S}. In such models, the \ion{H}{i}-to-$\rm H_2$ transitions are computed assuming a balance between far-UV photodissociation and molecular formation, and accounting for the rapid attenuation of the radiation field due to $\rm H_2$ self-shielding and dust absorption \citep[see also][]{2016SAAS...43...85K}. Figure~\ref{fig:HI-to-H2_transition} presents the atomic hydrogen as a function of the total hydrogen mass surface density. We take into account all \ion{H}{i} column densities traced by the corrected \ion{H}{i} emission between $20.6>\ell>17.6^{\circ}$ and $-1.25<b<+0.5^{\circ}$. The figure reveals a saturation of atomic hydrogen at a mass surface density of $\sim$20\,--\,30$\rm\,M_{\odot}\,pc^{-2}$.
A least squares fit to the mean of the distribution yields a mass surface density threshold of $\sim$25$\rm\,M_{\odot}\,pc^{-2}$ ($=3\times 10^{21}\rm\,cm^{-2}$). When examined individually, both the eastern and western subregions show the same \ion{H}{i} saturation level within the uncertainties ($24$ and $26\rm\,M_{\odot}\,pc^{-2}$, respectively). This transition exceeds the column density threshold predicted by theoretical models \citep{2008ApJ...689..865K,2009ApJ...693..216K,2014ApJ...790...10S}. \citet{2015A&A...580A.112B} report a column density threshold of 50\,--\,80$\rm\,M_{\odot}\,pc^{-2}$ toward the star-forming region W43, which is significantly higher than predicted transitions at $\sim$5\,--\,10$\rm\,M_{\odot}\,pc^{-2}$. \citet{2017ApJ...835..126B} argue that such high mass surface density thresholds cannot be explained by typical physical properties of the CNM as it would require an unrealistically high UV radiation field or low dust-to-gas ratio. As the clumpiness of a molecular cloud might regulate how far UV radiation penetrates the medium \citep[e.g.,][]{1988ApJ...332..379S,1991ApJ...374..522S}, \citet{2017ApJ...835..126B} suggest that the high thresholds can naturally be explained by a superposition of multiple transition layers observed along the line of sight. These authors predict a mass surface density threshold of $\sim$13$\rm\,M_{\odot}\,pc^{-2}$ for the more active star-forming region W43. \citet{2020A&A...634A.139W} find similar values of 14\,--\,23$\rm\,M_{\odot}\,pc^{-2}$ toward GMF38.1-32.4a, where the atomic gas surface density saturates to an almost flat distribution. While the derived atomic mass surface densities are a result of the combined column densities of WNM and CNM, the shielding from dissociating Lyman-Werner (LW) photons provided by a transition layer between atomic and molecular gas should be dominated by the CNM \citep{2009ApJ...693..216K}. The $\rm H_2$ formation rate per atom scales as the number density $n$, so the CNM, due to its higher density, is far more effective at shielding than the WNM. The observed transition should therefore be an upper limit and the actual critical surface density is $\leq 25\rm\,M_{\odot}\,pc^{-2}$, depending, to first approximation, on the ratio $\Sigma_{\rm CNM}/\Sigma_{\rm WNM}$. \begin{figure}[!htbp] \centering \resizebox{\hsize}{!}{\includegraphics{HI_corrected_over_HI_corrected-and-H2_paper.pdf}} \caption[]{\ion{H}{i}-to-$\rm H_2$ transition. The plot shows the \ion{H}{i} mass surface density traced by \ion{H}{i} emission as a function of the total hydrogen mass surface density. The black solid line indicates a 1-to-1 relation. The dashed blue line shows a fit to the mean of the distribution.} \label{fig:HI-to-H2_transition} \end{figure}
Taking these considerations into account, we conclude that we observe at most 3--5 transition layers of atomic to molecular gas between 43 and $56\rm\,km\,s^{-1}$.
\subsection{Spatial correlation between atomic and molecular gas}\label{sec:HOG}
The Histogram of Oriented Gradients\footnote{\url{https://github.com/solerjuan/astroHOG}} (HOG) is a machine vision method to study the spatial correlation between the emission of two or more spectral line tracers across velocity channels in an unbiased and systematic way. In Appendix~\ref{sec:appendix_HOG} we briefly outline the basic principles involved in this method. A comprehensive description is given by \citet{2019A&A...622A.166S}.
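The correlation metric applied below is the projected Rayleigh statistic $V$. A minimal sketch of its computation for a pair of velocity-channel maps could look as follows; this assumes the unweighted form $V=\sum\cos(2\phi)/\sqrt{n/2}$ for the relative orientation angles $\phi$ of the intensity gradients, and is not the \textit{astroHOG} implementation itself:
\begin{verbatim}
import numpy as np
from scipy.ndimage import gaussian_filter

def prs(map_a, map_b, sigma_pix):
    """Projected Rayleigh statistic between two channel maps.
    sigma_pix sets the Gaussian derivative kernel width (pixels)."""
    gxa = gaussian_filter(map_a, sigma_pix, order=(0, 1))
    gya = gaussian_filter(map_a, sigma_pix, order=(1, 0))
    gxb = gaussian_filter(map_b, sigma_pix, order=(0, 1))
    gyb = gaussian_filter(map_b, sigma_pix, order=(1, 0))
    # Relative angle between the two gradient vector fields
    phi = np.arctan2(gxa * gyb - gya * gxb, gxa * gxb + gya * gyb)
    good = ((gxa**2 + gya**2) > 0) & ((gxb**2 + gyb**2) > 0)
    n = good.sum()
    return np.sum(np.cos(2.0 * phi[good])) / np.sqrt(n / 2.0)

# V >> 0: gradients statistically parallel (high spatial correlation)
# V ~ 0: flat angle distribution (no correlation)
\end{verbatim}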
We applied the HOG method to each part of the filament to investigate the spatial correlation between \ion{H}{i} and \element[ ][13]{CO}. The output of the HOG analysis is a matrix where the rows and columns correspond to the different velocity channels in each tracer, as shown in Fig.~\ref{fig:HOG_vplane_whole_filament}. The number in each matrix position corresponds to the projected Rayleigh statistic ($V$), which is an optimal estimator of the morphological correlation between the velocity channel maps as evaluated by the orientation of its intensity gradients. High values of $V$ correspond to high spatial correlation and values of $V$\,$\approx$\,0 correspond to very low spatial correlation. The intensity gradients are calculated using a Gaussian derivative kernel whose width determines the spatial scales under consideration. To exploit the available spatial resolution, we selected a derivative kernel size that matches the synthesized beam size of the GRS \element[][13]{CO} data, that is, 46\arcsec. The projected Rayleigh statistic is a measure of the significance of the spatial correlation; $V$\,$\approx$\,$\sqrt{2}$ is roughly the equivalent of a $1\sigma$ deviation from a complete lack of correlation, which corresponds to a flat distribution in the angles between the intensity gradients. However, the significance of the result also has to be evaluated with respect to the chance correlation that may be present between the velocity channel maps. We use the standard deviation of $V$ in the velocity range between 10 and 90 km\,s$^{-1}$ as an estimate of the amplitude of the chance correlation against which we can evaluate the significance of the $V$ values. This assumes that there are enough independent velocity-channel maps in the selected velocity range. Figure~\ref{fig:HOG_vplane_whole_filament} presents the correlation distribution between \ion{H}{i} and \element[ ][13]{CO} for all parts of the filament as a function of velocity. \begin{figure}[!htbp] \centering \resizebox{\hsize}{!}{\includegraphics{HOG_vplane_HI_CO_all_filaments_46arcs_MonteCarlo_paper_onecolumn.pdf}} \caption[]{HOG correlation plane of the filament as marked by the red polygons in Fig.~\ref{fig:column_density_map}. This figure presents the computed correlation between \ion{H}{i} and \element[ ][13]{CO} across velocities defined by the projected Rayleigh statistic. The white line shows a 1-to-1 correlation across velocities. The blue contours show the $5\sigma$ level on $V$. Large values of $V$ indicate a high spatial correlation. Values of $V$ close to zero indicate a negligible spatial correlation.} \label{fig:HOG_vplane_whole_filament} \end{figure}
We observe a significant spatial correlation in the velocity channels around $v_{\mathrm{HI}}\approx v_{^{13}\rm CO}\sim 43\rm\,km\,s^{-1}$ and $\sim47\rm\,km\,s^{-1}$ toward the west. However, the eastern part of the filament shows no significant correlation between \ion{H}{i} and \element[ ][13]{CO} emission at the velocities of GMF20.0-17.9. The observed correlation within the whole filament is therefore dominated by the western region. While we computed the spatial correlation between \ion{H}{i} and \element[][13]{CO} emission, we tested the validity of the correlation by also applying the HOG analysis to the inferred HISA and \element[][13]{CO} emission maps. The HOG yields similar findings for HISA and \element[][13]{CO}.
As the absence of spatial correlation within the eastern part of the filament is reproduced in both analyses, we are confident that we do not observe any significant spatial correlation between \ion{H}{i} and \element[][13]{CO} within the eastern part of the filament. Small kernel sizes, close to or equal to the angular resolution of the telescope, make features produced by noise and nonideal telescope beams more evident. Since spatial correlation is expected across multiple scales \citep{1993MNRAS.262..327G,2000ApJ...537..720L,2001ApJ...555..130L}, we also examined the correlation in each analysis by setting the kernel size to $90\arcsec$, which is approximately twice the angular resolution of the THOR and GRS data ($40\arcsec$ and $46\arcsec$, respectively). The differences we find between the eastern and western region with our HISA method are reproduced by the HOG analysis, irrespective of the spatial scale we use. Thus, we consider these findings robust and not an artifact of our HISA extraction method. We conclude that the CNM appears to be associated with molecular gas in the western part, whereas the molecular gas seems to be decoupled from its atomic counterpart in the more diffuse cloud envelope toward the east. The systematic differences in spatial correlation between east and west can be interpreted as an indication of different evolutionary stages. \section{Conclusions} We have studied the atomic and molecular gas within the giant molecular filament GMF20.0-17.9. The molecular component is traced by GRS \element[ ][13]{CO} observations, whereas the atomic gas is observed via \ion{H}{i} emission from the THOR survey. We isolated HISA features to disentangle the CNM from the atomic gas traced by \ion{H}{i} emission. We aimed to study the properties of the CNM as traced by HISA and to compare our findings with the molecular counterpart. The results are summarized as follows: \begin{enumerate} \item We extracted HISA features by estimating the \ion{H}{i} emission spectrum in the absence of HISA. We employed a combination of first and second order polynomial functions to fit the baselines of HISA spectra at velocities adjacent to HISA features. This method gave the most reliable and robust results among the procedures we tested. \item The extracted HISA features reveal a spatial correlation with \element[ ][13]{CO} emission toward the western region of the filament, while the eastern part shows no evidence that HISA traces the molecular gas. This finding is supported by the HOG analysis, which reports significant spatial correlation toward the western part and no correlation toward the eastern part of the filament. However, the peak velocities of HISA and \element[ ][13]{CO} are in good agreement in both parts of the filament. The observed line widths of \element[ ][13]{CO} and HISA suggest that nonthermal effects, such as turbulent motions, are the dominant driver for most regions within the filament. \item We derived $\rm H_2$ column densities from \element[ ][13]{CO} emission and compared the molecular column density distribution with its atomic counterpart. The HISA column densities show a more diffuse structure compared to those of molecular hydrogen. The $\rm H_2$ column densities reveal high-density peaks, particularly in the western part of the filament. The mass ratio of \ion{H}{i} (traced by HISA) to $\rm H_2$ is $0.01-0.05$, depending on the assumed spin temperature and region. This mass ratio increases toward the eastern part of the filament.
The total \ion{H}{i} mass traced by \ion{H}{i} emission is similar to the molecular mass within the defined regions. The mass surface density threshold of the transition from total \ion{H}{i} to $\rm H_2$ is observed to be $\sim$25$\rm\,M_{\odot}\,pc^{-2}$, exceeding the predictions of theoretical models. However, this result can naturally be explained by a superposition of multiple transition layers or by an additional WNM fraction that is less effective at shielding. \item The HISA N-PDFs can be well described by log-normal functions in both parts of the filament, indicative of turbulent motions as the main driver of these structures. While the magnitudes of the column densities depend on the assumed spin temperature and background fraction, the shape and width of the N-PDFs are robust. The N-PDFs of \ion{H}{i} emission, tracing both the WNM and the CNM of the atomic gas, represent the diffuse structure and show a narrow log-normal shape. The $\rm H_2$ column densities show a broad log-normal distribution with an indication of a power-law tail, more pronounced in the western part of the filament. \item We speculate that the two parts of the filament reflect different evolutionary stages. Interestingly, the derived HISA features in the eastern part of the cloud show lower maximum spin temperatures. This favors the scenario of a younger, less evolved cloud that is forming molecular gas out of the atomic gas reservoir. The western region harbors signs of active star formation and shows more pronounced $\rm H_2$ column density peaks. Moreover, the mass fraction of $\rm H_2$ relative to the cold atomic hydrogen traced by HISA is larger toward the western part of the filament. While the HISA features correlate well with the molecular gas in the western part of the filament, they lack spatial correlation with the molecular component in the eastern region. Furthermore, we speculate that signatures of spiral arm interaction with the atomic gas are visible toward the eastern part of the filament, given the enhancement of the line widths there. The spatial structure and kinematics provide useful observables for theoretical models and simulations. \end{enumerate} A statistical treatment of the HISA properties in the Galactic plane is still missing. However, this case study of a known large-scale filament, which is complementary to the analysis by \citet{2020A&A...634A.139W}, serves as a good laboratory to study the properties of the CNM. \begin{acknowledgements} J.S., H.B., R.S.K., and S.C.O.G. acknowledge support from the Deutsche Forschungsgemeinschaft (DFG, German Research Foundation) -- Project-ID 138713538 -- SFB 881 (``The Milky Way System'', subprojects A01, B01, B02, and B08). Y.W., H.B., and J.D.S. additionally acknowledge support from the European Research Council under the Horizon 2020 Framework Program via the ERC Consolidator Grant CSF-648505. R.S.K. and S.C.O.G. also acknowledge funding from the Heidelberg Cluster of Excellence \mbox{STRUCTURES} in the framework of Germany’s Excellence Strategy (grant EXC-2181/1 - 390900948). R.S.K. also acknowledges funding from the European Research Council via the ERC Synergy Grant ECOGAL (grant 855130). F.B. acknowledges funding from the European Research Council (ERC) under the European Union’s Horizon 2020 research and innovation programme (grant agreement No.726384/Empire). This work was carried out in part at the Jet Propulsion Laboratory, which is operated for NASA by the \mbox{California Institute of Technology}.
This research made use of Astropy and affiliated packages, a community-developed core Python package for Astronomy \citep{2018AJ....156..123A}, Python package SciPy\footnote{\url{https://www.scipy.org}}, and APLpy, an open-source plotting package for Python \citep{aplpy2012}. The authors thank the anonymous referee for the detailed comments and constructive suggestions that improved the paper. \end{acknowledgements} \bibliographystyle{aa_url}
\section{Introduction} \label{sec:1} The AdS/CFT correspondence \cite{Maldacena:1997re,Gubser:1998bc,Witten:1998qj} may be a useful tool to investigate the properties of quantum gravity. It is important to reach a complete understanding of the AdS/CFT correspondence and to construct a systematic way of translating an arbitrary quantity of the CFT into its dual on the gravity side, and vice versa. The most interesting issue among these may be how to construct the gravity dual of the (reduced) density matrix of a CFT \cite{Czech:2012bh}, because the (reduced) density matrix carries all the information of the (reduced) system. The Ryu-Takayanagi (RT) conjecture \cite{Ryu:2006bv,Ryu:2006ef} has played a fundamental role in work on this issue and has shed light on the relation between the AdS/CFT correspondence and information theory. Beyond this issue, there are many information-theoretical analyses \cite{Swingle:2009bg,Pastawski:2015qua,Lashkari:2013koa,Dong:2016eik} of the AdS/CFT correspondence based on the RT conjecture. Thus, the RT conjecture is perhaps the most fundamental concept for grasping the AdS/CFT correspondence. The RT conjecture is believed to give a gravity dual of the entanglement entropy (EE) in a static system; for time-dependent systems, see \cite{Hubeny:2007xt}. In this paper, we deal only with static systems. First, we define the EE. Let $\rho_A$ be the reduced density matrix of a spacelike region $A$ in the CFT; then the EE $S_A$ of a region $A$ is defined as \begin{align} S_A = - \mathrm{Tr}_A \left(\rho_A \log \rho_A \right). \label{eq:def of EE} \end{align} The RT conjecture then proposes that the homologous global minimal surface $\gamma_A$ in the dual bulk spacetime represents the EE of a region $A$ for a static CFT in the large $c$ limit as \begin{align} S_A = \frac{\mathrm{Area\ of\ }\gamma_A}{4G}, \label{eq:RT conjecture} \end{align} where $G$ is the gravitational constant. The homologous condition means that there exists a region $\mathcal{R}_A$ satisfying $\partial \mathcal{R}_A = A \cup \gamma_A$ \cite{Headrick:2007km}. Remarkably, in some systems the RT conjecture eq.\eqref{eq:RT conjecture} is completely consistent with eq.\eqref{eq:def of EE}. There exist, however, counterexamples to the RT conjecture, in the sense that the EE derived from it does not match that of the CFT, as we will see later. Therefore, we have to find a refined gravity dual of the EE by improving the RT conjecture, because the RT conjecture may yield incorrect predictions, not only as a tool for calculating the EE of the dual CFT but also as a basis for unraveling the mechanism of the AdS/CFT correspondence. To construct a better gravity dual of the EE, we first review how the RT conjecture has been derived. Historically, the RT conjecture was proposed as an analog of the Bekenstein-Hawking entropy formula. Later, the homologous condition was imposed by hand to prevent the RT conjecture from contradicting the strong subadditivity inequality \cite{Headrick:2007km}. Mathematically, the RT conjecture is derived by directly building a gravity dual of the EE eq.\eqref{eq:def of EE} via the AdS/CFT correspondence \cite{Lewkowycz:2013nqa,Haehl:2014zoa}. These derivations can explain why the areas of \emph{extremal} surfaces are related to the EE of the dual CFT. In particular, Haehl et al. \cite{Haehl:2014zoa} derived a restriction on the configuration of the extremal surfaces in terms of their intersection number, which is a general and rigorous version of the homologous condition.
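As a quick orientation, let us recall the textbook check of eq.\eqref{eq:RT conjecture} in the simplest case (a standard computation, quoted here only for illustration and not part of the argument of this paper): for a single interval of length $\ell$ on the boundary of Poincar\'e $AdS_3$ with curvature radius $L$, the minimal surface is a geodesic whose regularized length is $2L\log(\ell/\epsilon)$, so that
\begin{align*}
S_A = \frac{\mathrm{Area\ of\ }\gamma_A}{4G} = \frac{2L}{4G}\log\frac{\ell}{\epsilon} = \frac{c}{3}\log\frac{\ell}{\epsilon},
\end{align*}
where $c=3L/2G$ is the Brown-Henneaux central charge. This reproduces the CFT result recalled below in eq.\eqref{eq:EE of a interval}.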
However, the validity of the global minimality condition has not been justified. In this paper, we will give a gravity dual of the EE by taking into account multiple extremal surfaces satisfying certain assumptions. We propose that not a single minimal or dominant surface, but rather a collection of extremal surfaces, represents the EE. We will justify this idea by generalizing the gravity action including a single cosmic brane \cite{Dong:2016fnf} to one including infinitely many cosmic branes with positive and negative brane tension. The construction of this paper is as follows. In section~\ref{sec:2}, we will review the replica trick, which is a technique for calculating the EE, investigate the geometry of the replica manifold, and present counterexamples to the RT conjecture. In section~\ref{sec:3}, we will construct a gravity dual of the EE, modifying the existing derivations of the RT conjecture. In section~\ref{sec:4}, we will confirm that this gravity dual is consistent with the EEs mentioned as counterexamples in section~\ref{sec:2}, and we will calculate the mutual information. Finally, section~\ref{sec:5} is the conclusion. \section{Comparison of RT conjecture and CFT result} \label{sec:2} In this section, before presenting counterexamples to the RT conjecture, we give a quick review of the replica trick \cite{Calabrese:2004eu, Calabrese:2009qy} and investigate the geometry of the replica manifold, in order to construct a gravity dual of it in the next section. \subsection{Replica trick and geometry of replica manifold} First, we rewrite eq.\eqref{eq:def of EE} by means of the replica trick to clarify the geometry of the replica manifold. Applying the replica trick to eq.\eqref{eq:def of EE}, the EE of $A$ is expressed as \begin{align} S_A = -\lim_{n \to 1} \frac{\partial}{\partial n} \log \mathrm{Tr}_A \left(\rho_A^{\ n} \right). \label{eq:replica trick} \end{align} For convenience, we also consider the Renyi entropy $S^{(n)}_A$ defined as \begin{align} S^{(n)}_A = - \frac{1}{n-1} \log \mathrm{Tr}_A \left(\rho_A^{\ n} \right). \label{eq:Renyi entropy} \end{align} Since $\mathrm{Tr}_A \left(\rho_A^{\ n=1} \right) =1$, the Renyi entropy $S^{(n)}_A$ tends to $S_A$ as $n \to 1$. Let $Z$ be a partition function on a manifold $\mathcal{B}$. We define the partition function $Z_A^{(n)}$ of the $n$-replicated theory on the replica manifold $\mathcal{B}_A^{(n)}$, corresponding to the set of $\mathcal{B}$, $A$ and $Z$, in such a way that $\mathrm{Tr}_A \left(\rho_A^{\ n} \right) = Z_A^{(n)}/Z^n$. Then eq.\eqref{eq:Renyi entropy} is written in terms of the partition functions as \begin{align} S^{(n)}_A = -\frac{1}{n-1} \left( \log Z_A^{(n)} - n \log Z \right). \label{eq:replica trick2} \end{align} Note that in this paper $Z^n$ means $Z$ raised to the $n$-th power, and a quantity with the superscript $(n)$, like $Z_A^{(n)}$, represents the $n$-replicated version of the original one. For example, consider the partition function $Z$ of a massless free scalar field on the $1+1$ dimensional Minkowski spacetime $\mathcal{B}$. Let $A$ be an interval $[u,v]$ on a time slice, where $v-u = \ell \geq \epsilon$ and $\epsilon$ is the UV cutoff. Note that it is not sensible to consider the EE of a region whose length is smaller than $\epsilon$. The replica manifold $\mathcal{B}_A^{(n)}$ is the $n$-sheeted branched covering of $\mathcal{B}$ with a branch cut from $u$ to $v$.
In this case, we can write down the $n$-replicated partition function $Z_A^{(n)}$ as \begin{align} Z_A^{(n)} = \langle \mathcal{T}^{(n)}(u)\mathcal{\tilde{T}}^{(n)}(v) \rangle_{\mathcal{B}_A^{(n)}}, \label{eq:n replicated partition function} \end{align} where $\mathcal{T}^{(n)}(u)$ and $\mathcal{\tilde{T}}^{(n)}(v)$ are the twist operators, which connect the fields across the branch cut from $u$ to $v$ in an appropriate manner. Since the scaling dimensions of the twist operators are $c\,(n^2-1)/12n$, where $c$ is the central charge, we can evaluate eq.\eqref{eq:n replicated partition function} analytically. Then eq.\eqref{eq:replica trick} gives \begin{align} S_{A} = \lim_{n \to 1} \frac{c(n+1)}{6n} \log \frac{\ell}{\epsilon} = \frac{c}{3} \log \frac{\ell}{\epsilon}. \label{eq:EE of a interval} \end{align} In the large $c$ limit, this is the well-known universal term of the EE and the Renyi entropy of a $1+1$ dimensional CFT on an infinitely long line at zero temperature. Next, we study the geometry of $\mathcal{B}_A^{(n)}$. Consider the scale transformation of the Renyi entropy in the above example. Let $g_{\mu \nu},\, g^{(n)}_{A\,\mu \nu}$ be the metric tensors of the manifolds $\mathcal{B},\, \mathcal{B}_A^{(n)}$ and $T,\,T^{(n)}_{A}$ the traces of the energy-momentum tensors derived from $Z,\,Z_A^{(n)}$, respectively. Applying the Ward-Takahashi identity for the scale transformation to eq.\eqref{eq:replica trick2}, we get \begin{align} \ell \frac{\partial}{\partial \ell} S_{A}^{(n)} &= - \frac{2}{n-1} \int d^2x \left( g^{(n)}_{A\,\mu \nu} \frac{\partial}{\partial g^{(n)}_{A\, \mu \nu}} \log Z_A^{(n)} - n g_{\mu \nu}\frac{\partial}{\partial g_{\mu \nu}}\log Z \right) \nonumber \\ &= - \frac{1}{n-1} \left( \int d^2x \sqrt{\left| g^{(n)}_A \right|} \langle T^{(n)}_A\rangle_{\mathcal{B}_A^{(n)}} -n \int d^2x \sqrt{\left|g\right|} \langle T \rangle_{\mathbb{R}^2}\right) \nonumber \\ &= - \frac{1}{n-1} \int d^2x \frac{c}{24\pi}\sqrt{\left|g^{(n)}_A \right|}R_A^{(n)}, \label{eq:scale trans for EE} \end{align} where $g^{(n)}_A$ is the determinant of $g^{(n)}_{A\, \mu \nu}$ and $R_A^{(n)}$ is the Ricci scalar of the metric $g^{(n)}_{A\, \mu \nu}$. Recall that the replica manifold $\mathcal{B}_A^{(n)}$ is an $n$-branched covering of $\mathcal{B}$. There are conical singularities at $\partial A := \{ (0,u) \cup (0,v) \}$. Comparing eq.\eqref{eq:EE of a interval} and eq.\eqref{eq:scale trans for EE}, the conical defect angle is $\pi(1-n^2)/n$. Then $R_A^{(n)}$ is flat everywhere except at $\partial A$, where it is singular: \begin{align} R^{(n)}_A = 2\pi \frac{1-n^2}{n} \delta(t)\left[ \delta(x-u) + \delta(x-v) \right]. \label{eq:conical deficit} \end{align} Indeed, substituting eq.\eqref{eq:conical deficit} into eq.\eqref{eq:scale trans for EE}, we get \begin{align} \ell \frac{\partial}{\partial \ell} S^{(n)}_A = \frac{c(n+1)}{6n}. \label{eq:2-point} \end{align} Integrating this, we recover eq.\eqref{eq:EE of a interval}. Keep in mind that the replica manifold $\mathcal{B}_A^{(n)}$ has singularities with defect angle $\pi(1-n^2)/n$, created by the twist operators at $\partial A$. \subsection{Counterexamples of RT conjecture} First, consider the EE of two disjoint spacelike intervals $A$ in a $1+1$ dimensional CFT at zero temperature. Thanks to conformal symmetry, it is enough to consider, without loss of generality, the intervals $A = [0, \ell] \cup [1, \infty],\, \ell \in (\epsilon,1-\epsilon)$. The trace of $\rho_A^{\ n}$ is described by the $4$-point function of the twist operators as follows.
\begin{align} \mathrm{Tr}_A \left(\rho_A^{\ n} \right) &= \langle \mathcal{T}^{(n)} (0)\tilde{\mathcal{T}}^{(n)}(\ell)\mathcal{T}^{(n)}(1)\tilde{\mathcal{T}}^{(n)}(\infty) \rangle \label{eq:4-point} \end{align} In the large $c$ limit, following \cite{Hartman:2013mia} and using crossing symmetry, we can evaluate eq.\eqref{eq:4-point} as \begin{align} S_A = \lim_{n \to 1} \frac{c(n+1)}{6n}\left[ \log \frac{\ell}{\epsilon} + \log \left( \frac{1-\ell}{\epsilon} \right) \right] + c_1 = \frac{c}{3}\left[ \log \frac{\ell}{\epsilon} + \log \left( \frac{1-\ell}{\epsilon} \right) \right] + c_1, \label{eq:EE of 2 disjoint interval} \end{align} where $\epsilon$ is the UV cutoff and $c_1$ is determined by the IR cutoff. Second, consider the EE of a spacelike interval $A$ in a $1+1$-dimensional CFT on a circle of circumference $1$ at inverse temperature $\beta$ \cite{Azeyanagi:2007bj,Datta:2013hba}. The EE of an interval $A = [-\ell/2, \ell/2],\, \ell \in (\epsilon,1-\epsilon)$ in the large $c$ limit is \begin{align} S_A &= \lim_{n \to 1} \frac{c(n+1)}{6n} \left\{ \log\left[ \frac{\beta}{\pi\epsilon} \sinh\left(\frac{\pi\ell}{\beta}\right)\right] + \sum_{m=1}^{\infty}\log \frac{(1-e^{2\pi \ell/\beta}e^{-2\pi m/\beta})(1-e^{-2\pi \ell/\beta}e^{-2\pi m/\beta})}{(1-e^{-2\pi m/\beta})^2} \right\} \nonumber\\ &=\frac{c}{3} \log\left[ \frac{\beta}{\pi\epsilon} \sinh\left(\frac{\pi\ell}{\beta}\right)\right] + \frac{c}{3} \sum_{m=1}^{\infty}\log \frac{\sinh \left[\frac{\pi}{\beta}(m-\ell)\right]\sinh \left[\frac{\pi}{\beta}(m+\ell)\right]}{\sinh^2 \left(\frac{\pi m}{\beta}\right)} . \label{eq:EE of finite circle} \end{align} Note that the above calculations are carried out entirely within the CFT and do not rely on the AdS/CFT correspondence. On the other hand, let us look at the EEs derived from the RT conjecture applied to the corresponding spacetimes, namely the AdS spacetime and the BTZ spacetime \cite{Banados:1992wn}, respectively. The EE from the RT conjecture corresponding to eq.\eqref{eq:EE of 2 disjoint interval} is \begin{align} S_A = \min \left\{ \frac{c}{3}\log \frac{\ell}{\epsilon},\ \frac{c}{3}\log \left( \frac{1-\ell}{\epsilon} \right) \right\} \label{eq:EE from RT of 2 disjoint interval} \end{align} and that corresponding to eq.\eqref{eq:EE of finite circle} is \begin{align} S_A =\min \left\{ \frac{c}{3} \log \left[ \frac{\beta}{\pi\epsilon} \sinh\left(\frac{\pi\ell}{\beta}\right) \right],\ \frac{c \pi}{3\beta} + \frac{c}{3} \log \left[ \frac{\beta}{\pi\epsilon} \sinh\left(\frac{\pi}{\beta}(1-\ell)\right) \right]\right\}. \label{eq:EE from RT of finite circle} \end{align} They are consistent with eq.\eqref{eq:EE of 2 disjoint interval} and eq.\eqref{eq:EE of finite circle}, respectively, if and only if $\ell \sim \epsilon$ or $1-\ell \sim \epsilon$; in this limit, a single extremal surface contributes dominantly to the EE. However, they are no longer consistent if we consider the EE of a more general interval. How do we improve the RT conjecture? Looking at eq.\eqref{eq:EE of 2 disjoint interval} and eq.\eqref{eq:EE of finite circle}, one intuitively expects that they are described by multiple extremal surfaces in the corresponding bulks. In section~\ref{sec:3}, we will use this idea as a clue to improve the RT conjecture. Then, in section~\ref{sec:4}, we will confirm that our proposal is consistent with eq.\eqref{eq:EE of 2 disjoint interval} and eq.\eqref{eq:EE of finite circle}.
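As a sanity check of this mismatch, the following small numerical sketch (our own illustration; the values of $c$, $\beta$ and $\epsilon$ are arbitrary, and the winding sum is truncated) evaluates the CFT answer eq.\eqref{eq:EE of finite circle} against the RT minimum eq.\eqref{eq:EE from RT of finite circle}:
\begin{verbatim}
import numpy as np

def logsinh(x):
    # log(sinh(x)) for x > 0, stable for large x
    return x + np.log1p(-np.exp(-2.0 * x)) - np.log(2.0)

def S_cft(ell, beta, c=1.0, eps=1e-3, mmax=1000):
    # CFT answer: universal term plus the winding sum
    s = np.log(beta / (np.pi * eps)) + logsinh(np.pi * ell / beta)
    for m in range(1, mmax + 1):
        s += (logsinh(np.pi * (m - ell) / beta)
              + logsinh(np.pi * (m + ell) / beta)
              - 2.0 * logsinh(np.pi * m / beta))
    return (c / 3.0) * s

def S_rt(ell, beta, c=1.0, eps=1e-3):
    # RT answer: minimum over the two candidate minimal surfaces
    pre = np.log(beta / (np.pi * eps))
    s1 = (c / 3.0) * (pre + logsinh(np.pi * ell / beta))
    s2 = (c * np.pi / (3.0 * beta)
          + (c / 3.0) * (pre + logsinh(np.pi * (1.0 - ell) / beta)))
    return min(s1, s2)

for ell in (0.3, 0.5, 0.7):
    print(ell, S_cft(ell, beta=1.0), S_rt(ell, beta=1.0))
\end{verbatim}
For intermediate $\ell$ the two answers disagree, while they merge in the limits $\ell\to\epsilon$ and $\ell\to 1-\epsilon$, as stated above.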
\section{Refinement of RT conjecture} \label{sec:3} In this section, based on the derivations of \cite{Lewkowycz:2013nqa,Dong:2016fnf}, we will formulate a gravity dual of the EE eq.\eqref{eq:replica trick} and the Renyi entropy eq.\eqref{eq:Renyi entropy} by modifying three assumptions, concerning the geometry of the bulk replica manifold, the $n$-replicated gravity action, and the homologous condition. \subsection{Gravity dual of replica manifold} First, we consider the geometry of the gravity dual of the replica boundary manifold. Consider the $d+1$-dimensional replica bulk spacetime $\mathcal{M}_A^{(n)}$ whose boundary is the $d$-dimensional replica manifold $\mathcal{B}^{(n)}_A$, in the sense that $\partial \mathcal{M}_A^{(n)} = \mathcal{B}_A^{(n)}$. Since $\mathcal{B}_A^{(n)}$ has conical singularities, as mentioned in the previous section, we assume that the replica bulk spacetime $\mathcal{M}_A^{(n)}$ is a singular manifold. This is the first modification. We restrict the configuration of the singularities $\mathcal{C}_A^{(n)}$ in $\mathcal{M}_A^{(n)}$ so that $\mathcal{M}^{(n)}_A$ has the same topology as $\mathcal{B}_A^{(n)}$. We call this condition the topological consistency condition (TCC), following \cite{Haehl:2014zoa}. In the next subsection, we will see that the TCC restricts the configuration of the extremal surfaces that can contribute to the EE. Notice that the first modification does not affect the net result of this discussion, but this assumption makes the argument clear and leads to a natural bulk dual of the twist operators, as we will see later. \begin{figure}[h] \vspace{10pt} \centering \includegraphics[width=0.6\linewidth]{CosmicBranes.pdf} \caption{The red and blue dotted lines represent the cosmic branes $\mathcal{C}_{A,\,\pm}^{(n),\, i}$ on a time slice, with defect angles $\pm \pi(1-n^2)/n$, respectively. The total defect angle is $\pi(1-n^2)/n$ at both points of $\partial A$.} \label{fig:Cosmic branes} \end{figure} Second, we build the gravity action which realizes the replica bulk manifold $\mathcal{M}^{(n)}_A$ as a solution. By the AdS/CFT correspondence, the partition function $Z$ of a CFT can be described by the path integral $Z = \int \mathcal{D}\phi \exp (-I[\phi])$, where $I[\phi]$ is the gravity action. Then, what is the gravity action $I_A^{(n)}[\phi]$ corresponding to $Z_A^{(n)}$? Since the partition function $Z_A^{(n)}$ is the vacuum expectation value of the twist operators, the corresponding gravity action $I_A^{(n)}[\phi]$ should contain sources which create the conical singularities with defect angle $\pi(1-n^2)/n$ at $\partial A$. The following cosmic brane $\mathcal{C}_A^{(n)}$, anchored on $\partial A$, creates singularities with defect angle $\pi(1-n^2)/n$: \begin{align} I_{\mathrm{brane}}^{(n)} = -\frac{1-n^2}{8n G} \int_{\mathcal{C}^{(n)}_A} d^{d-1} y \sqrt{h}, \label{eq:Action for cosmic brane} \end{align} where $y$ is the $d-1$ dimensional coordinate on $\mathcal{C}^{(n)}_A$ and $h$ is the determinant of the induced metric on $\mathcal{C}^{(n)}_A$. However, this is not the unique action satisfying the boundary condition on $\partial A$. As shown in figure~\ref{fig:Cosmic branes}, if we also introduce cosmic branes with the brane tension $-(1-n^2)/8nG$, we can generalize eq.\eqref{eq:Action for cosmic brane} to involve infinitely many cosmic branes, as follows.
\begin{align} I_{\mathrm{branes}}^{(n)} = \sum_i &-\frac{1}{16 \pi G} \int_{\mathcal{M}^{(n)}_A} d^{d+1} x \sqrt{g}\, \frac{2\pi(1-n^2)}{n} \delta(t)\delta(\xi-\xi^{(n),\,i}_{A,\,+}) \nonumber \\ &+ \sum_j \frac{1}{16 \pi G} \int_{\mathcal{M}^{(n)}_A} d^{d+1} x \sqrt{g}\, \frac{2\pi(1-n^2)}{n} \delta(t)\delta(\xi-\xi^{(n),\,j}_{A,\,-}), \label{eq:Action for cosmic brane2} \end{align} where $\xi^{(n),\,i}_{A,\,\pm}$ represents the location of a $d-1$ dimensional singularity anchored on $\partial A$ with defect angle $\pm \pi(1-n^2)/n$, respectively. As the second modification, we use eq.\eqref{eq:Action for cosmic brane2} instead of eq.\eqref{eq:Action for cosmic brane}. That is, we regard the $n$-replicated bulk gravity action as $I_A^{(n)}[\phi] = n I[\phi] + I_{\mathrm{branes}}^{(n)}$. Since eq.\eqref{eq:Action for cosmic brane2} is a generalization of eq.\eqref{eq:conical deficit}, we can regard the cosmic branes as a gravity dual of the twist operators. In the large $N$ limit, we can use the saddle point approximation to evaluate the path integral as $Z^{(n)}_A = \int \mathcal{D}\phi \exp (-I^{(n)}_A[\phi]) \sim \exp (-\tilde{I}^{(n)}_{A}[\phi])$, where $\tilde{I}^{(n)}_{A}[\phi]$ is the action evaluated on the solution of the Euler-Lagrange equations. If $n \sim 1$, the difference between $\tilde{I}^{(n)}_{A}[\phi]$ and $n \tilde{I}[\phi]$ comes only from the cosmic brane action. Then the Renyi entropy \eqref{eq:replica trick2} in the limit $N \to \infty,\ n \to 1$ is \begin{align} S_A^{(n)} = -\frac{1}{n-1}\,\tilde{I}_{\mathrm{branes}}^{(n)}. \label{eq:Area of cosmic brane?} \end{align} Since $\tilde{I}_{\mathrm{branes}}^{(n)}$ is evaluated at the saddle point, it is a sum of signed areas of extremal surfaces. In the next subsection, we will study which combinations of extremal surfaces can satisfy the TCC. \subsection{Intersection number of cosmic branes} We discuss the third modification, which is the refinement of the homologous condition. Recall that the homologous condition is required for consistency with the strong subadditivity of the EE. However, the homologous condition, namely that there exists a region $\mathcal{R}_A$ satisfying $\partial \mathcal{R}_A = A \cup \gamma_A$, makes sense only when we deal with a single minimal surface. Thus, we need to refine the homologous condition in order to treat self-intersecting cosmic branes and multiple cosmic branes. \begin{figure}[h] \centering \includegraphics[width=0.78\linewidth]{Intersection.pdf} \caption{An example of a 2-dimensional surface $\mathcal{D}$. Thanks to the condition $\partial \mathcal{D} \subset \mathcal{B}- \partial A$, the intersection number is $0$ for an arbitrary $\mathcal{D}$ if we assign the orientations of an interval $A$ and a cosmic brane $\mathcal{C}_{A}^{(n)}$ as in this figure.} \label{fig:Intersection} \end{figure} Fortunately, there is such a refinement of the homologous condition, thanks to \cite{Haehl:2014zoa}. Let $\mathcal{D}$ be a 2-dimensional surface satisfying $\partial \mathcal{D} \subset \mathcal{B}-\partial A$, as in figure~\ref{fig:Intersection}. The intersection number is defined as the number of times that the oriented cosmic branes $\mathcal{C}_{A,\,\pm}^{(n),\, i}$ in $\mathcal{M}_A^{(n)}$ and the oriented region $A$ on $\mathcal{B}$ penetrate $\mathcal{D}$. Notice that $\mathcal{M}_A^{(n)}$ satisfies the TCC if and only if the intersection number is 0. Let $E_A$ be the set of all extremal surfaces whose intersection number vanishes for the region $A$ and an arbitrary $\mathcal{D}$ and which satisfy the boundary condition at $\partial A$.
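Before turning to the extra assumption needed in $1+1$ dimensions, it may help to make the boundary condition at $\partial A$ explicit. Summing the defect angles of eq.\eqref{eq:Action for cosmic brane2} over all branes anchored at a given point of $\partial A$ and matching the total defect angle $\pi(1-n^2)/n$ of figure~\ref{fig:Cosmic branes}, we get (this bookkeeping is our reading of the construction, not a statement proven here):
\begin{align*}
\left(N_+ - N_-\right)\frac{\pi(1-n^2)}{n} = \frac{\pi(1-n^2)}{n}
\quad\Longleftrightarrow\quad N_+ - N_- = 1,
\end{align*}
where $N_\pm$ denote the numbers of branes with defect angle $\pm\pi(1-n^2)/n$ ending on that point, counted over the whole configuration. In particular, any branes added beyond a single positive one must come in sign-balanced groups, which is exactly the pattern of the BTZ example in section~\ref{sec:4}.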
In the $1+1$ dimensional CFT system, we impose one more assumption. Consider a cosmic string that connects two twist operators of the same type; for example, a string connecting $\mathcal{T}^{(n)}(u_1)$ with $\mathcal{T}^{(n)}(u_2)$, or $\tilde{\mathcal{T}}^{(n)}(v_1)$ with $\tilde{\mathcal{T}}^{(n)}(v_2)$. We assume that such a cosmic string has a negative defect angle $-\pi (1-n^2)/n$. If not, eq.\eqref{eq:Area of cosmic brane?} would reproduce an incorrect EE and contradict the SSA inequality, as we will see. Then, in this paper, we say that the extremal surfaces satisfy the refined homologous condition (RHC) if they are elements of $E_A$ and the cosmic branes $\mathcal{C}_{A,\,+}^{(n),\, i}$ with a positive defect angle do not connect twist operators of the same type. As the conclusion of the discussion of this whole section, the gravity dual of the Renyi entropy of a static CFT in the large $N$ limit is as follows: \begin{align} S_A^{(n)} = \frac{n+1}{8n G} \sum_{ \mathcal{C}_{A,\,\pm}^{(n),\, i} \in \tilde{E}_A} \left( \mathrm{Area\ of\ }\mathcal{C}_{A,\,+}^{(n),\, i} - \mathrm{Area\ of\ } \mathcal{C}_{A,\,-}^{(n),\, i} \right), \label{eq:the gravity dual} \end{align} where $n \sim 1$ and $\tilde{E}_A$ is the set of all cosmic branes $\mathcal{C}_{A,\,\pm}^{(n),\, i}$ satisfying the refined homologous condition. Notice that we have to introduce the UV cutoff of the CFT to regulate the IR divergence of the gravity partition function; the areas of the cosmic branes then take finite values. \section{Holographic EE from extremal surfaces} \label{sec:4} We will confirm that our gravity dual of the Renyi entropy, eq.\eqref{eq:the gravity dual}, is consistent with eq.\eqref{eq:EE of 2 disjoint interval} and eq.\eqref{eq:EE of finite circle}. \subsection{EE and mutual information of two intervals} We apply eq.\eqref{eq:the gravity dual} to the calculation of the EE of the two intervals $A = [u_1, v_1] \cup [u_2, v_2],\, u_1<v_1<u_2<v_2$. Since the dual bulk spacetime is the planar AdS spacetime, there exists only one cosmic string connecting any two given points on the AdS boundary. Therefore, the cosmic strings in $\tilde{E}_A$ are those depicted in figure~\ref{fig:2interval}, and the Renyi entropy is \begin{align} S_A^{(n)} = \frac{c(n+1)}{6n} \log \frac{(v_1-u_1)(v_2-u_2)(u_2-v_1)(v_2-u_1)}{\epsilon^2\, (u_2-u_1)(v_2-v_1)}. \label{eq:4-point2} \end{align} If $(u_1,v_1,u_2,v_2)=(0,\ell,1,\infty)$, $\epsilon \leq \ell \leq 1-\epsilon$, eq.\eqref{eq:4-point2} reduces to eq.\eqref{eq:EE of 2 disjoint interval}. Notice that the general form of the $4$-point function of the twist operators in a $1+1$ dimensional CFT is \cite{Calabrese:2009qy} \begin{align} \mathrm{Tr}_A \left(\rho_A^{\ n} \right) = \left[\frac{(v_1-u_1)(v_2-u_2)(u_2-v_1)(v_2-u_1)}{(u_2-u_1)(v_2-v_1)}\right]^{-\frac{c}{6}\left(n-\frac{1}{n}\right)}\mathcal{F}_n(x), \label{eq:4-point3} \end{align} where $x = (v_1-u_1)(v_2-u_2)/(u_2-u_1)(v_2-v_1)$ is the cross ratio and the function $\mathcal{F}_n$ depends on the details of the CFT. In the large $c$ limit, since the spectrum involved in the theory becomes universal, eq.\eqref{eq:4-point2} is consistent with eq.\eqref{eq:4-point3}. \begin{figure}[h] \centering \includegraphics[width=0.7\linewidth]{2interval.pdf} \caption{The extremal surfaces satisfying the RHC in the case of $2$ intervals.} \label{fig:2interval} \end{figure} Next, consider the mutual information of two intervals $A = [u_1, v_1]$ and $B = [u_2, v_2]$, $u_1<v_1<u_2<v_2 $.
The mutual information $I_{AB}$ is \begin{align} I_{AB} = S_A + S_B -S_{A+B} = \frac{c}{3} \log \frac{(u_2-v_1)(v_2-u_1)}{ (u_2-u_1)(v_2-v_1)} \geq 0. \label{eq:mutual information} \end{align} As explained in \cite{Headrick:2007km}, this is non-negative thanks to the relations between the areas of the extremal surfaces, and we can regard it as a consequence of the strong subadditivity inequality $S_{A+B+C} + S_{B} \geq S_{A+B} + S_{B+C}$: indeed, if $A = [u_1, v_1]$, $B = [v_1, u_2]$ and $C = [u_2, v_2]$, this inequality states exactly that eq.\eqref{eq:mutual information} is non-negative. Let us revisit the condition that a cosmic string with a positive defect angle does not connect twist operators of the same type. Suppose instead that positive extremal surfaces connected ($u_1$,$v_1$), ($u_2$,$v_2$), ($u_1$,$u_2$) and ($v_1$,$v_2$), and negative ones connected ($u_1$,$v_2$) and ($v_1$,$u_2$). Although there would be no problem with the boundary condition and the intersection numbers, eq.\eqref{eq:mutual information} would become negative. On the other hand, notice that we do permit cosmic strings with a negative defect angle to connect $\mathcal{T}$ and $\tilde{\mathcal{T}}$. Suppose such surfaces are added to figure~\ref{fig:2interval}. Then we have to add as many surfaces with a positive defect angle as with a negative one in order to satisfy the RHC. These contributions cancel each other, and only the contribution of the original surfaces remains. In particular, eq.\eqref{eq:the gravity dual} is equivalent to the RT conjecture if there exists only one extremal surface corresponding to a region $A$ in the spacetime. \subsection{BTZ black hole} The BTZ black hole spacetime eq.\eqref{eq:BTZmetric} is the gravity dual of a $1+1$ dimensional CFT on the circle at finite temperature. If the inverse temperature of the CFT is $\beta$, the black hole mass satisfies $M =1/\beta^2$. Let the subregion $A$ be a single interval; then there are infinitely many extremal surfaces anchored on $\partial A$ in the BTZ black hole spacetime, as shown in figure~\ref{fig:ExtremalsInBTZ}. See also appendix~\ref{Appendix}. \begin{figure}[h] \vspace{4pt} \centering \includegraphics[width=0.8\linewidth]{Extremalsurfaces.pdf} \vspace{-4pt} \caption{The extremal surfaces in the BTZ black hole spacetime. These red and blue extremal surfaces are anchored on $\theta = \pm \pi/3$. The black hole mass is $M=1$. Each disk represents a time slice of the spacetime with the radial direction compactified, and the outer circles represent the AdS boundary. Note that the black hole itself is not drawn.} \label{fig:ExtremalsInBTZ} \end{figure} First, we confirm how the RHC works in this system. Since the time coordinate is periodic in a finite temperature system, there can exist a special $2$-surface $\mathcal{D}$: for example, the surface $\mathcal{D}$ with boundary $\partial \mathcal{D} = \{(t,\theta)|\ 0\leq t \leq \beta,\,\theta=\mathrm{const.} \}$ shown in figure~\ref{fig:TCC}. Furthermore, some surfaces can connect the points of $\partial A$ while avoiding such a surface $\mathcal{D}$, since the topology of the boundary manifold $\mathcal{B}$ is a torus. \begin{figure}[h] \centering \includegraphics[width=0.35\linewidth]{TCC.pdf} \caption{An example of the extremal surfaces satisfying the RHC in the BTZ black hole spacetime.
The condition on the intersection number forbids replacing a surface by one that winds around the black hole additional times.} \label{fig:TCC} \end{figure} With this in mind, we construct the combinations of the extremal surfaces satisfying the RHC as follows. See figures~\ref{fig:TCC} and \ref{fig:ThesurfacesInBTZ}, and label the signed surfaces by the function $\pm s(m \pm \ell)$. First, the surface $s(\ell)$ satisfies the RHC. Next, given the boundary condition, consider the surfaces $+s(m_1-\ell)$, $+s(m_2+\ell)$, $-s(m_3)$ and $-s(m_4)$. The condition on the intersection number imposes $m_1 = m_2 = m_3 =m_4$. To confirm this, note that we can homotopically deform the pair of surfaces $+s(m-\ell)$ and $-s(m)$ to $A$, and likewise the pair of surfaces $+s(m+\ell)$ and $-s(m)$. Then the surfaces $+s(\ell)$, $+s(m-\ell)$, $+s(m+\ell)$, $-s(m)$ and $-s(m)$ satisfy the RHC. Replacing any one of these surfaces by another always changes the intersection number. Therefore, the complete list of combinations of extremal surfaces satisfying the RHC consists of $+s(\ell)$ together with, for every $m \geq 1$, the quadruple of surfaces $+s(m-\ell)$, $+s(m+\ell)$, $-s(m)$ and $-s(m)$. \begin{figure}[h] \centering \includegraphics[width=0.74\linewidth]{TheExtremals.pdf} \caption{The diagrams of the extremal surfaces which contribute to the EE of a single interval in the BTZ black hole spacetime. The function $s$, defined in eq.\eqref{eq:DefofAreaFunction}, denotes the area$/4G$ of each surface.} \label{fig:ThesurfacesInBTZ} \end{figure} Then the EE is the following sum of the signed areas of all the surfaces: \begin{align} S_A &= s(\ell) + \sum_{m=1}^{\infty} \left[ s(m - \ell) + s(m + \ell) - 2 s(m) \right] \nonumber \\ &= \frac{c}{3} \log\left[ \frac{\beta}{\pi\epsilon} \sinh\left(\frac{\pi\ell}{\beta}\right)\right] + \frac{c}{3} \sum_{m=1}^{\infty}\log \frac{\sinh \left[\frac{\pi}{\beta}(m-\ell)\right]\sinh \left[\frac{\pi}{\beta}(m+\ell)\right]}{\sinh^2 \left(\frac{\pi m}{\beta}\right)}. \label{eq:BTZ} \end{align} This is consistent with eq.\eqref{eq:EE of finite circle}. We now discuss a few geometrical aspects of the configuration of the extremal surfaces corresponding to eq.\eqref{eq:BTZ}. First, the most striking difference in geometry between the RT conjecture's eq.\eqref{eq:EE from RT of finite circle} and our eq.\eqref{eq:BTZ} is the region that the surfaces can sweep. If the homologous minimal surface described the EE completely, we would face the plateaux problem \cite{Freivogel:2014lja}: the EE would be independent of the properties of the bulk region slightly outside the black hole horizon, because the minimal surface cannot reach such a region for any choice of boundary region. Therefore, if the RT conjecture were correct in this spacetime, it might be impossible to reconstruct the bulk geometry from the boundary information. On the other hand, since the surfaces which represent eq.\eqref{eq:BTZ} can sweep the entire bulk region outside the black hole horizon, the plateaux problem is solved in the BTZ spacetime. In the same way, since in the higher dimensional Schwarzschild-AdS black hole there are surfaces that can approach the black hole horizon arbitrarily closely, the plateaux problem may not occur there either. Second, the configuration of the surfaces of eq.\eqref{eq:BTZ} is consistent with the restriction from causality \cite{Headrick:2014cta}: all the surfaces of eq.\eqref{eq:BTZ} lie within the entanglement shadow region.
In this sense, the RHC is a stronger condition than the causality constraint, since it determines not only the positions but also the combination of the extremal surfaces, whereas causality cannot restrict the combination. Finally, we point out the relation between the surfaces of eq.\eqref{eq:BTZ} and the black hole horizon. Although eq.\eqref{eq:BTZ} does not seem to include the black hole horizon explicitly, if $\ell = 1-\delta,\ \epsilon < \delta \ll 1$, eq.\eqref{eq:BTZ} becomes \begin{align} S(1-\delta) &= \frac{c}{3} \log\left[ \frac{\beta}{\pi\epsilon} \sinh\left(\frac{\pi \delta}{\beta}\right)\right] + \frac{c}{3} \lim_{m \to \infty} \log \frac{\sinh \left(\frac{\pi}{\beta}(m+1)\right)}{\sinh \left(\frac{\pi m}{\beta}\right)} + O(\delta) \nonumber \\ &= \frac{c}{3} \log\left[ \frac{\beta}{\pi\epsilon} \sinh\left(\frac{\pi \delta}{\beta}\right)\right] + \frac{c \pi}{3 \beta}. \label{eq:BTZ large interval} \end{align} Note that, taking $\delta, \epsilon \to 0$, the Araki-Lieb inequality is saturated and the difference of the EEs corresponds to $c$ times the black hole entropy $S_{BH}$: \begin{align} \lim_{\delta,\epsilon \to 0} S(1-\delta)- S(\delta) = \frac{c \pi}{3 \beta} = c S_{BH}. \label{eq:Araki-Lieb} \end{align} As eq.\eqref{eq:BTZ large interval} shows, the black hole horizon emerges as the difference between the surface wrapped $m+1$ times around the black hole and the one wrapped $m$ times, in the limit $m \to \infty$; hence the two sides of eq.\eqref{eq:Araki-Lieb} agree not only as values but also as surfaces. \section{Conclusion} \label{sec:5} We provided a gravity dual of the Renyi entropy, eq.\eqref{eq:the gravity dual}, as the sum of the signed areas of the cosmic branes satisfying the refined homologous condition. We saw that the RT conjecture contradicts the CFT results for the EE of two intervals on an infinitely long line at zero temperature and for that of an interval on a circle at finite temperature; our holographic EE eq.\eqref{eq:the gravity dual} is consistent with both. In the course of the derivation of eq.\eqref{eq:the gravity dual}, we first investigated the geometry of the replica manifold on the CFT side. On the corresponding gravity side, we gave a gravity action with infinitely many cosmic branes that is consistent with the geometry of the boundary replica manifold. After that, we imposed the refined homologous condition on the configuration of the cosmic branes. In the $1+1$-dimensional system, we imposed a non-trivial assumption on how the cosmic strings may connect the twist operators; this assumption may have to be revisited and justified on more natural grounds. As future work, we have to verify our conjecture eq.\eqref{eq:the gravity dual} in various systems by confirming its consistency with CFT calculations, for example in higher-dimensional black holes, the AdS soliton spacetime, and the D$p$-brane spacetime. We should also revisit existing arguments relying on the homologous minimality condition of the RT conjecture. In a general system, since a region on the AdS boundary has many corresponding extremal surfaces in the bulk, the gravity dual of the EE should contain the contributions of multiple extremal surfaces. For example, regarding the HRT conjecture \cite{Hubeny:2007xt} and the quantum corrections to the RT conjecture \cite{Faulkner:2013ana}, we would have to consider the dynamics and the quantum corrections of multiple cosmic branes. \acknowledgments{ I would like to thank Yasusada Nambu and Chulmoon Yoo for useful discussions and comments. }
\section{Introduction} In the last few years, much research has been done on classical problems of complex analysis in the setting of singular spaces; for example, the $\overline\partial$-Neumann operator has been studied in \cite{Rup2} by Ruppenthal, the Cauchy-Riemann equation in \cite{AS,DFV,ForII,Rup0,Rup1} by Andersson, Samuelsson, Diederich, Forn\ae ss, Vassiliadou, Ruppenthal, ideals of holomorphic functions on analytic spaces in \cite{ASS} by Andersson, Samuelsson and Sznajdman, and problems of extension and restriction of holomorphic functions on analytic spaces in \cite{DiMa0, Duquenoy} by Diederich, Mazzilli and Duquenoy. In this article, we are interested in problems of extension of holomorphic functions defined on an analytic space. Let $D$ be a bounded pseudoconvex domain of $\cc^n$ with smooth boundary, let $f$ be a holomorphic function in a neighborhood of $D$ and let $X=\{z, f(z)=0\}$ be an analytic set such that $D\cap X\neq \emptyset$. The first extension problem that one can consider is the following one~: Is it true that a function $g$ which is holomorphic on $D\cap X$ has a holomorphic extension to $D$?\\ It is known by Cartan's theorem B that the answer to this question is affirmative and that any function $g$ holomorphic on $X\cap D$ has a holomorphic extension $G$ to the whole domain $D$ if and only if $D$ is pseudoconvex. More difficulties arise when we ask $G$ to satisfy growth conditions such as being in $L^q(D)$ or in $BMO(D)$. This question has been widely studied by many authors under different assumptions on $D$ or $X$. In \cite{Ohsawa}, Ohsawa and Takegoshi proved, when $X$ is a hyperplane, that any $g\in L^2(X\cap D)\cap {\cal O}(X\cap D)$ admits an extension $G\in L^2(D)\cap \oo (D)$. This result was generalized to the case of manifolds of higher codimension in \cite{OhsawaII} by Ohsawa. In \cite{Berndtsson}, Berndtsson investigated the case of singular varieties and obtained a condition on $g$ which implies that it admits a holomorphic $L^2$ extension to $D$. However, this condition requires that $g$ vanish on the singularities of $X$; thus $g\equiv 1$ does not satisfy it, although it can trivially be extended holomorphically.\\ Assuming that $D$ is strictly pseudoconvex and that $X$ is a manifold, Henkin proved in \cite{Henkin} that any $g\in L^\infty(D\cap X)\cap\ \oo(D\cap X)$ has an extension in $L^\infty(D)\cap\oo(D)$, provided that $bD$, the boundary of $D$, and $X$ are in general position. Cumenge in \cite{Cum} generalized this result to the case of Hardy spaces, and Amar in \cite{Ama} removed the hypothesis of general position of $bD$ and $X$ assumed in \cite{Henkin}. The case of $L^\infty$ extensions has also been investigated in the case of weak (pseudo)convexity. In \cite{DiMa2}, Diederich and Mazzilli proved that when $D$ is convex of finite type and $X$ is a hyperplane, any $g\in L^\infty(D\cap X)\cap\oo(D\cap X)$ is the restriction of some $G\in L^\infty(D)\cap\oo(D)$. In \cite{WA}, again for $D$ convex of finite type but for $X$ a manifold, a sufficient and nearly necessary condition on $X$ was given under which any function $g$ which is bounded and holomorphic on $X\cap D$ is the restriction of a bounded holomorphic function on $D$. This restriction problem was also studied in \cite{Jasiczak} by Jasiczak for $D$ a pseudoconvex domain of finite type in $\cc^2$ and $X$ a manifold.
In this article we consider a strictly convex domain $D$ of $\cc^n$ and an analytic subset $X$ of $\cc^n$ such that $X\cap D\neq\emptyset$ and $X\cap bD$ is transverse in the sense of tangent cones. We give necessary conditions and, when $n=2$, sufficient conditions under which a function $g$ holomorphic in $X\cap D$ admits a holomorphic extension in the class $BMO(D)$ or $L^q(D)$, $q\in [1,+\infty)$. \par\medskip Let us write $D$ as $D=\{z\in\cc^n,\ \rho(z)<0\}$ where $\rho$ is a smooth strictly convex function defined on $\cc^n$ such that the gradient of $\rho$ does not vanish in a neighborhood $\cal U$ of $bD$. We denote by $D_r$, $r\in\rr$, the set $D_r=\{z\in\cc^n,\ \rho(z)<r\}$, by $\eta_\zeta$ the outer unit normal to $bD_{\rho(\zeta)}$ at a point $\zeta\in{\cal U}$ and by $v_\zeta$ a smooth complex tangent vector field at $\zeta$ to $bD_{\rho(\zeta)}$. Our first result is the following. \begin{theorem}\label{th0} For $n=2$, there exist two integers $k,l\geq 1$, depending only on $X$, such that if $g$ is a holomorphic function on $X\cap D$ which has a $C^\infty$ smooth extension $\tilde g$ on $D$ which satisfies \begin{enumerate}[(i)] \item \label{th0i} there exists $N\in\nn$ such that $|\rho|^N \tilde g$ vanishes to order $l$ on $bD$, \item \label{th0ii}there exists $q\in[1,+\infty]$ such that $\left|\diffp{^{\alpha+\beta}\tilde g}{\overline{\eta_\zeta}^\alpha\partial \overline{v_\zeta}^\beta}\right||\rho|^{\alpha+\frac\beta 2}$ belongs to $L^q(D)$ for all non-negative integers $\alpha$ and $\beta$ with $\alpha+\beta\leq k$, \item\label{th0iii} $\diffp{^{\alpha+\beta}\tilde g}{\overline{\eta_\zeta}^\alpha\partial \overline{v_\zeta}^\beta}=0$ on $X\cap D$ for all non-negative integers $\alpha$ and $\beta$ with $\alpha+\beta\leq k$, \end{enumerate} then $g$ has a holomorphic extension $G$ in $L^q(D)$ when $q<+\infty$ and in $BMO(D)$ when $q=+\infty$. Moreover, up to a uniform multiplicative constant depending only on $k$, $l$ and $N$, the norm of $G$ is bounded by the supremum of the $L^q$-norms of $\zeta\mapsto \left|\diffp{^{\alpha+\beta}\tilde g}{\overline{\eta_\zeta}^\alpha\partial \overline{v_\zeta}^\beta}(\zeta)\right||\rho(\zeta)|^{\alpha+\frac\beta 2}$ for $\alpha ,\beta $ with $\alpha+\beta\leq k$. \end{theorem} In Lemma \ref{lemma2}, Corollary \ref{th2} and Theorem \ref{th5}, we will give conditions under which a function $g$ holomorphic on $X\cap D$ admits a smooth extension on $D$ which satisfies the assumptions of Theorem \ref{th0}.\\ Let us mention that the integer $k$ in Theorem \ref{th0} is in fact equal to the maximum of the orders of the singularities of $X$, and that the hypotheses of Theorem \ref{th0} can be relaxed a little in the following way: the theorem is still valid if, for every singularity $z_0\in X\cap\overline D$ of $X$ of order $k_0$, we check the hypotheses (\ref{th0ii}) and (\ref{th0iii}) with $k$ replaced by $k_0$ and $D$ replaced by ${\cal U}_0\cap D$, where ${\cal U}_0$ is a neighborhood of $z_0$. \par\smallskip The holomorphic extension of Theorem \ref{th0} is given by an integral operator combining the Berndtsson-Andersson reproducing kernel and a residue current. In \cite{Ama}, Amar pointed out for the first time the importance of the current $\overline\partial\left[ \frac1f\right]$ in the problem of extension. In \cite{Duquenoy} the extension is given by an operator constructed by Passare which uses the classical residue current $\overline\partial \left[ \frac1f\right]$ (see \cite{Passare}).
However, as pointed out in \cite{Duquenoy}, it is not so easy to handle the case of singularities of order greater than 2, and the classical currents do not give a good extension in this case. To overcome this difficulty, we have to adapt a construction, due to the second author, of new residue currents which will play the role of $\overline\partial\left[ \frac1f\right]$ (see \cite{Maz1} and \cite{Mazzilli2}). The extension given by Theorem \ref{th0} will be obtained via a linear operator which uses a Berndtsson-Andersson reproducing kernel and these new currents (see Section \ref{section3}). \par\smallskip Observe that in Theorem \ref{th0} we assume the existence of a smooth extension $\tilde g$ satisfying properties (\ref{th0i}), (\ref{th0ii}) and (\ref{th0iii}), whereas no such assumption is made in the previous articles we quoted, which deal with extension problems. It should be pointed out that while boundedness is a sufficient hypothesis for obtaining a bounded holomorphic extension when $X$ is a manifold (see \cite{ WA,Ama,Cum,DiMa2}), it is not possible to obtain $L^\infty$ or even $L^2$ extensions when $X$ has singularities if we only assume that $g$ is bounded on $X\cap D$ (see \cite{DiMa0}) : a stronger condition is needed. Actually, even if in the manifold case no smooth extension is assumed to exist, a smooth extension which satisfies (\ref{th0ii}) and (\ref{th0iii}) is constructed for example in \cite{Cum,DiMa2,WA}. This is done as follows. When $X$ is a manifold, let us locally write $X$ as $X=\{(z',\alpha(z')),\ z'\in\cc^{n-1}\}$, with $\alpha$ holomorphic. If for $z=(z_1,\ldots, z_n)$ we set $z'=(z_1,\ldots, z_{n-1})$, then the function $\tilde g$ defined by $\tilde g(z):=g(z',\alpha(z'))$ is a local holomorphic extension of $g$. Gluing all these local extensions together, we get a smooth extension which satisfies (\ref{th0ii}) and (\ref{th0iii}). In some sense, the way the local holomorphic extension is constructed in the manifold case is a kind of interpolation : $\tilde g(z',\cdot)$ is the polynomial of degree $0$ which interpolates $g(z',\alpha(z'))$ at the point $z_n=\alpha(z')$. Following this idea, we will construct in Section \ref{section5} a local holomorphic extension by interpolation. Provided we have good control of the polynomials which interpolate $g$ on the different sheets of $X$, gluing these local extensions together, we will obtain an appropriate smooth extension. The control of the interpolating polynomials will be achieved thanks to an assumption on the divided differences we can build from $g$ between the different sheets of $X$. This will give us simple numerical conditions under which the function $g$ has a smooth extension $\tilde g$ which satisfies (\ref{th0i}), (\ref{th0ii}) and (\ref{th0iii}) of Theorem \ref{th0} (see Theorems \ref{th2} and \ref{th5}). The divided differences are defined as follows. \par\smallskip For $z\in D$, $v$ a unit vector in $\cc^n$, and $\varepsilon$ a positive real number, we set $\Delta_{z,v}(\varepsilon)=\{z+\lambda v,\ |\lambda|<\varepsilon\}$ and $$\tau(z,v,\varepsilon)=\sup\{\tau>0,\ \rho(z+\lambda v)-\rho(z)<\varepsilon\text{ for all } \lambda\in\cc,\ |\lambda|<\tau\}.$$ Therefore $\tau(z,v,\varepsilon)$ is the maximal radius $r>0$ such that the disc $\Delta_{z,v}\left(r\right)$ is contained in $D_{\rho(z)+\varepsilon}$. It is also the distance from $z$ to $bD_{\rho(z)+\varepsilon}$ in the direction $v$.
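Two model computations (ours, for illustration only) may help to visualize $\tau$. Take for $D$ the unit ball of $\cc^2$, $\rho(z)=|z|^2-1$, and $z=(1-d,0)$ with $0<d\ll 1$. Then $$\rho(z+\lambda v)-\rho(z)= \begin{cases} 2(1-d)\,\Re\lambda+|\lambda|^2 & \text{for } v=(1,0), \text{ so that } \tau(z,v,\varepsilon)\approx\tfrac\varepsilon2,\\ |\lambda|^2 & \text{for } v=(0,1), \text{ so that } \tau(z,v,\varepsilon)=\varepsilon^{1/2}, \end{cases}$$ which is precisely the dissymmetry between the normal direction and the complex tangent direction encoded in the \ko balls of Section \ref{section2}.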
For $\kappa$ a small positive real number, to be chosen later on, we set $$\Lambda_{z,v}=\{\lambda\in\cc,\ |\lambda|<3\kappa\tau(z,v,|\rho(z)|)\text{ and } z+\lambda v\in X\}.$$ The points $z+\lambda v$, $\lambda\in\Lambda_{z,v},$ are the points of $X$ which belong to $\Delta_{z,v}\left(3\kappa \tau(z,v,|\rho(z)|)\right)$; thus they all belong to $D$ provided $\kappa<\frac13$.\\ For $\lambda\in \Lambda_{z,v}$ let us define $g_{z,v}[\lambda]=g(z+\lambda v)$ and, if $g_{z,v}[\lambda_1,\ldots,\lambda_k]$ is defined, let us set, for $\lambda_1,\ldots, \lambda_k,\lambda_{k+1}$ belonging to $\Lambda_{z,v}$ and pairwise distinct, $$g_{z,v}[\lambda_1,\ldots,\lambda_{k+1}]=\frac{g_{z,v}[\lambda_1,\ldots, \lambda_k]-g_{z,v}[\lambda_2,\ldots,\lambda_{k+1}]}{\lambda_1-\lambda_{k+1}}.$$ Now consider the quantity $$c_\infty(g)=\sup|g_{z,v}[\lambda_1,\ldots,\lambda_k]| \tau(z,v,|\rho(z)|)^{k-1},$$ where the supremum is taken over all $z\in D,$ all $v\in\cc^n$ with $|v|=1$ and all $\lambda_1,\ldots,\lambda_k\in\Lambda_{z,v}$ pairwise distinct. In Section \ref{section5}, we will prove that the finiteness of $c_\infty(g)$ implies the existence of a smooth extension $\tilde g$ which satisfies the hypotheses of Theorem \ref{th0}. We will then obtain the following theorem. \begin{theorem}\mlabel{th3} In $\cc^2$, any function $g$ holomorphic on $X\cap D$ such that $c_\infty(g)$ is finite admits a holomorphic extension $G$ which belongs to $BMO(D)$, and $\|G\|_{BMO(D)}$ is bounded, up to a uniform multiplicative constant, by $c_\infty(g)$. \end{theorem} Conversely, if we know that $g$ admits a bounded holomorphic extension $G$ on $D$ and if $\lambda_1,\lambda_2$ belong to $\Lambda_{z,v}$, Montel proved in \cite{Mon} that there exist a point $a$ in the unit disc of $\cc$ and a point $\mu$ in the segment $[\lambda_1,\lambda_2]$ such that $\frac{g_{z,v}(\lambda_1)-g_{z,v}(\lambda_2)}{\lambda_1-\lambda_2}$ can be written as $a \diffp{G}{v}(z+\mu v)$. But since $G$ is bounded, its derivative, and therefore the divided difference $\frac{g_{z,v}(\lambda_1)-g_{z,v}(\lambda_2)}{\lambda_1-\lambda_2}$ as well, are bounded by $\|G\|_{L^\infty(D)}$ times the inverse of the distance from $z+\mu v$ to the boundary of $D$ in the direction $v$, and this quantity is comparable to $\tau(z,v,|\rho (z)|)$. We will show in Section \ref{section5} that this necessary condition holds in fact in $\cc^n$, $n\geq 2$, and for more than two points $\lambda_1$ and $\lambda_2$, and so we will prove the following theorem. \begin{theorem}\mlabel{th1} In $\cc^n$, $n\geq 2$, if a function $g$ holomorphic on $X\cap D$ admits an extension $G$ which is bounded and holomorphic on $D$, then $c_\infty(g)$ is finite. \end{theorem} In Section \ref{section5}, we will also study the case of $L^q$ extensions and, still using divided differences, we will give in $\cc^n$, $n\geq 2$, a necessary condition for a function $g$ holomorphic on $X\cap D$ to admit a holomorphic extension to $D$ which belongs to $L^q(D)$. Then we will also prove that this condition is sufficient when $n=2$ (see Theorems \ref{th4}, \ref{th5} and \ref{th6} for precise statements). We will also see in Section \ref{section5}, Theorems \ref{th8} and \ref{th9}, that all these results can be generalized in a natural way to weak holomorphic functions in the sense of Remmert. It should be noticed that a condition using divided differences was already used in \cite{Duquenoy}, but only varieties with singularities of order 2 were considered there.
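The recursion above is the classical divided-difference triangle. For the reader's convenience, here is a minimal numerical sketch (our own illustration; the function name is hypothetical and nothing in the sequel depends on it), which computes $g_{z,v}[\lambda_1],\ g_{z,v}[\lambda_1,\lambda_2],\ \ldots$ from the nodes $\lambda_i\in\Lambda_{z,v}$ and the values $g(z+\lambda_i v)$:
\begin{verbatim}
def divided_differences(nodes, values):
    # nodes: pairwise-distinct complex numbers lambda_1..lambda_k
    # values: g(z + lambda_i v) at those nodes
    k = len(nodes)
    table = [list(values)]           # order-0 row: g[l_i]
    for order in range(1, k):
        prev = table[-1]
        row = [(prev[i] - prev[i + 1])
               / (nodes[i] - nodes[i + order])
               for i in range(k - order)]
        table.append(row)
    # leading entries: g[l_1], g[l_1,l_2], ..., g[l_1,...,l_k]
    return [row[0] for row in table]
\end{verbatim}
The Newton polynomial built from these leading coefficients interpolates $g$ at the points $z+\lambda_i v$; bounding the coefficients as in $c_\infty(g)$ is what controls the local extensions glued together in Section \ref{section5}.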
In contrast, here we have no restriction on the order of the singularities, and our condition uses all the divided differences of degree at most the orders of the singularities. In Section \ref{section6}, we illustrate these conditions with examples. Among other things, when $D$ is the ball of center $(1,0)$ and radius $1$ and $X=\{z=(z_1,z_2)\in\cc^2,\ z_1^q=z_2^2\}$, with $q$ a positive odd integer, we will prove that any $g$ holomorphic and bounded on $X\cap D$ has an $L^2$-holomorphic extension on $D$ if and only if $q=1$ or $q=3$. \par\bigskip The article is organized as follows. In Section \ref{section2} we fix our notations and recall some results concerning the Berndtsson-Andersson kernel. In Section \ref{section3} we construct the new residue current adapted to our extension problem, and we prove Theorem \ref{th0} in Section \ref{secIII}. In Section \ref{section5} we prove Theorems \ref{th3} and \ref{th1} and we treat the case of $L^q$ holomorphic extensions. We give examples of applications of our results in Section \ref{section6}. \section{Notations and tools} \mlabel{section2} As usual, when $BMO$ questions or estimates of integral kernels arise in this context, the Koranyi balls or the McNeal polydiscs, their generalization to convex domains of finite type, naturally appear (see \cite{AB,AC, DFF} for example). This will of course be the case in this article, but here (and it seems to be the first time this happens) the Koranyi balls appear directly in the construction of the residue current, and so in the construction of a good extension. These balls enable us to establish a connection between the geometric properties of the boundary of the domain and the geometric properties of the variety (see Section \ref{section3}). The second classical tool we use is the Berndtsson-Andersson reproducing kernel, which we also recall in this section. \subsection{Notations} Let us first fix our notation and adopt the following convention. We will often have estimates up to multiplicative constants. For readability we introduce the following notation: we write $A\leqs B$ if there exists some constant $c>0$ such that $A\leq cB$. Each time we will mention on which parameters $c$ depends. We will write $A\eqs B$ if $A\leqs B$ and $B\leqs A$ both hold. We write $X$ as $X=\{z,\ f(z)=0\}$, where $f$ is a holomorphic function defined in a neighborhood of $\overline D$. Without restriction we assume that $f$ is minimal (see \cite{Chabat}, Theorem 3, paragraph 50). \subsection{Koranyi balls} We call the coordinate system centered at $\zeta$ with basis $\eta_{\zeta}, v_{\zeta}$ the \ko coordinate system at $\zeta$. We denote by $(z_1^*,z_2^*)$ the coordinates of a point $z$ in the \ko coordinate system centered at $\zeta$. The \ko ball centered at $\zeta$ of radius $r$ is the set $\kb r {\zeta}:=\{\zeta+\lambda\eta_{\zeta}+\mu v_{\zeta},\ |\lambda|<r,\ |\mu|<r^{\frac12}\}$. These balls have the following properties~: \begin{proposition}\mlabel{propII.0.1} There exist a neighborhood $\cal U$ of $bD$ and positive real numbers $\kappa$ and $c_1$ such that \begin{enumerate}[(i)] \item for all $\zeta\in {\cal U}\cap D$, ${\cal P}_{4\kappa|\rho(\zeta)|}(\zeta)$ is included in $D$. \item for all $\varepsilon>0$, all $\zeta,z\in {\cal U}$, $\p\varepsilon\zeta\cap\p\varepsilon z\neq \emptyset$ implies $\p\varepsilon z\subset \p{c_1\varepsilon}\zeta$. \item for all $\varepsilon>0$ sufficiently small, all $z\in {\cal U}$ and all $\zeta\in \p{\varepsilon}z$, we have $|\rho(z)-\rho(\zeta )|\leq c_1 \varepsilon$.
\item For all $\varepsilon >0$, unit vector $v\in\cc^n$, all $z\in{\cal U}$ and all $\zeta\in{\cal P}_\varepsilon (z)$, $\tau(z,v,\varepsilon )\eqs\tau(\zeta ,v,\varepsilon )$ uniformly with respect to $\varepsilon ,$ $z$ and $\zeta $. \end{enumerate} \end{proposition} For $\cal U$ given by Proposition \ref{propII.0.1} and $z$ and $\zeta$ belonging to $\cal U$, we set $\delta(z,\zeta)=\inf\{\varepsilon>0, \zeta\in \p\varepsilon z\}$. Proposition \ref{propII.0.1} implies that $\delta$ is a pseudo-distance in the following sense: \begin{proposition}\label{propII.0.2} For $\cal U$ and $c_1$ given by Proposition \ref{propII.0.1} and for all $z,\ \zeta$ and $\xi$ belonging to $\cal U$ we have $$\frac1{c_1}\delta(\zeta,z)\leq \delta(z,\zeta)\leq c_1 \delta(\zeta,z)$$ and $$\delta(z,\zeta)\leq c_1(\delta(z,\xi)+\delta(\xi,\zeta))$$ \end{proposition} \subsection{Berndtsson-Andersson reproducing kernel} \mlabel{secII.0} We now recall the definition of the Berndtsson-Andersson kernel of $D$ when $D$ is a strictly convex domain of $\cc^2$. We set $h_i(\zeta,z)=-\diffp{\rho}{\zeta_i}(\zeta)$, $h=\sum_{i=1,2} h_id\zeta_i$ and $\tilde{h}=\frac{1}{\rho} h$. For a $(1,0)$-form $\beta (\zeta,z)=\sum_{i=1,2}\beta_i d\zeta_i$ we set $\langle \beta (\zeta,z),\zeta-z\rangle = \sum_{i=1,2} \beta_i(\zeta,z)(\zeta_i-z_i)$. Then we define the Berndtsson-Andersson reproducing kernel by setting for an arbitrary positive integer $N$, $n=1,2$ and all $\zeta,z\in D$~: $$P^{N,n}(\zeta,z)=C_{N,n} \left(\frac{1}{1+\langle \tilde{h}(\zeta,z),\zeta-z\rangle }\right)^{N+n}\left(\overline \partial \tilde{h}\right)^n,$$ where $C_{N,n}\in \cc$ is a constant. We also set $P^{N,n}(\zeta,z)=0$ for all $z\in D$ and all $\zeta\notin D$. Then the following theorem holds (see \cite{BA}): \begin{theorem} For all $g\in \oo(D)\cap C^\infty(\overline D)$ we have $$g(z)=\int_D g(\zeta)P^{N,2}(\zeta,z).$$ \end{theorem} In the estimations of this kernel, we will need to write $h$ in the \ko coordinates at some point $\zeta_0$ belonging to $D$. We set for $i=1,2$ $h_i^*=-\diffp{\rho}{\zeta^*_i}(\zeta)$. Then $h$ is equal to $\sum_{i=1,2}h_i^* d\zeta^*_i$ and satisfies the following proposition. \begin{proposition}\mlabel{estiBA} There exists a neighborhood $\cal U$ of $bD$ such that for all $\zeta\in D\cap {\cal U}$, all $\varepsilon>0$ sufficiently small and all $z\in \p\varepsilon\zeta$ we have \begin{enumerate}[(i)] \item $|\rho(\zeta)+\langle h(\zeta,z),\zeta-z\rangle|\geqs \varepsilon+|\rho(\zeta)|+|\rho(z)|$, \item $|h^*_1(\zeta,z)|\leqs1$, \item $|h^*_2(\zeta,z)|\leqs \varepsilon^{\frac{1}{2}}$, \end{enumerate} and there exists $c>0$ not depending from $\zeta$ nor from $\varepsilon$ such that for all $z\in \p\varepsilon\zeta\setminus c\p\varepsilon\zeta$ we have $$|\langle h(\zeta,z),\zeta-z\rangle|\geqs \varepsilon+|\rho(z)|+|\rho(\zeta)|,$$ uniformly \wrt $\zeta,z$ and $\varepsilon$. \end{proposition} \section{Construction of the extension operator}\mlabel{section3} The holomorphic extension provided by Theorem \ref{th0} will be given by a linear integral operator. Its definition is based upon the construction of Mazzilli in \cite{Maz1} which uses Berndtsson-Andersson's reproducing kernel and a current $T$ such that $fT=1$. The current $T$ relies on a family of currents $T_{\cal V}$, where ${\cal V}$ is an open subset of $D$, such that $fT_{\cal V}=1$. 
Then using a locally finite covering $\left({\cal V}_j\right)_{j\in\nn}$ of $D$ and a partition of unity $\left(\chi_j\right)_{j\in\nn}$ associated with this covering, Mazzilli glues together all the currents $T_{{\cal V}_j}$ and gets a current $T=\sum_{j\in\nn} \chi_jT_{{\cal V}_j}$ such that $fT=1$. In \cite{Maz1}, the only assumption on the covering $\left({\cal V}_j\right)_j$ is to be locally finite. \par\smallskip In order to get very fine estimates of the operator, instead of an ordinary locally finite covering, we will use a covering of $D$ by Koranyi balls $\left( {\cal P}_{\kappa |\rho(z_j)|}(z_j)\right)_{j\in\nn}$ which will be more suited to the geometry of $bD$ (see subsection \ref{maxcover}). \par\smallskip In \cite{Maz1}, the local current $T_{\cal V}$ is constructed using the Weierstrass polynomial $P_f$ of $f$ in the open set ${\cal V}$. This means that every roots of $P_f$, or equivalently every sheets of $X$ intersecting ${\cal V}$, are used. We will modify the construction of $T_{\cal V}$ in order to use only the sheets of $X$ which are meaningful for our purpose. In order to be able to choose the good sheets of $X$, we construct in subsection \ref{secII.2} for $z_0$ near $bD$ a parametrization of $X$ in the \ko ball ${\cal P}_{\kappa|\rho(z_0)|}(z_0)$. \par\smallskip At last, we will have all the tools to define in subsection \ref{secII.3} the current $T$ such that $fT=1$ and the extension operator. \subsection{Koranyi covering}\mlabel{maxcover} In this subsection, for $\varepsilon_0>0$, we cover $D\setminus D_{-\varepsilon_0}$ with a family of \ko balls $\left({\cal P}_{\kappa|\rho(z_j)|}(z_j)\right)_{j\in\nn}$ where $\kappa$ is a positive small real number. This construction uses classical ideas of the theory of homogeneous spaces and is analogous to the construction of the covering of \cite{BCD}.\\ Let $\varepsilon_0$, $\kappa$ and $c$ be positive real numbers sufficiently small. We construct a sequence of point of $D\setminus D_{\varepsilon _0}$ as follows.\\ Let $k$ be a non negative integer and choose $z_1^{(k)}$ in $bD_{-(1-c\kappa )^k\varepsilon _0}$ arbitrarily.\\ When $z_1^{(k)},\ldots, z_j^{(k)}$ are chosen, they are two possibilities. Either for all $z\in bD_{-(1-c\kappa )^k\varepsilon _0}$ there exists $i\leq j$ such that $\delta(z,z_i^{(k)})<c\kappa (1-c\kappa )^k\varepsilon _0$ and the process ends here or there exists $z\in bD_{-(1-c\kappa )^k\varepsilon _0}$ such that for all $i\leq j$ we have $\delta(z,z_i^{(k)})\geq c\kappa (1-c\kappa )^k\varepsilon _0$ and we chose $z^{(k)}_{j+1}$ among these points. Since $D_{-(1-c\kappa )^k\varepsilon _0}$ is bounded, this process stops at some rank $n_k$.\\ We thus have constructed a sequence $(z_j^{(k)})_{k\in\nn, j\in\{1,\ldots,n_k\}}$ such that \begin{enumerate}[(i)] \item \label{seqi} For all $k\in\nn$, and all $j\in\{1,\ldots, n_k\}$, $z_j^{(k)}$ belongs to $bD_{-(1-c\kappa )^k\varepsilon _0}$. \item \label{seqii} For all $k\in\nn$, all $i,j\in \{1,\ldots, n_k\}$, $i\neq j$, we have $\delta(z_i^{(k)},z_j^{(k)})\geq c\kappa (1-c\kappa )^k\varepsilon _0$. \item \label{seqiii} For all $k\in\nn$, all $z\in bD_{-(1-c\kappa )^k\varepsilon _0}$, there exists $j\in\{1,\ldots, n_k\}$ such that $\delta(z,z_j^{(k)})<c\kappa (1-c\kappa )^k\varepsilon _0$. \end{enumerate} For such sequences, we prove the following proposition. 
\begin{proposition}\mlabel{propmax} For $\kappa >0$ and $c>0$ small enough, let $\left(z_j^{(k)}\right)_{k\in\nn,j\in\{1,\ldots, n_k\}}$ be a sequence which satisfies (\ref{seqi}), (\ref{seqii}) and (\ref{seqiii}). Then \begin{enumerate}[(a)] \item \label{propmax1} $D\setminus D_{\varepsilon _0}$ is included in $ \cup_{k=0}^{+\infty} \cup_{j=1}^{n_k} {\cal P}_{\kappa |\rho (z_j^{(k)})|}\left(z_{j}^{(k)}\right)$, \item \label{propmax2} there exists $M\in\nn$ such that for $z\in D\setminus D_{-\varepsilon _0}$, ${\cal P}_{4\kappa |\rho (z)|}(z)$ intersect at most $M$ Koranyi balls ${\cal P}_{ 4\kappa |\rho (z_j^{(k)})|}\left(z_{j}^{(k)}\right)$. \end{enumerate} \end{proposition} \pr We first prove that (\ref{propmax1}) holds. For $z\in D\setminus D_{\varepsilon _0}$ let $k\in\nn$ be such that $$(1-c\kappa )^{k+1}\varepsilon _0<|\rho (z)|<(1-c\kappa )^k\varepsilon _0$$ and let $\lambda\in\cc$ be such that $\zeta=z+\lambda \eta_z$ belong to $bD_{-(1-c\kappa)^k\varepsilon_0}$. On the one hand the assumption $(\ref{seqiii})$ implies that there exists $j\in\{1,\ldots, n_k\}$ such that $\delta\left(\zeta ,z_j^{(k)}\right)\leq c\kappa (1-c\kappa )^k\varepsilon _0$. On the other one hand we have $|\lambda|=\delta(z,\zeta)\leq C c\kappa(1-c\kappa)^k\varepsilon_0$ where $C$ does not depend from $z$ nor from $\zeta $. These two inequalities yield \begin{eqnarray*} \delta\left(z,z^{(k)}_j\right)&\leq& c_1(\delta(z,\zeta )+c_1\delta(\zeta ,z^{(k)}_j)\\ &\leq& \kappa cc_1(1-c\kappa )^k\varepsilon _0 (C\kappa +1)\\ &\leq& \kappa |\rho \left(z_j^{(k)}\right)| \end{eqnarray*} provided $c$ is small enough. Therefore $z$ belongs to ${\cal P}_{\kappa |\rho (z_j^{(k)})|}(z_j^{(k)})$ and (\ref{propmax1}) holds.\\ We now prove (\ref{propmax2}). Let $z$ be a point of $D\setminus D_{\varepsilon _0}$. For all $\zeta \in{\cal P}_{4\kappa |\rho(z)|}(z)$, if $\kappa $ is small enough, proposition \ref{propII.0.1} yields $$\frac12 |\rho (z)|\leq |\rho (\zeta )|\leq 2|\rho (z)|.$$ The same inequalities hold for all $z^{(k)}_j$ and all $\zeta\in {\cal P}_{4\kappa |\rho (z_j^{(k)})|}(z_j^{(k)})$. Thus if ${\cal P}_{4\kappa |\rho (z_j^{(k)})|}(z_j^{(k)})\cap {\cal P}_{\kappa |\rho (z)|}(z)\neq \emptyset$ we have $$\frac14|\rho (z)| \leq (1-c\kappa )^k\leq 4 |\rho (z)|.$$ Therefore $k$ can take at most $\frac{4\ln 2}{|\ln(1-c\kappa )|}$ values.\\ For such a $k$, we set $I_k=\left\{j\in\{1,\ldots,n_k\},\ {\cal P}_{4\kappa |\rho (z_j^{(k)})|}(z_j^{(k)})\cap {\cal P}_{4\kappa |\rho (z)|}(z)\neq \emptyset\right\}$. Assertion (\ref{propmax2}) will be proved provided we show that $\#I_k$, the cardinal of $I_k$, is bounded uniformly with respect to $k$ and $z$.\\ We denote by $\sigma$ the area measure on $bD_{-(1-c\kappa )^k\varepsilon _0}$. Since for all $i,j\in I_k$ distinct we have $ \delta\left(z_{i}^{(k)},z_j^{(k)}\right)\geq c\kappa (1-c\kappa )^k\varepsilon _0$, provided $c$ is small enough, we have \begin{eqnarray*} \lefteqn{\sigma \left(\cup_{j\in I_k}{\cal P}_{4\kappa \left|\rho \left(z_j^{(k)}\right)\right|}\left(z_j^{(k)}\right)\cap bD_{-(1-c\kappa )^k\varepsilon _0}\right)}\\ & \geq&\sigma \left(\cup_{j\in I_k}{\cal P}_{\frac c{c_1} \kappa(1-c\kappa )^k\varepsilon _0}\left(z_j^{(k)}\right)\cap bD_{-(1-c\kappa )^k\varepsilon _0}\right)\\ & \geq& \# I_k \left(\frac c{c_1} \kappa (1-c\kappa )^k\varepsilon _0\right)^n. \end{eqnarray*} Now we look for an upper bound of $\sigma \left(\cup_{j\in I_k}{\cal P}_{4\kappa |\rho (z_j^{(k)})|}(z_j^{(k)})\cap bD_{-(1-c\kappa )^k\varepsilon _0}\right)$. We fix $j_0\in I_k$. 
For all $j\in I_k,$ since ${\cal P}_{4\kappa |\rho (z_j^{(k)})|}(z_j^{(k)})\cap {\cal P}_{4\kappa |\rho (z)|}(z)\neq \emptyset$ and ${\cal P}_{4\kappa |\rho (z_{j_0}^{(k)})|}(z_{j_0}^{(k)})\cap {\cal P}_{4\kappa |\rho (z)|}(z)\neq \emptyset$, we have \begin{eqnarray*} \delta\left(z_{j_0}^{(k)},z_j^{(k)}\right)&\leqs & \delta\left(z_{j_0}^{(k)},z\right) +\delta\left(z,z_j^{(k)}\right)\\ &\leqs& 4\kappa \left(\left|\rho \left(z_{j_0}^{(k)}\right)\right|+\left|\rho \left(z_{j}^{(k)}\right)\right|\right)\\ &\leqs & \kappa (1-c\kappa )^k\varepsilon _0 \end{eqnarray*} uniformly with respect to $k$, $j$ and $j_0$. Thus there exists $K$ not depending from $z$, $j$, $j_0$ nor on $k$ such that ${\cal P}_{4\kappa |\rho (z_j^{(k)})|}(z_j^{(k)}) \subset {\cal P}_{\kappa K |\rho (z_{j_0}^{(k)})|}(z_{j_0}^{(k)})$. Therefore \begin{eqnarray*} \sigma \left(\cup_{j\in I_k}{\cal P}_{4\kappa |\rho (z_j^{(k)})|}(z_j^{(k)})\cap bD_{-(1-c\kappa )^k\varepsilon _0}\right) &\leq& \sigma \left({\cal P}_{4K\kappa |\rho (z_{j_0}^{(k)})|}(z_{j_0}^{(k)})\cap bD_{-(1-c\kappa )^k\varepsilon _0}\right)\\ &\leqs& \left(K\kappa (1-c\kappa )\varepsilon _0\right)^n \end{eqnarray*} which yields $\#I_k\leqs c^{-n}$.\qed\\ The covering property (\ref{propmax1}) allows us to settle the following definition \begin{definition} Let $\cal U$ be any subset of $\cc^n$. If the sequence $(z_j)_{j\in\nn}$ can renumbered such that (\ref{seqi}), (\ref{seqii}) are satisfied and such that (\ref{seqiii}) holds true for all $z\in {\cal U}\cap (D\setminus D_{-\varepsilon_0})$, the family $\left({\cal P}_{\kappa |\rho (z_j)|}(z_j)\right)_{j\in\nn}$ will be called a $\kappa$-covering of ${\cal U}\cap (D\setminus D_{-\varepsilon_0})$. \end{definition} \subsection{A family of parametrizations}\mlabel{secII.2} In order to construct the current we need to define our extension operator, we will need some kind of parametrization for $X$ over ${\cal P}_{\kappa|\rho(z_0)|}(z_0)$ when $z_0$ is near the boundary of the domain and when ${\cal P}_{\kappa|\rho(z_0)|}(z_0)\cap X\neq \emptyset$. Moreover, we will need some uniform estimates for this parametrization. Of course if we are near a regular point of $X$, such parametrizations do exist but the situation is more delicate when we are near a singularity of $X$. Given a point $z_0$ near a singularity $\zeta_0$ of $X$ which belongs to $bD$, we denote by $(\zeta_{0,1}^*,\zeta^*_{0,2})$ the coordinates of $\zeta_0$ is the \ko coordinates at $z_0$. We denote by $\Delta$ the unit of $\cc$ and by $\Delta_z(r)$ the disc of $\cc$ centered at $z$ of radius $r$. Our goal in this subsection is to prove the following propositions: \begin{proposition}\label{new_prop} There exists $\kappa >0$ sufficiently small and not depending on $z_0$ such that if $X\cap {\cal P}_{\kappa|\rho(z_0)|}(z_0)\neq\emptyset,$ then $|\zeta^*_{0,1}|\geq 2\kappa|\rho(z_0)|$. 
\end{proposition} \begin{proposition}\mlabel{propII.2.1} There exist $\kappa$ and $r$ positive real numbers sufficiently small, a positive integer $p_0$ and a neighborhood $\cal U$ of $\zeta_0$ such that for all $z_0\in {\cal U}$, if $|\zeta^*_{0,1}|\geq\kappa|\rho(z_0)|$ then there exist $\alpha_1^*,\ldots, \alpha^*_{p_0}$ holomorphic functions in $\Delta_0(2\kappa|\rho(z_0)|)$ which satisfy \begin{enumerate}[(i)] \item \mlabel{propII.2.1.2} $\alpha_j^*$ and $\diffp{\alpha^*_j}{z^*_1}$ are bounded on $\Delta_0(2\kappa|\rho(z_0)|)$ uniformly \wrt $z_0.$ \item \mlabel{propar3}if there exists $j$ and $z^*_1$ such that $(z^*_1,\alpha_j^*(z^*_1))$ belong to ${\cal P}_{2\kappa|\rho(z_0)|}(z_0)$ then for all $\zeta_1^*\in \Delta_0(2\kappa|\rho(z_0)|)$ we have $|\alpha_j^*(\zeta_1^*)|\leq \left(3\kappa|\rho(z_0)|\right)^{\frac12}.$ \item \mlabel{propII.2.1.4} There exists $u_0$ holomorphic in $\Delta_{z_0}(r)^2$ such that $|u_0|\eqs 1$ uniformly with respect to $z_0$ and $f(\zeta)=u_0(\zeta) \prod_{i=1}^{p_0}(\zeta^*_2-\alpha_{i}^*(\zeta^*_1))$ for all $\zeta\in{\cal P}_{2\kappa|\rho(z_0)|}(z_0)$. \end{enumerate} \end{proposition} The proofs of this proposition will relies on the following two lemmas. \begin{lemma}\mlabel{propweierstrass} Let $(A,d)$ be a metric space, $\alpha_0\in A$ and $(f_\alpha)_{\alpha\in A}$ a family of holomorphic function on $\Delta^2$ such that \begin{itemize} \item[-] $(f_\alpha)_{\alpha\in A}$ converges uniformly to $f_{\alpha_0}$ when $\alpha$ tends to $\alpha_0$, \item[-] $f_{\alpha_0}(0,\cdot)\neq 0$ and $f_{\alpha_0}(0)=0$. \end{itemize} Then there exist positive real numbers $r_1,r_2,\eta>0$, a positive integer $p$ such that, for all $\alpha\in A$ with $d(\alpha,\alpha_0)<\eta$, there exist $p$ functions $a_1^{(\alpha)},\ldots,a_p^{(\alpha)}$ holomorphic on $\Delta_0(r_1)$ and a function $u_\alpha$ holomorphic in $\Delta_0(r_1)\times \Delta_0(r_2)$ which satisfy \begin{enumerate}[(i)] \item $f_\alpha(z)=u_\alpha(z) \left(z_2^p+a_1^{(\alpha)}(z_1) z_2^{p-1}+\ldots+ a_p^{(\alpha)}(z_1)\right)$, \item $|u_\alpha(z)|\eqs 1$ for all $z\in \Delta_0(r_1)\times \Delta_0(r_2)$ uniformly \wrt $z$ and $\alpha$. \end{enumerate} \end{lemma} \pr We first want to apply Rouch\'e's theorem to $f_{\alpha}(z_1,\cdot)-f_{\alpha_0}(0,\cdot)$, $z_1$ fixed in $\Delta_0(r_1)$ where $r_1>0$ is to be chosen in a moment.\\ Since $f_{\alpha_0}(0,\cdot)$ is not identically zero, there exists $r_2>0$ such that $f_{\alpha_0}(0,z_2)\neq 0$ for all $z_2\in\Delta_0(r_2)\setminus\{0\}$. We denote by $a$ the positive real number $a=\inf_{|z_2|=r_2}|f_{\alpha_0}(0,z_2)|$ and by $p$ the order the root $0$ of $f_{\alpha_0}(0,\cdot)$.\\ Since $(f_\alpha )_\alpha $ converges uniformly to $f_{\alpha _0}$ on $\Delta_0(1)$, there exists $\eta>0$ such that for all $\alpha\in A$, $d(\alpha_0,\alpha)<\eta$, all $z\in\Delta_0(1)^2$ the following inequality holds: $\sup_{z\in \Delta_0(1)^2}|f_\alpha(z)-f_{\alpha_0}(z)|<\frac a4$.\\ By Cauchy's inequalities, there exists $r_1>0$ such that for all $z\in \Delta_0(r_1)\times \Delta_0(r_2)$ we have $|f_{\alpha_0}(z_1,z_2)-f_{\alpha_0}(0,z_2)|<\frac a4$.\\ Thus $|f_\alpha(z_1,z_2)-f_{\alpha_0}(0,z_2)|\leq |f_{\alpha_0}(0,z_2)|$ and by Rouch\'e's theorem, $f_\alpha(z_1,\cdot)$ has exactly $p$ zeros in $\Delta_0(r_2)$ for all $z_1$ fixed in $\Delta_0(r_1)$. 
Therefore by the Weierstrass preparation theorem there exist $p$ functions $a_1^{(\alpha)}, \ldots, a_p^{(\alpha)}$ holomorphic on $\Delta_0(r_1)$ and a function $u_\alpha$ holomorphic on $\Delta_0(r_1)\times \Delta_0(r_2)$ zero free such that $$f_\alpha(z)=u_\alpha(z) \left(z_2^p+a^{(\alpha)}_1(z_1) z_2^{p-1}+\ldots+a^{(\alpha)}_p(z_1)\right).$$ We set $P_\alpha(z_1,z_2)=z_2^p+a^{(\alpha)}_1(z_1) z_2^{p-1}+\ldots+a^{(\alpha)}_p(z_1)$. To end the proof of the lemma we have to prove that $1\leqs| u_\alpha|\leqs 1$. We prove the lower uniform boundedness.\\ For all $z_1\in \Delta_0(r_1)$, $\frac1{u_\alpha(z_1,\cdot)}$ is holomorphic and $$\frac{1}{|u_\alpha(z_1,z_2)|}\leq \max_{|\zeta_2|=r_2}\left|\frac{P_\alpha(z_1,\zeta_2)}{f_\alpha(z_1,\zeta_2)} \right|.$$ On the one hand, for all $\alpha\in A$ such that $d(\alpha,\alpha_0)<\eta$, all $(z_1,z_2)\in \Delta_0(r_1)\times b\Delta_0(r_2)$ we have \begin{eqnarray*} |f_\alpha(z)|&\geq& |f_{\alpha_0}(0,z_2)|-|f_{\alpha_0}(z)-f_{\alpha_0}(0,z_2)|-|f_\alpha(z)-f_{ \alpha_0}(z)|\\ &\geq& a-\frac a4-\frac a4=\frac a2. \end{eqnarray*} On the other one hand, since $(f_\alpha)_{\alpha\in A}$ converges uniformly to $f_{\alpha_0}$ when $\alpha$ tends to $\alpha_0$ and since $f_\alpha(z)$ is uniformly bounded away from $0$ for $(z_1,z_2)\in\Delta_0(r_1)\times b\Delta_0(r_2)$, $(a_j^{(\alpha)})_{\alpha\in A}$ converge uniformly to $a_j^{(\alpha_0)}$ for all $j$ when $\alpha$ tends to $\alpha_0$. This implies that $(P_\alpha)_{\alpha\in A}$ converges uniformly to $P_{\alpha_0}$ and therefore $\sup_{\Delta_0(r_1)\times \Delta_0(r_2)} |P_\alpha|$ is uniformly bounded for $\alpha$ near $\alpha_0$.\\ This yields $|u_\alpha(z)|\geqs 1$ uniformly \wrt $z\in\Delta_0(r_1)\times \Delta_0(r_2)$ and $\alpha\in A$ such that $d(\alpha,\alpha_0)<\eta$. The upper boundedness can be proved in the same way.\qed \begin{lemma}\mlabel{lemII.2.2} Let $\zeta_0\in bD$ be a singularity of $X$, let $z_0\in D$ be a point near enough $\zeta_0$. There exist $r>0$ not depending from $z_0$ and a parametric representation of $X$ in the \ko coordinates system centered at $z_0$ of the form $({t^*}^p+\zeta^*_{0,1}, \phi(t^*)+\zeta^*_{0,2})$, such that $|\phi^*(t^*)|\leqs \left|t^*\right|^p$, $t^*\in \Delta_0(r)$, uniformly with respect to $z_0$. \end{lemma} \pr Without restriction we assume that $\zeta _0$ is the origin of $\cc^2$. Maybe after a unitary linear change of coordinates, there exists $r_0>0$, $p,q\in\nn$, $q>p>1$, and $u$ holomorphic and bounded on $\Delta_0(r_0)$, $u(0)\neq 0$ such that $\phi:t\mapsto (t^p,t^qu(t))$ is a parametric representation of $X$ over $\Delta_0(r_0)$.\\ We consider $z_0$ such that $|\zeta_0-z_0|<r_0$ and we denote by $(\alpha,\beta)$ the coordinates of $\eta_{z_0}$ and by $(-\overline{\beta},\overline{\alpha})$ the coordinates of $v_{z_0}$. In the \ko coordinates centered at $z_0$, $X$ is parametrized by $t\mapsto (\overline{\alpha}t^p+\overline{\beta}t^qu(t)+\zeta^*_{0,1}, -\beta t^p+\alpha t^qu(t)+\zeta^*_{0,2})$.\\ Let $(\alpha_0,\beta_0)$ denotes the coordinates of $\eta_{\zeta_0}$. 
The transversality hypothesis implies that $\alpha_0\neq 0$ so there exists $r_1>0$ and a $p$-th determination of the root $\phi_1$ in $\Delta_{\overline{\alpha_0}}(r_1).$ If $r_0>0$ is sufficiently small, ${\alpha}$ belongs to $\Delta_{\alpha_0}(r_1)$ and $\overline{\alpha}t^p+\overline{\beta}t^q u(t)=(\phi_1(\overline{\alpha})t)^p \left(1+\frac{\overline{\beta}}{\overline{\alpha}}t^{q-p}u(t)\right).$\\ Since $q>p$, there exists $r_2\in]0,r_1[$ such that for all $t\in \Delta_0(r_2),$ all $\beta\in \Delta_{\beta_0}(r_2)$ and all $\alpha\in \Delta_{\alpha_0}(r_2)$, we have $\left|1+\frac{\overline{\beta}}{\overline{\alpha}}t^{q-p}u(t) \right|\geq \frac12$ and so there exists $\phi_2$ holomorphic for $t\in \Delta_0(r_2)$, $C^\infty$-smooth for $\alpha\in \Delta_{\alpha_0}(r_2)$ and $\beta\in\Delta_{\beta_0}(r_2)$ such that $\phi_2(t,\alpha,\beta)^p=1+\frac{\overline{\beta}}{\overline{\alpha}}t^{q-p} u(t)$.\\ We apply the implicit functions theorem to $\Psi: (t,t^*,\alpha,\beta)\mapsto t^*-\phi_1(\overline{\alpha})\phi_2(t,\alpha,\beta) t$. Since $\Psi(0,0,\alpha_0,\beta_0)=0$ and $\diffp{\Psi}{t}(0,0,\alpha_0,\beta_0)\neq 0$, there exist $r>0$ and ${\tilde \psi}:\Delta_0(r)\times \Delta_{\alpha_0}(r)\times \Delta_{\beta_0}(r)\to V(0)$, $V(0)$ neighborhood of $0\in \cc$ such that $\tilde \psi$ is holomorphic in $t$, and $C^\infty$-smooth in $\alpha$ and $\beta$ such that ${t^*}^p=\overline{\alpha} t^p+\overline{\beta} t^q u(t)$ if and only if $t=\tilde \psi(t^*,\alpha,\beta)$.\\ We now end the proof of the lemma by setting $$\phi^*(t^*)=-\beta\tilde \psi(t^*,\alpha,\beta)^p+\alpha\tilde\psi(t^*,\alpha, \beta)^qu\left(\tilde\psi(t^*,\alpha,\beta)\right).$$\qed\\[10pt] {\it Proof of proposition \ref{new_prop}:} We first choose $\kappa >0$ such that $2\kappa|\rho(z_0)|\leq r$, $r$ given by lemma \ref{lemII.2.2} and we write $\zeta\in X\cap \p{\kappa|\rho(z_0)|}{z_0}$ as $\zeta=\left({t^*}^{p_0}+\zeta_{0,1}^*,\phi^*(t^*)+\zeta^*_{0,2}\right)$ for some $t^*$ belonging to $\Delta_0(r)$. Now, if we assume that $\left|\zeta^*_{0,1} \right|< 2\kappa|\rho(z_0)|$ we get $|\zeta_1^*-\zeta^*_{0,1}|\leq 3\kappa|\rho(z_0)|$ and therefore $|t^*|\leq (3\kappa|\rho(z_0)|)^{\frac1{p_0}}$. This yields \begin{eqnarray*} |\zeta^*_{0,2}|&\leq& |\zeta^*_{0,2}-\zeta^*_2|+|\zeta_2^*|\\ &\leq& |\phi^*(t^*)|+|\zeta_2^*|\\ &\leqs& \kappa|\rho(z_0)|+(\kappa|\rho(z_0)|)^{\frac12}\\ &\leqs& (\kappa|\rho(z_0)|)^{\frac12} \end{eqnarray*} uniformly with respect to $z_0$. Thus there exists $K>0$ not depending from $z_0$ nor from $\kappa $ such that $\zeta_0$ belongs to $\p{\kappa K|\rho(z_0)|}{z_0}$. Moreover, if $\kappa $ is chosen sufficiently small, for all $\xi\in \p{\kappa K|\rho(z_0)|}{z_0}$ Proposition \ref{propII.0.1} gives $|\rho(\xi)|\geq \frac12 |\rho(z_0)|$. This gives a contradiction because $|\rho(\zeta_0)|=0<|\rho(z_0)|$ whereas $\zeta_0$ belongs to $\p{\kappa K|\rho(z_0)|}{z_0}$. Therefore we can choose $\kappa >0$ not depending from $z_0$ such that $\left|\zeta^*_{0,1} \right|\geq 2\kappa|\rho(z_0)|$.\qed\\[10pt] {\it Proof of proposition \ref{propII.2.1}:} Let $p_0$ be the multiplicity of the singularity $\zeta_0$ of $X$ and let $\psi$ be a $p_0$-th determination of the root holomorphic in $\Delta_{\zeta^*_{0,1}}(2\kappa|\rho(z_0)|)$. We set $\alpha_j^*(z_1^*)=\phi^*\left(\psi(z^*_1-\zeta^*_{0,1}) e^{\frac{2i\pi}{p_0}j}\right)+\zeta^*_{0,2}$, $j=1,\ldots, p_0$. For all $j$, $\alpha_j^*$ is holomorphic on $\Delta_0(2\kappa|\rho(z_0)|)$ and is uniformly bounded on $\Delta_0(2\kappa|\rho(z_0)|)$. 
We have $$\diffp{\alpha_j^*}{z_1^*}(z_1^*)=\psi'(z_1^*-\zeta^*_{0,1}) \diffp{\phi^*}{t^*}\left(\psi(z^*_1-\zeta^*_{0,1})e^{\frac{2i\pi}{p_0}j}\right)e^{ \frac{2i\pi}{p_0}j}.$$ Since $|\phi^*(t^*)|\leqs |t^*|^p$ this yields $\left|\diffp{\alpha_j^*}{z_1^*}(z_1^*) \right|\leqs 1$ which proves (\ref{propII.2.1.2}). \par\medskip We now prove that (\ref{propar3}) holds. We denote by $K$ a uniform bound of the derivative of $\alpha_j^*$. If $z^*_1\in \Delta_0(2\kappa|\rho(z_0)|)$ is such that $|\alpha_j^*(z_1^*)|\leq \left(2\kappa|\rho(z_0)|\right)^{\frac{1}{2}}$, we have for all $\zeta_1^*\in \Delta(2\kappa |\rho (z_0)|)$: \begin{eqnarray*} |\alpha_j^*(\zeta_1^*)|&\leq& |\alpha_j^*(z_1^*)|+\left|\alpha_j^*(z_1^*)-\alpha_j^*(\zeta_1^*)\right|\\ &\leq&(2\kappa|\rho(z_0)|)^{\frac12}+K|\zeta_1^*-z_1^*|\\ &\leq& (2\kappa|\rho(z_0)|)^{\frac12}+4K\kappa|\rho(z_0)|. \end{eqnarray*} Therefore choosing again $\kappa $ small enough, uniformly with respect to $z_0$, we get $|\alpha_j^*(\zeta_1^*)|\leq \left(3\kappa|\rho(z_0)|\right)^{\frac{1}{2}}$. \par\medskip Only (\ref{propII.2.1.4}) is left to be shown. For $z$ near $\zeta_0$ we set $f_z(\lambda,\mu)=f(\zeta_0+\lambda \eta_{z}+\mu v_{z})$ and we apply Lemma \ref{propweierstrass} to the family $(f_z)_z$ which gives $u_0$ and $P_0$ such that $f_{z_0}=u_0P_0$ where $|u_0|\eqs 1$ uniformly with respect to $z_0$ and where $P_0(\lambda\eta_{z_0}+\mu v_{z_0})$ is a polynomial of the variable $\mu$ with coefficients holomorphic \wrt $\lambda$. We have $f_{z_0} (z_0-\zeta_0+\zeta_1^*\eta_{z_0}+\alpha_i^*(\zeta_1^*)v_{z_0})=0$ for all $i$ so for all $\zeta$ such that $|\zeta^*_1|< 2\kappa|\rho(z_0)|$ $$P_0(\zeta_1^*-\zeta_{0,1}^*,\zeta_2^*-\zeta_{0,2}^*)=\prod_{i=1}^{p_0} (\zeta^*_2-\alpha_i^*(\zeta_1^*)).$$ \qed \subsection{Definition of the operator}\mlabel{secII.3} We now come to the definition of the current $T$ such that $fT=1$ and of the extension operator. Our construction is a refinement of \cite{Maz1}. We choose a positive real number $\kappa$ so that Propositions \ref{propmax} and \ref{propII.2.1} hold true for such a $\kappa $ and such that Proposition \ref{propII.0.1} implies that $2\rho(z)\leq \rho(\zeta)\leq\frac12\rho(z)$ for all $z\in D$ near $bD$.\\ For $\varepsilon _0>0$ and $z_0\in \overline{D_{-\varepsilon_0}}$, that is when $z_0$ is far from the boundary, we do not modify the construction except that we require that ${\cal U}_0$ is included in $D_{-\frac{\varepsilon_0}2}$. We get a covering ${\cal U}_{-m},\ldots, {\cal U}_{-1}$ of $\overline{D_{-\varepsilon_0}}$ and the corresponding currents $T_{-m},\ldots, T_{-1}$ such that $fT_j=1$ on ${\cal U}_j$ for all $j=-m,\ldots, -1$.\\ Near the boundary, we have to be more precise and we use a $\kappa $-covering $\left({\cal P}_{\kappa|\rho(z_j)|}(z_j)\right)_{j\in\nn}$ of $D\cap D_{-\varepsilon _0}$ constructed in Section \ref{maxcover}. In the \ko coordinates centered at $z_j$, the fiber of $X$ above $(z^*_1,0)\in \p{\kappa|\rho(z_j)|}{z_j}$ is given by $\{(z_1^*,\alpha^*_i(z_1^*)),\ i=1,\ldots, p_j\}$ where $p_j$ and $\alpha^*_1,\ldots, \alpha_{p_j}^*$ are given by Proposition \ref{propII.2.1}. In \cite{Maz1}, Mazzilli actually considered the Weierstrass polynomial in a neighborhood of $z_j$ but this neighborhood may be smaller than $\p{\kappa|\rho(z_j)|}{z_j}$ or the Weierstrass polynomial may include all the $\alpha_i^*$. 
However, in order to make a good link between the geometry of the boundary of $D$ and $X$, we need to have a polynomial in all $\pk{j}$ and we have to take into account only the sheets of $X$ which intersect $\pk{j}$ or equivalently the $\alpha_i^*$ such that for some $z_1^*\in \Delta_0(\kappa|\rho (z_j)|)$, the point $z_j+z^*_1\eta_{z_j+}\alpha^*_i(z^*_1)v_{z_j}$ belongs to $\pk j$. So we put $I_j\hskip -1.5pt =\hskip -1.5pt\left\{i, \exists z_1^*\in \Delta_0(\kappa |\rho(z_j)|) \text{ such that } |\alpha_i^*(z^*_1)|\leq (2\kappa|\rho(z_j)|)^{\frac12}\right\}$, $q_j=\#I_j$, the cardinal of $I_j$, and for any $C^\infty$-smooth $(2,2)$-form $\phi$ compactly supported in $\pk j$ we set $$\tilde{T}_j[\phi]=\int_{{\cal P}_{\kappa|\rho(z_j)|}(z_j)} \frac{\prod_{i\in I_j}\overline{\zeta_2^*-\alpha_i^*(\zeta_1^*)}}{f(\zeta)} \diffp{^{q_j}\phi}{\overline{\zeta^*_2}^{q_j}}(\zeta).$$ As in \cite{Maz1}, integrating by parts $q_j$-times gives $f\tilde T_j=c_j$ where $|c_j|=q_j!$. Now let $\left(\chi_j\right)_{j\geq -m}$ be a partition of unity subordinated to the covering ${\cal U}_{-m},\ldots, {\cal U}_{-1}$, $\left(\pk j\right)_{j\in\nn}$ of $D$. We assume that $\chi_j$ has been chosen so that $\left|\diffp{^{\alpha+\overline\alpha+\beta+\overline\beta}\chi_j}{{\zeta^*_1} ^\alpha \partial \overline{\zeta^*_1}^{\overline\alpha} \partial{\zeta^*_2}^\beta \partial\overline{\zeta^*_2}^{\overline\beta}}(\zeta)\right|\leqs \frac{1}{|\rho(z_j)|^{\alpha+\overline\alpha+\frac{\beta+\overline\beta}{2}}}$ for all $j\in\nn$, $\zeta\in\pk j$, $\alpha,\beta, \overline\alpha,\overline\beta\in\nn,$ uniformly with respect to $z_j$ and $\zeta$. We set as in \cite{Maz1}: $T_j=\frac1{c_j} \tilde T_j$ for $j\in\nn$ and $T=\sum_{j=-m}^\infty \chi_j T_j$. Therefore we have $fT=1$ on $D$. Moreover, since $T$ is supported in $\overline D$ which is compact, $T$ is of finite order (see \cite{Sch}) and we can apply $T$ to smooth forms vanishing to a sufficient order $l$ on $bD$. Therefore if the function $\tilde g$ is such that $|\rho|^N \tilde g$ belongs to $C^l(\overline D)$, we can apply $T$ to $\tilde gP^{N,2}$. This gives us the integer $l$ of Theorem \ref{th0}. \par\medskip Let $b(\zeta,z)=\sum_{j=1,2}b_j(\zeta,z)d\zeta_j$ be the holomorphic $(1,0)$-form defined by $b_j(\zeta,z)=\int_0^1\diffp{f}{\zeta_j}(\zeta+t(z-\zeta))dt$ so that for all $z$ and $\zeta$ we have $f(z)-f(\zeta)=\sum_{i=1,2}b_i(\zeta,z)(z_i-\zeta_j).$ Let $g$ be a holomorphic function admitting a smooth extension $\tilde g$ which satisfies the assumptions of Theorem \ref{th0}. Following the construction of \cite{Maz1}, we define the extension $E_N(g)$ of $g$ by setting $${E_N}[g](z)=C_1 \overline\partial T[\tilde gb(\cdot,z)\wedge P^{N,1}(\cdot,z)],\qquad\forall z\in D,$$ where $C_1$ is a suitable constant (see \cite{Maz1}). We have to check that $E_N(g)$ is indeed an extension of $g$. 
We have the two following facts :\\ {\it Fact 1~:}\label{fact1} Mazzilli proved in \cite{Maz1} that if $\tilde g$ is holomorphic on $D$ and of class $C^l$ on $\overline D$ then ${E_N}\tilde g=\tilde g$ on $X\cap D$.\\[2pt] {\it Fact 2~:} \label{fact2} We have $E_N\tilde g_1=E_N\tilde g_2$ when $\tilde g_1$ and $\tilde g_2$ are any smooth functions such that $\diffp{^{\alpha+\beta}\tilde g_1}{\overline{\zeta_1^*}^\alpha\partial \overline{\zeta_2^*}^\beta}=\diffp{^{\alpha+\beta}\tilde g_2}{\overline{\zeta_1^*}^\alpha\partial \overline{\zeta_2^*}^\beta}$ on $X\cap D$ for all integers $\alpha ,\beta $ with $\alpha +\beta \leq k$, where $k$ is the supremum of the orders of the singularities of $X$. Indeed, since $f$ is assumed to be minimal, using Theorem I, paragraph 11.2 and the theorem of paragraph 14.2 of \cite{Tsi}, for any function $\tilde g$ we can write $E_N \tilde g$ as a sum of integrals over $X\cap D$ where only the derivatives $\diffp{^{\alpha+\beta}\tilde gP^{N,1}}{\overline{\zeta_1^*}^\alpha\partial \overline{\zeta_2^*}^\beta}$ with $\alpha+\beta\leq k$. Applying this formula to $\tilde g=\tilde g_1$ and $\tilde g=\tilde g_1$ we get $E_N\tilde g_1=E_N\tilde g_2$. We notice that this gives us the integer $k$ of Theorem \ref{th0}. \par\medskip Now let $g$ be a holomorphic function on $X\cap D$ which admits a smooth extension $\tilde g$ which satisfies the assumptions of Theorem \ref{th0}. We prove that $E_N(g)(z_0)=g(z_0)$ for all $z_0\in X\cap D$.\\ For $\varepsilon>0$ small enough we construct $P_\varepsilon^{N,n}$, the Berndtsson-Andersson kernel of the domain $D_{-\varepsilon}$ which has the defining function $\rho_\varepsilon=\rho+\varepsilon$. We set $P^{N,n}_\varepsilon(\zeta ,z)=0$ for $\zeta \notin D_{-\varepsilon}$. The kernel $P_{\varepsilon}^{N,n}(\cdot,z_0)$ converges to $P^{N,n}(\cdot,z_0)$ when $\varepsilon$ tends to $0$. Now let $g_\varepsilon$ be an holomorphic extension of $g$ on $D_{-\frac\varepsilon2}$ given by Cartan's Theorem B. Fact~1 yields \begin{eqnarray*} g(z_0)&=&g_{\varepsilon}(z_0)\\ &=&\int_D g_\varepsilon (\zeta )\wedge P_{\varepsilon}^{N,2}(\zeta ,z_0)\\ &=&T\left[fg_\varepsilon\wedge P_{\varepsilon}^{n,2}(\cdot,z_0)\right]\\ &=&C_1\overline\partial T\left[g_\varepsilon b(\cdot,z_0)\wedge P_{\varepsilon}^{N,1}(\cdot,z_0)\right]. \end{eqnarray*} Then, since $P_\varepsilon^{N,1}$ is supported in $D_{-\varepsilon}$, since $\tilde g=g_\varepsilon$ on $X\cap D_{-\frac\varepsilon2}$ and since $\diffp{^{\alpha+\beta}\tilde g}{\overline{\zeta_1^*}^\alpha\partial \overline{\zeta_2^*}^\beta}=0$ on $D_{-\frac\varepsilon2}\cap X$, fact~2 gives $$g(z_0)=C_1\overline\partial T\left[\tilde g b(\cdot,z_0)\wedge P_{\varepsilon}^{N,1}(\cdot,z_0)\right]$$ and when $\varepsilon$ goes to $0$, this yields $g(z_0)=E_N \tilde g(z_0)$ and thus $E_N g$ is an extension of $g$. \section{Estimate of the extension operator}\mlabel{secIII} We prove in this section that $E_N(g)$ satisfies the conclusion of Theorem \ref{th0}. For this purpose we write $b$ in the \ko coordinates at $z_j,$ as $b(\zeta,z)=\sum_{l=1,2} b^*_l(\zeta,z)d\zeta^*_l$ where $b^*_l(\zeta,z)=\int_0^1 \diffp f {\zeta^*_l}(\zeta+t(z-\zeta))dt$ and we prove the following estimates. 
We recall that for any non negative integer $j$, $p_j$ is the integer given by proposition \ref{propII.2.1} and $$I_j\hskip -1.5pt =\hskip -1.5pt\left\{i, \exists z_1^*\in \Delta_0(\kappa |\rho(z_j)|) \text{ such that } |\alpha_i^*(z^*_1)|\leq (2\kappa|\rho(z_j)|)^{\frac12}\right\}.$$ \begin{proposition}\mlabel{kernelestimate} For all positive integer $j$, all $z$ in $D$ and all $\zeta$ in $\pk j$, we have uniformly in $z,\zeta$ and $j$ \begin{eqnarray*}{\left|\frac{\prod_{i\in I_j}\overline{\zeta^*_2-\alpha_i^*(\zeta_1)}}{f(\zeta)}b_1(\zeta,z)\right|} &\leqs& \sum_{0\leq\alpha+\beta\leq p_j} \delta(\zeta,z)^{\alpha+\frac\beta2}|\rho(\zeta)|^{-1-\alpha+\frac{\# I_j-\beta}2},\\ {\left|\frac{\prod_{i\in I_j}\overline{\zeta^*_2-\alpha_i^*(\zeta_1)}}{f(\zeta)}b_2(\zeta,z)\right|} &\leqs& \sum_{0\leq\alpha+\beta\leq p_j} \delta(\zeta,z)^{\alpha+\frac\beta2}|\rho(\zeta)|^{-\frac12-\alpha+\frac{\# I_j-\beta}2},\\ {\left|\frac{\prod_{i\in I_j}\overline{\zeta^*_2-\alpha_i^*(\zeta_1)}}{f(\zeta)}d_z b_1(\zeta,z)\right|} &\leqs& \sum_{0\leq\alpha+\beta\leq p_j} \delta(\zeta,z)^{\alpha+\frac\beta2}|\rho(\zeta)|^{-2-\alpha+\frac{\# I_j-\beta}2},\\ {\left|\frac{\prod_{i\in I_j}\overline{\zeta^*_2-\alpha_i^*(\zeta_1)}}{f(\zeta)}d_z b_2(\zeta,z)\right|} &\leqs& \sum_{0\leq\alpha+\beta\leq p_j} \delta(\zeta,z)^{\alpha+\frac\beta2}|\rho(\zeta)|^{-\frac32-\alpha+\frac{\# I_j-\beta}2}.\end{eqnarray*} \end{proposition} \pr We prove the first inequality, the others are analogous. For $A\subset\{1,\ldots,p_j\}$ we denote by $A^c$ the complementary of $A$ in $\{1,\ldots, p_j\}$. Proposition \ref{propII.2.1} yields: \begin{eqnarray*} \left|\frac{\prod_{i\in I_j} \overline{\zeta^*_2-\alpha_i^*(\zeta^*_1)}}{f(\zeta)} \right| &\leqs& \frac1{\prod_{i\in I_j^c} |\zeta^*_2-\alpha^*_i(\zeta_1^*)|} \end{eqnarray*} uniformly \wrt $\zeta$ and $j$.\\ We estimate $b_1^*$. We have $$\diffp{f}{\zeta^*_1}(\zeta+t(z-\zeta))=\sum_{0\leq \alpha+\beta\leq p_j}\diffp{^{\alpha+\beta+1}f}{{\zeta^*_1}^{\alpha+1}\partial {\zeta_2^*}^\beta}(\zeta)(z^*-\zeta^*)^{\alpha+\beta}+o(|\zeta^*-z^*|^{ p_j})$$ and $$\left|\diffp{^{\alpha+\beta+1}f}{{\zeta^*_1}^{\alpha+1}\partial {\zeta^*_2}^\beta}(\zeta)\right|=\left|\sum_{\over{n_1+\ldots n_{p_j}=\alpha+1}{F_1\dot{\cup} F_2\dot{\cup}F_3=\{1,\ldots, p_j\}}}\prod_{i\in F_1}\diffp{^{n_i} \alpha^*_i}{{\zeta^*_1}^{n_i}}(\zeta^*_1) \prod_{i\in F_3}(\zeta^*_2-\alpha_i^*(\zeta^*_1))\right|$$ where $\dot{\cup}$ means that the union is disjoint, $F_1=\{i,\ n_i\neq 0\}$ and $\#F_2=\beta$.\\ Since $\diffp{\alpha^*_i}{\zeta^*_1}$ is uniformly bounded and holomorphic on $\Delta_0(2\kappa|\rho(z_j)|)$, we have $\left|\diffp{^{n_i}\alpha_i^*}{{\zeta^*_1}^{n_i}}\right|\leqs |\rho(z_j)|^{-n_i+1}$ on $\Delta_0(\kappa|\rho(z_j)|)$. 
Moreover Proposition \ref{propII.0.1} gives $|\rho(z_j)|\eqs |\rho(\zeta)|$ for all $\zeta\in\pk j$ so \begin{eqnarray*} {\left|\diffp{^{\alpha+\beta+1}f}{{\zeta^*_1}^{\alpha+1}\partial {\zeta^*_2}^\beta}(\zeta)\right|} &\leqs&\sum_{\over{n_1+\ldots n_{p_j}=\alpha+1}{\over{F_1\dot{\cup} F_2\dot{\cup}F_3=\{1,\ldots, p_j\}}{\# F_2=\beta}}} |\rho(\zeta)|^{-\alpha-1+\# F_1} \prod_{i\in F_3} |\zeta^*_2-\alpha^*_i(\zeta^*_1)| \end{eqnarray*} and so \begin{eqnarray*} {|b^*_1(\zeta,z)|} &\leqs&\sum_{0\leq\alpha+\beta\leq p_j}\sum_{{\over{F_1\dot{\cup} F_2\dot{\cup}F_3=\{1,\ldots, {p_j}\}}{\#F_2=\beta}}} |\rho(\zeta)|^{-1-\alpha+\# F_1}\delta(\zeta,z)^{\alpha+\frac\beta2} \prod_{i\in F_3} |\zeta^*_2-\alpha^*_i(\zeta^*_1)|.\\ \end{eqnarray*} Therefore $\frac{\prod_{i\in I_j} \overline{\zeta^*_2-\alpha_i^*(\zeta^*_1)}}{f(\zeta)} b^*_1(\zeta,z)$ is bounded by a sum for $0\leq\alpha+\beta\leq p_j$, $F_1\dot{\cup} F_2\dot{\cup}F_3=\{1,\ldots, {p_j}\}$, ${\#F_2=\beta}$ of \begin{eqnarray*} {S^{\alpha,\beta}_{F_1,F_2,F_3}} &:=&\frac{\prod_{i\in F_3}|\zeta^*_2-\alpha^*_i(\zeta_1^*)|} {\prod_{i\in I_j^c}|\zeta^*_2-\alpha^*_i(\zeta_1^*)|}|\rho(\zeta)|^{-1-\alpha+\#F_1} \delta(\zeta,z)^{\alpha+\frac\beta2}. \end{eqnarray*} On the one hand for $i\in I_j^c$ and $\zeta\in \pk j$ we have $|\zeta^*_2-\alpha_i^*(\zeta^*_1)|\geqs |\rho(z_j)|^{\frac12}\eqs |\rho(\zeta)|^{\frac12}$. On the other hand for $i\in I_j$ and $\zeta\in \pk j$ we have $|\zeta^*_2-\alpha_i^*(\zeta^*_1)|\leqs |\rho(\zeta)|^{\frac12}$. Therefore, writing $\frac{\prod_{i\in F_3}(\zeta^*_2-\alpha^*_i(\zeta_1^*))}{\prod_{i\in I_j^c}(\zeta^*_2-\alpha^*_i(\zeta_1^*))}$ as $\frac{\prod_{i\in F_3\cap I_j}(\zeta^*_2-\alpha^*_i(\zeta_1^*))}{\prod_{i\in I_j^c\cap F_3^c}(\zeta^*_2-\alpha^*_i(\zeta_1^*))}\cdot \frac{\prod_{i\in F_3\cap I_j^c}(\zeta^*_2-\alpha^*_i(\zeta_1^*))}{\prod_{i\in I_j^c\cap F_3}(\zeta^*_2-\alpha^*_i(\zeta_1^*))}$ we get $$S^{\alpha,\beta}_{F_1,F_2,F_3}\leqs\delta(\zeta,z)^{\alpha+\frac\beta2} |\rho(\zeta)|^{-1-\alpha+\#F_1 +\frac{\#F_3\cap I_j - \# F_3^c\cap I_j^c}2}.$$ The equality ${\#F_3\cap I_j - \# F_3^c\cap I_j^c}=\#I_j-\# F_3^c$ implies that $\#F_1 +\frac{\#F_3\cap I_j - \# F_3^c\cap I_j^c}2\geq \frac{\# I_j-\beta} 2$.\\ This gives $S^{\alpha,\beta}_{F_1,F_2,F_3}\leqs \delta(\zeta,z)^{\alpha+\frac\beta2} |\rho(\zeta)|^{-1-\alpha+\frac{\#I_j-\beta}2}$ which finally yields \begin{eqnarray*}{\left|\frac{\prod_{i\in I_j}\overline{\zeta^*_2-\alpha_i^*(\zeta_1)}}{f(\zeta)}b_1(\zeta,z)\right|} &\leqs& \sum_{0\leq\alpha+\beta\leq p_j} \delta(\zeta,z)^{\alpha+\frac\beta2}|\rho(\zeta)|^{-1-\alpha+\frac{\# I_j-\beta}2}.\qed \end{eqnarray*} As usually in the estimates of the Berndtsson-Andersson kernel, the main difficulty appears when we integrate for $\zeta$ near $z$ and $z$ near $bD$. Therefore we choose $\varepsilon_0>0$ arbitrarily small and we divide the domain of integration in two parts : ${\cal P}_{\frac{\varepsilon_0}{2c_1}}(z)$ and $D\setminus{\cal P}_{\frac{\varepsilon_0}{2 c_1}}(z)$ where $c_1$ is given by Proposition \ref{propII.0.1}. 
In order to estimate the integral over ${\cal P}_{\frac{\varepsilon_0}{2c_1}}(z)$, we prove the following lemma: \begin{lemma}\mlabel{lemcov} For all $z\in D\setminus D_{-\frac{\varepsilon_0}2}$ such that $|\rho(z)|<\frac{\varepsilon_0}{2}$, let $j_0$ be an integer such that $(1-c\kappa)^{-j_0}\varepsilon_0< |\rho(z)|\leq (1-c\kappa)^{-j_0-1}\varepsilon _0$ and let $z_1^{i,j},\ldots, z_{m_{i,j}}^{i,j}$, $i\in \nn$, $j\in\zz$, be the points of the covering such that \begin{itemize} \item[-] $\rho(z^{i,j}_m)=-(1-c\kappa)^{j-j_0}\varepsilon _0$, \item[-] $\delta(z_m^{i,j},z)\in [i\kappa(1-c\kappa)^{j-j_0}\varepsilon _0,(i+1)\kappa(1-c\kappa)^{j-j_0}\varepsilon _0[$, \item[-] $\delta(z_m^{i,j},z)\leq \varepsilon_0$. \end{itemize} For $j\geq j_0$ let $i_0(j)$ be the non negative integer such that $i_0(j)\kappa(1-c\kappa)^{j-j_0}<1 \leq (1+i_0(j))\kappa(1-c\kappa)^{j-j_0}$.\\ Then \begin{enumerate}[(i)] \item \mlabel{premierpoint} ${\cal P}_{\frac{\varepsilon_0}{2c_1}}(z)\subset \cup_{j=j_0}^{+\infty}\cup_{i=0}^{i_0(j)}\cup_{m=1}^{m_{i,j}}{\cal P}_{\kappa|\rho(z^{i,j}_m)|}(z_m^{i,j})$, \item $m_{i,j}\leqs i^2$ uniformly with respect to $z_0,z,i$ and $j$. \end{enumerate} \end{lemma} \pr We first prove (\ref{premierpoint}). Let $\zeta$ be a point in ${\cal P}_{\frac{\varepsilon_0}{2c_1}}(z)$. Proposition \ref{propII.0.1} implies that $\zeta$ belongs to $D\setminus D_{-\varepsilon_0}$ so there exists a point $\zeta_0$ of the covering such that $\zeta$ belongs to ${\cal P}_{\kappa|\rho(\zeta_0)|}(\zeta _0)$.\\ The point $\zeta _0$ belongs to $D\setminus D_{-\varepsilon_0}$ thus there exists $j\geq j_0$ such that $|\rho(\zeta_0)|=(1-c\kappa)^{j-j_0}\varepsilon _0$. Moreover if $\kappa $ is small enough \begin{eqnarray*} \delta(\zeta _0,z)&\leq& c_1(\delta(\zeta,\zeta_0)+\delta(\zeta,z))\\ &\leq& c_1\left(\kappa(1-c\kappa)^{j-j_0}\varepsilon _0+\frac{\varepsilon_0}{2c_1}\right)\\ &\leq&\varepsilon_0. \end{eqnarray*} So there exists $i\in\nn$ such that $\delta(\zeta_0,z)$ belongs to $[i\kappa(1-c\kappa)^{j-j_0}\varepsilon _0,(i+1)\kappa(1-c\kappa)^{j-j_0}\varepsilon _0[$ and $(i+1)\kappa(1-c\kappa)^{j-j_0}\varepsilon _0\leq \varepsilon_0$ which means that $i\leq i_0(j)$. Thus $\zeta_0$ is one the points $z_1^{i,j},\ldots, z_{m_{i,j}}^{i,j}$ and (\ref{premierpoint}) holds.\\ In order to prove that $m_{i,j}\leqs i^2$ we introduce the set $$E_{i,j}=\{\zeta\in D,\ \rho(\zeta)=-(1-c\kappa)^{j-j_0}\varepsilon _0\text{ and } \delta(\zeta,z)\leq c_1\kappa(i+2)(1-c\kappa)^j|\rho(z)|\}.$$ On the one hand we have \begin{eqnarray} \sigma(E_{i,j})&=&\sigma\left(bD_{-(1-c\kappa)^j|\rho(z_0)|}\cap \nonumber {\cal P}_{c_1\kappa(i+2)(1-c\kappa)^j|\rho(z)|}(z)\right)\\ &\leq& \left(c_1\kappa(i+2)(1-c\kappa)^j|\rho(z)|\right)^2\nonumber\\ &\leqs&\left(c_1\kappa(i+2)(1-c\kappa)^{j-j_0}\varepsilon _0\right)^2\mlabel{eq2} \end{eqnarray} On the other one hand for all $m$, all $\zeta\in {\cal P}_{\kappa|\rho(z^{i,j}_m)|}(z_m^{i,j})$ we have: \begin{eqnarray*} \delta(\zeta,z)&\leq& c_1(\delta(\zeta,z_m^{i,j})+\delta(z_m^{i,j},z))\\ &\leq& c_1(\kappa(1-c\kappa)^{j-j_0}\varepsilon _0+\kappa(i+1)(1-c\kappa)^{j-j_0}\varepsilon _0)\\ &\leq&c_1\kappa(i+2)(1-c\kappa)^{j-j_0}\varepsilon _0. \end{eqnarray*} This implies that ${\cal P}_{\kappa|\rho(z^{i,j}_m)|}(z_m^{i,j})\cap bD_{-(1-c\kappa)^{j-j_0}\varepsilon _0}\subset E_{i,j}$ for all $m$ and so \begin{eqnarray} \sigma(E_{i,j})&\geq& \sigma\left(\cup_{m=1}^{m_{i,j}} {\cal P}_{\kappa|\rho(z^{i,j}_m)|}(z_m^{i,j})\cap bD_{-(1-c\kappa)^{j-j_0}\varepsilon _0} \right)\nonumber. 
\end{eqnarray} Now, the construction of a $\kappa$-covering and Proposition \ref{propII.0.1} implies that the intersection of ${\cal P}_{\frac{c\kappa}{c_1}|\rho(z^{i,j}_m)|}(z_m^{i,j})$ and ${\cal P}_{\frac{c\kappa}{c_1}|\rho(z^{i,j}_l)|}(z_l^{i,j})$ is empty for for $l\neq m$. Therefore we have \begin{eqnarray} \sigma(E_{i,j})&\geq& \sum_{m=1}^{m_{i,j}}\sigma\left({\cal P}_{\frac{c\kappa}{c_1}|\rho(z^{i,j}_m)|}(z_m^{i,j})\cap bD_{-(1-c\kappa)^{j-j_0}\varepsilon _0} \right),\nonumber\\ &\geq& m_{i,j}(\frac{c\kappa}{c_1}(1-c\kappa)^{j-j_0}\varepsilon _0)^2.\mlabel{eq3} \end{eqnarray} Inequalities (\ref{eq2}) and (\ref{eq3}) together imply that $m_{i,j}\leqs i^2$, uniformly with respect to $z$, $i$ and $j$.\qed In order to prove the $BMO$-estimates of Theorem \ref{th0} we apply the following classical lemma: \begin{lemma} Let $h$ be a function of class $C^1$ on $D$. If there exists $C>0$ such that ${\rm d}h(\zeta)\leq C|\rho(\zeta)|^{-1}$ then $h$ belongs to ${BMO}(D)$ and $\|h\|_{BMO(D)}\leq C$. \end{lemma} \noindent{\it Proof of Theorem \ref{th0} for $q=+\infty$~: } Let $g$ be a holomorphic function on $X\cap D$ which have a smooth extension $\tilde g$ which satisfies the assumptions (\ref{th0i}), (\ref{th0ii}) and (\ref{th0iii}) of Theorem \ref{th0}. We put $\gamma_\infty=\sup_{\over{\zeta\in D}{\alpha+\beta\leq k}}\left|\diffp{^{\alpha+\beta}\tilde g}{\overline{\eta_\zeta}^\alpha\partial \overline{v_\zeta}^\beta}(\zeta)\right||\rho(\zeta)|^{\alpha+\frac\beta 2}$ In order to prove Theorem \ref{th0} when $q=+\infty$, we have to prove that $E_N g$ is in $BMO(D)$ and $\|E_N g\|_{BMO(D)}\leqs \gamma_\infty$.\\ Since the Berndtsson-Andersson kernel is regular when $\zeta$ and $z$ are far from each other or when $z$ is far from $bD$, we only have to estimate the integral over ${\cal P}_{\frac{\varepsilon_0}{2c_1}}(z)$ for $z$ near $bD$ and $\varepsilon_0>0$ not depending from $z$. We keep the notation of lemma \ref{lemcov} and use the covering $\cup_{j=j_0}^{+\infty}\cup_{i=0}^{i_0(j)}\cup_{m=1}^{m_{i,j}}{\cal P}_{\kappa|\rho(z^{i,j}_m)|}(z_m^{i,j})$ of ${\cal P}_{\frac{\varepsilon_0}{2c_1}}(z)$ given by lemma \ref{lemcov}. We denote by $p_m^{i,j}$ the number of sheets given by proposition \ref{propII.2.1} for $z_m^{i,j}$, $I_{m}^{i,j}$ is the set $I_m^{i,j}\hskip -1.5pt =\hskip -1.5pt\left\{k, \exists z_1^*\in \Delta_0(\kappa|\rho(z_m^{i,j})|) \text{ such that } |\alpha_k^*(z^*_1)|\leq (2\kappa|\rho(z^{i,j}_m)|)^{\frac12}\right\}$ and $q_m^{i,j}$ denotes its cardinal.\\ From Proposition \ref{estiBA} and \ref{kernelestimate} we get for all $\zeta\in {\cal P}_{\kappa|\rho(z^{i,j}_m)|}(z_m^{i,j})$ \begin{eqnarray*} \lefteqn{\left|{d}_z \left(\frac{\prod_{i\in I_m^{i,j}}\overline{\zeta^*_2-\alpha_i^*(\zeta_1)}}{f(\zeta)}b(\zeta,z)\wedge \overline \partial\diffp{^{q^{i,j}_m}}{\overline{\zeta^*_2}^{q^{i,j}_m}} \left(\tilde g(\zeta)P^{N,n}(\zeta,z)\right) \right)\right|}\\ &\leqs& \gamma_\infty\sum_{0\leq \alpha+\beta\leq p_m^{i,j}} \left(\frac{\delta(\zeta,z)}{|\rho(\zeta)|}\right)^{\alpha+\frac\beta2} \frac{|\rho(\zeta)|^{N}}{(|\rho(\zeta)|+|\rho(z)|+\delta(z,\zeta))^{N+4}} \\ &\leqs&\gamma_\infty \frac{|\rho(\zeta)|^{N'}}{(|\rho(\zeta)|+|\rho(z)|+\delta(z,\zeta))^{N'+4}}. 
\end{eqnarray*} where $N'=N-\max_{i,j} p_{i,j}$.\\ We have for all $\zeta\in {\cal P}_{\kappa|\rho(z^{i,j}_m)|}(z_m^{i,j})$, $|\rho(\zeta)|\geq \frac12 |\rho(z^{i,j}_m)|$ and thus: \begin{eqnarray*} |\rho(\zeta)|+\delta(\zeta,z) &\geq& \frac12|\rho(z^{i,j}_m)|+\frac1{c_1} \delta(z,z_m^{i,j})-\delta(z^{i,j}_m,\zeta)\\ &\geq& |\rho(z^{i,j}_m)|(\frac12-\kappa)+\frac1{c_1}\delta(z,z_m^{i,j})\\ &\geqs& |\rho(z_{m}^{i,j})|+\delta(z,z_m^{i,j}). \end{eqnarray*} Therefore \begin{eqnarray*} \lefteqn{\left|{d}_z \left( \frac{\prod_{i\in I_m^{i,j}}\overline{\zeta^*_2-\alpha_i^*(\zeta_1)}}{f(\zeta)}b(\zeta,z)\wedge \overline \partial\diffp{^{q^{i,j}_m}}{\overline{\zeta^*_2}^{q^{i,j}_m}} \left(\tilde g(\zeta)P^{N,n}(\zeta,z)\right)\right) \right|}\\ &&\hskip 90pt\leqs \gamma_\infty \frac{|\rho(z^{i,j}_m)|^{N'}}{(|\rho(z)|+|\rho(z^{i,j}_m)|+\delta(z,z^{i,j}_m))^{N'+4}}. \end{eqnarray*} Now, integrating over ${\cal P}_{\kappa|\rho(z^{i,j}_m)|}(z^{i,j}_m)$ and summing over $m$, $i$ and $j$ we have to prove that the sum $$\sum_{j=j_0}^\infty\sum_{i=0}^{i_0(j)}\sum_{m=1}^{m_{i,j}} \frac{|\rho(z_{m}^{i,j})|^{N'}}{\left((i+1)|\rho(z^{i,j}_m)|+|\rho(z)|\right)^{N'+1}} $$ is uniformly bounded by $\frac1{|\rho(z)|}$. We have: \begin{eqnarray*} \lefteqn{\sum_{j=j_0}^\infty\sum_{i=0}^{i_0(j)}\sum_{m=1}^{m_{i,j}} \frac{|\rho(z_{m}^{i,j})|^{N'}}{\left((i+1)|\rho(z^{i,j}_m)|+|\rho(z)|\right)^{N'+1}}}\\ &\leq& \sum_{j=j_0}^\infty\sum_{i=0}^{i_0(j)}\sum_{m=1}^{m_{i,j}} \left(\frac{(1-c\kappa)^j}{(i+1)(1-c\kappa)^j+1}\right)^{N'} \cdot\frac{1}{((i+1)(1-c\kappa)^j+1)|\rho(z)|}\\ &\leq&\frac{1}{|\rho(z)|}\left(\sum_{j=0}^\infty \sum_{i=0}^\infty \frac{(1-c\kappa)^j}{(i+1)^{N'-3}} + \sum_{j=j_0}^{-1} \sum_{i=0}^\infty \frac1{(i+1)^{N'-2}(1-c\kappa)^{j}}\right)\\ &\leqs& \frac1{|\rho(z)|}. \end{eqnarray*} So $E_N(g)$ belongs to ${BMO}(D)$ and $\|E_N(g)\|_{BMO(D)} \leqs \sup_{\over{\zeta\in D}{\alpha+\beta\leq k}}\left|\diffp{^{\alpha+\beta}\tilde g}{\overline{\eta_\zeta}^\alpha\partial \overline{v_\zeta}^\beta}(\zeta)\right||\rho(\zeta)|^{\alpha+\frac\beta 2}$.\qed\\ The $L^q$-estimates of Theorem \ref{th0} are left to be shown. For $q\in (1,+\infty)$ we will apply the following lemma (see \cite{Pol}): \begin{lemma}\mlabel{Pol} Suppose the kernel $k(\zeta,z)$ is defined on $D\times D$ and the operator $K$ is defined by $Kf(z)=\int_{\zeta\in D}k(\zeta,z)f(\zeta)d\lambda(\zeta)$. If for every $\varepsilon\in ]0,1[$ there exists a constant $c_\varepsilon$ such that \begin{eqnarray*} \int_{\zeta\in D} |\rho(\zeta)|^{-\varepsilon}|k(\zeta,z)|d\lambda(\zeta)&\leq& c_\varepsilon |\rho(z)|^{-\varepsilon},\quad \forall z\in D,\\ \int_{z\in D} |\rho(z)|^{-\varepsilon}|k(\zeta,z)|d\lambda(z)&\leq& c_\varepsilon |\rho(\zeta)|^{-\varepsilon},\quad \forall \zeta\in D \end{eqnarray*} Then for all $q\in ]1,+\infty[$, there exists $c_q>0$ such that $\|Kf\|_{L^q(D)}\leq \|f\|_{L^q(D)}$. 
\end{lemma} {\it Proof of Theorem \ref{th0} for $q\in(1,+\infty)$~:} Applying Lemma \ref{Pol} and Propositions \ref{estiBA} and \ref{kernelestimate}, it suffices to prove that for all $\varepsilon\in (0,1)$ there exists $c_\varepsilon>0$ such that \begin{eqnarray} {\int_{\zeta\in D} \frac {|\rho(\zeta)|^{N'-\varepsilon}} {\left(|\rho(\zeta)|+|\rho(z)|+\delta(\zeta ,z)\right)^{N'+3}} d\lambda(\zeta)}&\leq &c_{\varepsilon}|\rho(z)|^{-\varepsilon},\ \forall z\in D,\mlabel{eq5}\\ {\int_{z\in D} \frac {|\rho(\zeta)|^{N'}|\rho(z)|^{-\varepsilon}} {\left(|\rho(\zeta)|+|\rho(z)|+\delta(\zeta ,z)\right)^{N'+3}} d\lambda(z)}&\leq &c_{\varepsilon}|\rho(\zeta)|^{-\varepsilon},\ \forall \zeta \in D,\mlabel{eq6} \end{eqnarray} The inequality (\ref{eq5}) can be shown as in the proof of Theorem \ref{th0} for $q=\infty$.\\ In order to prove that the inequality (\ref{eq6}) holds true we cover $D$ with the \ko balls ${\cal P}_{\kappa|\rho(\zeta)|}(\zeta)$ and $\left({\cal P}_{2^{j+1}\kappa|\rho(\zeta)|}(\zeta)\setminus {\cal P}_{2^j\kappa|\rho(\zeta)|}(\zeta)\right)$, $j\in\nn$.\\ For $z\in{\cal P}_{\kappa|\rho(\zeta)|}(\zeta )$, $|\rho(z)|\eqs |\rho(\zeta)|$ and thus \begin{eqnarray} \int_{z\in{\cal P}_{\kappa|\rho(\zeta)|}(\zeta )} \frac {|\rho(\zeta)|^{N'}|\rho(z)|^{-\varepsilon}} {\left(|\rho(\zeta)|+|\rho(z)|+\delta(\zeta ,z)\right)^{N'+3}}d\lambda(z) &\leqs&|\rho(\zeta)|^{-\varepsilon}.\mlabel{eq20} \end{eqnarray} When we integrate on ${\cal P}_{2^{j+1}\kappa|\rho(\zeta)|}(\zeta)\setminus {\cal P}_{2^j\kappa|\rho(\zeta)|}(\zeta)$ we get \begin{eqnarray} \nonumber\lefteqn{ \int_{{\cal P}_{2^{j+1}\kappa |\rho(\zeta)|}(\zeta)\setminus {\cal P}_{2^j\kappa |\rho(\zeta)|}(\zeta)} \frac {|\rho(\zeta)|^{N'}|\rho(z)|^{-\varepsilon}} {\left(|\rho(\zeta)|+|\rho(z)|+\delta(\zeta ,z)\right)^{N'+3}}d\lambda(z)}\\ \nonumber&\hskip 100pt &\leqs \int_{\over{|x_1|,|y_1|\leq 2^{j+1}\kappa|\rho(\zeta)|}{|x_2|,|y_2|\leq \sqrt{2^{j+1}\kappa|\rho(\zeta)|}}} \frac{|\rho(\zeta)|^{N'}x_1^{-\varepsilon}} {\left(|\rho(\zeta)|+2^j\kappa|\rho(\zeta)|\right)^{N'+3}}d\lambda(z)\\ \nonumber&\hskip 100pt &\leqs (2^{j+1}\kappa|\rho(\zeta)|)^{-\varepsilon+3} \frac{|\rho(\zeta)|^{N'}} {\left(|\rho(\zeta)|+2^j\kappa|\rho(\zeta)|\right)^{N'+3}}\\ &\hskip 100pt &\leqs |\rho(\zeta)|^{-\varepsilon} 2^{-j(N'+\varepsilon)}\mlabel{eq21} \end{eqnarray} Summing (\ref{eq20}) and (\ref{eq21}) for all non-negative integer $j$ we prove inequality (\ref{eq20}). Theorem \ref{th0} is therefore proved for $q\in(1,+\infty)$.\qed\\ {\it Proof of Theorem \ref{th0} for $q=1$~:} We prove directly that $E_Ng$ belongs to $L^1(D)$. Propositions \ref{estiBA} and \ref{kernelestimate} yield \begin{eqnarray*} {\int_D|E_N g(z)|d\lambda(z)}&\leqs&\sum_{j=0}^\infty\sum_{0\leq\alpha+\beta\leq q_j+1} \int_{\pk j} {|\rho(z_j)|^{\alpha+\frac\beta2}} \left| \diffp{^{\alpha+\beta}\tilde g} {\overline{\zeta^*_1}^\alpha\partial \overline{\zeta^*_2}^\beta}(\zeta)\right|\\ &&\hskip 50pt \cdot \left(\int_D \frac{|\rho(\zeta)|^{N'}}{\left(|\rho(\zeta)|+|\rho(z)|+\delta(\zeta,z)\right)^{N'+3}} {d\lambda(z)}\right)d\lambda(\zeta). 
\end{eqnarray*} As for the proof of (\ref{eq6}) we cover $D$ using \ko corona and get \begin{eqnarray*} {\int_D|Eg(z)|d\lambda(z)} &\leqs&\sum_{j=0}^\infty\sum_{0\leq \alpha+\beta\leq q_j+1} \int_{\pk j} {|\rho(z_j)|^{\alpha+\frac\beta2}} \left| \diffp{^{\alpha+\beta}\tilde g} {\overline{\zeta^*_1}^\alpha\partial \overline{\zeta^*_2}^\beta}(\zeta)\right|d\lambda(\zeta)\\ &\leqs& \sum_{0\leq\alpha +\beta \leq k}\left\| \zeta \mapsto \diffp{^{\alpha +\beta }\tilde g}{\overline\eta_\zeta ^\alpha \partial\overline v_\zeta ^\beta }(\zeta )\rho (\zeta )^{\alpha +\frac\beta 2}\right\|_{L^1(D)}. \end{eqnarray*} \qed \section{Smooth extension and divided differences}\mlabel{section5} In this section we give necessary conditions in $\cc^n$ that a function $g$ holomorphic on $X\cap D$ has to satisfy in order to have a $L^q$-holomorphic extension on $D$, $q\in [1,+\infty]$. We also prove that these conditions are sufficient in $\cc^2$ for $g$ to have a $L^q$-holomorphic extension on $D$ when $q$ belongs to $[1,+\infty)$ or a $BMO$-holomorphic extension when $q=+\infty$. \subsection{$L^\infty$-$BMO$ extension} We first prove the following lemma for functions defined on $X\cap D$ which have holomorphic extension on $D$. We use the notations defined in the introduction. \begin{lemma}\mlabel{lemma0} If $g$ defined on $X\cap D$ has a holomorphic extension $G$ on $D$ then uniformly with respect to $g$, $G$, $z\in D$, $v$ unit vector of $\cc^n$ and positive integer $k$ such that $k\leq \# \Lambda(z,v)$ : $$\sup_{ \genfrac{}{}{0pt}{}{\lambda_1,\ldots,\lambda_k\in\Lambda_{z,v}}{\lambda_i\neq\lambda_j\text{ for } i\neq j}}|g_{z,v}[\lambda_1,\ldots,\lambda_k]| \tau(z,v,|\rho(z)|)^{k-1} \leqs \sup_{b\Delta_{z,v} \left(4\kappa\tau(z,v,|\rho(z)|)\right)}|G|.$$ \end{lemma} \pr For $\lambda_1,\ldots,\lambda_k\in \Lambda_{\zeta,v}$ pairwise distincts, we have by Cauchy's formula $$g_{z,v}[\lambda_1,\ldots,\lambda_k]=\frac1{2i\pi}\int_{|\lambda|=4 \tau(z,v,|\rho(z)|)} \frac{G(z+\lambda v)}{\prod_{l=1}^k(\lambda-\lambda_i)}d\lambda.$$ since for all $\lambda_i$ we have $|\lambda_i|\leq 3\tau(z,v,|\rho(z)|)$, we get $$|g_{z,v}[\lambda_1,\ldots,\lambda_k]\leqs \left(\frac{1}{\tauzv}\right)^{k-1}\sup_{b\Delta_{z,v}\left(4\kappa \tauzv\right)}|G|.$$ \qed \noindent{\it Proof of Theorem \ref{th1}~:} Lemma \ref{lemma0} implies directly that $c_\infty(g)\leqs \|G\|_{L^\infty(D)}$.\qed \par\medskip Now we prove that an even weaker assumption than $c_\infty(g)<\infty$ is actually sufficient in $\cc^2$ for $g$ to have a smooth extension which satisfies the hypothesis of Theorem \ref{th0} for $q=\infty$ and thus for $g$ to have a holomorphic $BMO$ extension on $D$. We define for $\kappa$ and $\varepsilon_0$ positive real number $$c^{(\infty)}_{\kappa,\varepsilon_0}(g)=\sup|g_{\zeta+z^*_1\eta_\zeta,v_\zeta}[\lambda_1,\ldots,\lambda_k]|\tau(\zeta,v_\zeta,|\rho(\zeta)|)^{k-1}$$ where the supremum is taken over $\zeta\in D\setminus D_{-\varepsilon_0}$, $z_1^*\in \cc$ such that $|z^*_1|\leq \kappa|\rho(\zeta)|$, $\lambda_1,\ldots, \lambda_k\in \Lambda_{\zeta+z^*_1\eta_\zeta,v_\zeta}$ pairwise distinct. Of course, $c^{(\infty)}_{\kappa,\varepsilon_0}(g)\leq c_\infty(g)$ and it may be simpler to check that $c^{(\infty)}_{\kappa,\varepsilon_0}(g)$ is finite than to check that $c_\infty(g)$ is finite. Moreover, as told by the following lemma, when $c^{(\infty)}_{\kappa,\varepsilon_0}(g)$ is finite, $g$ admits a smooth extension which satisfies the assumptions of Theorem \ref{th0}. 
\begin{lemma}\mlabel{lemma2} In $\cc^2$, let $g\in \oo(X\cap D)$ be such that $c^{(\infty)}_{\kappa,\varepsilon_0} (g)<\infty$. Then there exist a neighborhood $\cal U$ of $bD$ and $\tilde g\in C^\infty(D\cap {\cal U})$ such that \begin{enumerate}[(i)] \item for all non negative integer $N$, $|\rho|^{N+1} \tilde g$ vanishes to order $N$ on $bD$, \item for all $\alpha$ and $\beta$ non negative integer, $\left|\diffp{^{\alpha+\beta}\tilde g}{\overline{\eta_\zeta}^\alpha\partial \overline{v_\zeta}^\beta}\right||\rho|^{\alpha+\frac\beta 2}$ is bounded up to a uniform multiplicative constant on $D\cap{\cal U}$ by $c^{(\infty)}_{\kappa,\varepsilon_0}(g)$ , \item for all $\alpha$ and $\beta$ non negative integer, $\diffp{^{\alpha+\beta}\tilde g}{\overline{\eta_\zeta}^\alpha\partial \overline{v_\zeta}^\beta}=0$ on $X\cap D\cap{\cal U}$. \end{enumerate} \end{lemma} \pr For $\varepsilon_0>0$, we cover $D\setminus D_{-\varepsilon_0}$ with a $\kappa $-covering $\left({\cal P}_{\kappa|\rho(z_j)|}(z_j)\right)_{j\in\nn}$ constructed in subsection \ref{maxcover}. For a fixed nonnegative integer $j$, we set $w_1^*=\eta_{z_j}$ and $w^*_2=v_{z_j}$. Let $\alpha_1,\ldots,\alpha_{p_j}$ be the parametrization given by proposition \ref{propII.2.1}, $I_j=\{i,\ \exists z^*_1\in\cc\text{ with } |z_1^*|<\kappa |\rho(z_j)| \text{ and } |\alpha_i(z^*_1)| \leq 2\kappa|\rho(z_j)|\}$, $q_j=\# I_j$.\\ If $I_j=\emptyset$ we put $\tilde g_j=0$ on ${\cal P}_{\kappa|\rho(z_j)|}(z_j)$.\\ Otherwise, without restriction we assume that $I_j=\{1,\ldots, q_j\}$ and for $z=z_j+z^*_1w_1^*+z^*_2w^*_2\in {\cal P}_{2\kappa|\rho(z_j)|}(z_j)$ we put $$\tilde g_j(z) =\sum_{k=1}^{q_j} g_{z_j+z^*_1w^*_1,w_2^*} [\alpha_1(z^*_1),\ldots, \alpha_k (z^*_1)]\prod_{l=1}^{k-1}(\zeta^*_2- \alpha_{l}( z^*_1)).$$ Proposition \ref{propII.2.1} implies for all $z^*_1\in \Delta_0(2\kappa |\rho (z_j)|)$ that $\alpha_j(z^*_1)$ belongs to $\Lambda_{z_j+z^*_1w^*_1,w^*_2}$ thus $\tilde g_j$ is well defined on ${\cal P}_{2\kappa|\rho(z_j)|}(z_j)$.\\ The function $\zeta\mapsto \tilde g_j(z_j+z^*_1w^*_1 +\zeta w^*_2)$ is the polynomial which interpolates $\zeta\mapsto g(z_j+z^*_1w^*_1 +\zeta w^*_2)$ at the points $\alpha_1(z^*_1),\ldots, \alpha_{q_j}(z^*_1)$ and thus $\tilde g_j$ is a holomorphic extension of $g$ on ${\cal P}_{\kappa|\rho(z_j)|}(z_j)$.\\ For all $z=z_j+z^*_1w_1^*+z^*_2w^*_2\in {\cal P}_{2\kappa|\rho(z_j)|}(z_j)$, we have $$|z^*_2-\alpha_l(z^*_1)| \leq \tau(z_j,w^*_2,2\kappa|\rho(z_j)|)\leqs \tau(z,w^*_2,2\kappa|\rho(z)|)$$ thus $|\tilde g_j(z)|\leqs c_\infty(g)$ on ${\cal P}_{2\kappa|\rho(z_j)|}(z_j)$ and $|\rho(z_j)|^{\alpha+\frac\beta2} \left|\diffp{^{\alpha+\beta} \tilde g_j}{{w^*_1}^\alpha\partial{w^*_2}^\beta}(z)\right|\leqs c_\infty(g)$ on ${\cal P}_{\kappa|\rho(z_j)|}(z_j)$. Now we glue together all the $\tilde g_j$ using a suitable partition of unity and get our extension on $D\setminus D_{-\varepsilon_0}$. Let $(\chi_j)_{j\in\nn}$ be a partition of unity subordinated to $\left(\pk j\right)_{j\in\nn}$ such that for all $j$ and all $\zeta\in\pk j$, we have $\left|\diffp{^{\alpha+\overline\alpha+\beta+\overline\beta} \chi_j} {{w^{*}_1}^\alpha\partial {w^{*}_2}^\beta\partial \overline{w^{*}_1 }^{\overline\alpha}\partial \overline{w^{*}_2 }^{\overline\beta}} (\zeta)\right|\leqs \frac{1}{|\rho(z_j)|^{\alpha+\overline\alpha+\frac{\beta+\overline\beta}{2}}}$, uniformly with respect to $z_j$ and $\zeta$.\\ We set $\tilde g_{\varepsilon_0}=\sum_j\chi_j\tilde g_j$. 
By construction, for all $N\in\nn$, $\rho^{N+1} \tilde g_{\varepsilon_0}$ is of class $C^{N}$ on $\overline D\setminus D_{-\varepsilon_0}$ and vanishes to order $N$ on $bD$. Moreover, since for all $j$ the function $\tilde g_j$ is holomorphic, $\diffp{^{\alpha+\beta}\tilde g_{\varepsilon_0}}{\overline z^\alpha_1\partial \overline z^\beta_2}=0$ on $X\cap ( D\setminus D_{-\varepsilon_0})$ and, by our choice of $\chi_j$, $\left|\diffp{^{\alpha+\beta}\tilde g_{\varepsilon_0}}{\overline{\eta_\zeta}^\alpha\partial \overline{v_\zeta}^\beta}(\zeta )\right|\leqs c^{(\infty)}_{\kappa,\varepsilon_0}(g)\,|\rho(\zeta )|^{-\left(\alpha+\frac\beta 2\right)}$ for all $\zeta \in D\setminus D_{-\varepsilon _0}$.\qed\\ As a direct corollary of Lemma \ref{lemma2}, we have \begin{corollary}\mlabel{th2} In $\cc^2$, let $g\in \oo(X\cap D)$ be such that $c_\infty(g)<\infty$. Then there exist a neighborhood $\cal U$ of $bD$ and $\tilde g\in C^\infty(D\cap {\cal U})$ such that \begin{enumerate}[(i)] \item for every non-negative integer $N$, $|\rho|^{N+1} \tilde g$ vanishes to order $N$ on $bD$, \item for all non-negative integers $\alpha$ and $\beta$, $\left|\diffp{^{\alpha+\beta}\tilde g}{\overline{\eta_\zeta}^\alpha\partial \overline{v_\zeta}^\beta}\right||\rho|^{\alpha+\frac\beta 2}$ is bounded on $D\cap{\cal U}$, up to a uniform multiplicative constant, by $c_\infty(g)$, \item for all non-negative integers $\alpha$ and $\beta$, $\diffp{^{\alpha+\beta}\tilde g}{\overline{\eta_\zeta}^\alpha\partial \overline{v_\zeta}^\beta}=0$ on $X\cap D\cap{\cal U}$. \end{enumerate} \end{corollary} Theorem \ref{th3} is now a corollary of Theorem \ref{th0} and Corollary \ref{th2}:\\ {\it Proof of Theorem \ref{th3}~:} We use Corollary \ref{th2} to get an extension $\tilde g$ of $g$ which satisfies the hypothesis of Theorem \ref{th0} on ${\cal U}\cap D$. Cartan's Theorem B gives us a bounded holomorphic extension on $D\setminus {\cal U}$. Gluing these two extensions together, we get a smooth extension of $g$ which satisfies the hypothesis of Theorem \ref{th0} in the whole domain $D$, and thus Theorem \ref{th0} ensures the existence of a $BMO$ holomorphic extension of $g$.\qed \subsection{$L^q(D)$-extension} The case of $L^q$-extensions is a bit harder to handle because it involves an average estimate rather than a pointwise one. Therefore the assumption under which a function $g$ holomorphic on $X\cap D$ admits an $L^q$-holomorphic extension on $D$ uses a $\kappa $-covering $\left({\cal P}_{\kappa |\rho(z_j)|}(z_j)\right)_{j\in\nn}$ in addition to the divided differences.\\ By transversality of $X$ and $bD$, for all $j$ there exists $w_j$ in the complex tangent plane to $bD_{\rho(z_j)}$ such that $\pi_j$, the orthogonal projection on the hyperplane orthogonal to $w_j$ passing through $z_j$, realizes $X$ as a $p_j$-sheeted covering. We denote by $w_1^*,\ldots, w^*_n$ an orthonormal basis of $\cc^n$ such that $w_1^*=\eta_{z_j}$ and $w_n^*=w_j$, and we set ${\cal P}'_{\varepsilon}(z_j)=\{z'=z_j+z^*_1w^*_1+\ldots+z^*_{n-1} w^*_{n-1},\ |z^*_1|< \varepsilon \text{ and } |z_k^*|<\varepsilon^{\frac12},\ k=2,\ldots, n-1\}$. We put \begin{eqnarray*} c^{(q)}_{\kappa,{(z_j)_{j\in\nn}}}(g)\hskip-1pt &=&\hskip-1pt\sum_{j=0}^\infty \int_{z'\in{\cal P}'_{2\kappa |\rho(z_j)|}(z_j)} \sum_{\over{\lambda_1,\ldots,\lambda_k\in\Lambda_{z',w_n^*}}{\lambda_i\neq\lambda_l\text{ for }i\neq l}}\hskip - 3pt |\rho(z_j)|^{q\frac{k-1}2+1} \left|g_{z',w_n^*}[\lambda_1,\ldots,\lambda_k]\right|^q dV_{n-1}(z') \end{eqnarray*} where $dV_{n-1}$ is the Lebesgue measure in $\cc^{n-1}$. The exponent $q\frac{k-1}2+1$ is the natural one: each divided difference scales like $\tau(z_j,w^*_n,|\rho(z_j)|)^{-(k-1)}\eqs|\rho(z_j)|^{-\frac{k-1}2}$ by Lemma \ref{lemma0}, and the extra factor $|\rho(z_j)|$ accounts for the area of the disc over which $|G|^q$ is averaged in the proof of Theorem \ref{th4} below.
\begin{theorem}\mlabel{th4} In $\cc^n$, $n\geq 2$, let $\left({\cal P}_{\kappa |\rho(z_j)|}(z_j)\right)_{j\in\nn}$ be a $\kappa $-covering of $D\cap X$. If $g\in\oo(X\cap D)$ has a holomorphic extension $G\in L^q(D)$, then $c^{(q)}_{\kappa,(z_j)_{j\in\nn}}(g)\leqs \|G\|^q_{L^q(D)}$ uniformly with respect to $g$, $G$ and the covering $\left({\cal P}_{\kappa |\rho(z_j)|}(z_j)\right)_{j\in\nn}$. \end{theorem} \pr For all $j\in\nn$, all $z'\in{\cal P}'_{\kappa|\rho(z_j)|}(z_j)$, all $r\in\rr$ such that $\frac72 \kappa |\rho(z_j)|^{\frac12}\leq r\leq 4\kappa |\rho(z_j)|^{\frac12}$, and all $\lambda_1,\ldots, \lambda_k\in\Lambda_{z',w_n^*}$ pairwise distinct, we have by Cauchy's formula $$g_{z',w_j}[\lambda_1,\ldots,\lambda_k]=\frac1{2i\pi}\int_{|\lambda|=r} \frac{G(z'+\lambda w_j)}{\prod_{l=1}^k(\lambda-\lambda_l)}d\lambda.$$ After integration for $r\in[\frac72\kappa |\rho (z_j)|^{\frac12},4\kappa |\rho (z_j)|^{\frac12}]$, Jensen's inequality yields $$\left|g_{z',w_j}[\lambda_1,\ldots,\lambda_k]\right|^q\leqs |\rho(z_j)|^{\frac{1-k}2 q-1} \int_{|\lambda|\leq (4\kappa |\rho(z_j)|)^\frac12} |G(z'+\lambda w_j)|^q dV_1(\lambda)$$ and thus \begin{eqnarray*} {\int_{z'\in {\cal P}'_{\kappa|\rho(z_j)|}(z_j)} \left|g_{z',w_j}[\lambda_1,\ldots,\lambda_k]\right|^q |\rho(z_j)|^{\frac{k-1}2 q+1}dV_{n-1}} & \leqs& \int_{z\in {\cal P}_{4\kappa|\rho(z_j)|} (z_j)} |G(z)|^q dV_n(z). \end{eqnarray*} Since $\left({\cal P}_{\kappa |\rho(z_j)|}(z_j)\right)_{j\in\nn}$ is a $\kappa $-covering, we deduce from this inequality that $c^{(q)}_{\kappa,(z_j)_{j\in\nn}}(g)\leqs \|G\|^q_{L^q(D)}$.\qed\\[5pt] Now we come back to $\cc^2$ and prove that the condition $c^{(q)}_{\kappa,(z_j)_{j\in\nn}}(g)<\infty$ is indeed sufficient for $g$ to have an $L^q$ extension. \begin{theorem} \mlabel{th5} In $\cc^2$, let $\left({\cal P}_{\kappa |\rho(z_j)|}(z_j)\right)_{j\in\nn}$ be a $\kappa $-covering of $D\cap X$. If the function $g$, holomorphic on $X\cap D$, is such that $c^{(q)}_{\kappa,(z_j)_{j\in\nn}}(g)<\infty$, then there exist a neighborhood $\cal U$ of $bD$ and a smooth extension $\tilde g\in C^\infty(D\cap {\cal U})$ of $g$ such that \begin{enumerate}[(i)] \item for all $N\in\nn$, $|\rho|^{N+4} \tilde g$ vanishes to order $N$ on $bD$, \item for all non-negative integers $\alpha $ and $\beta $, the function $\zeta\mapsto\left|\diffp{^{\alpha+\beta}\tilde g}{\overline{\eta_\zeta}^\alpha\partial \overline{v_\zeta}^\beta}(\zeta )\right||\rho(\zeta )|^{\alpha+\frac\beta 2}$ has an $L^q$ norm on $D\cap {\cal U}$ bounded by $c^{(q)}_{\kappa,(z_j)_{j\in\nn}}(g)$ up to a uniform multiplicative constant, \item for all non-negative integers $\alpha$ and $\beta$, $\diffp{^{\alpha+\beta}\tilde g}{\overline{\eta_\zeta}^\alpha\partial \overline{v_\zeta}^\beta}=0$ on $X\cap D\cap{\cal U}$. \end{enumerate} \end{theorem} \pr We proceed as in the proof of Lemma \ref{lemma2}. Let $\varepsilon_0$ be a positive real number. On $D\setminus D_{-\varepsilon_0}$ we define, for every non-negative integer $j$, the functions $\chi_j$ and $\tilde g_j$ and the extension $\tilde g_{\varepsilon _0}$ as in the proof of Lemma \ref{lemma2}, and prove that $\tilde g_{\varepsilon_0}$ satisfies the desired estimates. As in the proof of Lemma \ref{lemma2}, $\rho^{N+4}\tilde g_{\varepsilon_0}$ vanishes at order $N$ on $bD$ and $\diffp{^{\alpha+\beta} \tilde g_{\varepsilon_0}}{\overline{z_1}^\alpha\partial \overline {z_2}^\beta}=0$ on $X\cap (D\setminus D_{-\varepsilon_0})$.
Moreover we have for $z\in{\cal P}_{\kappa|\rho(z_j)|}(z_j)$ \begin{eqnarray*} \left| \tilde g_j(z) \diffp{^{\alpha+\beta} \chi_j}{\overline{\eta_z}^\alpha\partial \overline{v_z}^\beta} (z)\right| &\leqs& |\rho(z_j)|^{-\alpha-\frac\beta2}\left|\tilde g_j(z)\right|\\ &\leqs& |\rho(z_j)|^{-\alpha-\frac\beta2}\sum_{k=1}^{q_j} \left|g_{z_j+z^*_1\eta_{z_j},v_{z_j}} [\alpha_1(z^*_1),\ldots, \alpha_k(z^*_1)]\right| |\rho(z_j)|^{\frac{k-1}2}\\ &\leqs& |\rho(z)|^{-\alpha-\frac\beta2}\sum_{k=1}^{q_j} \left|g_{z_j+z^*_1\eta_{z_j},v_{z_j}} [\alpha_1(z^*_1),\ldots, \alpha_k(z^*_1)]\right| |\rho(z)|^{\frac{k-1}2} \end{eqnarray*} and thus $z\mapsto |\rho(z)|^{\alpha+\frac\beta2} \diffp{^{\alpha+\beta} \tilde g_{\varepsilon_0}}{\overline{\eta_z}^\alpha\partial \overline {v_z}^\beta}(z)$ is in $L^q(D)$ for all $\alpha$ and $\beta$, with norm controlled by $c^{(q)}_{\kappa,(z_j)_{j\in\nn}}(g)$.\qed As a corollary of Theorem \ref{th0} and Theorem \ref{th5} we get \begin{theorem}\mlabel{th6} In $\cc^2$, if the function $g$ holomorphic on $X\cap D$ is such that $c^{(q)}_{\kappa,(z_j)_{j\in\nn}} (g)<\infty$, then $g$ has a holomorphic extension $G$ which belongs to $L^q(D)$. \end{theorem} \pr Theorem \ref{th5} and Cartan's Theorem B give a smooth extension to which we can apply Theorem \ref{th0} and get a holomorphic extension in $L^q(D)$.\qed \subsection{Extension and weak holomorphy} One may notice that, in each case, the smooth extension near the boundary is controlled only by the values of $g$ on $X\cap D$. Moreover, we never used the strong holomorphy of $g$ except when we invoked Cartan's Theorem B in order to get a bounded extension far from the boundary. Actually, we can use only weak holomorphy to get a smooth extension and then apply Theorem \ref{th0} in order to get a holomorphic extension with $BMO$ or $L^q$ norm controlled only by the values of $g$ on $X\cap D$. Let us first recall the definition of weak holomorphy we shall use. \begin{definition} Let ${\cal U}$ be an open set of $\cc^n$. A function $g$ defined on $X$ is said to be weakly holomorphic on $X\cap {\cal U}$ if it is locally bounded on $X\cap{\cal U}$ and holomorphic on the regular set of $X\cap{\cal U}$. \end{definition} The following theorem is a direct corollary of Lemma \ref{lemma0}. \begin{theorem}\mlabel{th7} In $\cc^n$, for $q\in[1,+\infty)$, if the function $g$, defined on $X\cap D$, has a holomorphic extension $G\in L^q(D)$, then $$\sup\left|g_{z,v}[\lambda_1,\ldots,\lambda_k]\right| \tau(z,v,|\rho(z)|)^{k-1} \left({\rm Vol}\: {\cal P}_{\kappa |\rho(z)|} (z)\right)^{\frac1q} \leqs \|G\|_{L^q\left({\cal P}_{\kappa |\rho(z)|} (z)\right)}$$ where the supremum is taken over all $z\in D,$ all unit vectors $v$ in $\cc^n$, all positive integers $k$ such that $k\leq \#\Lambda_{z,v}$ and all $\lambda_1,\ldots,\lambda_k\in\Lambda_{z,v}$ pairwise distinct. \end{theorem} When $z$ is far from $bD$, Theorem \ref{th7} essentially says that the divided differences have to be bounded even in the case of $L^q$ extensions, $q<\infty$. This is sufficient when $n=2$ to construct a smooth bounded extension in $D_{-\varepsilon}$ for $\varepsilon>0$. \begin{lemma}\mlabel{lemma1} For $X$ and $D$ in $\cc^2$, let $\varepsilon$ be a positive real number.
Let $g$ be a weakly holomorphic function on $X\cap D$ such that $c_\varepsilon(g)=\sup\left|g_{z,v}[\lambda_1,\ldots,\lambda_k]\right|<\infty$, where the supremum is taken over all $z\in D_{-\frac\varepsilon2}$, all unit vectors $v$ in $\cc^2$, all positive integers $k$ such that $k\leq \#\Lambda_{z,v}$, and all $\lambda_1,\ldots,\lambda_k\in\Lambda_{z,v}$ pairwise distinct.\\ Then $g$ has a smooth extension on $D_{-\varepsilon}$ bounded by $c_\varepsilon(g)$ up to a multiplicative constant uniform with respect to $g$. \end{lemma} \pr We proceed locally and glue all the local extensions together. Since the only problems occur near a singularity, we consider a singularity $z_0$ of $X$ and we choose an orthonormal basis $w_1,w_2$ such that $\pi_0$, the orthogonal projection on the hyperplane orthogonal to $w_2$ passing through $z_0$, is a $k_0$-sheeted covering of $X$ in a neighborhood ${\cal U}_0\subset D$ of $z_0$.\\ For $z_1\neq 0$, we denote by $\lambda_1(z_1),\ldots,\lambda_{k_0}(z_1)$ the pairwise distinct complex numbers such that, for $k=1,\ldots, k_0$, $z_0+z_1w_1+\lambda_k(z_1) w_2$ belongs to $X$. We set for $z=z_0+z_1w_1+z_2w_2$, $z_1\neq 0$: $$\tilde g_0(z)=\tilde g_0(z_0+z_1w_1+z_2w_2)=\sum_{k=1}^{k_0}\prod_{\over{l=1}{l\neq k}}^{k_0} \frac {z_2-\lambda_l(z_1)}{\lambda_k(z_1)-\lambda_l(z_1)} g(z_0+z_1w_1+\lambda_k(z_1) w_2).$$ By construction, $\tilde g_0(z)=g(z)$ for all $z\in X\cap {\cal U}_0$, $z\neq z_0$. We denote by $\Delta_0$ the complex line passing through $z_0$ and supported by $w_2$.\\ Since $z_0$ is an isolated singularity of $X$, away from $0$ the $\lambda_k$ depend locally holomorphically on $z_1$, and thus $\tilde g_0$ is holomorphic on ${\cal U}_0\setminus \Delta_0$.\\ Since the divided differences are bounded on $D_{-\frac\varepsilon2}$ by $c_\varepsilon(g)$, $\tilde g_0$ is bounded on $ {\cal U}_0\setminus \Delta_0$ by $c_\varepsilon(g)$ up to a uniform multiplicative constant, and thus, by the Riemann removable singularity theorem, $\tilde g_0$ is holomorphic and bounded on ${\cal U}_0$.\qed Combining Theorems \ref{th0} and \ref{th5}, Lemma \ref{lemma1} and Corollary \ref{th2}, we get the two following theorems. \begin{theorem}\label{th8} For $X$ and $D$ in $\cc^2$, let $g$ be a weakly holomorphic function on $X\cap D$ such that $c_\infty(g)<\infty$. Then $g$ has a holomorphic extension $G$ which belongs to $BMO(D)$ such that $\|G\|_{BMO(D)}\leqs c_\infty(g)$. \end{theorem} \begin{theorem}\label{th9} For $X$ and $D$ in $\cc^2$, let $g$ be a weakly holomorphic function on $X\cap D$ such that $c^{(q)}_{\kappa,(z_j)_{j\in\nn}}(g)<\infty$ and $c_\varepsilon(g)<\infty$. Then $g$ has a holomorphic extension $G$ which belongs to $L^q(D)$ such that $\|G\|_{L^q(D)}\leqs c^{(q)}_{\kappa,(z_j)_{j\in\nn}}(g)+c_\varepsilon(g)$. \end{theorem} \section{Examples}\mlabel{section6} \begin{example}[$BMO$ extension] Let $D$ be the ball of radius 1 and center $(1,0)$ in $\cc^2$. We choose $\rho(z)=|z_1-1|^2+|z_2|^2-1$ as a defining function for $D$. For $\alpha_1,\alpha_2,\ldots,\alpha_k\in\cc$ pairwise distinct we set $v_i=(-\overline{\alpha_i},1)$. We denote by $P_i$ the plane orthogonal to $v_i$ passing through the origin and we set $\Delta_i=P_i\cap D$ and $X=\cup_{i=1}^k P_i$. Let also $g_1,\ldots, g_k$ be $k$ bounded holomorphic functions on $\Delta$, the unit disc in $\cc$. Since $\Delta_i=\{(z_1,z_2)\in\cc^2,\ z_2=\alpha_iz_1\text{ and } |z_1-(1+|\alpha_i|^2)^{-1}|<(1+|\alpha_i|^2)^{-1}\}$, the function $$\app g {X\cap D} \cc {(z_1,z_2)}{g_i(z_1(1+|\alpha _i|^2)-1)}$$ where the $i$-th expression is used for $(z_1,z_2)\in\Delta_i$, is well defined, bounded and holomorphic on $X\cap D$.
Question~: Under which conditions does $g$ have a $BMO$ holomorphic extension on the domain $D$? \end{example} In order to answer this question, we will try to find an upper bound for $c^{(\infty)}_{\kappa,\varepsilon_0}(g)$. Let $\zeta=(\zeta_1,\zeta_2)$ be a point in $D\setminus D_{-\varepsilon_0}$, let $z^*_1\in\cc$ be such that $|z^*_1|<\kappa|\rho(\zeta)|$ and let $\lambda _1,\ldots, \lambda _l$ be pairwise distinct complex numbers belonging to $\Lambda_{\zeta+z^*_1\eta_\zeta,v_\zeta}$. After renumbering if necessary, we assume that $\zeta+z^*_1\eta_\zeta+\lambda_iv_\zeta$ belongs to $\Delta_i$ for all $i$. Moreover, if $\zeta$ is sufficiently near the origin, we can also assume that $v_\zeta$ does not belong to any of the planes $P_i$.\\ We have \begin{eqnarray*} g_{\zeta+z^*_1\eta_\zeta,v_\zeta}[\lambda _1,\ldots,\lambda _l]&=&\sum_{i=1}^l\frac1{\prod_{\over{j=1}{j\neq i}}^l(\lambda_i-\lambda_j)} g_i\left((\zeta_1+z^*_1\eta_{\zeta,1}+\lambda_iv_{\zeta,1}) (1+|\alpha _i|^2)-1\right) . \end{eqnarray*} For $m=i,j$, $\lambda_m$ satisfies the following equalities $$\zeta_2+z^*_1\eta_{\zeta,2}+\lambda_mv_{\zeta,2}=\alpha_m(\zeta_1+z^*_1\eta_{\zeta,1}+\lambda_mv_{\zeta,1}),\qquad m=i,j,$$ which yield $(\lambda_i-\lambda_j)v_{\zeta,2}=(\alpha_i-\alpha_j)(\zeta_1+z^*_1\eta_{\zeta,1}+\lambda_iv_{\zeta,1})+\alpha_j(\lambda_i-\lambda_j)v_{\zeta,1}$ and so $$|\lambda_i-\lambda_j|\cdot |v_{\zeta,2}-\alpha_jv_{\zeta,1}|=|\alpha_i-\alpha_j|\cdot|\zeta_1+z^*_1\eta_{\zeta,1}+\lambda_iv_{\zeta,1}|.$$ We show that $|\zeta_1+z^*_1\eta_{\zeta,1}+\lambda_iv_{\zeta,1}|\geqs |\zeta_1|$.\\ First, we have $|z_1^*|\leq \kappa|\rho(\zeta)|$ and since $\zeta$ belongs to $D$, $|\rho(\zeta)|\leqs|\zeta_1|$, so $|z^*_1|\leqs \kappa|\zeta_1|$.\\ Secondly, $|v_{\zeta,1}|\eqs \left|\diffp{\rho}{\zeta_2}(\zeta)\right|\eqs |\zeta_2|$ and since $\zeta$ belongs to $D$, $|\zeta_2|\leqs |\zeta_1|^{\frac12}$. Since $|\lambda_i|\leq 3\kappa|\rho(\zeta)|^{\frac12}\leqs |\zeta_1|^{\frac12}$, we get $|\lambda_i v_{\zeta,1}|\leqs \kappa|\zeta_1|$.\\ Thus, provided $\kappa$ is small enough, $|\lambda_i-\lambda_j|\geqs |\zeta_1|$ and \begin{eqnarray*} \left|g_{\zeta+z^*_1\eta_\zeta,v_\zeta}[\lambda _1,\ldots,\lambda _l]\right|&\leqs&\frac1{|\zeta_1|^{l-1}} \sum_{i=1}^l \left| g_i\left((\zeta_1+z^*_1\eta_{\zeta,1}+\lambda_iv_{\zeta,1}) (1+|\alpha _i|^2)-1\right)\right|. \end{eqnarray*} Since divided differences of order $l\geq 2$ annihilate constants, $g_i$ may be replaced by $g_i-c$ in this estimate for any constant $c\in\cc$. Since $\tau(\zeta,v_\zeta,|\rho(\zeta)|)\leqs |\zeta_1|^{\frac12}$, if we assume that there exist $c\in\cc$ and $C>0$ such that for all $i$, $|g_i(z-1)-c|\leq C |z|^{\frac{l-1}2}$ for all $z$ near the origin of $\cc$, we get $$\tau(\zeta,v_\zeta,|\rho(\zeta)|)^{l-1}\left|g_{\zeta+z^*_1\eta_\zeta,v_\zeta}[\lambda _1,\ldots,\lambda _l]\right|\leqs C.$$ So $c^{(\infty)}_{\kappa,\varepsilon_0}(g)$ is finite, and Lemma \ref{lemma2} and Theorem \ref{th0} imply that $g$ admits a $BMO$-holomorphic extension on $D$. \par\medskip This is in general the best result we can get. For example, let $\alpha$ be a real number and let $g_i$ be the function defined on the unit disc of $\cc$ by $g_i(z)=(1+z)^\alpha$, $i=1,\ldots, k$. Let $x$ be a small positive real number and let $\zeta$ in $D$ be the point $(x,0)$.
We have $\eta_\zeta=(1,0)$, $v_\zeta=(0,1)$, $\tau(\zeta,v_\zeta,|\rho(\zeta)|)\eqs x^{\frac12}$, $(x,\alpha_ix)$ belongs to $\Delta_i$ if $x$ is sufficiently small, and $$g_{\zeta,v_\zeta}[\alpha_1x,\ldots, \alpha_kx]=\sum_{i=1}^k \frac 1{x^{k-1}\prod_{\over{j=1}{j\neq i}} (\alpha_i-\alpha_j) }\left(x(1+|\alpha_i|^2)\right)^\alpha.$$ Therefore, if $\alpha<\frac {k-1}2$, $\tau(\zeta,v_\zeta,|\rho(\zeta)|)^{k-1} |g_{\zeta,v_\zeta}[\alpha_1x,\ldots, \alpha_kx]|$ is unbounded when $x$ goes to $0$. So $c_\infty(g)$ is not finite and Theorem \ref{th1} implies that $g$ does not admit a holomorphic extension bounded on $D$. \begin{example}[$L^2$-extension in $\cc^2$] Again let $D$ be the ball of radius 1 and center $(1,0)$ in $\cc^2$ and, for a positive odd integer $q$, let $X$ be the analytic set $X=\{z\in\cc^2,\ z^q_1=z^2_2\}$. Then every $g$ holomorphic and bounded on $X\cap D$ has an $L^2$ holomorphic extension on $D$ if and only if $q=1$ or $q=3$. \end{example} When $q=1$, $X$ is a manifold and there is nothing to do. When $q=3$, $X$ has a singularity at the origin. We will prove that the assumptions of Theorem \ref{th5} are satisfied for any $\kappa$-covering provided $\kappa$ is small enough. To check these hypotheses, we set $\rho(z)=|z_1-1|^2+|z_2|^2-1$, we fix a holomorphic square root $\alpha$ on $\cc\setminus(-\infty,0]$, and we prove the following facts. The first one gives a relation between the distance from $z\in X\cap D$ to $z+\lambda v\in X\cap D$ and the coordinates of $z$. \begin{fact}\label{facta} Let $\kappa $ be a sufficiently small positive real number, let $K$ be a large positive real number, let $z=(z_1,z_2)$ be a point in $D\cap X$, let $v=(v_1,v_2)$ be a unit vector of $\cc^2$ such that $|v_1|\leq K|z_1|^{\frac12}$, and let $\lambda$ be a complex number such that $z+\lambda v$ belongs to $X\cap D$ and $|\lambda|\leq 4\kappa \tau(z,v,|\rho(z)|)$.\\ Then, if $\kappa $ is small enough, we have $|\lambda|\geqs |z_1|^{\frac q2}$, $|z_1|\leqs |\rho(z)|^\frac 1q$ and $|z_2|\leqs |\rho(z)|^{\frac12}$, each estimate being uniform with respect to $z$, $\kappa$ and $v$. \end{fact} \begin{remark} The assumption $|v_1|\leq K|z_1|^{\frac12}$ means that $v$ is ``nearly'' tangential to $bD_{\rho(z)}$. \end{remark} \pr We first prove that $|\lambda|\geqs |z_1|^{\frac q2}$. Since $v$ is transverse to $X$, without restriction we assume that $z=(z_1,\alpha(z_1)^q)$ and that $z+\lambda v=(z_1+\lambda v_1,-\alpha(z_1+\lambda v_1)^q)$. Therefore we have $$|\lambda|\geq|\alpha^q(z_1)+\alpha^q(z_1+\lambda v_1)|\\ \geq2|z_1|^{\frac q2}-|\alpha^q(z_1)-\alpha^q(z_1+\lambda v_1)|.$$ The mean value theorem gives $$|\alpha^q(z_1)-\alpha^q(z_1+\lambda v_1)|\leqs |\lambda||v_1| \sup_{\zeta\in[z_1,z_1+\lambda v_1]} \left|\diffp {\alpha^q}{\zeta}(\zeta)\right|.$$ For all $\zeta\in[z_1,z_1+\lambda v_1]$, we have $|\zeta|\leqs |z_1|$, and so, provided $\kappa $ is small enough, we get $|\lambda|\geqs |z_1|^{\frac q2}$. Now, since $|\lambda|\leq 4\kappa |\rho(z)|^{\frac12}$, we get $|z_1|\leqs |\rho(z)|^{\frac1q}$ and $|z_2|\leqs|\rho(z)|^{\frac12}$.\qed As previously, we denote by $\eta_\zeta$ the outer unit normal to $bD_{\rho(\zeta)}$ at $\zeta$ and by $v_\zeta$ a tangent vector to $bD_{\rho(\zeta)}$ at $\zeta$. The second fact gives some kind of uniformity of Fact \ref{facta} on a Koranyi ball.
\begin{fact}\label{factb} Let $\kappa $ be a sufficiently small positive real number, let $\zeta$ be a point in $D$, let $z=\zeta+z_1^*\eta_\zeta+z_2^*v_\zeta$ be a point in ${\cal P}_{4\kappa |\rho(\zeta)|}(\zeta)\cap D\cap X$ and let $\lambda$ be a complex number such that $z+\lambda v_\zeta$ belongs to $X\cap D\cap{\cal P}_{4\kappa |\rho(\zeta)|}(\zeta)$.\\ Then $|\lambda|\geqs |\zeta_1|^{\frac q2}$, $|\zeta_2|\leqs|\rho(\zeta)|^{\frac12}$ and $|\zeta_1|\leqs|\rho(\zeta)|^{\frac1q}$, uniformly with respect to $z$, $\zeta$ and $\lambda $. \end{fact} \pr We want to apply Fact \ref{facta}, so we first have to check that $|v_{\zeta,1}|\leqs |z_1|^{\frac12}$, uniformly with respect to $z$ and $\zeta$.\\ On the one hand we have $|v_{\zeta,1}|\eqs\left|\diffp{\rho}{\zeta_2}(\zeta)\right|\eqs|\zeta_2|\leqs |\zeta_1|^{\frac 12}$.\\ On the other hand, $z_1=\zeta_1+z^*_1\eta_{\zeta,1}+z^*_2v_{\zeta,1}$, thus \begin{eqnarray*} |\zeta_1|&\leq& |z_1^*|+ |z_2^*||v_{\zeta,1}|+|z_1|\\ &\leqs& \kappa |\rho(z)|+ \kappa |v_{\zeta,1}|^2+|z_1|\\ &\leqs& |z_1| + \kappa |v_{\zeta,1}|^2. \end{eqnarray*} Therefore, if $\kappa $ is small enough, $|v_{\zeta,1}|\leqs |z_1|^{\frac12}$ and $|\zeta_1|\leqs|z_1|$. Hence we can apply Fact \ref{facta}, which gives $|\lambda|\geqs |z_1|^{\frac q2}$, and since $|z_1|\geqs |\zeta_1|$ the first inequality is proved. The third inequality follows from the first one and from the fact that $|\lambda|\leqs|\rho(\zeta)|^{\frac12}$.\\ Fact \ref{facta} also gives $|z_2|\leqs |\rho(z)|^{\frac12}$ and since $|\rho (\zeta)|\eqs|\rho (z)|$, we have $$|\zeta_2|\leqs |\zeta_2-z_2|+|z_2|\leqs |\rho(\zeta)|^{\frac12}+|\rho(z)|^{\frac12}\leqs|\rho(\zeta)|^{\frac12}.\qed$$ Now we check the assumptions of Theorem \ref{th5}: for any $\kappa$-covering, $\kappa>0$ sufficiently small, and any function $g$ holomorphic and bounded on $X\cap D$, we prove that $c^{(2)}_{\kappa ,(\zeta_j)_{j\in\nn}}(g)\leqs\|g\|^2_{L^\infty(X\cap D)}$, uniformly with respect to $g$.\\ Let ${\cal U}_0$ be a neighborhood of the origin, let $c$, $\varepsilon_0$ and $\kappa$ be small positive real numbers and let ${\cal P}_{\kappa|\rho(\zeta^{(k)}_j)|}(\zeta^{(k)}_j)$, $k\in\nn$, $j\in\{1,\ldots, n_k\}$, be a $\kappa$-covering of $D\cap {\cal U}_0$ such that for all $k$ and all $j$, the point $\zeta_j^{(k)}$ belongs to $bD_{-(1-c\kappa)^k\varepsilon_0}$. We assume that $\kappa$ is so small that Fact \ref{factb} holds true and we set $\tilde \kappa=1-c\kappa$. For all $\zeta\in D$, the following inequality holds \begin{eqnarray*} |\rho(\zeta)|\int_{|z^*_1|<4\kappa |\rho (\zeta)|}\sum_{\lambda\in\Lambda_{\zeta +z^*_1\eta_\zeta ,v_\zeta }} \left|g_{\zeta +z^*_1\eta_\zeta ,v_\zeta }[\lambda]\right|^2dV(z^*_1)&\leqs &\|g\|_{L^\infty(X\cap D)}^2 |\rho (\zeta)|^3. \end{eqnarray*} This means that the corresponding estimate for $\zeta^{(k)}_j$ does not depend on $j$, and since we will sum these bounds over all $k$ and $j=1,\ldots, n_k$, we also need an upper bound for $n_k$. For any non-negative integer $k$, we denote by $\sigma_k$ the area measure on $bD_{-\tilde \kappa^k\varepsilon_0}$.
Since ${\cal P}_{\kappa|\rho(\zeta^{(k)}_j)|}(\zeta^{(k)}_j)$ is a $\kappa$-covering, for all $k$ we have, as in the proof of Proposition \ref{propmax}, \begin{eqnarray*} \sigma_k\left (bD_{-\tilde \kappa ^k\varepsilon _0}\right)&\geq&\sigma_k\left(bD_{-\tilde \kappa ^k\varepsilon _0}\cap \cup_{j=1}^{n_k}{\cal P}_{\kappa |\rho (\zeta_j^{(k)})|}(\zeta_j^{(k)})\right)\\ &\geq& \sum_{j=1}^{n_k} \sigma_k\left(bD_{-\tilde \kappa ^k\varepsilon _0}\cap {\cal P}_{\frac c{c_1}\kappa |\rho (\zeta_j^{(k)})|}(\zeta_j^{(k)})\right)\\ &\geqs& n_k \left(\tilde \kappa^k\varepsilon_0\right)^{2}. \end{eqnarray*} Therefore $n_k\leqs (\tilde \kappa^k\varepsilon_0)^{-2}$ and we have, uniformly with respect to $g$, \begin{eqnarray*} \lefteqn{\sum_{k=0}^\infty \sum_{j=1}^{n_k} |\rho(\zeta_j^{(k)})|\int_{|z_1^*|<4\kappa |\rho (\zeta_j^{(k)})|}\sum_{\lambda\in\Lambda_{\zeta^{(k)}_j +z^*_1\eta_{\zeta^{(k)}_j} ,v_{\zeta^{(k)}_j} }}\left|g_{\zeta_j^{(k)}+z^*_1\eta_{\zeta^{(k)}_j},v_{\zeta^{(k)}_j}}[\lambda]\right|^2dV(z^*_1)}\\ &&\hskip230pt\leqs\|g\|_{L^\infty(X\cap D)}^2 \sum_{k=0}^{\infty} n_k \left(\tilde\kappa^k\varepsilon _0\right)^3\\ &&\hskip230pt\leqs\|g\|_{L^\infty(X\cap D)}^2. \end{eqnarray*} Now we handle the case of divided differences of order 2. We set $$I(\zeta)=|\rho(\zeta)|^2\int_{|z^*_1|<4\kappa |\rho (\zeta)|} \sum_{ \over{\lambda_1,\lambda _2\in \Lambda _{\zeta +z^*_1\eta_\zeta,v_\zeta}}{\lambda _1\neq\lambda _2}} \left|g_{\zeta +z^*_1\eta_\zeta,v_\zeta}[\lambda_1,\lambda_2] \right|^2dV(z^*_1)$$ and we aim to prove that $\sum_{k=0}^{+\infty}\sum_{j=1}^{n_k} I(\zeta_j^{(k)})\leqs\|g\|^2_{L^\infty(X\cap D)}$.\\ If for all complex numbers $z^*_1$ such that $|z^*_1|\leq \kappa |\rho (\zeta)|$ we have $\#\Lambda _{\zeta +z^*_1\eta_\zeta,v_\zeta}<2$, then $I(\zeta)=0$. Otherwise, Fact \ref{factb} implies that $|\zeta_2|\leq K |\rho(\zeta)|^{\frac12}$ for some $K>0$ and that $|\lambda _1-\lambda _2|\geqs|\zeta_1|^{\frac32}$ for all $\lambda_1,\lambda_2$ distinct in $\Lambda _{\zeta+z^*_1\eta_\zeta,v_\zeta}$, $z_1^*\in\cc$ such that $|z_1^*|\leq \kappa|\rho (\zeta)|$. Therefore, for all such $\zeta$, we have \begin{eqnarray} I(\zeta) \leqs|\rho(\zeta )|^2\int_{|z_1^*|<4\kappa |\rho (\zeta)|} \frac{\|g\|^2_{L^\infty(X\cap D)}} {|\zeta_1|^3}dV(z^*_1)\leqs \|g\|^2_{L^{\infty}(X\cap D)} \frac{|\rho(\zeta)|^4}{|\zeta_1|^3}.\label{eq31} \end{eqnarray} Thus, when we denote by $Z^{(k)}$ the set $$Z^{(k)}=\{j\in\nn,\ \exists z^*_1\in\cc,\ |z^*_1|<\kappa|\rho (\zeta_j^{(k)})| \text{ and } \#\Lambda _{\zeta_j^{(k)} +z^*_1\eta_{\zeta_j^{(k)}},v_{\zeta_j^{(k)}}}=2\},$$ we have to estimate the sum $\sum_{k=0}^{+\infty} \sum_{j\in Z^{(k)} } \frac{(\tilde \kappa^k \varepsilon _0)^4}{|\zeta^{(k)}_{j,1}|^3}$.\\ We write $Z^{(k)}$ as $Z^{(k)}=\cup_{i=0}^{\infty} Z^{(k)}_i$ where $Z^{(k)}_{i}=\{j\in Z^{(k)},\ i\tilde\kappa^k \varepsilon _0\leq |\zeta^{(k)}_{j,1}|< (i+1)\tilde\kappa ^k\varepsilon _0\text{ and } |\zeta^{(k)}_{j,2}|\leq K (\tilde \kappa^k \varepsilon _0)^{\frac12} \}$, and we look for an upper bound of $\#Z_{i}^{(k)}$.
We have $$\sigma _k(bD_{-\tilde\kappa ^k\varepsilon _0}\cap \{z,\ \frac12i\tilde \kappa ^k\varepsilon _0\leq|z_1|\leq 2(i+1)\tilde \kappa ^k\varepsilon _0\text{ and } |z_2|\leq 2K(\tilde \kappa ^k\varepsilon _0)^{\frac12}\})\eqs (\tilde \kappa ^k\varepsilon _0)^2$$ and, if $\kappa$ is small enough: \begin{eqnarray*} \lefteqn{\sigma _k(bD_{-\tilde\kappa ^k\varepsilon _0}\cap \{z,\ \frac12i\tilde \kappa ^k\varepsilon _0\leq|z_1|\leq 2(i+1)\tilde \kappa ^k\varepsilon _0\text{ and } |z_2|\leq K(\tilde \kappa ^k\varepsilon _0)^{\frac12}\})}\\ &&\hskip200pt\geqs \sigma_k(\cup_{j\in Z_{i}^{(k)}} {\cal P}_{\kappa|\rho (\zeta^{(k)}_j)|}(\zeta_j^{(k)}) \cap bD_{-\tilde\kappa ^k\varepsilon _0})\\ &&\hskip200pt\geqs \#Z_i^{(k)} \cdot (\tilde \kappa ^k\varepsilon _0)^2. \end{eqnarray*} These last two inequalities imply that $\# Z_i^{(k)}$ is bounded by a constant which depends neither on $i$ nor on $k$. \\ For $j\in Z_0^{(k)}$, since $|\zeta^{(k)}_{j,1}|\geqs |\rho (\zeta_j^{(k)})|$, Inequality (\ref{eq31}) yields $I(\zeta_j^{(k)})\leqs \tilde\kappa ^k\varepsilon _0\|g\|^2_{L^\infty(X\cap D)}$, thus $$\sum_{k=0}^{+\infty} \sum_{j\in Z_0^{(k)}} I(\zeta^{(k)}_j)\leqs \|g\|^2_{L^{\infty}(X\cap D)}.$$ For $i>0$, we use directly (\ref{eq31}), which gives $$ \sum_{i=1}^{+\infty} \sum_{k=0}^{+\infty}\sum_{j\in Z_i^{(k)}}I(\zeta^{(k)}_j) \leqs \|g\|^2_{L^{\infty}(X\cap D)} \sum_{k=0}^{+\infty} \sum_{i=1}^{+\infty} \frac{(\tilde\kappa ^k\varepsilon _0)^4}{(i\tilde \kappa ^k\varepsilon _0)^3}\\ \leqs\|g\|^2_{L^{\infty}(X\cap D)}. $$ This proves that $c^{(2)}_{\kappa,(\zeta_j^{(k)})_{k\in\nn,j\in\{1,\ldots,n_k\}}}(g)$ is finite, and Theorem \ref{th6} now implies that $g$ admits an $L^2$-holomorphic extension on $D$. \par\medskip Now, for $q\geq 5$, we consider $g$ defined for $z$ in $X$ by $g(z)=\frac{z_2}{z_1^{\frac q2}}$. The function $g$ is holomorphic and bounded on $X$ because $|z_2|=|z_1|^{\frac q2}$ for all $(z_1,z_2)\in X$, but we will see that $g$ does not admit an $L^2$-holomorphic extension on $D$.\\ For $\varepsilon _0, \kappa,c>0$ small enough we set $\tilde \kappa =1-c\kappa $ and we denote by $\zeta^{(k)}_0=(x_k,0)$ the point of $\cc^2$ such that $\rho(\zeta_0^{(k)})= -\tilde \kappa ^k\varepsilon _0$. We have $x_k\eqs \tilde \kappa^k\varepsilon _0$ uniformly with respect to $k$, $\kappa$ and $\varepsilon _0$. We complete the sequence $(\zeta_0^{(k)})_{k\in\nn}$ so as to get a $\kappa$-covering ${\cal P}_{\kappa|\rho(\zeta^{(k)}_j)|}(\zeta^{(k)}_j)$, $k\in\nn$ and $j\in\{0,\ldots,n_k\}$, of a neighborhood of the origin. We set $w_1=(1,0)$ and $w_2=(0,1)$. For all $k$, $\eta_{\zeta^{(k)}_0}=w_1$, $v_{\zeta^{(k)}_0}=w_2$ and, for all $z_1$, we have $\Lambda_{\zeta^{(k)}_0+z_1w_1,w_2}=\{(z_1+x_k)^{\frac q2},-(z_1+x_k)^{\frac q2}\}$. So, if $\kappa$ is small enough, for all $k$ we have \begin{eqnarray*} \lefteqn{|\rho(\zeta _0^{(k)})|^2 \int_{|z_1|<4\kappa |\rho (\zeta _0^{(k)})|} \left|g_{\zeta_0^{(k)}+z_1w_1,w_2}\left[(z_1+x_k)^{\frac q2},-(z_1+x_k)^{\frac q2}\right]\right|^2dV(z_1)}\\ && \hskip 160pt \geqs (\tilde \kappa ^k\varepsilon _0)^2\int_{|z_1|<4\kappa |\rho (\zeta_0^{(k)})|} \frac1{|z_1+x_k|^q}dV(z_1)\\ &&\hskip 160pt \geqs (\tilde \kappa ^k\varepsilon _0)^{4-q}.
\end{eqnarray*} Since for $q\geq 5$ the series $\sum_{k\geq 0} (\tilde \kappa ^k\varepsilon _0)^{4-q}$ diverges, $c^{(2)}_{\kappa,(\zeta^{(k)}_j)_{k\in\nn,j\in\{0,\ldots, n_k\}}}(g)$ is not finite, and so Theorem \ref{th4} implies that $g$ does not have an $L^2$ holomorphic extension on $D$. \begin{example}[The example of Diederich-Mazzilli] Let $B_3$ be the unit ball of $\cc^3$, let $X=\{z=(z_1,z_2,z_3)\in\cc^3\ : \ z_1^2+z^q_2=0\}$ where $q\geq 10$ is an odd integer, and define the holomorphic function $f$ on $B_3$ by $$f(z)=\frac{z_1}{(1-z_3)^{\frac q4}}.$$ Then $f$ is bounded on $X\cap B_3$ and has no $L^2$ holomorphic extension on $B_3$. \end{example} This was shown in \cite{DiMa0} by Diederich and the second author. We will prove this result here with Theorem \ref{th4}. We set $\rho(\zeta )=|\zeta _1|^2+|\zeta_2|^2+|\zeta_3|^2-1$, and we denote by $w_1,w_2, w_3$ the canonical basis of $\cc^3$. For $\varepsilon _0$, $c$ and $\kappa $ suitable small positive constants for $X$ and $B_3$, we set $\tilde\kappa =1-c\kappa$. For any non-negative integer $j$, we denote by $\zeta_j=(0,0,\zeta_{j,3})$ the point of $\cc^3$ such that $\zeta_{j,3}$ is real and satisfies $\rho(\zeta_j)=-\tilde \kappa^j\varepsilon _0$. The point $\zeta _j$ can be chosen at the first step of the construction of a $\kappa $-covering of $X\cap B_3$ in a neighborhood of $(0,0,1)$, and so the Koranyi balls ${\cal P}_{\kappa|\rho(\zeta_j)|}(\zeta_j)$, $j\in\nn$, are extracted from a $\kappa$-covering. For all $j$ we have \begin{eqnarray*} |\rho(\zeta _j)|^2\int_{\over{|z_2|<(4\kappa|\rho(\zeta _j)|)^{\frac12}}{|z_3-\zeta _{j,3}|<4\kappa|\rho (\zeta _j)|} }\left|f_{\zeta_j +z_2 w_2+z_3w_3,w_1} \left[iz_2^{\frac q2},-iz_2^{\frac q2}\right]\right|^2dV(z_2,z_3)&\geqs & \tilde\kappa ^{j(5-\frac q2)} \end{eqnarray*} and thus, when $q\geq 10$, \begin{eqnarray*} \sum_{j=0}^{+\infty}|\rho(\zeta _j)|^2\int_{\over{|z_2|<(4\kappa|\rho(\zeta _j)|)^{\frac12}}{|z_3-\zeta _{j,3}|<4\kappa|\rho (\zeta _j)|} }\left|f_{\zeta_j +z_2w_2+z_3w_3,w_1} \left[iz_2^{\frac q2},-iz_2^{\frac q2}\right]\right|^2dV(z_2,z_3)&=&+\infty, \end{eqnarray*} since the general term of the series does not tend to $0$ ($5-\frac q2\leq 0$ exactly when $q\geq 10$). Theorem \ref{th4} then implies that $f$ does not have an $L^2$ holomorphic extension on $B_3$.
\section{Introduction} Exemplar-based texture synthesis (EBTS) has been a dynamic yet challenging topic in computer vision and graphics for the past decades~\cite{heeger1995pyramid,Efros1999,Wei2000,Efros2001,zhu1998filters, portilla2000parametric,galerne2011random,Galerne_texton_noise_cgf2017,raad2017survey,xie2017synthesizing}, which aims to produce new samples that are visually similar to a given texture exemplar. The main difficulty of EBTS is to efficiently synthesize texture samples that are not only perceptually similar to the exemplar, but also balance repeated and novel elements in the texture. To overcome this difficulty, two main categories of approaches have been proposed in the literature, \emph{i.e.}, patch-based methods~\cite{Efros1999,Wei2000,Efros2001,kaspar2015self} and methods relying on parametric statistical models~\cite{heeger1995pyramid,zhu1998filters,portilla2000parametric,galerne2011random,gatys2015texture}. Given a texture exemplar, patch-based methods regard small patches in the exemplar as basic elements, and generate new samples by copying pixels or patches from the exemplar to the synthesized texture under certain spatial constraints, such as the Markovian property~\cite{Efros1999,kaspar2015self,Efros2001}. These methods can produce new textures with high visual fidelity to the given exemplar, but they often result in verbatim copies, and few of them can be extended to dynamic textures, except~\cite{kwatra2003graphcut}. Moreover, despite their promising performance, they do little to model the underlying process of textures. Statistical parametric methods, in contrast, concentrate on exploring the underlying models of the texture exemplar; new texture images can then be synthesized by sampling from the learned texture model. These methods are better at balancing the repetition and innovation inherent in textures, but they usually fail to reproduce textures with highly structured elements. It is worth mentioning that a few of these methods can be extended to sound textures~\cite{mcdermott2009sound} and dynamic ones~\cite{xia2014synthesizing}. Some recent surveys on EBTS can be found in~\cite{wei2009state,raad2017survey}. Recently, parametric models have been revived by the use of {\em deep neural networks}~\cite{gatys2015texture, ulyanov2017improved,sendik2017deep,liu2016texture}. These models employ deep ConvNets that are pretrained on large-scale image datasets, instead of handcrafted filters, as feature extractors, and generate new samples by seeking images that maximize certain similarities between their deep features and those of the exemplar. Although these methods show great improvements over traditional parametric models, there are still two unsolved or only partially solved problems: 1) It is difficult to extend these methods to other types of textures, such as dynamic and sound textures, since they rely on ConvNets pre-trained on large-scale datasets, such as ImageNet, which are difficult to obtain in the video or sound domains. 2) These models cannot synthesize textures with non-local structures, as the optimization algorithm is likely to be trapped in local minima where non-local structures are not preserved. A common remedy is to use extra penalty terms, such as the Fourier spectrum~\cite{liu2016texture} or the correlation matrix~\cite{sendik2017deep}, but these terms bring in extra hyper-parameters and are slow to optimize.
In order to address these problems, we propose a new texture model named {\em conditional generative ConvNet} (cgCNN) by integrating deep texture statistics and the probabilistic framework of the generative ConvNet (gCNN)~\cite{xie2016theory}. Given a texture exemplar, cgCNN first defines an energy-based conditional distribution using deep statistics of a trainable ConvNet, which is then trained by maximum likelihood estimation (MLE). New textures can be synthesized by sampling from the learned conditional distribution. Unlike previous texture models that rely on pretrained ConvNets, cgCNN \emph{learns} the weights of the ConvNet for each input exemplar. It therefore has two main advantages: 1) It allows us to synthesize image, dynamic and sound textures in a unified manner. 2) It can synthesize textures with non-local structures without using extra penalty terms, as it is easier for the sampling algorithm to escape from local minima. We further present two forms of our cgCNN model, \emph{i.e.} the canonical cgCNN (c-cgCNN) and the forward cgCNN (f-cgCNN), by exploiting two different sampling strategies. We show that these two forms of cgCNN have strong theoretical connections with previous texture models. Specifically, c-cgCNN uses Langevin dynamics for sampling, and it can synthesize highly non-stationary textures, while f-cgCNN uses a fully convolutional generator network as a fast approximate sampler, and it can synthesize arbitrarily large stationary textures. We further show that \emph{Gatys' method~\cite{gatys2015texture} and TextureNet~\cite{ulyanov2017improved} are special cases of c-cgCNN and f-cgCNN respectively.} In addition, we derive a concise texture inpainting algorithm based on cgCNN, which iteratively searches for a template in the uncorrupted region and synthesizes a texture patch according to the template. Our main contributions are thus summarized as follows: \begin{itemize} \item[-] We propose a new texture model named cgCNN which combines deep statistics and the probabilistic framework of the gCNN model. Instead of relying on pretrained ConvNets as previous deep texture models do, the proposed cgCNN learns the weights of the ConvNet adaptively for each input exemplar. As a result, cgCNN can synthesize high quality dynamic, sound and image textures in a unified manner. \item[-] We present two forms of cgCNN and show their effectiveness in texture synthesis and expansion: c-cgCNN can synthesize highly non-stationary textures without extra penalty terms, while f-cgCNN can synthesize arbitrarily large stationary textures. We also show their strong theoretical connections with previous texture models. Note that f-cgCNN is the first deep texture model that enables us to expand dynamic or sound textures. \item[-] We present a simple but effective algorithm for texture inpainting based on the proposed cgCNN. To our knowledge, it is the first neural algorithm for inpainting sound textures. \item[-] We conduct extensive experiments\footnote{All experiments can be found at \url{captain.whu.edu.cn/cgcnn-texture}.} in synthesis, expansion and inpainting of various types of textures using cgCNN, and demonstrate that our model achieves results better than, or at least comparable to, those of the state-of-the-art methods. \end{itemize} The rest of this paper is organized as follows: Sec.~\ref{relatedwork} reviews some related works. Sec.~\ref{Preliminaries} recalls four baseline models.
Sec.~\ref{Sec_our_model} details cgCNN's formulation and training algorithm, and provides some theoretical analysis of the model. Sec.~\ref{Dynamic_and_Sound} uses cgCNN for the synthesis of various types of textures and adapts the synthesis algorithm to texture inpainting. Sec.~\ref{experiments} presents results that demonstrate the effectiveness of cgCNN in synthesizing, expanding and inpainting all three types of textures. Sec.~\ref{conclusion} draws some conclusions. \section{Related Work} \label{relatedwork} One seminal work on parametric EBTS was made by Heeger and Bergen~\cite{heeger1995pyramid}, who proposed to synthesize textures by matching the marginal distributions of the synthesized and the exemplar texture. Subsequently, Portilla \emph{et al.}~\cite{portilla2000parametric} extended this model by using more and higher-order measurements. Another remarkable work at that time was the FRAME model proposed by Zhu \emph{et al.}~\cite{zhu1998filters}, which is a framework unifying the random field model and the maximum entropy principle for texture modeling. Other notable works include~\cite{galerne2011random,Galerne_texton_noise_cgf2017,Peyre2009}. These methods built a solid theoretical background for texture synthesis, but are limited in their ability to synthesize structured textures. Recently, Gatys \emph{et al.}~\cite{gatys2015texture} made a breakthrough in texture modelling by using deep neural networks. This model can be seen as an extension of Portilla's model~\cite{portilla2000parametric}, where the linear filters were replaced by a pretrained deep ConvNet. Gatys' method was subsequently extended to style transfer~\cite{gatys2016image}, where the content image is forced to have deep statistics similar to those of the style image. In more recent works, Gatys' method has been extended to synthesizing textures with non-local structures by using additional constraints such as the correlation matrix~\cite{sendik2017deep} and the spectrum~\cite{liu2016texture}. However, such constraints bring in extra hyper-parameters that require manual tuning, and are slow to optimize~\cite{sendik2017deep} or cause spectrum-like noise~\cite{liu2016texture}. In contrast, our model can synthesize non-local structures without the aid of these constraints due to its effective sampling strategy. In order to accelerate the synthesis process and synthesize textures larger than the input, Ulyanov \emph{et al.}~\cite{ulyanov2017improved} and Johnson \emph{et al.}~\cite{johnson2016perceptual} proposed to combine a fully convolutional generator with Gatys' model, so that textures can be synthesized in a fast forward pass of the generator. Similar to the model of Ulyanov \emph{et al.}~\cite{ulyanov2017improved}, our model also uses a generator for fast sampling and texture expansion. In contrast to Gatys' method, which relies on pretrained ConvNets, Xie \emph{et al.}~\cite{xie2016theory} proposed a generative ConvNet (gCNN) model that can learn the ConvNet and synthesize textures simultaneously. In subsequent works, Xie \emph{et al.}~\cite{xie2018cooperative} proposed CoopNet by combining gCNN and a latent variable model. This model was later extended to video~\cite{xie2017synthesizing} and 3D shape~\cite{xie2018learning} synthesis. Our model can be regarded as a combination of Gatys' method and gCNN, as it utilizes the idea of deep statistics in Gatys' method and the probabilistic framework of gCNN.
Considering dynamic texture synthesis, it is common to use linear auto-regressive models~\cite{doretto2003dynamic,xia2014synthesizing} to model the appearance and dynamics. Later work~\cite{yang2018learning} compared these methods quantitatively by studying the synthesizability of the input exemplars. Recent works leveraged deep learning techniques for synthesizing dynamic textures. For instance, Tesfaldet \emph{et al.}~\cite{tesfaldet2018two} proposed to combine Gatys' method~\cite{gatys2015texture} with an optical flow network in order to capture the temporal statistics. In contrast, our model does not require the aid of other nets, as it can flexibly use spatio-temporal ConvNets for spatial and temporal modelling. As for sound texture synthesis, classic models~\cite{mcdermott2009sound} are generally based on the wavelet framework and use handcrafted filters to extract temporal statistics. Recently, Antognini \emph{et al.}~\cite{antognini2019audio} extended Gatys' method to sound texture synthesis by applying a random network to the spectrograms of sound textures. In contrast, our model learns the network adaptively instead of fixing it to random weights, and it is applied directly to raw waveforms. The texture inpainting problem is a special case of the image or video inpainting problem, where the inpainted image or video is assumed to be a texture. Igehy~\cite{igehy1997image} transferred Heeger and Bergen's texture synthesis algorithm~\cite{heeger1995pyramid} to an inpainting algorithm. Our inpainting algorithm shares important ideas with Igehy's method~\cite{igehy1997image}, as we also adopt an inpainting-by-synthesis scheme. Other important texture inpainting methods include conditional Gaussian simulation~\cite{galerne2017texture} and PatchMatch-based methods~\cite{liu2013exemplar-based, barnes2009patchmatch}. \section{Preliminaries} \label{Preliminaries} This section recalls several baseline models on which cgCNN is built. The theoretical connections between these models and cgCNN will be discussed in Sec.~\ref{Sec_our_model}. Given an RGB color image texture exemplar $f_0 \in \mathbb{R}^{H \times W \times 3}$, where $H$ and $W$ are the height and width of the image, texture synthesis aims to generate new samples $f\in \mathbb{R}^{H \times W \times 3}$ that are visually similar to $f_0$. \paragraph{\bf Gatys' method~\cite{gatys2015texture}} \emph{Gatys' method} uses a pretrained deep ConvNet as a feature extractor. For an input texture exemplar, the Gram matrices of the feature maps at selected layers are first calculated. New texture samples are then synthesized by matching the Gram matrices of the synthesized texture and the exemplar. Formally, Gatys' method tries to solve the following optimization problem: \begin{equation} \label{Gatys_model} \min_{f} L_{G}(f,\, f_0). \end{equation} The objective function $L_{G}$ is defined as: \begin{equation} \label{Gatys} L_{G}(f,\, f_0)=\sum_{l} \big\|\mathbf{G}\big(\mathcal{F}^{(l)}(f) \big) - \mathbf{G}\big(\mathcal{F}^{(l)}(f_0) \big)\big\|_F, \end{equation} where $\mathcal{F}$ is a pretrained ConvNet, $\mathcal{F}^{(l)}$ is the feature map at layer $l$, and $\|\cdot\|_F$ is the Frobenius norm. $\mathbf{G}$ is the Gram matrix defined as: \begin{equation}\label{Gram} \mathbf{G} = F^{T} F \, \in \mathbb{R}^{C \times C}, \end{equation} where $F = \mathcal{F}^{(l)}(f) \in \mathbb{R}^{N \times C}$ is a feature map with $C$ channels and $N$ elements in each channel.
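For concreteness, the following is a minimal PyTorch-style sketch of the Gram statistics of Eqn.~\eqref{Gram} and the loss $L_G$ of Eqn.~\eqref{Gatys}. The helper \texttt{feature\_maps}, standing for the pretrained extractor $\mathcal{F}$, the layer list and the tensor shapes are illustrative assumptions rather than the exact configuration of~\cite{gatys2015texture}.
\begin{verbatim}
import torch

def gram(F):
    # F: one feature map reshaped to (N, C); returns G = F^T F of shape (C, C)
    return F.t() @ F

def gatys_loss(feature_maps, f, f0, layers):
    # feature_maps(x, l): layer-l features of image x, reshaped to (N, C).
    # L_G is the sum over layers of Frobenius norms of Gram differences.
    loss = f.new_zeros(())
    for l in layers:
        loss = loss + torch.linalg.norm(
            gram(feature_maps(f, l)) - gram(feature_maps(f0, l)))
    return loss
\end{verbatim}
In practice, the exemplar's Gram matrices depend only on $f_0$ and can therefore be computed once and cached.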
This model is trained by gradient descent using back-propagation. Each step follows \begin{equation}\label{gatys_Langevin} f_{t+1} = f_{t}- \epsilon \frac{\partial L_{G}(f_{t},\, f_0)}{\partial f_{t}}, \end{equation} where $\epsilon$ is the learning rate. \paragraph{\bf TextureNet~\cite{ulyanov2017improved}} \emph{TextureNet} is a forward version of Gatys' method. It learns a generator network $g_{\mathbf \theta}$ with trainable weights $\mathbf \theta$, which maps a sample of random noise $z \sim \mathcal{N}(0, I)$ to a local minimum of Eqn.~\eqref{Gatys}. This amounts to solving the following optimization problem: \begin{equation} \label{texture_net} \min_{\theta} \mathbb{E}_{z \sim \mathcal{N}(0, I)} \Big(L_{G}(g_{\mathbf \theta}(z),\, f_0)\Big). \end{equation} $g_{\mathbf \theta}$ is trained by gradient descent with approximate gradients: \begin{equation} \label{texture_net_train} \frac{\partial L_{TN}(\theta)}{\partial \theta} = \frac{1}{N} \sum_{i} \frac{\partial L_{G}(g_{\mathbf \theta}(z_i),\, f_0)}{\partial \theta}, \end{equation} where $z_1,...,z_N$ are $N$ samples from $\mathcal{N}(0, I)$. \paragraph{\bf Generative ConvNet (gCNN)~\cite{xie2016theory}} \emph{gCNN} is defined in a more general setting. It aims to estimate the underlying distribution of a set of images $\{f_k\}_{k=0}^K$ and to generate new images by sampling from this distribution. In our work, we only consider the specific case where the input set contains only one image $f_0$, \emph{i.e.} $K=0$, and $f_0$ is a stationary texture exemplar. gCNN defines a distribution of $f$ in image space: \begin{equation}\label{gCNN_model} \mathbf{P}(f; \,\alpha) = \frac{1}{Z(\alpha)} e^{-E_{g}(f; \, \alpha)}, \end{equation} where $Z(\alpha)=\sum_f e^{-E_{g}(f; \, \alpha)}$ is the normalization factor. $E_{g}(f; \,\alpha)$ is the energy function defined by \begin{equation} \label{gCNN_energy} E_{g}(f; \,\alpha) = - \sum \mathcal{F}_{\alpha}(f), \end{equation} where $\mathcal{F}_{\alpha}$ is the output of a ConvNet with learnable weights $\alpha$. gCNN is trained by maximum likelihood estimation (MLE). \paragraph{\bf CoopNet~\cite{xie2018cooperative}} \emph{CoopNet} extends gCNN by combining gCNN with a latent variable model~\cite{han2017alternating} which takes the form of \begin{equation} f = g_{l}(z; \theta)+ \sigma , \quad z \sim \mathcal{N}(0, I), \quad \sigma \sim \mathcal{N}(0, \delta ^2), \end{equation} where $g_{l}$ is a forward ConvNet parametrized by $\theta$ and $f$ is the synthesized image. $g_{l}$ is trained by MLE, which iterates the following four steps: \begin{itemize} \item[1)] Generate samples $f = g_{l}(z; \theta)$ using random $z\sim\mathcal{N}(0, I)$. \item[2)] Feed $f$ to gCNN and run $l_{d}$ steps of Langevin dynamics for $f$: $\hat{f}=f -\epsilon \frac{\partial}{\partial f} E_{g}(f) + noise$. \item[3)] Run $l_{g}$ steps of Langevin dynamics for $z$: $\hat{z}=z - \epsilon \frac{\partial}{\partial z} ||\hat{f} - g_{l}(z; \theta)||^2 + noise$. \item[4)] Update $\theta$ using gradient descent: $\theta \leftarrow \theta - \epsilon \frac{\partial}{\partial \theta} ||\hat{f} - g_{l}(\hat{z}; \theta)||^2$. \end{itemize} \section{Conditional generative ConvNet} \label{Sec_our_model} In this section, we first present the definition of our conditional generative ConvNet (cgCNN) model, and then explore two forms of cgCNN, \emph{i.e.} the canonical cgCNN (c-cgCNN) and the forward cgCNN (f-cgCNN). Finally, we conclude this section with some theoretical explanations of cgCNN.
\subsection{Model formulation} Let $f_0$ represent an image, dynamic or sound texture exemplar, and note that the shape of $f_0$ depends on its type. Specifically, $f_0 \in \mathbb{R}^{H \times W \times T \times 3}$ represents a dynamic texture exemplar, $f_0 \in \mathbb{R}^{H \times W \times 3}$ represents an image texture exemplar, and $f_0 \in \mathbb{R}^{ T }$ represents a sound texture exemplar, where $H \times W$ and $T$ are the spatial and temporal sizes. Given a texture exemplar $f_0$, cgCNN defines a conditional distribution of the synthesized texture $f$: \begin{equation}\label{our_model} \mathbf{P}(f \,|\, f_0; \,w) = \frac{1}{Z(w)} e^{-E_{cg}(f,\, f_0; \,w)}, \end{equation} where $Z(w)=\sum_f e^{-E_{cg}(f,f_0; w)} $ is the normalization factor. $E_{cg}(f,\, f_0; \,w)$ is the energy function, which is supposed to capture the visual difference between $f$ and $f_0$ by assigning lower values to $f$'s that are visually closer to $f_0$. As an analogue of $L_{G}$, we define $E_{cg}$ by \begin{equation} \label{our_energy} E_{cg}(f,\, f_0; \,w) = \sum_l \| \mathbf{S}(\mathcal{D}^{(l)}_{w}(f_0)) - \mathbf{S}(\mathcal{D}^{(l)}_{w}(f)) \|_F, \end{equation} where $\mathcal{D}_{w}$ is a deep network with learnable weights $w$, and $\mathcal{D}_{w}^{(l)}$ is the feature map at the $l$-th layer. $\mathbf S$ is a statistical measurement, \emph{e.g.} the Gram matrix $\mathbf{G}$ defined in Eqn.~\eqref{Gram}. We also test the spatial mean vector $\mathbf{M}$ as an alternative measurement in our experiment section. For simplicity, in the rest of this paper, we denote $\mathbf{P}(f \,|\, f_0; \,w)$ by $\mathbf{P}(w)$ when the meaning is clear from the text. \subsection{Training and sampling} The objective of training cgCNN is to estimate the conditional distribution $\mathbf{P}(w)$ using only one input datum $f_0$. This is achieved by minimizing the KL divergence between the empirical data distribution, which is a Kronecker delta function $\delta_{f_0}$, and the estimated distribution $\mathbf{P}(w)$. The KL divergence $KL(w)$ can be written as: \begin{align} KL(w) &= KL(\delta_{f_0} || \mathbf{P}(w)) \nonumber \\ &= -H(\delta_{f_0}) - \log \mathbf P(f_0\,|\,f_0; \,w) \nonumber \\ &= -\log \mathbf P(f_0\,|\,f_0; \,w) \nonumber \\ &= \log Z(w), \end{align} where $H(\cdot)$ denotes the entropy, $H(\delta_{f_0})=0$, and the last equality holds because $E_{cg}(f_0,\,f_0;\,w)=0$ for every $w$. Note that minimizing $KL(w)$ is equivalent to MLE, where the log-likelihood $L(w)$ is defined as the log-likelihood of the input $f_0$ given $f_0$ itself as the condition: \begin{equation} L(w) = \log \mathbf P(f_0\,|\,f_0; \,w) = -\log Z(w). \end{equation} For consistency of notation, in the rest of this paper we use $KL(w)$ instead of $L(w)$ as the objective function. The gradient of $KL(w)$ can be written as follows: \begin{equation}\label{our_gradient_L} \frac{\partial KL(w)}{\partial w} = \mathbb{E}_{f \sim \mathbf{P}(w)}\Big(-\frac{\partial E_{cg}(f, \, f_0; \, w)}{\partial w} \Big). \end{equation} Note that the expectation term $\mathbb{E}_{f \sim \mathbf{P}(w)}(\cdot)$ in Eqn.~\eqref{our_gradient_L} is analytically intractable and has to be approximated by the Monte Carlo method. Suppose we have $K$ samples $f^{(1)},...,f^{(K)}$ drawn from $\mathbf{P}(w)$; then the gradient of $KL(w)$ can be approximated as: \begin{equation}\label{our_gradient_L2} \frac{\partial KL(w)}{\partial w} = -\frac{1}{K}\sum_{k=1}^{K} \frac{\partial E_{cg}(f^{(k)}, \, f_0; \,w)}{\partial w}.
\end{equation} We can then minimize $KL(w)$ using gradient descent according to Eqn.~\eqref{our_gradient_L2}. Therefore, the key to training cgCNN is sampling from $\mathbf{P}(w)$. We use 1) Langevin dynamics and 2) a generator net for sampling, which lead to c-cgCNN and f-cgCNN respectively. \subsubsection{c-cgCNN} c-cgCNN uses Langevin dynamics to sample from $\mathbf{P}(w)$. Specifically, starting from a random noise $f$, it uses the following rule to update $f$: \begin{equation}\label{our_Langevin} f_{t+1} = f_{t}- \frac{\epsilon^2}{2} \frac{\partial E_{cg}(f_t, \, f_0; \, w)}{\partial f_t} + \epsilon N_t, \end{equation} where $f_{t}$ is the sample at step $t$, $\epsilon$ is the step size, and $N_t \sim \mathcal{N}(0, 1)$ is a Gaussian noise. A training algorithm for c-cgCNN can be derived by combining the Langevin sampling in Eqn.~\eqref{our_Langevin} and the approximate gradient in Eqn.~\eqref{our_gradient_L2}. Starting from a random noise $f$, the algorithm iteratively goes through a Langevin sampling step and a $\mathcal{D}$-learning step: \begin{itemize} \item[-] {\bf Langevin sampling:} draw samples using Langevin dynamics according to Eqn.~\eqref{our_Langevin}. \item[-] {\bf $\mathcal{D}$-learning:} update network $\mathcal{D}$ using the approximate gradient according to Eqn.~\eqref{our_gradient_L2}. \end{itemize} The detailed training process is presented in Alg.~\ref{our_algorithm_1}. \begin{algorithm}[h] \caption{Training and sampling from c-cgCNN}\label{our_algorithm_1} \begin{algorithmic} \Require a texture exemplar $f_0$, Langevin sampling steps $N$, training steps $T$, and the number of synthesized textures $K$. \Ensure Synthesized textures $\hat{f}$ and learned network $\mathcal{D}_w$. \vspace{5mm} \State {\bf Initialize} $t \leftarrow 0$; \; $\hat{f}^{(k)} \leftarrow \mathcal{N}(0, \, 1), \, k=1, \ldots, K$; \; \For{$t= 1, \ldots, T$} \State {\bf Langevin sampling:} Run $N$ Langevin steps for all $K$ textures $\{\hat{f}^{(k)}\}_{k=1}^K$. Each step follows Eqn.~\eqref{our_Langevin}. \State {\bf $\mathcal{D}$-learning:} Update network $\mathcal{D}_w$: $w \leftarrow w - \frac{\partial KL(w)}{\partial w}$, where $\frac{\partial KL(w)}{\partial w}$ is calculated according to Eqn.~\eqref{our_gradient_L2}. \EndFor \end{algorithmic} \end{algorithm} \subsubsection{f-cgCNN} The Langevin dynamics used in c-cgCNN is slow and may be the bottleneck of Alg.~\ref{our_algorithm_1}. As an alternative, we may also use a generator net as a fast approximate sampler of $\mathbf{P}(w)$. Specifically, we introduce a generator network $\mathcal{G}_{\theta}$ with learnable weights $\theta$, which maps the normal distribution $\mathcal{N}(0, I)$ to a parametrized distribution $\mathbf{Q}(\theta)$. The training objective is to match $\mathbf{Q}(\theta)$ and $\mathbf{P}(w)$, so that samples of $\mathbf{P}(w)$ can be approximated by samples of $\mathbf{Q}(\theta)$. In other words, once $\mathcal{G}_{\theta}$ is trained, approximate samples of $\mathbf{P}(w)$ can be drawn by forwarding a noise $z\sim\mathcal{N}(0, I)$ through network $\mathcal{G}_{\theta}$, which is much faster than the Langevin dynamics in Eqn.~\eqref{our_Langevin}.
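Before formalizing the generator's objective, the canonical variant can be made concrete in code. The following is a minimal PyTorch-style sketch of one outer iteration of Alg.~\ref{our_algorithm_1}: a few Langevin updates following Eqn.~\eqref{our_Langevin}, then one $\mathcal{D}$-learning step based on Eqn.~\eqref{our_gradient_L2}. The callable \texttt{energy}, standing for $E_{cg}$ computed through $\mathcal{D}_w$, the optimizer and all hyper-parameter values are illustrative assumptions.
\begin{verbatim}
import torch

def langevin_step(f, f0, energy, eps=0.01):
    # One Langevin update: f <- f - (eps^2 / 2) * dE/df + eps * N(0, I)
    f = f.detach().requires_grad_(True)
    grad = torch.autograd.grad(energy(f, f0), f)[0]
    return (f - 0.5 * eps ** 2 * grad + eps * torch.randn_like(f)).detach()

def train_iteration(samples, f0, energy, optimizer, n_langevin=10):
    # Langevin sampling: update each of the K synthesized textures
    for _ in range(n_langevin):
        samples = [langevin_step(f, f0, energy) for f in samples]
    # D-learning: since dKL/dw = -(1/K) sum_k dE(f_k, f0; w)/dw,
    # descending KL(w) amounts to raising the energy of the samples
    optimizer.zero_grad()
    loss = -sum(energy(f, f0) for f in samples) / len(samples)
    loss.backward()
    optimizer.step()
    return samples
\end{verbatim}
Note that $E_{cg}(f_0,f_0;w)=0$ for every $w$, so the exemplar itself contributes no gradient: training only raises the energy of the current samples, which is the mechanism, discussed below, that helps the samples escape local minima.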
Formally, network $\mathcal{G}_{\theta}$ is trained by minimizing the KL divergence between $\mathbf{Q}(\theta)$ and $\mathbf{P}(w)$: \begin{align} \label{train_G} KL(\theta)&=KL(\mathbf{Q(\theta)}||\mathbf{P}(w)) \nonumber \\ &=-H(\mathbf{Q}(\theta))-\mathbb{E}_{f \sim \mathbf{Q}(\theta)}\log\mathbf{P}(f \,|\, f_0; \,w) \nonumber \\ &=-H(\mathbf{Q}(\theta))+\mathbb{E}_{f \sim \mathbf{Q}(\theta)}E_{cg}(f, \, f_0; \, w) + const. \end{align} The first term $H(\mathbf{Q}(\theta))$ in Eqn.~\eqref{train_G} is the entropy of the distribution $\mathbf{Q}(\theta)$, which is analytically intractable. Following TextureNet~\cite{ulyanov2017improved}, we use the Kozachenko-Leonenko estimator (KLE)~\cite{kozachenko1987sample} to approximate this term. Given $K$ samples $f^{(1)},...,f^{(K)}$ drawn from $\mathbf{Q}(\theta)$, KLE is defined as: \begin{equation} KLE(\theta) = \sum_{0<i,j<K}||f^{(i)} - f^{(j)}||_F. \end{equation} The second term in Eqn.~\eqref{train_G} is an expectation of our energy function $E_{cg}$. It can be approximated by taking the average over a batch of samples of $\mathbf{Q}(\theta)$. Now, since both terms in Eqn.~\eqref{train_G} can be approximated, the gradient of $KL(\theta)$ can be calculated as: \begin{equation} \label{g_gradient} \frac{\partial KL(\theta)}{\partial \theta} = - \frac{\partial KLE(\theta)}{\partial \theta} + \frac{1}{K}\sum_{k=1}^{K} \frac{\partial E_{cg}(\mathcal{G}_{\theta}(z_k), \, f_0; \,w)}{\partial \theta} , \end{equation} where $z_1, z_2, ... ,z_K$ are $K$ samples drawn from $\mathcal{N}(0, I)$. The complete training algorithm of f-cgCNN can be derived by training networks $\mathcal{G}$ and $\mathcal{D}$ jointly. Formally, the goal is to match the three distributions $\mathbf{Q}(\theta)$, $\mathbf{P}(w)$ and $\delta_{f_0}$ by optimizing the following objective function: \begin{equation} \min_{\mathcal{D}_{w}}\min_{\mathcal{G}_{\theta}} KL(\theta) + KL(w). \end{equation} To achieve this goal, f-cgCNN is trained by iteratively going through the following three steps: \begin{itemize} \item[-] {\bf $\mathcal{G}$-synthesis:} generate $K$ samples using network $\mathcal{G}$. \item[-] {\bf $\mathcal{D}$-learning:} update network $\mathcal{D}$ using the approximate gradient according to Eqn.~\eqref{our_gradient_L2}. \item[-] {\bf $\mathcal{G}$-learning:} update network $\mathcal{G}$ using the approximate gradient according to Eqn.~\eqref{g_gradient}. \end{itemize} The detailed algorithm is presented in Alg.~\ref{our_algorithm_2}. \begin{algorithm}[h] \caption{Training f-cgCNN}\label{our_algorithm_2} \begin{algorithmic} \Require a texture exemplar $f_0$, training steps $T$, batch size $K$. \Ensure learned network $\mathcal{D}_w$ and learned network $\mathcal{G}_\theta$. \vspace{5mm} \State {\bf Initialize} $t \leftarrow 0$; \; \For{$t= 1, \ldots, T$} \State {\bf $\mathcal{G}$-synthesis:} Sample a batch of $K$ noise vectors $z_1$, ..., $z_K$ from $\mathcal{N}(0, I)$, then generate $\mathcal{G}_{\theta}(z_1)$, ... , $\mathcal{G}_{\theta}(z_K)$. \State {\bf $\mathcal{D}$-learning:} Update network $\mathcal{D}_w$: $w \leftarrow w - \frac{\partial KL(w)}{\partial w}$, where $\frac{\partial KL(w)}{\partial w}$ is calculated according to Eqn.~\eqref{our_gradient_L2}. \State {\bf $\mathcal{G}$-learning:} Update network $\mathcal{G}_\theta$: $\theta \leftarrow \theta - \frac{\partial KL(\theta)}{\partial \theta}$, where $\frac{\partial KL(\theta)}{\partial \theta}$ is calculated according to Eqn.~\eqref{g_gradient}.
\subsection{Theoretical understanding of cgCNN} We present some theoretical understanding of cgCNN by relating it to other neural models. We first point out that cgCNN is conceptually related to GAN~\cite{goodfellow2014generative}, as it can be written in a min-max adversarial form. Then we show that: 1) c-cgCNN and f-cgCNN are generalizations of Gatys' method~\cite{gatys2015texture} and TextureNet~\cite{ulyanov2017improved} respectively; 2) c-cgCNN is a variation of gCNN~\cite{xie2016theory} with extra deep statistics, and the forward structures in f-cgCNN and CoopNet are consistent. The main properties of these models are summarized in Tab.~\ref{model_comparison}. \subsubsection{An adversarial interpretation} The adversarial form of f-cgCNN can be written as: \begin{equation} \label{Our2} \min_{\mathcal{G}_\theta} \max_{\mathcal{D}_w} \mathbb{E}_{z\sim \mathcal{N}(0, I)} [ E_{cg}(\mathcal{G}_{\theta}(z) , \, f_0; \,w) ]. \end{equation} This adversarial form has an intuitive explanation: network $\mathcal{G}_\theta$ tries to synthesize textures that are more visually similar to the input exemplar, while network $\mathcal{D}_w$ tries to detect the differences between them. The training process ends when the adversarial game reaches an equilibrium. Similarly, we have the min-max form of c-cgCNN: \begin{equation} \label{Our1} \min_{f} \max_{\mathcal{D}_w} E_{cg}(f, \, f_0; \,w), \end{equation} where the synthesized texture $f$ plays the role that is played by $\mathcal{G}_\theta$ in f-cgCNN. \subsubsection{Related to Gatys' method and TextureNet} \label{model_compare} It is easy to see that c-cgCNN is a generalization of Gatys' method with an extra step to learn the network $\mathcal{D}$: if we fix network $\mathcal{D}$ to be a pretrained ConvNet with weights $w_0$ in Eqn.~\eqref{Our1}, c-cgCNN reduces to $\min_{f} E_{cg}(f, \, f_0; \,w_0)$, which is exactly Gatys' method as defined in Eqn.~\eqref{Gatys_model}. Furthermore, since f-cgCNN and TextureNet are built on c-cgCNN and Gatys' method respectively, and they use the same forward structures, we can conclude that f-cgCNN is a generalization of TextureNet as defined in Eqn.~\eqref{texture_net}. In summary, we have the following proposition: \begin{proposition} Gatys' method and TextureNet are special cases of c-cgCNN and f-cgCNN respectively, where the net $\mathcal{D}$ is fixed to be a pretrained ConvNet. \end{proposition} Compared to Gatys' method, samples of c-cgCNN are less likely to be trapped in local minima for too long, because the $\mathcal{D}$-learning step always seeks to increase the energy of the current samples. For example, if $f_{t}$ is a local minimum at step $t$, the subsequent $\mathcal{D}$-learning step will increase the energy of $f_{t}$, so the energy of $f_{t}$ may be higher than that of its neighborhood at the beginning of step $t+1$, and the Langevin steps will sample an $f_{t+1}$ different from $f_{t}$. In our experiments, we find this property enables us to synthesize highly structured textures without extra penalty terms. Unlike TextureNet and Gatys' method, both c-cgCNN and f-cgCNN can synthesize other types of textures besides image textures, because they do not rely on pretrained ConvNets. In addition, thanks to their forward structures, both f-cgCNN and TextureNet can synthesize textures that are larger than the input.
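The proposition can be read directly off a few lines of pseudo-PyTorch: freezing the descriptor at pretrained weights and optimizing only the image recovers Gatys-style synthesis, while re-enabling the $\mathcal{D}$-learning step recovers c-cgCNN. The VGG feature extractor below is only an example of a pretrained ConvNet, not the specific network used by~\cite{gatys2015texture}.
\begin{verbatim}
import torch
from torchvision.models import vgg19

# Fix D_w to a pretrained ConvNet: with these weights frozen,
# min_f E_cg(f, f0; w0) is exactly Gatys-style synthesis.
D = vgg19(pretrained=True).features.eval()
for p in D.parameters():
    p.requires_grad_(False)   # w = w0 stays fixed

f = torch.randn(1, 3, 256, 256, requires_grad=True)
opt = torch.optim.Adam([f], lr=0.001)
# ...optimize E_cg(f, f0; w0) w.r.t. f only; making D's weights
# trainable again turns this loop back into c-cgCNN.
\end{verbatim}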
\subsubsection{Related to gCNN and CoopNet} In general, c-cgCNN can be regarded as a variation of gCNN for texture synthesis. It should be noticed that the energy $E_g$ defined in gCNN does not involve any deep statistics, hence it can be used to synthesize both texture and non-texture images, such as human faces. In contrast, the energy $E_{cg}$ defined in cgCNN incorporates deep statistics (Gram matrix or mean vector) specifically designed for texture modelling, hence it is more powerful in texture synthesis but cannot handle non-texture images. CoopNet uses a latent variable model as the forward structure to accelerate the Langevin dynamics in gCNN. Note that the forward structures in CoopNet and f-cgCNN are consistent, as they both seek to learn the distribution defined by their respective backward structures, \emph{i.e.}, gCNN and cgCNN. Furthermore, they are equivalent in a special setting, as stated in the following proposition. \begin{proposition} If we 1) disable all noise terms in the Langevin dynamics, 2) set $l_{d}=1, l_{g}=0$ in CoopNet, and 3) discard the entropy term in f-cgCNN, the forward structures in CoopNet and f-cgCNN become equivalent. \end{proposition} In this setting, denote the output of the latent variable model in CoopNet as $f$; the target $\hat{f}$ is then defined by $l_{d}=1$ step of Langevin dynamics starting from $f$, \emph{i.e.}, $\hat{f}=f- \frac{\partial E_{g} }{\partial f}$. Training the latent variable model amounts to minimizing the objective function $||\hat{f}-f||^2 $ via gradient descent. Note the gradient of this objective function can be calculated as $\frac{\partial E_{g} }{\partial f} \frac{\partial f}{\partial \theta}$, which, by the chain rule, is exactly back-propagation for minimizing $E_{g}$. Because the generator net in f-cgCNN is also trained using back-propagation, it is clear that the forward structures in CoopNet and f-cgCNN are equivalent. All of cgCNN, CoopNet and gCNN can synthesize various types of textures. However, unlike f-cgCNN, whose synthesis step is a simple forward pass, the synthesis step of CoopNet involves several Langevin steps of gCNN; it is therefore difficult to expand textures using CoopNet. \begin{table*}[ht!] \scriptsize \begin{center} \caption{Comparison among several related models. Compared to Gatys' method and TextureNet, cgCNN can synthesize more types of textures besides image textures.
Compared to gCNN and CoopNet, cgCNN incorporates extra multi-scale deep statistics which are more suitable for texture modelling.} \label{model_comparison} \begin{tabular}{r c c p{1.4cm} p{1.2cm} p{1.2cm} p{1.1cm} p{1.2cm}} \hline \textbf{Model} & \textbf{Forward structure} & \textbf{Backward structure} & \textbf{Multi-scale statistics} & \textbf{Dynamic texture synthesis}& \textbf{Sound texture synthesis} & \textbf{Texture expansion} & \textbf{Fast sampling}\\ \hline Gatys'~\cite{gatys2015texture} & \line(1,0){4} & pretrained ConvNet &\cmark & \xmark & \xmark & \xmark & \xmark \\ TextureNet~\cite{ulyanov2017improved} & generator & pretrained ConvNet &\cmark & \xmark & \xmark & \cmark& \cmark \\ gCNN~\cite{xie2016theory} & \line(1,0){4} & gCNN & \xmark &\cmark & \line(1,0){4} & \xmark& \xmark\\ CoopNet~\cite{xie2018cooperative} & latent variable model & gCNN &\xmark & \cmark & \line(1,0){4} & \xmark & \cmark\\ \hline c-cgCNN \textbf{(Ours)} & \line(1,0){4} & cgCNN & \cmark &\cmark & \cmark & \xmark& \xmark \\ f-cgCNN \textbf{(Ours)} & generator & cgCNN & \cmark &\cmark & \cmark & \cmark & \cmark \\ \hline \end{tabular} \end{center} \vspace{-4mm} \end{table*} \section{Synthesizing and inpainting image, dynamic and sound textures} \label{Dynamic_and_Sound} \subsection{Texture synthesis} In our model, we use the same training algorithms described in Alg.~\ref{our_algorithm_1} and Alg.~\ref{our_algorithm_2} and the same statistics (Gram matrix and mean vector) for all types of textures. Therefore, in order to synthesize a different type of texture, we only need to modify the network dimensions accordingly; all other settings remain the same. \subsubsection{Image texture synthesis} Similar to previous image texture models~\cite{gatys2015texture,liu2016texture}, we use a 2-dimensional ConvNet to capture the spatial statistics. Multi-scale statistics are captured by different layers of the network. \subsubsection{Dynamic texture synthesis} Dynamic textures can be regarded as image textures with an extra temporal dimension. Therefore, we simply use 3-dimensional spatial-temporal convolutional layers in cgCNN to capture the spatial appearances and the temporal dynamics simultaneously. In other words, unlike the methods~\cite{tesfaldet2018two, doretto2003dynamic} that model spatial and temporal statistics independently, our model treats them equally by regarding a clip of dynamic texture as a spatial-temporal volume, in which both the spatial and the temporal dimensions are stationary. \subsubsection{Sound texture synthesis} Sound textures can be regarded as a special case of dynamic textures, where spatial dimensions are not considered. However, modelling sound textures is not a simple task, because the sampling frequency of sound textures ($\sim$10 kHz) is usually far higher than the frame rate of dynamic textures ($\sim$10 Hz). As a result, sound textures show more complicated long-range temporal dependencies and multi-scale structures than dynamic textures. In our model, we simply use 1-dimensional temporal convolutional layers in cgCNN to extract temporal statistics. We use atrous (dilated) convolutions~\cite{chen2017deeplab} to ensure large receptive fields, which enable us to learn long-range dependencies. Unlike Antognini's model~\cite{antognini2019audio}, which applies fixed random ConvNets to spectrograms, our model learns the ConvNet directly on raw waveforms.
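To illustrate how little changes across modalities, the sketch below instantiates the descriptor backbone with 2-D, 3-D or dilated 1-D convolutions depending on the texture type. The channel widths and dilation rates are illustrative choices, not the exact configuration used in our experiments; the clamp activation implements the hard sigmoid defined later in Eqn.~\eqref{hardsigmoid}.
\begin{verbatim}
import torch.nn as nn

def backbone(texture_type, channels=32):
    # Image: 2-D convs; dynamic: 3-D spatial-temporal convs;
    # sound: 1-D atrous (dilated) convs for large receptive fields.
    conv = {"image": nn.Conv2d, "dynamic": nn.Conv3d,
            "sound": nn.Conv1d}[texture_type]
    layers, cin = [], 1 if texture_type == "sound" else 3
    for d in (1, 2, 4, 8):  # growing dilation: long-range dependencies
        layers += [conv(cin, channels, kernel_size=3,
                        padding=d, dilation=d),
                   nn.Hardtanh(0.0, 1.0)]  # min(max(x, 0), 1)
        cin = channels
    return nn.Sequential(*layers)
\end{verbatim}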
\subsection{Texture inpainting} \label{Section_inpainting} As a proof of concept, we present a simple texture inpainting algorithm based on our texture synthesis algorithm described in Alg.~\ref{our_algorithm_1}. Given an input texture $f_0$ with a corrupted region $\Omega$, the texture inpainting problem is to fill $\Omega$ so that the inpainted texture appears as natural as possible. In other words, $\overline{\Omega}$ must be visually close to at least one patch in the uncorrupted region $f_0 \setminus \Omega$, where $\overline{\Omega}$ is the corrupted region together with its border. Our texture synthesis algorithm in Alg.~\ref{our_algorithm_1} can be easily generalized to a texture inpainting algorithm that iteratively searches for a template in $f_0 \setminus \Omega$ and updates $\Omega$ according to the template. Specifically, our method iterates a searching step and a synthesis step. In the searching step, we first measure the energy $E_{cg}(\phi, \overline{\Omega};w)$ between $\overline{\Omega}$ and every candidate patch $\phi \in f_0 \setminus \Omega$. Then we select the patch $\phi^*$ with the lowest energy as the template. In the synthesis step, we update $\Omega$ according to template $\phi^*$ using Alg.~\ref{our_algorithm_1}. This algorithm clearly ensures that the inpainted region is visually similar to at least one patch (\emph{e.g.}, the template) in the uncorrupted region. In the searching step, we use grid search to find the template $\phi^*$. Note the template $\phi^*$ can also be assigned by the user~\cite{igehy1997image}. It is possible to replace the grid search by more advanced searching techniques such as PatchMatch~\cite{barnes2009patchmatch}, and to use a gradient penalty~\cite{darabi2012image} or a partial mask~\cite{igehy1997image} to ensure a smooth transition near the border of $\Omega$. However, these would contradict the purpose of this algorithm, which is to show the effectiveness of the proposed c-cgCNN method by combining it with the simplest possible auxiliary methods. The detailed inpainting algorithm is presented in Alg.~\ref{our_algorithm_3}. \begin{algorithm}[h] \caption{Texture inpainting using c-cgCNN}\label{our_algorithm_3} \begin{algorithmic} \Require a texture exemplar $f_0$ with corrupted region $\Omega$, Langevin sampling steps $N$, searching steps $T$, updating steps~$S$, network $\mathcal{D}_w$. \Ensure inpainted image $\tilde{f}$ and learned network $\mathcal{D}_w$. \vspace{5mm} \State {\bf Initialize} $t \leftarrow 0$; \; $\Omega \leftarrow 0$ \For{$t= 1, \ldots, T$} \State ({\bf Template searching}) \State Find the patch $\phi^*$ with the lowest energy $E_{cg}(\phi, \overline{\Omega};w)$ amongst all patches $\phi \in f_0 \setminus \Omega$. Set $\phi^*$ to be the template. \State ({\bf c-cgCNN synthesis with exemplar $\phi^*$}) \For{$s= 1, \ldots, S$} \State Run $N$ Langevin steps for $\Omega$. Each step follows Eqn.~\eqref{our_Langevin}. \State Update network $\mathcal{D}_w$: $w \leftarrow w - \frac{\partial KL(w)}{\partial w}$, where $\frac{\partial KL(w)}{\partial w}$ is calculated according to Eqn.~\eqref{our_gradient_L2}. \EndFor \EndFor \end{algorithmic} \end{algorithm}
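A minimal sketch of the template-searching step is given below. It assumes square patches and single-image tensors, uses \texttt{E\_cg} under the current weights as the matching score (the statistics in $E_{cg}$ are spatially pooled, so the two regions need not have equal sizes), and slides over the uncorrupted region with a fixed grid stride; the bordered region $\overline{\Omega}$ is passed in precomputed.
\begin{verbatim}
import torch

def search_template(f0, mask, omega_bar, E_cg, patch=64, stride=8):
    # Grid search over patches phi in f0 \ Omega: return the patch
    # with the lowest energy E_cg(phi, Omega-bar; w) (Alg. 3).
    H, W = f0.shape[-2:]
    best, best_e = None, float("inf")
    for top in range(0, H - patch + 1, stride):
        for left in range(0, W - patch + 1, stride):
            if (mask[..., top:top+patch, left:left+patch] > 0).any():
                continue   # skip patches touching the corruption
            phi = f0[..., top:top+patch, left:left+patch]
            e = E_cg(phi, omega_bar).item()
            if e < best_e:
                best, best_e = phi, e
    return best
\end{verbatim}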
\section{Experiments and Analysis} \label{experiments} In this section, we evaluate the proposed cgCNN model and compare it with other texture models. We first perform self-evaluations of c-cgCNN in Sec.~\ref{Bounded_experiment}-Sec.~\ref{Ablation_experiment}. Specifically, we investigate several key aspects of c-cgCNN, including the influence of the bounded constraint and the diversity of the synthesis results. We also carry out two ablation studies concerning the network structure and the training algorithm respectively. Then we evaluate the performance of c-cgCNN and f-cgCNN in texture synthesis and expansion by comparing them with theoretically related and state-of-the-art methods in Sec.~\ref{Synthesis_experiment}-Sec.~\ref{Expansion_experiment}. We finally evaluate our texture inpainting method in Sec.~\ref{Inpainting_experiment}. \subsection{Experimental setting} \paragraph{Exemplars} The image exemplars are collected from the DTD dataset~\cite{cimpoi14describing} and the Internet, and all exemplars are resized to $(256,256)$. The dynamic texture exemplars are adopted from~\cite{tesfaldet2018two}, where each video has 12 frames and each frame is resized to $(128, 128)$. We use the sound textures from~\cite{mcdermott2009sound}, which are recorded at $22050$~Hz. For our experiments, we clip the first $50000$ sample points (about 2 seconds) of each audio as exemplars. \paragraph{Network architecture} The network $\mathcal{D}$ used in cgCNN is shown in Fig.~\ref{network}. It consists of a deep branch and a shallow branch. The deep branch consists of convolutional layers with small kernel sizes focusing on details in textures, while the shallow branch consists of three convolutional layers with large kernel sizes focusing on larger-scale and non-local structures. The combination of these two branches enables cgCNN to model both global and local structures. When synthesizing dynamic or sound textures, we use spatial-temporal or temporal convolutional layers respectively. We use the hard sigmoid function as the activation function in the network, which is defined as: \begin{equation} \label{hardsigmoid} \sigma(x) = \min(\max(x,0),1). \end{equation} \begin{figure}[htb!] \vspace{-3mm} \centering \includegraphics[width=.8\linewidth]{result/network.png} \caption{The network $\mathcal{D}$ used in cgCNN.} \label{network} \vspace{-2mm} \end{figure} \paragraph{Parameters} We sample $K=3$ textures in each iteration, and each sample is initialized as Gaussian noise with variance $0.01$. We run $N=10$ or $50$ Langevin steps in each iteration. The training algorithm stops after a maximum of $T=5000$ iterations. We use RMSprop~\cite{Tieleman2012} or Adam~\cite{kingma2014adam} to update the networks and the synthesized images, with the initial learning rate set to $0.001$. In all our experiments, we follow these settings except where explicitly stated. All the results are available at \url{http://captain.whu.edu.cn/cgcnn-texture/}, where one can check the dynamic and sound textures.
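For reference, the two-branch descriptor in Fig.~\ref{network} can be sketched as follows. The layer counts follow the figure, but the kernel sizes and channel widths shown here are indicative only; the clamp activation is exactly the hard sigmoid of Eqn.~\eqref{hardsigmoid}.
\begin{verbatim}
import torch.nn as nn

class TwoBranchD(nn.Module):
    # Deep branch: nine small-kernel convs for local detail.
    # Shallow branch: three large-kernel convs for non-local structure.
    # Bounded activations keep all features, hence E_cg, bounded.
    def __init__(self, c=32):
        super().__init__()
        deep, cin = [], 3
        for _ in range(9):
            deep += [nn.Conv2d(cin, c, 3, padding=1),
                     nn.Hardtanh(0.0, 1.0)]  # min(max(x, 0), 1)
            cin = c
        self.deep = nn.Sequential(*deep)
        shallow, cin = [], 3
        for _ in range(3):
            shallow += [nn.Conv2d(cin, c, 11, padding=5),
                        nn.Hardtanh(0.0, 1.0)]
            cin = c
        self.shallow = nn.Sequential(*shallow)

    def forward(self, x):
        # Statistics (Gram matrices or mean vectors) are computed
        # from the feature maps of both branches.
        return self.deep(x), self.shallow(x)
\end{verbatim}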
\subsection{Bounded constraint} \label{Bounded_experiment} \begin{figure}[t!] \vspace{-2mm} \subfigure[Exemplar]{ \begin{minipage}[b]{0.231\linewidth} \includegraphics[width=1\linewidth]{result/Image/033} \\ \vspace{-2ex} \includegraphics[width=1\linewidth]{result/Image/034} \end{minipage} } \hspace{-2ex} \subfigure[{\em hard sigmoid}]{ \begin{minipage}[b]{0.231\linewidth} \includegraphics[width=1\linewidth]{result/Our/Activation/_Ini0_0_033_jpg_Try_11_num_layer_3_depth_9_Onlypool_0_IsBig_0_IsMean_0_Adam_0_0_5000_.jpg}\\ \vspace{-2ex} \includegraphics[width=1\linewidth]{result/Our/Activation/_Ini0_0_034_jpg_Try_11_num_layer_3_depth_9_Onlypool_0_IsBig_0_IsMean_0_Adam_0_0_5000_.jpg} \end{minipage} } \hspace{-2ex} \subfigure[{\em tanh}]{ \begin{minipage}[b]{0.231\linewidth} \includegraphics[width=1\linewidth]{result/Our/Activation/_Ini0_0_033_jpg_Try_112_num_layer_3_depth_9_Onlypool_0_IsBig_0_IsMean_0_Adam_0_0_5000_.jpg}\\ \vspace{-2ex} \includegraphics[width=1\linewidth]{result/Our/Activation/_Ini0_0_034_jpg_Try_112_num_layer_3_depth_9_Onlypool_0_IsBig_0_IsMean_0_Adam_0_0_5000_.jpg} \end{minipage} } \hspace{-2ex} \subfigure[{\em sigmoid}]{ \begin{minipage}[b]{0.231\linewidth} \includegraphics[width=1\linewidth]{result/Our/Activation/_Ini0_0_033_jpg_Try_113_num_layer_3_depth_9_Onlypool_0_IsBig_0_IsMean_0_Adam_0_0_5000_.jpg} \\ \vspace{-2ex} \includegraphics[width=1\linewidth]{result/Our/Activation/_Ini0_0_034_jpg_Try_113_num_layer_3_depth_9_Onlypool_0_IsBig_0_IsMean_0_Adam_0_0_5000_.jpg} \end{minipage} } \vspace{-4mm} \caption{Results using different activation functions. The {\em hard sigmoid} generates the most satisfactory results. Zoom in to check the artifacts generated by {\em tanh} and {\em sigmoid}.} \label{Compare_Activation} \vspace{-3mm} \end{figure} We find it is crucial to constrain the magnitude of the energy $E_{cg}$ in order to stabilize the training process, because the energy $E_{cg}$ often grows too large and causes the exploding gradient problem. In this work, we use a bounded activation function in the network architecture to ensure the energy $E_{cg}$ is upper bounded. We notice the choice of activation function has subtle influences on the synthesis results. This is shown in Fig.~\ref{Compare_Activation}, where we present the results using different activation functions, \emph{i.e.}, {\em hard sigmoid}, {\em tanh} and {\em sigmoid} respectively. We observe that the {\em hard sigmoid} produces the most satisfactory results, while {\em tanh} often generates some unnatural colors, and the results using {\em sigmoid} exhibit some checkerboard artifacts. Compared with other constraints such as weight clipping~\cite{ArjovskyCB17}, gradient penalty~\cite{GulrajaniAADC17} and spectral normalization~\cite{abs-1802-05957}, our method incurs no extra computational cost, is nonparametric, and is easy to implement. \subsection{Diversity of synthesis} \label{Diversity_experiment} \begin{figure}[t!]
\vspace{-2mm} \subfigure[Exemplar]{ \begin{minipage}[b]{0.231\linewidth} \includegraphics[width=1\linewidth]{result/Image/bubbly_0038}\\ \vspace{-2ex} \includegraphics[width=1\linewidth]{result/Image/pebbles1.jpg}\\ \vspace{-2ex} \end{minipage} } \subfigure[Our results]{ \begin{minipage}[b]{0.77\linewidth} \begin{minipage}[b]{0.3\linewidth} \includegraphics[width=1\linewidth]{result/Our/diversity/_Ini0_0_bubbly_0038_jpg_Try_11_num_layer_3_depth_9_Onlypool_0_IsBig_0_IsMean_0_Adam_0_0_5000_.jpg}\\ \vspace{-2ex} \includegraphics[width=1\linewidth]{result/Our/diversity/_Ini0_0_pebbles1_jpg_Try_11_num_layer_3_depth_9_Onlypool_0_IsBig_0_IsMean_0_Adam_0_0_5000_.jpg}\\ \vspace{-2ex} \end{minipage} \begin{minipage}[b]{0.3\linewidth} \includegraphics[width=1\linewidth]{result/Our/diversity/_Ini0_0_bubbly_0038_jpg_Try_11_num_layer_3_depth_9_Onlypool_0_IsBig_0_IsMean_0_Adam_0_1_5000_.jpg}\\ \vspace{-2ex} \includegraphics[width=1\linewidth]{result/Our/diversity/_Ini0_0_pebbles1_jpg_Try_11_num_layer_3_depth_9_Onlypool_0_IsBig_0_IsMean_0_Adam_0_1_5000_.jpg}\\ \vspace{-2ex} \end{minipage} \begin{minipage}[b]{0.3\linewidth} \includegraphics[width=1\linewidth]{result/Our/diversity/_Ini0_0_bubbly_0038_jpg_Try_11_num_layer_3_depth_9_Onlypool_0_IsBig_0_IsMean_0_Adam_0_2_5000_.jpg}\\ \vspace{-2ex} \includegraphics[width=1\linewidth]{result/Our/diversity/_Ini0_0_pebbles1_jpg_Try_11_num_layer_3_depth_9_Onlypool_0_IsBig_0_IsMean_0_Adam_0_2_5000_.jpg}\\ \vspace{-2ex} \end{minipage} \end{minipage} } \vspace{-4mm} \caption{Diversity of synthesis. The produced textures are visually similar to the inputs but are not identical to each other.} \label{Diversity} \vspace{-4mm} \end{figure} It is important for a texture synthesis algorithm to be able to synthesize diverse texture samples from a given exemplar. For the proposed c-cgCNN model, the diversity of the synthesized textures is a direct result of the randomness of the initial Gaussian noise, so no extra effort is needed to ensure such diversity. This is shown in Fig.~\ref{Diversity}, where a batch of three synthesized samples for each exemplar is presented. Note that all synthesized textures are visually similar to the exemplars, but they are not identical to each other. \subsection{Ablation study of the learning algorithm} In order to verify the importance of the $\mathcal{D}$-learning step in Alg.~\ref{our_algorithm_1}, we test a fixed random method, in which the $\mathcal{D}$-learning step is disabled. This fixed random method amounts to optimizing the synthesized image against a fixed random ConvNet. Fig.~\ref{random_optimization} presents the comparison between our Alg.~\ref{our_algorithm_1} and this fixed random method. Clearly, our method produces more favorable results: our results are sharper and clearer, while the fixed random method only produces blurry and noisy textures. We therefore conclude that the $\mathcal{D}$-learning step is key to the success of our algorithm, as it enables us to learn better deep filters than a random ConvNet. \begin{figure}[h!]
\vspace{-2mm} \begin{center} \subfigure[Exemplar]{ \includegraphics[width=0.18\linewidth]{result/Our/Random/019} } \subfigure[Energy evolutions.]{ \includegraphics[width=0.45\linewidth]{result/Our/Random/Plot_Loss2} }\\ \subfigure{ \begin{minipage}[b]{0.03\linewidth} \rotatebox{90}{Ours} \\ \vspace{3mm} \rotatebox{90}{Random} \end{minipage} } \hspace{-1.3ex} \subfigure[Iter 0]{ \begin{minipage}[b]{0.18\linewidth} \includegraphics[width=1\linewidth]{result/Our/Random/_Random_0_Ini0_0_inner_10_019_jpg_Try_114_num_layer_3_depth_9_Onlypool_0_IsBig_0_IsMean_0_Adam_0_0_0_} \\\vspace{-2ex} \includegraphics[width=1\linewidth]{result/Our/Random/_Random_1_Ini0_0_inner_10_019_jpg_Try_114_num_layer_3_depth_9_Onlypool_0_IsBig_0_IsMean_0_Adam_0_0_0_} \\\vspace{-2ex} \end{minipage} } \subfigure[Iter 300]{ \begin{minipage}[b]{0.18\linewidth} \includegraphics[width=1\linewidth]{result/Our/Random/_Random_0_Ini0_0_inner_10_019_jpg_Try_114_num_layer_3_depth_9_Onlypool_0_IsBig_0_IsMean_0_Adam_0_0_300_} \\\vspace{-2ex} \includegraphics[width=1\linewidth]{result/Our/Random/_Random_1_Ini0_0_inner_10_019_jpg_Try_114_num_layer_3_depth_9_Onlypool_0_IsBig_0_IsMean_0_Adam_0_0_300_} \\\vspace{-2ex} \end{minipage} } \subfigure[Iter 900]{ \begin{minipage}[b]{0.18\linewidth} \includegraphics[width=1\linewidth]{result/Our/Random/_Random_0_Ini0_0_inner_10_019_jpg_Try_114_num_layer_3_depth_9_Onlypool_0_IsBig_0_IsMean_0_Adam_0_0_900_} \\\vspace{-2ex} \includegraphics[width=1\linewidth]{result/Our/Random/_Random_1_Ini0_0_inner_10_019_jpg_Try_114_num_layer_3_depth_9_Onlypool_0_IsBig_0_IsMean_0_Adam_0_0_900_} \\\vspace{-2ex} \end{minipage} } \subfigure[Iter 4000]{ \begin{minipage}[b]{0.18\linewidth} \includegraphics[width=1\linewidth]{result/Our/Random/_Random_0_Ini0_0_inner_10_019_jpg_Try_114_num_layer_3_depth_9_Onlypool_0_IsBig_0_IsMean_0_Adam_0_0_4000_} \\\vspace{-2ex} \includegraphics[width=1\linewidth]{result/Our/Random/_Random_1_Ini0_0_inner_10_019_jpg_Try_114_num_layer_3_depth_9_Onlypool_0_IsBig_0_IsMean_0_Adam_0_0_4000_} \\\vspace{-2ex} \end{minipage} } \end{center} \vspace{-4mm} \caption{Comparison with the fixed random method. The differences highlight the effectiveness of the $\mathcal{D}$-learning step in Alg.~\ref{our_algorithm_1}.} \label{random_optimization} \vspace{-3mm} \end{figure} \subsection{Ablation study of the network architecture} \label{Ablation_experiment} \begin{figure*}[th!]
\vspace{-2mm} \begin{center} \subfigure[Exemplar]{ \begin{minipage}[b]{0.12\linewidth} \includegraphics[width=1\linewidth]{result/Image/porous}\\ \vspace{-2ex} \includegraphics[width=1\linewidth]{result/Image/small}\\ \vspace{-2ex} \includegraphics[width=1\linewidth]{result/Image/pebbles} \end{minipage} } \hspace{-1.5ex} \subfigure[$(1 \mathbf D \oplus 0 \mathbf S)$]{ \begin{minipage}[b]{0.12\linewidth} \includegraphics[width=1\linewidth]{result/Our/Scale/_Ini0_0_porous_png_Try_11_num_layer_0_depth_1_Onlypool_0_IsBig_0_IsMean_0_Adam_0_0_5000_.jpg}\\ \vspace{-2ex} \includegraphics[width=1\linewidth]{result/Our/Scale/_Ini0_0_small_jpg_Try_11_num_layer_0_depth_1_Onlypool_0_IsBig_0_IsMean_0_Adam_0_0_5000_.jpg}\\ \vspace{-2ex} \includegraphics[width=1\linewidth]{result/Our/Scale/_Ini0_0_pebbles_jpg_Try_11_num_layer_0_depth_1_Onlypool_0_IsBig_0_IsMean_0_Adam_0_0_10000_.jpg} \end{minipage} } \hspace{-1.5ex} \subfigure[$(3 \mathbf D \oplus 0 \mathbf S)$]{ \begin{minipage}[b]{0.12\linewidth} \includegraphics[width=1\linewidth]{result/Our/Scale/_Ini0_0_porous_png_Try_11_num_layer_0_depth_3_Onlypool_0_IsBig_0_IsMean_0_Adam_0_0_5000_.jpg}\\ \vspace{-2ex} \includegraphics[width=1\linewidth]{result/Our/Scale/_Ini0_0_small_jpg_Try_11_num_layer_0_depth_3_Onlypool_0_IsBig_0_IsMean_0_Adam_0_0_5000_.jpg}\\ \vspace{-2ex} \includegraphics[width=1\linewidth]{result/Our/Scale/_Ini0_0_pebbles_jpg_Try_11_num_layer_0_depth_3_Onlypool_0_IsBig_0_IsMean_0_Adam_0_0_10000_.jpg} \end{minipage} } \hspace{-1.5ex} \subfigure[$(9 \mathbf D \oplus 0 \mathbf S)$]{ \begin{minipage}[b]{0.12\linewidth} \includegraphics[width=1\linewidth]{result/Our/Scale/_Ini0_0_porous_png_Try_11_num_layer_0_depth_9_Onlypool_0_IsBig_0_IsMean_0_Adam_0_0_5000_.jpg}\\ \vspace{-2ex} \includegraphics[width=1\linewidth]{result/Our/Scale/_Ini0_0_small_jpg_Try_11_num_layer_0_depth_9_Onlypool_0_IsBig_0_IsMean_0_Adam_0_0_5000_.jpg}\\ \vspace{-2ex} \includegraphics[width=1\linewidth]{result/Our/Scale/_Ini0_0_pebbles_jpg_Try_11_num_layer_0_depth_9_Onlypool_0_IsBig_0_IsMean_0_Adam_0_0_10000_.jpg} \end{minipage} } \hspace{-1.5ex} \subfigure[$(9 \mathbf D \oplus 1 \mathbf S)$]{ \begin{minipage}[b]{0.12\linewidth} \includegraphics[width=1\linewidth]{result/Our/Scale/_Ini0_0_porous_png_Try_11_num_layer_1_depth_9_Onlypool_0_IsBig_0_IsMean_0_Adam_0_0_5000_.jpg}\\ \vspace{-2ex} \includegraphics[width=1\linewidth]{result/Our/Scale/_Ini0_0_small_jpg_Try_11_num_layer_1_depth_9_Onlypool_0_IsBig_0_IsMean_0_Adam_0_0_5000_.jpg}\\ \vspace{-2ex} \includegraphics[width=1\linewidth]{result/Our/Scale/_Ini0_0_pebbles_jpg_Try_11_num_layer_1_depth_9_Onlypool_0_IsBig_0_IsMean_0_Adam_0_1_5000_.jpg} \end{minipage} } \hspace{-1.5ex} \subfigure[$(9 \mathbf D \oplus 3 \mathbf S)$]{ \begin{minipage}[b]{0.12\linewidth} \includegraphics[width=1\linewidth]{result/Our/Scale/_Ini0_0_porous_png_Try_11_num_layer_3_depth_9_Onlypool_0_IsBig_0_IsMean_0_Adam_0_0_5000_.jpg}\\ \vspace{-2ex} \includegraphics[width=1\linewidth]{result/Our/Scale/_Ini0_0_small_jpg_Try_11_num_layer_3_depth_9_Onlypool_0_IsBig_0_IsMean_0_Adam_0_0_5000_.jpg}\\ \vspace{-2ex} \includegraphics[width=1\linewidth]{result/Our/Scale/_Ini0_0_pebbles_jpg_Try_114_num_layer_3_depth_9_Onlypool_0_IsBig_0_IsMean_0_Adam_0_0_7000_.jpg} \end{minipage} } \end{center} \vspace{-4mm} \caption{Textures synthesized with different sub-networks. One can check that the synthesized textures contain larger-scale structures as the receptive field of the selected sub-network increases.} \label{Compare_Scale} \end{figure*} In order to investigate the roles played by different layers of our network in Fig.~\ref{network}, we carry out an ablation study using different sub-networks. Note the original network has two branches consisting of $9$ deep layers and $3$ shallow layers respectively. We denote a sub-network with $m$ deep layers and $n$ shallow layers by $(m \mathbf D \oplus n \mathbf S)$. For instance, the sub-network $(3 \mathbf D \oplus 1 \mathbf S)$ consists of the first $3$ layers in the deep branch and the first layer in the shallow branch. We experiment with $m=1, 3, 9$ and $n=0, 1, 3$. Fig.~\ref{Compare_Scale} presents the results of five sub-networks with increasingly large receptive fields, \emph{i.e.}, $(1 \mathbf D \oplus 0 \mathbf S), (3 \mathbf D \oplus 0 \mathbf S), (9\mathbf D \oplus 0\mathbf S), (9\mathbf D \oplus 1 \mathbf S), (9\mathbf D \oplus 3 \mathbf S)$. As we can see, the synthesized textures capture larger-scale structures as the receptive field increases. In general, to generate high fidelity samples, the network must be able to model structures of all the scales contained in the input image. As shown in Fig.~\ref{Compare_Scale}, $(1 \mathbf D \oplus 0 \mathbf S)$ generates results with serious artifacts, because its receptive field is only $3$ pixels wide, which is too small for any meaningful texture element. For the {\em porous} texture, which consists of small-scale elements, a sub-network with a relatively small receptive field, e.g. $(3 \mathbf D \oplus 0 \mathbf S)$ or $(9\mathbf D \oplus 0\mathbf S)$, is sufficient to produce high quality textures. However, for textures containing larger-scale structures, like {\em cherries} and {\em pebbles}, larger receptive fields are often required for producing better results.
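The receptive fields quoted above can be checked with a short helper. We assume stride-1 convolutions with kernel size 3 in the deep branch and 11 in the shallow branch, matching the indicative architecture sketched earlier rather than the exact values.
\begin{verbatim}
def receptive_field(kernels, strides=None):
    # r grows by (k - 1) * (product of earlier strides) per layer;
    # for stride-1 convs this reduces to 1 + sum(k - 1).
    strides = strides or [1] * len(kernels)
    r, jump = 1, 1
    for k, s in zip(kernels, strides):
        r += (k - 1) * jump
        jump *= s
    return r

print(receptive_field([3]))        # 3 pixels for (1D + 0S)
print(receptive_field([3] * 9))    # 19 pixels for (9D + 0S)
print(receptive_field([11] * 3))   # 31 pixels for the shallow branch
\end{verbatim}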
\subsection{Results on texture synthesis} \label{Synthesis_experiment} \begin{figure*}[h!] \begin{center} \subfigure[Exemplar]{ \begin{minipage}[b]{0.13\linewidth} \includegraphics[width=1\linewidth]{result/Image/041.jpg}\\ \vspace{-2ex} \includegraphics[width=1\linewidth]{result/Image/009.jpg}\\ \vspace{-2ex} \includegraphics[width=1\linewidth]{result/Image/Texture45.png} \\ \vspace{-2ex} \includegraphics[width=1\linewidth]{result/Image/Texture54.png} \\ \vspace{-2ex} \includegraphics[width=1\linewidth]{result/Image/InkSmoky0065_512.png} \\ \vspace{-2ex} \end{minipage} } \hspace{-1.5ex} \subfigure[Gatys'~\cite{gatys2015texture}]{ \begin{minipage}[b]{0.13\linewidth} \includegraphics[width=1\linewidth]{result/Gatys/041.jpg} \\ \vspace{-2ex} \includegraphics[width=1\linewidth]{result/Gatys/009.jpg} \\ \vspace{-2ex} \includegraphics[width=1\linewidth]{result/Gatys/Texture45.png} \\ \vspace{-2ex} \includegraphics[width=1\linewidth]{result/Gatys/Texture54.png} \\ \vspace{-2ex} \includegraphics[width=1\linewidth]{result/Gatys/InkSmoky0065_512.png} \\ \vspace{-2ex} \end{minipage} } \hspace{-1.5ex} \subfigure[gCNN~\cite{xie2016theory}]{ \begin{minipage}[b]{0.13\linewidth} \includegraphics[width=1\linewidth]{result/gcnn/041.jpg_dense_net_frame_3_mle_l2l/layer_03_001.png} \\ \vspace{-2ex} \includegraphics[width=1\linewidth]{result/gcnn/009.jpg_dense_net_frame_3_mle_l2l/layer_03_001.png} \\ \vspace{-2ex} \includegraphics[width=1\linewidth]{result/gcnn/Texture45.png_dense_net_frame_3_mle_l2l/layer_03_001.png} \\ \vspace{-2ex} \includegraphics[width=1\linewidth]{result/gcnn/Texture54.png_dense_net_frame_3_mle_l2l/layer_03_001.png} \\ \vspace{-2ex} \includegraphics[width=1\linewidth]{result/gcnn/InkSmoky0065_512.png_dense_net_frame_3_mle_l2l/layer_03_001.png}\\ \vspace{-2ex} \end{minipage} } \hspace{-1.5ex} \subfigure[CoopNet~\cite{xie2018cooperative}]{ \begin{minipage}[b]{0.13\linewidth} \includegraphics[width=1\linewidth]{result/CoopNet/041.jpg/des_001000_001000.png} \\ \vspace{-2ex} \includegraphics[width=1\linewidth]{result/CoopNet/009.jpg/des_001000_001000.png} \\ \vspace{-2ex} \includegraphics[width=1\linewidth]{result/CoopNet/Texture45.png/des_001000_001000.png} \\ \vspace{-2ex} \includegraphics[width=1\linewidth]{result/CoopNet/Texture54.png/des_001000_001000.png} \\ \vspace{-2ex} \includegraphics[width=1\linewidth]{result/CoopNet/InkSmoky0065_512.png/des_001000_001000.png}\\ \vspace{-2ex} \end{minipage} } \hspace{-1.5ex} \subfigure[Self tuning~\cite{kaspar2015self}]{ \begin{minipage}[b]{0.13\linewidth} \includegraphics[width=1\linewidth]{result/self_tuning/041.jpg/img.png}\\ \vspace{-2ex} \includegraphics[width=1\linewidth]{result/self_tuning/009.jpg/img.png}\\ \vspace{-2ex} \includegraphics[width=1\linewidth]{result/self_tuning/Texture45.png/img.png}\\ \vspace{-2ex} \includegraphics[width=1\linewidth]{result/self_tuning/Texture54.png/img.png}\\ \vspace{-2ex} \includegraphics[width=1\linewidth]{result/self_tuning/InkSmoky0065_512.png/img.png}\\ \vspace{-2ex} \end{minipage} } \hspace{-1.5ex} \subfigure[c-cgCNN-Mean]{ \begin{minipage}[b]{0.13\linewidth} \includegraphics[width=1\linewidth]{result/Our/show/041_jpg_IsBig_0_IsMean_1_NoBatch_1_IsAdam_0_4900_.jpg} \\ \vspace{-2ex} \includegraphics[width=1\linewidth]{result/Our/show/_Ini0_0_009_jpg_Try_114_num_layer_3_depth_9_Onlypool_0_IsBig_0_IsMean_1_Adam_0_0_5000_.jpg} \\ \vspace{-2ex} \includegraphics[width=1\linewidth]{result/Our/show/_Ini0_0_Texture45_png_Try_11_num_layer_3_depth_9_Onlypool_0_IsBig_0_IsMean_1_Adam_0_0_3000_.jpg} \\ \vspace{-2ex}
\includegraphics[width=1\linewidth]{result/Our/show/_Ini0_0_Texture54_png_Try_114_num_layer_3_depth_9_Onlypool_0_IsBig_0_iter_50_IsMean_1_Adam_0_0_3000_.jpg} \\ \vspace{-2ex} \includegraphics[width=1\linewidth]{result/Our/show/_Ini0_0_InkSmoky0065_512_png_Try_114_num_layer_3_depth_9_Onlypool_0_IsBig_0_IsMean_1_Adam_0_2_5000_.jpg} \\ \vspace{-2ex} \end{minipage} } \hspace{-1.5ex} \subfigure[c-cgCNN-Gram]{ \begin{minipage}[b]{0.13\linewidth} \includegraphics[width=1\linewidth]{result/Our/show/041_jpg_IsBig_0_IsMean_0_NoBatch_1_IsAdam_0_4900_.jpg} \\ \vspace{-2ex} \includegraphics[width=1\linewidth]{result/Our/show/_Ini0_0_009_jpg_Try_114_num_layer_3_depth_9_Onlypool_0_IsBig_0_IsMean_0_Adam_0_0_5000_.jpg} \\ \vspace{-2ex} \includegraphics[width=1\linewidth]{result/Our/show/_Ini0_0_Texture45_png_Try_11_num_layer_3_depth_9_Onlypool_0_IsBig_0_IsMean_0_Adam_0_2_3000_.jpg} \\ \vspace{-2ex} \includegraphics[width=1\linewidth]{result/Our/show/_Ini0_0_Texture54_png_Try_114_num_layer_3_depth_9_Onlypool_0_IsBig_0_iter_50_IsMean_0_Adam_0_0_3000_.jpg} \\ \vspace{-2ex} \includegraphics[width=1\linewidth]{result/Our/show/_Ini0_0_InkSmoky0065_512_png_Try_114_num_layer_3_depth_9_Onlypool_0_IsBig_0_IsMean_0_Adam_0_0_5000_.jpg} \\ \vspace{-2ex} \end{minipage} } \end{center} \vspace{-4mm} \caption{Textures synthesized by different methods. See texts for more details.} \label{Compare_All} \vspace{-4mm} \end{figure*} \begin{figure*}[t!] \vspace{-4mm} \begin{center} \includegraphics[width=0.1\linewidth]{result/Dynamic/ori/sea_4/frame_00000001.png} \includegraphics[width=0.1\linewidth]{result/Dynamic/ori/sea_4/frame_00000002.png} \includegraphics[width=0.1\linewidth]{result/Dynamic/ori/sea_4/frame_00000003.png} \includegraphics[width=0.1\linewidth]{result/Dynamic/ori/sea_4/frame_00000004.png} \includegraphics[width=0.1\linewidth]{result/Dynamic/ori/sea_4/frame_00000005.png} \includegraphics[width=0.1\linewidth]{result/Dynamic/ori/sea_4/frame_00000006.png} \includegraphics[width=0.1\linewidth]{result/Dynamic/ori/sea_4/frame_00000007.png} \includegraphics[width=0.1\linewidth]{result/Dynamic/ori/sea_4/frame_00000008.png} \includegraphics[width=0.1\linewidth]{result/Dynamic/ori/sea_4/frame_00000009.png}\\ \vspace{1mm} \includegraphics[width=0.1\linewidth]{result/Dynamic/two_stream/sea_4/iter_6000_frame_1_1.png} \includegraphics[width=0.1\linewidth]{result/Dynamic/two_stream/sea_4/iter_6000_frame_2_1.png} \includegraphics[width=0.1\linewidth]{result/Dynamic/two_stream/sea_4/iter_6000_frame_3_1.png} \includegraphics[width=0.1\linewidth]{result/Dynamic/two_stream/sea_4/iter_6000_frame_4_1.png} \includegraphics[width=0.1\linewidth]{result/Dynamic/two_stream/sea_4/iter_6000_frame_5_1.png} \includegraphics[width=0.1\linewidth]{result/Dynamic/two_stream/sea_4/iter_6000_frame_6_1.png} \includegraphics[width=0.1\linewidth]{result/Dynamic/two_stream/sea_4/iter_6000_frame_7_1.png} \includegraphics[width=0.1\linewidth]{result/Dynamic/two_stream/sea_4/iter_6000_frame_8_1.png} \includegraphics[width=0.1\linewidth]{result/Dynamic/two_stream/sea_4/iter_6000_frame_9_1.png} \\ \vspace{1mm} \includegraphics[width=0.1\linewidth]{result/Dynamic/Our/sea_4/gram/frame_00000001} \includegraphics[width=0.1\linewidth]{result/Dynamic/Our/sea_4/gram/frame_00000002} \includegraphics[width=0.1\linewidth]{result/Dynamic/Our/sea_4/gram/frame_00000003} \includegraphics[width=0.1\linewidth]{result/Dynamic/Our/sea_4/gram/frame_00000004} \includegraphics[width=0.1\linewidth]{result/Dynamic/Our/sea_4/gram/frame_00000005} 
\includegraphics[width=0.1\linewidth]{result/Dynamic/Our/sea_4/gram/frame_00000006} \includegraphics[width=0.1\linewidth]{result/Dynamic/Our/sea_4/gram/frame_00000007} \includegraphics[width=0.1\linewidth]{result/Dynamic/Our/sea_4/gram/frame_00000008} \includegraphics[width=0.1\linewidth]{result/Dynamic/Our/sea_4/gram/frame_00000009} \\ \vspace{1mm} \includegraphics[width=0.1\linewidth]{result/Dynamic/Our/sea_4/mean/frame_00000001} \includegraphics[width=0.1\linewidth]{result/Dynamic/Our/sea_4/mean/frame_00000002} \includegraphics[width=0.1\linewidth]{result/Dynamic/Our/sea_4/mean/frame_00000003} \includegraphics[width=0.1\linewidth]{result/Dynamic/Our/sea_4/mean/frame_00000004} \includegraphics[width=0.1\linewidth]{result/Dynamic/Our/sea_4/mean/frame_00000005} \includegraphics[width=0.1\linewidth]{result/Dynamic/Our/sea_4/mean/frame_00000006} \includegraphics[width=0.1\linewidth]{result/Dynamic/Our/sea_4/mean/frame_00000007} \includegraphics[width=0.1\linewidth]{result/Dynamic/Our/sea_4/mean/frame_00000008} \includegraphics[width=0.1\linewidth]{result/Dynamic/Our/sea_4/mean/frame_00000009} \\ \vspace{1mm} \includegraphics[width=0.1\linewidth]{result/Dynamic/ori/smoke_1/frame_00000001.png} \includegraphics[width=0.1\linewidth]{result/Dynamic/ori/smoke_1/frame_00000002.png} \includegraphics[width=0.1\linewidth]{result/Dynamic/ori/smoke_1/frame_00000003.png} \includegraphics[width=0.1\linewidth]{result/Dynamic/ori/smoke_1/frame_00000004.png} \includegraphics[width=0.1\linewidth]{result/Dynamic/ori/smoke_1/frame_00000005.png} \includegraphics[width=0.1\linewidth]{result/Dynamic/ori/smoke_1/frame_00000006.png} \includegraphics[width=0.1\linewidth]{result/Dynamic/ori/smoke_1/frame_00000007.png} \includegraphics[width=0.1\linewidth]{result/Dynamic/ori/smoke_1/frame_00000008.png} \includegraphics[width=0.1\linewidth]{result/Dynamic/ori/smoke_1/frame_00000009.png}\\ \vspace{1mm} \includegraphics[width=0.1\linewidth]{result/Dynamic/two_stream/smoke_1/iter_6000_frame_1_1.png} \includegraphics[width=0.1\linewidth]{result/Dynamic/two_stream/smoke_1/iter_6000_frame_2_1.png} \includegraphics[width=0.1\linewidth]{result/Dynamic/two_stream/smoke_1/iter_6000_frame_3_1.png} \includegraphics[width=0.1\linewidth]{result/Dynamic/two_stream/smoke_1/iter_6000_frame_4_1.png} \includegraphics[width=0.1\linewidth]{result/Dynamic/two_stream/smoke_1/iter_6000_frame_5_1.png} \includegraphics[width=0.1\linewidth]{result/Dynamic/two_stream/smoke_1/iter_6000_frame_6_1.png} \includegraphics[width=0.1\linewidth]{result/Dynamic/two_stream/smoke_1/iter_6000_frame_7_1.png} \includegraphics[width=0.1\linewidth]{result/Dynamic/two_stream/smoke_1/iter_6000_frame_8_1.png} \includegraphics[width=0.1\linewidth]{result/Dynamic/two_stream/smoke_1/iter_6000_frame_9_1.png} \\ \vspace{1mm} \includegraphics[width=0.1\linewidth]{result/Dynamic/Our/smoke_1/gram/frame_00000001} \includegraphics[width=0.1\linewidth]{result/Dynamic/Our/smoke_1/gram/frame_00000002} \includegraphics[width=0.1\linewidth]{result/Dynamic/Our/smoke_1/gram/frame_00000003} \includegraphics[width=0.1\linewidth]{result/Dynamic/Our/smoke_1/gram/frame_00000004} \includegraphics[width=0.1\linewidth]{result/Dynamic/Our/smoke_1/gram/frame_00000005} \includegraphics[width=0.1\linewidth]{result/Dynamic/Our/smoke_1/gram/frame_00000006} \includegraphics[width=0.1\linewidth]{result/Dynamic/Our/smoke_1/gram/frame_00000007} \includegraphics[width=0.1\linewidth]{result/Dynamic/Our/smoke_1/gram/frame_00000008} 
\includegraphics[width=0.1\linewidth]{result/Dynamic/Our/smoke_1/gram/frame_00000009} \\ \vspace{1mm} \includegraphics[width=0.1\linewidth]{result/Dynamic/Our/smoke_1/mean_1/frame_00000001} \includegraphics[width=0.1\linewidth]{result/Dynamic/Our/smoke_1/mean_1/frame_00000002} \includegraphics[width=0.1\linewidth]{result/Dynamic/Our/smoke_1/mean_1/frame_00000003} \includegraphics[width=0.1\linewidth]{result/Dynamic/Our/smoke_1/mean_1/frame_00000004} \includegraphics[width=0.1\linewidth]{result/Dynamic/Our/smoke_1/mean_1/frame_00000005} \includegraphics[width=0.1\linewidth]{result/Dynamic/Our/smoke_1/mean_1/frame_00000006} \includegraphics[width=0.1\linewidth]{result/Dynamic/Our/smoke_1/mean_1/frame_00000007} \includegraphics[width=0.1\linewidth]{result/Dynamic/Our/smoke_1/mean_1/frame_00000008} \includegraphics[width=0.1\linewidth]{result/Dynamic/Our/smoke_1/mean_1/frame_00000009} \\ \end{center} \vspace{-4mm} \caption{Comparison between c-cgCNN and the two-stream algorithm~\cite{tesfaldet2018two} for dynamic texture synthesis. For each dynamic texture, we present the exemplar (1-st row), the results of the two-stream method (2-nd row), c-cgCNN-Gram (3-rd row) and c-cgCNN-Mean (4-th row). While the two-stream method suffers from low-level noise and a greyish effect, our method is free from these artifacts.} \label{Dynamic_texture} \end{figure*} \begin{figure*}[htb!] \vspace{-2mm} \begin{center} \subfigure[Exemplar]{ \begin{minipage}[b]{0.18\linewidth} \includegraphics[width=1\linewidth]{result/Sound/Synthesis/ori_norm_Enthusiastic_applause} \end{minipage} } \subfigure[McDermott's~\cite{mcdermott2009sound}]{ \begin{minipage}[b]{0.18\linewidth} \includegraphics[width=1\linewidth]{result/Sound/Synthesis/MC_norm_Enthusiastic_applause.jpg} \end{minipage} } \subfigure[Antognini's~\cite{antognini2019audio}]{ \begin{minipage}[b]{0.18\linewidth} \includegraphics[width=1\linewidth]{result/Sound/Synthesis/A_Applause2wav.png} \end{minipage} } \subfigure[c-cgCNN-Gram]{ \begin{minipage}[b]{0.18\linewidth} \includegraphics[width=1\linewidth]{result/Sound/Synthesis/our_gram_norm_Enthusiastic_applause.jpg} \end{minipage} } \subfigure[c-cgCNN-Mean]{ \begin{minipage}[b]{0.18\linewidth} \includegraphics[width=1\linewidth]{result/Sound/Synthesis/our_mean_norm_Enthusiastic_applause.jpg} \end{minipage} } \end{center} \vspace{-3mm} \caption{Comparison of c-cgCNN, McDermott's~\cite{mcdermott2009sound} and Antognini's model~\cite{antognini2019audio} for sound texture synthesis. Their results are comparable. The sound texture shown here is ``applause''.} \label{Sound_texture} \vspace{-4mm} \end{figure*} For image texture synthesis, we compare the following methods, which are theoretically related to our model or have reported state-of-the-art performance. \begin{itemize} \item[-] {\textbf{c-cgCNN-Gram}}: Our c-cgCNN with the Gram matrix as the statistical measure. \item[-] {\textbf{c-cgCNN-Mean}}: Our c-cgCNN where the mean vector is used as the statistical measure instead of the Gram matrix. \item[-] {\textbf{Gatys' method}~\cite{gatys2015texture}}: A texture model relying on pretrained ConvNets. It is a special case of our \textbf{c-cgCNN-Gram} model with a pretrained ConvNet. \item[-] {\textbf{gCNN}~\cite{xie2016theory}}: A generative model reviewed in Sec.~\ref{Preliminaries}. It is a variation of our \textbf{c-cgCNN} model without deep statistics. \item[-] {\textbf{CoopNet}~\cite{xie2018cooperative}}: A generative model reviewed in Sec.~\ref{Preliminaries}. It is a combination of gCNN and a latent variable model.
\item[-] {\textbf{Self tuning}~\cite{kaspar2015self}}: A recent patch-based EBTS algorithm that utilizes optimization techniques. \end{itemize} Fig.~\ref{Compare_All} shows the qualitative comparison of these algorithms. We observe that Gatys' method fails to capture global structures (the 3-rd and 4-th textures), because the optimization process converges to a local minimum where global structures are not preserved; it also generates artifacts such as unnatural colors and noise (zoom in on the 1-st and 5-th textures). Meanwhile, although gCNN and CoopNet can capture most of the large-scale structures, they lose too many details in the results, probably because they do not use any deep statistics. Self tuning excels at generating regular textures (the 3-rd and 4-th textures), but it sometimes loses the global structures (the 1-st, 2-nd and 4-th textures) due to its lack of global structure modeling. In contrast, c-cgCNN-Gram and c-cgCNN-Mean both produce better samples than the baseline methods, since they not only capture large-scale structures but also reproduce small-scale details, even for highly structured textures (the 1-st, 3-rd and 4-th textures). This is because c-cgCNN uses both deep statistics and an effective sampling strategy that is unlikely to be trapped in bad local minima. It is also worth noticing that the results of c-cgCNN-Gram and c-cgCNN-Mean are comparable in most cases, even though they use different statistics. For quantitative evaluation, we measure the multi-scale structural similarity~\cite{Multiscale} (MS-SSIM) between the synthesized texture and the exemplar. A higher score indicates higher visual similarity. The quantitative results are summarized in Tab.~\ref{model_comparison_quantitative}. The results show that our methods outperform the baseline methods in most cases. \begin{table}[h!] \scriptsize \begin{center} \caption{Quantitative evaluation of texture synthesis results shown in Fig.~\ref{Compare_All} using MS-SSIM.} \label{model_comparison_quantitative} \begin{tabular}{r c c c c c c} \hline & painting & lines & wall & scaly & ink \\ \hline Gatys' ~\cite{gatys2015texture} &0.01 & 0.01& 0.09& 0.08& 0.34 \\ gCNN~\cite{xie2016theory} &0.05 & 0.05& 0.11& 0.11& 0.35 \\ self-tuning~\cite{kaspar2015self} &0.03 & \textbf{0.17}& 0.07& 0.01& 0.42 \\ CoopNet~\cite{xie2018cooperative} &0.05 & 0.09& 0.20& 0.08& 0.32 \\ \hline c-cgCNN-Gram &0.10 & 0.09& \textbf{0.31}& \textbf{0.36} & 0.43\\ c-cgCNN-Mean &\textbf{0.14} & 0.10& 0.10& 0.00& \textbf{0.46} \\ \hline \end{tabular} \end{center} \vspace{-4mm} \end{table} For dynamic texture synthesis, we use the network $(6 \mathbf D \oplus 0 \mathbf S)$, and we sample $M=2$ dynamic textures in the sampling step. Fig.~\ref{Dynamic_texture} presents the qualitative comparison between our c-cgCNN method and the recent two-stream method~\cite{tesfaldet2018two}. We notice the results of the two-stream model suffer from artifacts such as a greyish cast (the 1-st texture) or low-level noise (the 2-nd texture), and sometimes exhibit temporal inconsistency. In contrast, the results of both c-cgCNN-Gram and c-cgCNN-Mean are more favorable, as they are cleaner and show better temporal consistency. For quantitative evaluation, we measure the average MS-SSIM between each frame of the synthesized result and the corresponding frame of the exemplar. The results are shown in Tab.~\ref{Dynamic_texture_quantitative}, where both c-cgCNN-Mean and c-cgCNN-Gram outperform the two-stream method.
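Our evaluation protocol can be sketched as follows, assuming the third-party \texttt{pytorch\_msssim} package; for dynamic textures the score is averaged over corresponding frames.
\begin{verbatim}
import torch
from pytorch_msssim import ms_ssim  # third-party package (assumed)

def score_image(synth, exemplar):
    # MS-SSIM between synthesized texture and exemplar,
    # with pixel values scaled to [0, 1]; shapes [1, C, H, W].
    return ms_ssim(synth, exemplar, data_range=1.0).item()

def score_dynamic(synth, exemplar):
    # Frame-wise MS-SSIM averaged along the temporal axis;
    # tensors shaped [T, C, H, W].
    scores = [score_image(s.unsqueeze(0), e.unsqueeze(0))
              for s, e in zip(synth, exemplar)]
    return sum(scores) / len(scores)
\end{verbatim}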
\begin{table}[h!] \scriptsize \begin{center} \caption{Quantitative evaluation of dynamic texture synthesis results shown in Fig.~\ref{Dynamic_texture} using MS-SSIM.} \label{Dynamic_texture_quantitative} \begin{tabular}{r c c} \hline & ocean & smoke \\ \hline TwoStream~\cite{tesfaldet2018two} & 0.08 & 0.01 \\ \hline c-cgCNN-Gram & \textbf{0.17} & 0.79 \\ c-cgCNN-Mean & 0.13 & \textbf{0.86} \\ \hline \end{tabular} \end{center} \vspace{-4mm} \end{table} For sound texture synthesis, we use the network $(4 \mathbf D \oplus 0 \mathbf S)$, where the kernel size and the number of filters in each layer are $25$ and $128$ respectively, and the stride in each layer is $10$, except for the first layer where the stride is $5$. We do not use pooling layers in this network. Fig.~\ref{Sound_texture} presents the results of sound texture synthesis using c-cgCNN, McDermott's model~\cite{mcdermott2009sound} and Antognini's model~\cite{antognini2019audio} as waveforms. Unlike the other two methods, which act in the frequency domain, c-cgCNN uses only raw audio. We observe that the results of these methods are generally comparable, except for some cases where our results are noisier than those of the baselines. This is probably due to a loss of short-range temporal dependencies caused by the large strides in the shallow layers, and it suggests that our results might be further improved by more carefully designed networks. \subsection{Results on texture expansion} \label{Expansion_experiment} The structure of the generator net $\mathcal{G}_{\theta}$ used in f-cgCNN is borrowed from TextureNet, with two extra residual blocks at the output layer. See~\cite{ulyanov2017improved} for details. When expanding dynamic or sound textures, the spatial convolutional layers in $\mathcal{G}_{\theta}$ are replaced by spatial-temporal or temporal convolutional layers accordingly. Fig.~\ref{Compare_TextureNet} presents a comparison between f-cgCNN and TextureNet for image texture expansion. The results of f-cgCNN and TextureNet are generally comparable, because both of them are able to learn the stationary elements in the exemplars. In addition, f-cgCNN is generally slower to converge than TextureNet in the training phase, because it trains an extra net $\mathcal{D}$; their synthesis speeds are the same, however, as synthesis in both cases involves a single forward pass through the generator net. Fig.~\ref{Dynamic_Expansion} presents the results of dynamic texture expansion using f-cgCNN. The exemplar dynamic texture is expanded to 48 frames, and the size of each frame is $512 \times 512$. We observe that f-cgCNN successfully reproduces the stationary elements and expands the exemplar dynamic texture in both the temporal and spatial dimensions, \emph{i.e.}, the synthesized textures have more frames and each frame is larger than in the input exemplars. It should be noticed that f-cgCNN is the first neural texture model that enables us to expand dynamic textures. Fig.~\ref{Sound_Expansion} presents the results of sound texture expansion. In this experiment, we clip the first 16384 data points (less than 1 second) of each sound texture as exemplars, and expand each exemplar to 122880 data points (about 5 seconds) using f-cgCNN. Similar to the case of dynamic texture expansion, f-cgCNN successfully expands the exemplar sound texture while preserving the sound elements that occur most frequently. Note that f-cgCNN is also the first texture model that enables us to expand sound textures.
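Because $\mathcal{G}_{\theta}$ is fully convolutional, expansion amounts to feeding a larger noise map through the trained generator; the sketch below illustrates this for images, and the same idea extends along the temporal axis for dynamic and sound textures. The noise-shape arithmetic depends on the generator's upsampling factor and is illustrative only.
\begin{verbatim}
import torch

def expand_texture(G, z_channels=8, out_hw=(512, 512), up=4):
    # A fully convolutional G trained on small crops can be applied
    # to a larger noise map at test time: the stationary statistics
    # it has learned tile naturally onto the bigger canvas.
    h, w = out_hw[0] // up, out_hw[1] // up  # assumes G upsamples by `up`
    z = torch.randn(1, z_channels, h, w)
    with torch.no_grad():
        return G(z)  # e.g. a [1, 3, 512, 512] texture
\end{verbatim}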
\begin{figure}[htb!] \begin{center} \subfigure{ \includegraphics[width=0.25\linewidth]{result/Compare_TextureNet/Ori/047} \includegraphics[width=0.37\linewidth]{result/Compare_TextureNet/TextureNet/047} \includegraphics[width=0.37\linewidth]{result/Compare_TextureNet/fcgcnn/047} }\\ \subfigure{ \includegraphics[width=0.25\linewidth]{result/Compare_TextureNet/Ori/058} \includegraphics[width=0.37\linewidth]{result/Compare_TextureNet/TextureNet/058} \includegraphics[width=0.37\linewidth]{result/Compare_TextureNet/fcgcnn/058} }\\ \end{center} \vspace{-4mm} \caption{Comparison between TextureNet (2-nd column) and f-cgCNN (3-rd column) for image texture expansion. Their results are comparable.} \label{Compare_TextureNet} \vspace{-2mm} \end{figure} \begin{figure}[htb!] \begin{flushleft} \includegraphics[width=0.15\linewidth]{result/Dynamic/ori/water_5/frame_00000001.png} \includegraphics[width=0.15\linewidth]{result/Dynamic/ori/water_5/frame_00000002.png} \includegraphics[width=0.15\linewidth]{result/Dynamic/ori/water_5/frame_00000003.png} \includegraphics[width=0.15\linewidth]{result/Dynamic/ori/water_5/frame_00000004.png} \\ \vspace{1mm} \includegraphics[width=0.23\linewidth]{result/Dynamic/Expansion/water_d/frame_00000000.png} \includegraphics[width=0.23\linewidth]{result/Dynamic/Expansion/water_d/frame_00000001.png} \includegraphics[width=0.23\linewidth]{result/Dynamic/Expansion/water_d/frame_00000002.png} \includegraphics[width=0.23\linewidth]{result/Dynamic/Expansion/water_d/frame_00000003.png} \\ \vspace{1mm} \end{flushleft} \vspace{-4mm} \caption{Results of dynamic texture expansion using f-cgCNN. We present the first 4 frames of the exemplar (1-st row) and the first 4 frames of the synthesized result (2-nd row).} \label{Dynamic_Expansion} \vspace{-2mm} \end{figure} \begin{figure}[htb!] \vspace{-2mm} \begin{center} \subfigure[Exemplar]{ \includegraphics[width=0.68\linewidth]{result/Sound/Expansion/ori_norm_shaking_paper.jpg} }\\ \vspace{-4mm} \subfigure[f-cgCNN]{ \includegraphics[width=1\linewidth]{result/Sound/Expansion/our_f_norm_shaking_paper.jpg} } \end{center} \vspace{-4mm} \caption{Results of sound texture expansion using f-cgCNN. The sound texture shown here is ``shaking paper''.} \label{Sound_Expansion} \vspace{-4mm} \end{figure} \subsection{Results on texture inpainting} \label{Inpainting_experiment} \begin{figure}[htb!]
\subfigure[Corrupted]{ \begin{minipage}[b]{0.231\linewidth} \includegraphics[width=1\linewidth]{result/Inpainting/Image/Masked/masked_brick_border} \\ \vspace{-2ex} \includegraphics[width=1\linewidth]{result/Inpainting/Image/Masked/masked_Camo3} \\ \vspace{-2ex} \includegraphics[width=1\linewidth]{result/Inpainting/Image/Masked/masked_fibrous_0145} \\ \vspace{-2ex} \includegraphics[width=1\linewidth]{result/Inpainting/Image/Masked/masked_porous} \\ \vspace{-2ex} \includegraphics[width=1\linewidth]{result/Inpainting/Image/Masked/masked_water.png} \end{minipage} } \hspace{-2ex} \subfigure[Deep prior~\cite{ulyanov2018deep}]{ \begin{minipage}[b]{0.231\linewidth} \includegraphics[width=1\linewidth]{result/Inpainting/Image/DeepPrior/inpainted_brick_border_png_mask2_png.png} \\ \vspace{-2ex} \includegraphics[width=1\linewidth]{result/Inpainting/Image/DeepPrior/inpainted_Camo3_jpg_mask2_png.png} \\ \vspace{-2ex} \includegraphics[width=1\linewidth]{result/Inpainting/Image/DeepPrior/inpainted_fibrous_0145_jpg_mask2_png.png} \\ \vspace{-2ex} \includegraphics[width=1\linewidth]{result/Inpainting/Image/DeepPrior/inpainted_porous_jpg_mask2_png.png} \\ \vspace{-2ex} \includegraphics[width=1\linewidth]{result/Inpainting/Image/DeepPrior/inpainted_water_png_mask2_png.png} \end{minipage} } \hspace{-2ex} \subfigure[Deep fill~\cite{yu2018generative}]{ \begin{minipage}[b]{0.231\linewidth} \includegraphics[width=1\linewidth]{result/Inpainting/Image/ContextualAttention/inpainted_brick_border.png} \\ \vspace{-2ex} \includegraphics[width=1\linewidth]{result/Inpainting/Image/ContextualAttention/inpainted_Camo3.jpg} \\ \vspace{-2ex} \includegraphics[width=1\linewidth]{result/Inpainting/Image/ContextualAttention/inpainted_fibrous_0145.jpg} \\ \vspace{-2ex} \includegraphics[width=1\linewidth]{result/Inpainting/Image/ContextualAttention/inpainted_porous.jpg} \\ \vspace{-2ex} \includegraphics[width=1\linewidth]{result/Inpainting/Image/ContextualAttention/inpainted_water.png} \end{minipage} } \hspace{-2ex} \subfigure[cgCNN]{ \begin{minipage}[b]{0.231\linewidth} \includegraphics[width=1\linewidth]{result/Inpainting/Image/Our_Select/brick_border_png_mask2_png_model_1_depth_4_mean_1_inner_30_tv_0_0_fou_0_0001600__jpg.jpg}\\ \vspace{-2ex} \includegraphics[width=1\linewidth]{result/Inpainting/Image/Our_Select/Camo3_jpg_mask2_png_model_1_depth_4_mean_1_inner_10_tv_0_0200__jpg.jpg}\\ \vspace{-2ex} \includegraphics[width=1\linewidth]{result/Inpainting/Image/Our_Select/fibrous_0145_jpg_mask2_png_model_1_depth_4_mean_1_inner_10_tv_0_0700__jpg.jpg}\\ \vspace{-2ex} \includegraphics[width=1\linewidth]{result/Inpainting/Image/Our_Select/porous_jpg_mask2_png_model_1_depth_4_mean_1_inner_10_tv_1e-091000__jpg.jpg}\\ \vspace{-2ex} \includegraphics[width=1\linewidth]{result/Inpainting/Image/Our_Select/water_png_mask2_png_model_1_depth_4_mean_0_inner_10_tv_0_01000__jpg.jpg} \end{minipage} } \hspace{-2ex} \caption{Comparison of several neural inpainting methods. It can be seen that our method produces the clearest results, while the results of other methods are relatively blurry.} \label{image_inpainting} \vspace{-4mm} \end{figure} \begin{table}[b!] 
\scriptsize \begin{center} \caption{Quantitative evaluation of inpainting results shown in Fig.~\ref{image_inpainting} using MS-SSIM.} \label{image_inpainting_quantitative} \begin{tabular}{r c c c c c c c} \hline & brick & camouflage & fiber & sponge & water \\ \hline DeepPrior~\cite{ulyanov2018deep} &0.978 & 0.897 & 0.856 & 0.904 & 0.956 \\ DeepFill~\cite{yu2018generative} &0.966 & 0.922 & 0.900 & 0.900 & \textbf{0.962} \\ \hline cgCNN &\textbf{0.984} & \textbf{0.930} & \textbf{0.905} & \textbf{0.914} & 0.912 \\ \hline \end{tabular} \end{center} \vspace{-4mm} \end{table} \begin{figure*}[bt!] \vspace{-2mm} \begin{center} \subfigure{\includegraphics[width=0.09\linewidth]{result/Inpainting/Dynamic/Masked/grass_3/frame_00000000.png}} \subfigure{\includegraphics[width=0.09\linewidth]{result/Inpainting/Dynamic/Masked/grass_3/frame_00000001.png}} \subfigure{\includegraphics[width=0.09\linewidth]{result/Inpainting/Dynamic/Masked/grass_3/frame_00000002.png}} \subfigure{\includegraphics[width=0.09\linewidth]{result/Inpainting/Dynamic/Masked/grass_3/frame_00000003.png}} \subfigure{\includegraphics[width=0.09\linewidth]{result/Inpainting/Dynamic/Masked/grass_3/frame_00000004.png}} \subfigure{\includegraphics[width=0.09\linewidth]{result/Inpainting/Dynamic/Masked/grass_3/frame_00000005.png}} \subfigure{\includegraphics[width=0.09\linewidth]{result/Inpainting/Dynamic/Masked/grass_3/frame_00000006.png}} \subfigure{\includegraphics[width=0.09\linewidth]{result/Inpainting/Dynamic/Masked/grass_3/frame_00000007.png}} \subfigure{\includegraphics[width=0.09\linewidth]{result/Inpainting/Dynamic/Masked/grass_3/frame_00000008.png}} \subfigure{\includegraphics[width=0.09\linewidth]{result/Inpainting/Dynamic/Masked/grass_3/frame_00000009.png}}\\ \vspace{-1.5ex} \subfigure{\includegraphics[width=0.09\linewidth]{result/Inpainting/Dynamic/Our/grass_3/frame_00000000.png}} \subfigure{\includegraphics[width=0.09\linewidth]{result/Inpainting/Dynamic/Our/grass_3/frame_00000001.png}} \subfigure{\includegraphics[width=0.09\linewidth]{result/Inpainting/Dynamic/Our/grass_3/frame_00000002.png}} \subfigure{\includegraphics[width=0.09\linewidth]{result/Inpainting/Dynamic/Our/grass_3/frame_00000003.png}} \subfigure{\includegraphics[width=0.09\linewidth]{result/Inpainting/Dynamic/Our/grass_3/frame_00000004.png}} \subfigure{\includegraphics[width=0.09\linewidth]{result/Inpainting/Dynamic/Our/grass_3/frame_00000005.png}} \subfigure{\includegraphics[width=0.09\linewidth]{result/Inpainting/Dynamic/Our/grass_3/frame_00000006.png}} \subfigure{\includegraphics[width=0.09\linewidth]{result/Inpainting/Dynamic/Our/grass_3/frame_00000007.png}} \subfigure{\includegraphics[width=0.09\linewidth]{result/Inpainting/Dynamic/Our/grass_3/frame_00000008.png}} \subfigure{\includegraphics[width=0.09\linewidth]{result/Inpainting/Dynamic/Our/grass_3/frame_00000009.png}}\\ \end{center} \vspace{-4mm} \caption{Dynamic texture inpainting using our method. We present the corrupted dynamic textures (1-st and 3-rd rows) and inpainted dynamic textures (2-nd and 4-th rows).} \label{dynamic_inpainting} \vspace{-4mm} \end{figure*} \begin{figure}[htb!] \begin{center} \subfigure[Corrupted]{ \includegraphics[width=0.6\linewidth]{result/Inpainting/Sound/ori_norm_Bees} }\\ \subfigure[Inpainted]{ \includegraphics[width=0.6\linewidth]{result/Inpainting/Sound/our_norm_Bees} } \end{center} \vspace{-4mm} \caption{Sound texture inpainting using our method. 
The sound texture used here is ``bees''.} \label{Sound_inpainting} \vspace{-4mm} \end{figure} For image texture inpainting, we evaluate our algorithm by comparing it with the following two deep image inpainting methods: \begin{itemize} \item[-] {\textbf{Deep prior}}~\cite{ulyanov2018deep}: An inpainting algorithm that utilizes the prior of a random ConvNet. This method does not require extra training data. \item[-] {\textbf{Deep fill}}~\cite{yu2018generative}: A state-of-the-art image inpainting algorithm. It requires extra training data; we use the model pretrained on ImageNet in our experiments. \end{itemize} We use the network $(4 \mathbf D \oplus 0 \mathbf S)$ where the number of channels is 64. We first prepare a rectangular mask of size $(60, 60)$ near the center of an image; we then obtain the corrupted texture by applying the mask to a raw texture, \emph{i.e.}, all pixels within the masked area are set to zero. The border width is set to 4 pixels. All inpainting methods have access to the mask and the corrupted texture, but not to the raw textures.
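The corruption step is straightforward to reproduce. The following numpy sketch illustrates it under the settings above; the function name and the $(H, W, C)$ array layout are illustrative assumptions, not part of our released code:

\begin{verbatim}
import numpy as np

def corrupt_texture(img, mask_h=60, mask_w=60):
    # Zero out a rectangle near the image center; return the
    # corrupted texture and the binary mask (sketch only).
    H, W = img.shape[0], img.shape[1]
    top, left = (H - mask_h) // 2, (W - mask_w) // 2
    mask = np.ones((H, W), dtype=img.dtype)
    mask[top:top + mask_h, left:left + mask_w] = 0
    corrupted = img * (mask if img.ndim == 2 else mask[..., None])
    return corrupted, mask
\end{verbatim}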
Fig.~\ref{image_inpainting} presents the qualitative comparison. In general, although the baseline methods can handle textures with non-local structures relatively well (the 1-st texture), they cannot handle random elements in textures (from the 2-nd to the 5-th textures). Most results of the baseline methods are blurry, and the results of Deep fill sometimes show obvious color artifacts (the 1-st and 5-th textures). Our method clearly outperforms the baselines, as it inpaints all corrupted exemplars with convincing textural content and produces neither blur nor color artifacts. Tab.~\ref{image_inpainting_quantitative} presents the quantitative comparison. We calculate the MS-SSIM score between the inpainted textures and the corresponding raw textures (not shown). A higher score indicates a better inpainting result. Our method outperforms the baseline methods in most cases. For dynamic texture inpainting, we prepare a mask of size $(25, 25)$ and apply this mask to each frame of the dynamic textures. The border width is set to 2 pixels. We use the network $(4 \mathbf D \oplus 0 \mathbf S)$ where the number of channels is reduced to 32. The template is assigned by the user because the grid search may cause memory overhead for GPUs. For sound texture inpainting, the mask covers the interval from the 20000-th to the 30000-th data point. The border width is set to 1000 data points. We use the same network settings as in the sound texture synthesis experiment. Fig.~\ref{dynamic_inpainting} and Fig.~\ref{Sound_inpainting} present the results of dynamic texture and sound texture inpainting using our method. As in the image inpainting case, our method fills the corrupted region with convincing textural content, and the overall inpainted textures are natural and clear. Notably, our proposed method is the first neural algorithm for sound texture inpainting. \section{Conclusion} \label{conclusion} In this paper, we present cgCNN for exemplar-based texture synthesis. Our model can synthesize high-quality image, dynamic, and sound textures in a unified manner. The experiments demonstrate the effectiveness of our model in texture synthesis, expansion and inpainting. Several issues need further investigation. One limitation of cgCNN is that it cannot synthesize dynamic patterns without spatial stationarity, such as the ones studied in~\cite{xie2017synthesizing}. Extending cgCNN to such dynamic patterns would be an interesting direction for future work. Another limitation is that the current cgCNN cannot learn multiple input textures, \emph{i.e.}, it can learn only one texture at a time. Future work should extend cgCNN to the batch training setting and explore its potential in downstream tasks such as texture feature extraction~\cite{xia2017texture} and classification~\cite{xie2018learning}. \bibliographystyle{IEEEtran}
\section{Introduction} \label{sec:intro} Digital signal processing systems face two parallel challenges. On the one hand, with ubiquitous computing power, memory and communication bandwidth, the pressure is on {\em acquisition} devices, such as analog-to-digital converters and digital cameras, to capture signals at ever increasing sampling rates. To date, signal acquisition has been governed by the Shannon/Nyquist sampling theorem, which states that all the information contained in a signal is preserved if it is uniformly sampled at a rate twice as fast as the bandwidth of its Fourier transform. On the other hand, to counter the resulting deluge of Nyquist-rate samples, DSP systems must utilize efficient {\em compression} schemes that preserve the essential information contained in the signals of interest. Transform compression of a discrete-time signal $x \in \mathbb{R}^N$ involves representing the signal in a suitable basis expansion $x = \Psi \alpha$, with $\Psi$ an $N \times N$ basis matrix, and storing only the $K$ largest basis coefficients. The number of large coefficients in $\alpha$ is known as the {\em sparsity} $K$ of the signal in the basis $\Psi$. For many classes of interesting signals, $K \ll N$, and hence efficient signal compression can be achieved. An intriguing question can thus be asked: can a system simultaneously attain the twin goals of signal acquisition and compression? Surprisingly, the answer in many cases is {\em yes}. This question forms the core of the burgeoning field of Compressive Sensing (CS)~\cite{Donoho04A,Candes04C}. A prototypical CS system works as follows: a signal $x$ of length $N$ is sampled by measuring its inner products with $M \ll N$ vectors; the output of the sampling system is thus given by the vector $y = \Phi x = \Phi \Psi \alpha$, where $\Phi \in \mathbb{R}^{M \times N}$ is a non-invertible matrix. The CS theory states that with high probability, $x$ can be exactly reconstructed from $y$ provided that ($i$) the elements of $\Phi$ are chosen randomly from subgaussian probability distributions, and ($ii$) the number of samples $M$ is $\bigo{K \log(N/K)}$. Further, this recovery can be carried out in polynomial time using efficient greedy or optimization-based methods~\cite{OMP,BPDN}. For some applications, there exist more restrictive signal models than simple sparsity that encode various types of inter-dependencies among the locations of the nonzero signal components. Recent work has led to the development of CS theory and algorithms that are based on {\em structured sparsity} models that are equivalent to a finite union of subspaces~\cite{modelCS,EldarUSS}. By exploiting the dependencies present among the nonzero coefficients, $M$ can be significantly reduced; for certain structured sparsity models, with high probability the number of measurements $M$ required for exact recovery is merely $\bigo{K}$ (without the additional logarithmic dependence on the signal length $N$). Despite the utility of sparsity models, in many real-world sensing applications the assumption of sparsity itself is an oversimplification. For example, an electrophysiological recording of a neuron is often approximated as a series of spikes but is better modeled as a series of more elongated pulses, the pulse shape being characteristic of the particular neuron.
As another example, a high-resolution image of the night sky consists of a field of points (corresponding to the locations of the stars) convolved with the point spread function of the imaging device. Such signals can be modeled as an $S$-sparse {\em spike stream} that has been convolved with an unknown $F$-sparse {\em impulse response}, so that the resulting overall sparsity is $K = SF$. We call such a signal a {\em pulse stream}. For the compressive sensing and recovery of a pulse stream, the number of measurements $M$ would incur a corresponding multiplicative increase by a factor of $F$ when compared to sensing merely the underlying spike stream; this can be prohibitive in some situations. Thus, it is essential to develop a CS framework that can handle not just sparse signals but also more general pulse streams. In this paper, we take some initial steps towards such a CS pulse stream framework. First, we propose a deterministic signal model for pulse streams. We show that our proposed model is equivalent to an {\em infinite union of subspaces}. Second, as our main theoretical contribution, we derive a bound on the number of random linear measurements $M$ required to preserve the essential information contained in such signals. The proof relies on the particular high-dimensional geometry exhibited by the proposed model. Our derivation shows that $M = \bigo{(S + F) \log N}$; i.e., $M$ is proportional to the number of degrees of freedom of the signal $S + F$ but {\em sublinear} in the total sparsity $K = SF$. Third, we develop algorithms to recover signals from our model from $M$ measurements. Under certain additional restrictions on the signals of interest, one of the algorithms provably recovers both the spike stream and the impulse response. We analyze its convergence, computational complexity, and robustness to variations in the pulse shape. Numerical experiments on real and synthetic data sets demonstrate the benefits of the approach. As demonstrated in Figure~\ref{fig:ex1}, we obtain significant gains over conventional CS recovery methods, particularly in terms of reducing the number of measurements required for recovery. \begin{figure}[!t] \centering \begin{tabular}{cc} {\includegraphics[width=0.3\hsize]{ex11_orig.eps}} & {\includegraphics[width=0.3\hsize]{ex11_pulse2.eps}} \\ (a) & (b) \\ {\includegraphics[width=0.3\hsize]{ex11_cosamp.eps}} & {\includegraphics[width=0.3\hsize]{ex11_csdecon.eps}} \\ (c) & (d) \end{tabular} \caption{\small\sl (a) Test signal of length $N = 1024$ obtained by convolving a spike stream with $S = 6$ with an impulse response of length $F = 11$, so that the total signal sparsity $K = SF = 66$. (b) Profile of one pulse. Signal reconstruction from $M=100$ random Gaussian measurements performed using (c) a state-of-the-art CS recovery algorithm (CoSaMP~\cite{cosamp}, MSE = 13.42), and (d) our proposed Algorithm~\ref{alg:csdecon2} (MSE = 0.0028). \label{fig:ex1}} \end{figure} This paper is organized as follows. In Section~\ref{sec:back}, we review the rudiments of standard and structured sparsity-based CS. In Section~\ref{sec:fss}, we propose a deterministic signal model for pulse streams and discuss its geometric properties. In Section~\ref{sec:rip}, we derive bounds on the number of random measurements required to sample signals belonging to our proposed model. In Section~\ref{sec:rec}, we develop an algorithm for stable signal recovery and analyze its convergence and robustness to model mismatch.
Numerical results are presented in Section~\ref{sec:exp}, followed by a concluding discussion in Section~\ref{sec:conc}. \section{Background on Compressive Sensing} \label{sec:back} \subsection{Sparse signal models} A signal $x \in \mathbb{R}^N$ is $K$-{\em sparse} in the orthonormal basis $\Psi \in \mathbb{R}^{N \times N}$ if the corresponding basis representation $\alpha = \Psi^T x$ contains no more than $K$ nonzero elements. Without loss of generality, we assume the sparsity basis $\Psi$ to be the identity matrix for $\mathbb{R}^N$. The locations of the nonzeros of $x$ can additionally be encoded by a binary vector of length $N$ with a 1 indicating a nonzero; this vector $\sigma(x)$ is called the {\em support} of $x$. Denote the set of all $K$-sparse signals in $\mathbb{R}^N$ as $\Sigma_K$. Geometrically, $\Sigma_K$ can be identified as the union of ${N \choose K}$ subspaces of $\mathbb{R}^N$, with each subspace being the linear span of exactly $K$ canonical unit vectors of $\mathbb{R}^N$. For a general $x \in \mathbb{R}^N$, we define its best $K$-sparse approximation $x_K$ as $$ x_K = \arg \min_{u \in \Sigma_K} \| x - u \|_2 . $$ Many signals of interest exhibit more complex dependencies in terms of their nonzero values and locations. For instance, signals that permit only a small number of admissible support configurations can be modeled by a restricted union of subspaces, consisting only of $L_K$ canonical subspaces (so that $L_K$ is typically much smaller than ${N \choose K}$). Thus, if $\Sigma = \{ \sigma_1, \sigma_2, \ldots, \sigma_{L_K} \}$ denotes the restricted set of admissible supports, then a {\em structured sparsity model}~\cite{modelCS} is the set \begin{equation} \mathcal{M}_K := \{ x : \sigma(x) \in \Sigma \} .\label{eq:ssm} \end{equation} \subsection{Signal acquisition via nonadaptive linear measurements} Suppose that instead of collecting all the coefficients of a vector $x \in \mathbb{R}^N$, we merely record $M$ inner products (measurements) of $x$ with $M < N$ pre-selected vectors; this can be represented in terms of a linear transformation $y = \Phi x, \Phi \in \mathbb{R}^{M \times N}$. $\Phi$ is called the {\em sampling matrix}; it is at most rank-$M$ and hence has a nontrivial nullspace. The central result in Compressive Sensing (CS) is that despite the non-invertible nature of $\Phi$, if $x$ is {\em sparse}, then it can be exactly recovered from $y$ if $\Phi$ satisfies a condition known as the restricted isometry property (RIP): \begin{DEFI}~\cite{CandesCS} An $M\times N$ matrix $\Phi$ has the $K$-RIP with constant $\delta_K$ if, for all $x \in \Sigma_K$, \begin{equation} (1-\delta_K) \|x\|_2^2 \le \|\Phi x\|_2^2 \le (1+\delta_K) \|x\|_2^2. \label{eq:rip} \end{equation} \end{DEFI} A matrix $\Phi$ with the $K$-RIP essentially ensures a {\em stable embedding} of the set of {\em all} $K$-sparse signals $\Sigma_K$ into a subspace of dimension $M$. The RIP requires $\Phi$ to leave the norm of every sparse signal approximately invariant; also, $\Phi$ must necessarily not contain any sparse vectors in its nullspace. At first glance, it is unclear if a matrix $\Phi$ that satisfies the RIP should even exist if $M < N$; indeed, deterministic design of a sampling matrix having the RIP is an NP-complete problem. Nevertheless, it has been shown~\cite{CandesCS} that provided $M \ge \bigo{K \log(N/K)}$, a matrix $\Phi$ whose elements are i.i.d. samples from a random subgaussian distribution possesses the RIP with high probability.
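As a quick empirical illustration of this fact (not part of the original analysis), the following numpy sketch draws an i.i.d. Gaussian $\Phi$, scaled so that $\mathbb{E}\|\Phi x\|_2^2 = \|x\|_2^2$, and checks how well it preserves the norms of random $K$-sparse vectors; the specific dimensions are arbitrary illustrative choices:

\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)
N, K, M = 1024, 20, 200     # signal length, sparsity, measurements

# i.i.d. Gaussian sampling matrix with entry variance 1/M.
Phi = rng.standard_normal((M, N)) / np.sqrt(M)

for _ in range(5):
    x = np.zeros(N)
    x[rng.choice(N, size=K, replace=False)] = rng.standard_normal(K)
    # Ratios near 1 indicate approximate norm preservation (cf. the RIP).
    print(np.linalg.norm(Phi @ x) / np.linalg.norm(x))
\end{verbatim}

For the dimensions above, the printed ratios cluster near one.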
Thus, $M$ can be linear in the sparsity $K$ of the signal and {\em only logarithmic} in the signal length $N$. An analogous isometry condition holds for structured sparsity models containing $L_K$ canonical subspaces~\cite{samplingunion,modelCS,EldarUSS}. This is known as the {\em model-based RIP} and is defined thus: $\Phi$ satisfies the {\em $\mathcal{M}_K$-RIP} if (\ref{eq:rip}) holds for all $x \in \mathcal{M}_K$. It can be shown~\cite{samplingunion} that the number of measurements $M$ necessary for a subgaussian sampling matrix to have the $\mathcal{M}_K$-RIP with constant $\delta$ and with probability $1-e^{-t}$ is bounded as \begin{equation} M \ge \frac{c}{\delta^2}\left(\ln(2 L_K) + K \ln \frac{12}{\delta}+t \right). \label{eq:blum} \end{equation} We can make two inferences from (\ref{eq:blum}). First, the number of measurements $M$ is logarithmic in the {\em number} of subspaces in the model; thus, signals belonging to a more concise model can be sampled using fewer random linear measurements. Second, $M$ is {\em at least linear} in the sparsity $K$ of the measured signal. \subsection{Recovery methods} Given measurements $y = \Phi x$, CS recovery methods aim to find the ``true'' sparse signal $x$ that generated $y$. One possible method is to seek the sparsest $x$ that generates the measurements $y$, i.e., \begin{equation} \widehat x = \arg \min_{x'} \| x' \|_0~~\textrm{subject to}~~y = \Phi x' , \label{eq:l0} \end{equation} where the $\ell_0$ ``norm'' of a vector $x'$ denotes the number of nonzero entries in $x'$. This method can be used to obtain the true solution $x$ provided $M \geq 2K$. However, minimizing the $\ell_0$ norm can be shown to be NP-complete and is not stable in the presence of noise in the measurements~\cite{CandesCS}. If the sampling matrix $\Phi$ possesses the RIP, then tractable algorithms for CS recovery can be developed. These broadly follow two different approaches. The first approach entails solving a convex relaxation of (\ref{eq:l0}), e.g., \begin{equation} \widehat x = \arg \min_{x'} \| x' \|_1~~\textrm{subject to}~~y = \Phi x' , \label{eq:l1} \end{equation} which corresponds to a linear program and hence can be solved in polynomial time. A common variant of this formulation includes accounting for noise of bounded magnitude in the measurements~\cite{BPDN}. The second approach entails an iterative, greedy selection of the support $\sigma(x)$ of the true solution $x$. This approach is employed by several algorithms such as orthogonal matching pursuit (OMP)~\cite{OMP}, compressive sampling matching pursuit (CoSaMP)~\cite{cosamp}, and iterative hard thresholding~\cite{iht}. Both kinds of approaches provide powerful stability guarantees in the presence of noise while remaining computationally efficient. Given noisy measurements of any signal $x \in \mathbb{R}^N$ so that $y = \Phi x + n$, if $\Phi$ possesses the RIP, then the signal estimate $\widehat x$ obtained by these algorithms has bounded error: \begin{equation} \| x - \widehat{\x} \|_2 \leq C_1 \|x-x_K\|_2 + \frac{C_2}{\sqrt{K}} \|x-x_K\|_1 + C_3 \|n\|_2, \label{eq:recbound} \end{equation} where $x_K$ is the best $K$-sparse approximation to $x$ and $C_1, C_2, C_3$ are constants. Furthermore, with a simple modification, algorithms like CoSaMP and iterative hard thresholding can be used to reconstruct signals belonging to any structured sparsity model~\cite{modelCS}.
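To make the greedy approach concrete, a minimal numpy rendition of OMP is sketched below; it is a textbook version for illustration only, not the implementations benchmarked in our experiments:

\begin{verbatim}
import numpy as np

def omp(Phi, y, K):
    # Greedily select K columns of Phi, refitting the coefficients
    # on the selected support by least squares at every step.
    M, N = Phi.shape
    support, residual = [], y.copy()
    for _ in range(K):
        j = int(np.argmax(np.abs(Phi.T @ residual)))
        if j not in support:
            support.append(j)
        coef, *_ = np.linalg.lstsq(Phi[:, support], y, rcond=None)
        residual = y - Phi[:, support] @ coef
    x_hat = np.zeros(N)
    x_hat[support] = coef
    return x_hat
\end{verbatim}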
To summarize, at the core of CS lie three key concepts: a signal model exhibiting a particular type of low-dimensional geometry in high-dimensional space, a low-rank linear mapping that provides a stable embedding of the signal model into a lower dimensional space, and algorithms that perform stable, efficient inversion of this mapping. \section{Signal Models for Pulse Streams} \label{sec:fss} Our objective is to extend the CS theory and algorithms to pulse stream signals. The conventional sparse signal model $\Sigma_K$ does not take into account the dependencies between the values and locations of the nonzeros in such signals. Indeed, these dependencies cannot be precisely captured by any structured sparsity model $\mathcal{M}_K$ that merely comprises a reduced subset of the subspaces in $\Sigma_K$. This necessitates richer models that capture the {\em convolutional} structure present in the nonzero coefficients of pulse streams. \subsection{General model} Consider the following deterministic model for signals that can be modeled as the convolution of an $S$-sparse spike stream $x \in \mathbb{R}^N$ with an $F$-sparse impulse response $h \in \mathbb{R}^N$. \begin{DEFI} Let $\mathcal{M}_S \subset \mathbb{R}^N$ be a union of $S$-dimensional canonical subspaces, as defined in (\ref{eq:ssm}). Similarly, let $\mathcal{M}_F \subset \mathbb{R}^N$ be a union of $F$-dimensional canonical subspaces. Consider the set \begin{equation} \mathcal{M}^z_{S,F} := \{ z \in \mathbb{R}^N : z = x \ast h,~\textrm{such that}~x \in \mathcal{M}_S~\textrm{and}~h \in \mathcal{M}_F\} , \label{eq:fss} \end{equation} where $\ast$ denotes the circular convolution operator. Then, $\mathcal{M}^z_{S,F}$ is called a {\em pulse stream model}. \label{def:psm} \end{DEFI} We make two immediate observations: \subsubsection{Commutativity} Owing to the commutative property of the convolution operator, an element $z$ in $\mathcal{M}^z_{S,F}$ can be represented in multiple ways: \begin{equation} z=x \ast h = h \ast x = Hx = Xh , \label{eq:filt} \end{equation} where $H$ (respectively, $X$) is a square circulant matrix with its columns comprising circularly shifted versions of the vector $h$ (respectively, $x$). Therefore, Definition~\ref{def:psm} remains unchanged if the roles of $x$ and $h$ are reversed. We exploit this property during signal recovery from CS measurements in Section~\ref{sec:rec}. \subsubsection{Geometry} It is useful to adopt the following geometric point of view: for a fixed $h \in \mathcal{M}_F$, the set $\{h \ast x: x \in \mathcal{M}_S\}$ forms a finite union of $S$-dimensional subspaces, owing to the fact that it is generated by the action of $h$ on $L_S$ canonical subspaces. Denote this set by $h(\mathcal{M}_S)$. Then, the pulse stream model in (\ref{eq:fss}) can be written as $$ \mathcal{M}^z_{S,F} = \bigcup_{h \in \mathcal{M}_F} h(\mathcal{M}_S) . $$ Thus, our signal model can be interpreted as an {\em infinite union of subspaces}.\footnote{A general theory for sampling signals from infinite unions of subspaces has been introduced in~\cite{dosamplingunion}.} Note that (\ref{eq:blum}) cannot be applied in this case since it only considers finite unions of subspaces. However, let $K = SF$ denote the maximum sparsity of the signals in Definition~\ref{def:psm}. Then, it is clear that the set $\mathcal{M}^z_{S,F}$ is a very small subset of $\Sigma_K$, the set of all $SF$-sparse signals. We exploit this property while proving our main sampling results in Section~\ref{sec:rip}.
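For concreteness, the following numpy sketch generates a random pulse stream and verifies the identities in (\ref{eq:filt}); the dimensions are arbitrary and the circulant constructor is our own helper:

\begin{verbatim}
import numpy as np

rng = np.random.default_rng(1)
N, S, F = 256, 5, 8

x = np.zeros(N)                               # S-sparse spike stream
x[rng.choice(N, size=S, replace=False)] = rng.standard_normal(S)
h = np.zeros(N)                               # F-sparse impulse response
h[:F] = rng.standard_normal(F)

def circulant(v):
    # Square matrix whose k-th column is v circularly shifted by k.
    return np.column_stack([np.roll(v, k) for k in range(len(v))])

H, X = circulant(h), circulant(x)
z = np.fft.irfft(np.fft.rfft(x) * np.fft.rfft(h), n=N)  # x (*) h

# Commutativity: z = x * h = H x = X h.
assert np.allclose(z, H @ x) and np.allclose(z, X @ h)
\end{verbatim}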
Note that the exact definition of the convolution operator changes depending on the domain of the signals of interest. For one-dimensional (1D) time domain signals of length $N$, the square matrix $H$ is formed by all $N$ circular shifts of the vector $h$; for 2D images of size $N$ pixels, $H$ is formed by all 2D circular shifts of $h$, and so forth. \subsection{Special case: Disjoint pulses} The model proposed in Definition~\ref{def:psm} is general and applicable even to signals in which successive pulses overlap with each other. In Section~\ref{sec:rip} we develop a lower bound on the number of samples required to preserve the essential information contained in an arbitrary pulse stream. However, feasible recovery of such general pulse streams from CS measurements is rather difficult; we examine this in detail in Section~\ref{sec:rec}. Therefore, we will also consider a more restrictive model where the pulses are assumed not to overlap. For concreteness, consider 1D time domain signals as specified by (\ref{eq:filt}). Note that $H$ and $x$ need not be unique for a given $z$; any ordered pair $(\alpha H, x / \alpha)$ satisfies (\ref{eq:filt}), and so does $(H',x')$, where $H'$ is generated by a circularly shifted version of $h$ by a time delay $+\tau$ and $x'$ is a circularly shifted version of $x$ by $-\tau$. To eliminate these ambiguities, we make the following two assumptions: \begin{enumerate} \item the impulse response $h$ is {\em concentrated}, i.e., all the nonzero coefficients of $h$ are contiguously located in its first $F$ indices. Thus, the structured sparsity model $\mathcal{M}_F$ for the vector $h$ consists of the lone subspace spanned by the first $F$ canonical unit vectors. \item the spikes are sufficiently separated in time. In particular, any two consecutive spikes in the vector $x$ are separated at least by $\Delta$ locations, where $\Delta \geq F$. A structured sparsity model for such time-domain signals with sufficiently separated nonzeros has been introduced in~\cite{spikemodel}. \end{enumerate} The notion of disjoint pulses can be immediately generalized to signals defined over domains of arbitrary dimension. Consider $S$-sparse spike streams $x$ defined over a domain of dimension $n$. Suppose that at most one spike in $x$ can occur in a hypercube in $\mathbb{R}^n$ with side $\Delta$. This defines a special structured sparsity model for the spike streams of interest; denote this model as $\mathcal{M}_S^\Delta$. Further, let the $F$ nonzero coefficients in $h$ be concentrated within a hypercube centered at the domain origin whose side length is no greater than $\Delta$. Then, a deterministic model for sums of non-overlapping pulses of arbitrary dimension can be proposed as follows. \begin{DEFI} Let $\mathcal{M}^\Delta_S$ be the structured sparsity model for spike streams as defined above. Let $\mathcal{M}_F$ be the subspace of concentrated impulse responses of sparsity $F$. Define the set \begin{equation} \mathcal{M}(S,F, \Delta) = \{ z \in \mathbb{R}^N : z = x \ast h,~\text{such that}~x \in \mathcal{M}^\Delta_S~\text{and}~h \in \mathcal{M}_F\} . \end{equation} Then, $\mathcal{M}(S,F,\Delta)$ is called the {\em disjoint pulse stream model}. \end{DEFI} This model eliminates possible ambiguities that arise due to the shift-invariant nature of convolution; i.e., the locations of the nonzero spikes that generate a disjoint pulse stream are {\em uniquely} defined.
This property proves to be essential in developing and analyzing a feasible method for signal recovery (Section~\ref{sec:rec}). See Figure~\ref{fig:ex1}(a) for an example stream of disjoint pulses in 1D. \section{Sampling Theorems for Pulse Streams} \label{sec:rip} Pulse streams can be modeled as an infinite union of low-dimensional subspaces. The next ingredient in the development of a CS framework for such signals is a bound on the number of linear samples required to preserve the essential information of this signal set. \subsection{General pulse streams} We derive a sampling theorem for signals belonging to the model $\mathcal{M}^z_{S,F}$ proposed in Definition~\ref{def:psm}. Suppose that $K=SF$. As mentioned above, $\mathcal{M}^z_{S,F}$ is a subset of the set of all $K$-sparse signals $\Sigma_{K}$. On the other hand, only a small fraction of all $K$-sparse signals can be written as the convolution of an $S$-sparse spike stream with an $F$-sparse impulse response. Thus, intuition suggests that we should be able to compressively sample signals from this set using fewer random linear measurements than required for the set of all $K$-sparse signals. The following theorem makes this precise. \begin{THEO} Suppose $\mathcal{M}^z_{S,F}$ is the pulse stream model from Definition~\ref{def:psm}. Let $t > 0$. Choose an $M \times N$ i.i.d. subgaussian matrix $\Phi$ with \begin{equation} M \ge \bigo{\frac{1}{\delta} \left( (S + F)\ln\left(\frac{1}{\delta}\right) + \log(L_S L_F) + t \right) }. \label{eq:sfrip} \end{equation} Then, $\Phi$ satisfies the following property with probability at least $1 - e^{-t}$: for every pair $z_1, z_2 \in \mathcal{M}^z_{S,F}$, \begin{equation} (1-\delta) \|z_1 - z_2\|_2^2 \le \|\Phi z_1 - \Phi z_2\|_2^2 \le (1+\delta) \|z_1 - z_2\|_2^2. \label{eq:zrip} \end{equation} \label{thm:zrip} \end{THEO} The proof of this theorem is presented in Appendix~\ref{app:jl}. To interpret the theorem, note that, by definition, $\mathcal{M}_S$ is a subset of the set of all $S$-dimensional canonical subspaces. In particular, \begin{equation} L_S \leq {N \choose S} \approx \left( \frac{eN}{S} \right)^S . \end{equation} Similarly, $L_F \leq \left( \frac{eN}{F} \right)^F .$ Therefore, the logarithmic term in the expression for $M$ in (\ref{eq:sfrip}) scales as: \begin{equation} \log(L_S L_F) \leq S + S \log(N/S) + F + F \log(N/F) \leq 2(S + F) \log N . \end{equation} Thus, (\ref{eq:sfrip}) indicates that the number of measurements $M$ required for the sampling of signals in $\mathcal{M}^z_{S,F}$ is proportional to $(S + F)$. Therefore, $M$ is {\em sublinear} in the total sparsity $K = SF$ of the signals. In contrast, conventional structured sparsity models would require at least $2K = 2SF$ linear measurements to ensure a stable embedding of the signal set~\cite{samplingunion}. In addition, the number of degrees of freedom of each signal can be considered to be $\bigo{S + F}$, corresponding to the values and locations of the nonzero coefficients of the spike stream and the impulse response. Therefore, the bound in Theorem~\ref{thm:zrip} is essentially optimal for the signal class $\mathcal{M}^z_{S,F}$. \subsection{Special case: Disjoint pulse streams} Theorem~\ref{thm:zrip} is valid for signals belonging to the general model $\mathcal{M}_{S,F}^z$. In the case of disjoint pulse streams, we can derive a sharper measurement bound. By definition, the $F$ nonzero coefficients of $h$ are concentrated in a hypercube around the domain origin.
Therefore, $h$ lies in a lone subspace spanned by $F$ basis vectors of $\mathbb{R}^N$, and hence $L_F = 1$. Further, a simple modification of Theorem 1 of~\cite{spikemodel} states that the number of subspaces in the structured sparsity model $\mathcal{M}_S^\Delta$ is given by \begin{equation} L_S = \binom{N - S\Delta + S-1}{S-1} . \label{eq:suppcount} \end{equation} Thus, for the disjoint pulse stream model $\mathcal{M}(S,F,\Delta)$, we obtain the following easy corollary to Theorem~\ref{thm:zrip}. \begin{COROLLARY} If $t > 0$ and \begin{equation} M \ge \bigo{\frac{1}{\delta} \left( (S + F)\ln\left(\frac{1}{\delta}\right) + S \log(N/S - \Delta) + t \right) } \label{eq:sfdrip} \end{equation} then an $M \times N$ i.i.d.\ Gaussian matrix $\Phi$ will satisfy (\ref{eq:zrip}) with probability at least $1 - e^{-t}$ for any pair of signals $z_1, z_2$ belonging to the $\mathcal{M}(S,F,\Delta)$ model. \label{corr:deltam} \end{COROLLARY} Note that the parameter $\Delta$ can be at most $N/S$, since $S$ spikes must be packed into $N$ coefficient locations with at least $\Delta$ locations separating any pair of spikes. A higher value of $\Delta$ implies that the model $\mathcal{M}^\Delta_S$ admits a smaller number of configurations; thus, (\ref{eq:sfdrip}) implies that fewer measurements are needed to sample pulse streams in which the pulses are widely separated. \section{Recovery of Pulse Streams} \label{sec:rec} The final ingredient in our extended CS framework for pulse streams consists of new algorithms for the stable recovery of the signals of interest from compressive measurements. This problem can be stated as follows. Suppose $z \in \mathcal{M}_{S,F}^z$. If we are given the noisy measurements $$ y = \Phi z + n = \Phi H x + n = \Phi X h + n , $$ then we aim to reconstruct $z$ from $y$. The main challenge stems from the fact that {\em both} $x$ (respectively, $X$) and $h$ (respectively, $H$) are unknown and have to be simultaneously inferred. This problem is similar to performing sparse approximation with {\em incomplete} knowledge of the dictionary in which the target vector (either $x$ or $h$) is sparse. This problem has received some interest in the literature~\cite{bestbasis,herobd,cspt}; the common approach has been to first assume that a training set of vectors $\{x_i\}$ exists for a fixed impulse response $h$, then infer the coefficients of $h$ using a sparse learning algorithm (such as LASSO~\cite{lasso} or basis pursuit~\cite{BPDN}), and then solve for the coefficients $\{x_i\}$. In the absence of training data, we must infer both the spike locations and the impulse response coefficients. Therefore, our task is also similar to {\em blind deconvolution}~\cite{bdecon}; the main differences are that we are only given access to the random linear measurements $y$ as opposed to the Nyquist-rate samples $z$, and that our primary aim is to reconstruct $z$ as faithfully as possible as opposed to merely reconstructing $x$. Our general approach will be to fix an estimate of $h$, obtain the ``best possible'' estimate of $x$, update our estimate of $h$, and iterate. This is commonly known as {\em alternating minimization} (AM) and has been shown to be suitable for blind deconvolution settings~\cite{tchan}. As demonstrated below in the proof of Theorem~\ref{thm:alg1}, we require that the best possible estimates of the spike stream $x$ and the impulse response $h$ at each iteration be unique.
For this reason, we will assume that our target signal $z$ belongs to the disjoint pulse stream model $\mathcal{M}(S,F,\Delta)$. \subsection{Alternating minimization with exhaustive search} \label{subsec:am} Consider $z \in \mathcal{M}(S,F,\Delta)$, so that $z = x \ast h$. This implies that the spikes in $x$ are separated by a minimum separation distance $\Delta$ and that the impulse response $h$ is concentrated. Suppose first that we are given noiseless CS measurements $y = \Phi z$. We fix a candidate support configuration $\sigma$ for the spike stream (so that $\sigma$ contains $S$ nonzeros). Then, we form the circulant matrix $\widehat H$ from all possible shifts of the current estimate of the impulse response $\widehat{h}$ (denote this operation as $\widehat H=\mathbb{C}(\widehat{h})$). Further, we calculate the dictionary $\Phi \widehat H$ for the spike stream $x$, and select the submatrix formed by the columns indexed by the assumed spike locations $\sigma$ (denote this submatrix as $(\Phi \widehat H)_\sigma$). This transforms our problem into an overdetermined system, which can be solved using least squares. In summary, we use a simple matrix pseudoinverse to obtain an estimate for $\widehat x$: $$ \widehat x = (\Phi \widehat{H})_{\sigma}\pinv y . $$ This gives us an estimate of the spike coefficients $\widehat{\x}$ for the assumed support configuration $\sigma$. We now exploit the commutativity of the convolution operator $\ast$. We form the circulant matrix $\widehat X$, form the dictionary $\Phi \widehat X$ for the impulse response, and select the submatrix $(\Phi \widehat{X})_f$ formed by its first $F$ columns. Then, we solve a least-squares problem to obtain an estimate $\widehat{h}$ for the impulse response coefficients: $$ \widehat h = (\Phi \widehat{X})_f\pinv y . $$ Then, we form our signal estimate $\widehat z = \widehat{\x} \ast \widehat{h}$. The above two-step process is iterated until a suitable halting criterion is met (e.g., convergence in norm of the estimated signal $\widehat z$). This process is akin to the Richardson-Lucy algorithm for blind deconvolution~\cite{starck}. The overall reconstruction problem can be solved by repeating this process for every support configuration $\sigma$ belonging to the structured sparsity model $\mathcal{M}_S^\Delta$ and picking the solution with the smallest norm of the residual $r = y - \Phi \widehat z$. The procedure is detailed in pseudocode form in Algorithm~\ref{alg:csdecon1}. \begin{algorithm*}[!t] \caption{Alternating minimization with exhaustive search} \label{alg:csdecon1} \begin{tabbing} Inputs: Sampling matrix $\Phi$, measurements $y = \Phi z$, model parameters $\Delta$, $S$, $F$, threshold $\epsilon$ \\ Output: $\widehat z \in \mathcal{M}(S,F,\Delta)$ such that $y - \Phi \widehat z$ is small \\ $\widehat{\x}=0$, $\widehat h = (\mathbf{1}^T_F, 0, \ldots, 0) / \sqrt{F}$; $i = 0$ \hspace{18mm} \{initialize\} \\ {\bf for} \= $\sigma \in \mathcal{M}_S^\Delta$ {\bf do} \hspace{55mm}\= \\ \> 1. $\widehat H = \mathbb{C}(\widehat{h}), \Phi_h = (\Phi \widehat H)_\sigma$ \> \{form dictionary for spike stream\} \\ \> 2. $\widehat{\x} \leftarrow \Phi_h\pinv y$ \> \{update spike stream estimate\} \\ \> 3. $\widehat X = \mathbb{C}(\widehat{\x}), \Phi_x = (\Phi \widehat X)_f$ \> \{form dictionary for impulse response\} \\ \> 4. $\widehat{h} \leftarrow \Phi_x\pinv y$ \> \{update impulse response estimate\} \\ \> 5.
$\widehat{\z} \leftarrow \widehat{\x} \ast \widehat{h}$ \> \{form signal estimate\} \\ {\bf if} \= $\|y- \Phi \widehat{\z}\|_2 < \epsilon$ \> \{check for energy in residual\} \\ \> {\bf return} $\widehat{\z}$\\ {\bf end if} \\ {\bf end for} \end{tabbing} \vspace*{-3mm} \end{algorithm*} Thus, Algorithm~\ref{alg:csdecon1} consists of performing alternating minimization for a given estimate of the support of the underlying spike stream $x$, and exhaustively searching for the best possible support. Under certain conditions on the sampling matrix $\Phi$, we can study the convergence of Algorithm~\ref{alg:csdecon1} to the correct answer $z$, as encapsulated in the following theorem. \begin{THEO} Let $z \in \mathcal{M}(S,F,\Delta)$ and suppose that $\Phi$ satisfies (\ref{eq:zrip}) with constant $\delta$ for signals belonging to $\mathcal{M}(S,F,\Delta)$. Suppose we observe $y = \Phi z$ and apply Algorithm~\ref{alg:csdecon1} to reconstruct the signal. Let $\widehat z_i$ be an intermediate estimate of $z$ at iteration $i$ of Algorithm~\ref{alg:csdecon1}. Then: \begin{enumerate} \item The norm of the residual $\|y - \Phi \widehat z_i \|_2$ monotonically decreases with the iteration count $i$. \item If at any iteration $i$ $$ \|y - \Phi \widehat z_i \|_2 \leq \epsilon, $$ then we are guaranteed that $$ \|z - \widehat z_i\|_2 \leq c \epsilon, $$ where $c$ depends only on $\delta$. \end{enumerate} \label{thm:alg1} \end{THEO} The proof of this theorem is provided in Appendix~\ref{app:alg}. The first part of the theorem implies that for any given support configuration $\sigma$, Steps 1 through 4 in Algorithm~\ref{alg:csdecon1} are guaranteed to converge to a generalized fixed point~\cite{amtropp}. The second part of the theorem provides a condition on the detection of the true support configuration $\sigma$ in the following weak sense: if the energy of the residual of the signal estimate is small, then the signal has been accurately reconstructed.
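For intuition, Steps 1--4 of Algorithm~\ref{alg:csdecon1} for a single candidate support $\sigma$ admit a compact numpy sketch. The fixed iteration count below replaces the halting criterion of the pseudocode and, like the helper names, is an illustrative assumption:

\begin{verbatim}
import numpy as np

def circulant(v):
    return np.column_stack([np.roll(v, k) for k in range(len(v))])

def am_fixed_support(Phi, y, sigma, F, n_iter=50):
    # Alternating least squares for one candidate spike support sigma.
    N = Phi.shape[1]
    h = np.zeros(N)
    h[:F] = 1.0 / np.sqrt(F)                 # initialization of h
    x = np.zeros(N)
    for _ in range(n_iter):
        Phi_h = Phi @ circulant(h)           # dictionary for spike stream
        x_sig, *_ = np.linalg.lstsq(Phi_h[:, sigma], y, rcond=None)
        x = np.zeros(N); x[sigma] = x_sig    # update spike stream
        Phi_x = Phi @ circulant(x)           # dictionary for pulse shape
        h_f, *_ = np.linalg.lstsq(Phi_x[:, :F], y, rcond=None)
        h = np.zeros(N); h[:F] = h_f         # update impulse response
    z_hat = np.fft.irfft(np.fft.rfft(x) * np.fft.rfft(h), n=N)
    return z_hat, x, h
\end{verbatim}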
\subsection{Model mismatch} \label{subsec:mm} In practical situations, we would expect to have minor variations in the shapes of the $S$ pulses in the signal $z$. In this case, $z$ can no longer be expressed as $H x$ where $H$ is a circulant matrix. Let $\{h_1, h_2, \ldots, h_S\}$ be length-$F$ vectors corresponding to each of the $S$ pulses in the signal, and let the length-$S$ spike stream $\widetilde{x} = (\alpha_1, \alpha_2, \ldots, \alpha_S)$. Further, let $\mathbb{S}_i$ be the circular shift operator that maps the $i^{\textrm{th}}$ pulse shape $h_i$ into its corresponding vector in $\mathbb{R}^N$. Then, we have \begin{equation} z = \sum_{i = 1}^S \alpha_i \mathbb{S}_i(h_i), \label{eq:varpuls} \end{equation} or equivalently, $$ z = \widetilde{H} \widetilde{x}, $$ where $\widetilde{H} = [\mathbb{S}_1(h_1), \ldots, \mathbb{S}_S (h_S)]$ is an $N \times S$ matrix. Assuming that the spikes in $x$ are separated by at least $\Delta$ locations, the matrix $\widetilde{H}$ is {\em quasi-Toeplitz}~\cite{cri}, i.e., the columns of $\widetilde{H}$ are circular shifts of one another with no more than one nonzero entry in every row. An attractive property of quasi-Toeplitz matrices is that there exist analytical expressions for their pseudo-inverses. Suppose the measurement matrix $\Phi$ equals the identity, i.e., we are given Nyquist-rate samples of $z$. Then the matrices $\Phi_h$ and $\Phi_x$ formed in Steps 1 and 3 of Algorithm~\ref{alg:csdecon1} are also quasi-Toeplitz, and hence $\Phi_h\pinv$ and $\Phi_x\pinv$ can be computed in closed form. Thus, given an estimate of the pulse shape $\widehat{h}_0$, we can derive closed-form expressions for the next impulse response estimate. Additionally, we can obtain an intermediate estimate for the spike stream $\widetilde{x}$. Suppose the innermost loop of Algorithm~\ref{alg:csdecon1} converges to a fixed-point estimate $\widehat{h}$. Since the least-squares equations are homogeneous, we may assume that $\| \widehat{h} \|_2 = 1$ without loss of generality. We dub $\widehat{h}$ the {\em anchor pulse} for the set of pulse shapes $\{h_1, h_2, \ldots, h_S\}$. The following theorem provides an expression relating the anchor pulse to the component pulse shapes. \begin{THEO} Consider $z$ as defined in (\ref{eq:varpuls}). Let $\widehat{h}$ be the anchor pulse for the set of pulse shapes $\{h_1, h_2, \ldots, h_S\}$. Define $c_i = \langle h_i, \widehat{h} \rangle$ for $i = 1,\ldots,S$. Then, we have that \begin{equation} \widehat{h} = \frac{\sum_{i=1}^S c_i \alpha_i^2 h_i}{\sum_{i=1}^S c_i^2 \alpha_i^2} . \label{eq:ap} \end{equation} \label{thm:anchor} \end{THEO} The proof of this theorem is provided in Appendix~\ref{app:anchor}. Equation~(\ref{eq:ap}) implies that the anchor pulse $\widehat{h}$ is a weighted linear combination of the component pulses $h_i,~i = 1,\ldots,S$, with the weights defined by the corresponding spike coefficients $\alpha_i$ and the inner products $c_i$. The anchor pulse remains unchanged if the spike coefficient vector $\widetilde{x}$ is multiplied by any constant $C$. Therefore, the anchor pulse can be viewed as a {\em scale-invariant average} of the component pulse shapes. Theorem~\ref{thm:anchor} applies to Nyquist-rate samples of the signal $z$. In the case of low-rate CS measurements $y = \Phi z$, the convergence analysis of Algorithm~\ref{alg:csdecon1} for the general case of $S$ different pulse shapes becomes more delicate. If $\Phi$ possesses the RIP only for $z \in \mathcal{M}(S,F,\Delta)$, then it could be that two different pulse streams $z_1, z_2$ (each with varying shapes across pulses) are mapped by $\Phi$ to the same vector in $\mathbb{R}^M$, i.e., $\Phi z_1 = \Phi z_2$; thus, the unique mapping argument employed in the proof of Theorem~\ref{thm:alg1} cannot be applied in this case. One way to analyze this case is to recognize that by allowing arbitrary pulse shapes $\{h_1, h_2, \ldots, h_S\}$, our space of signals of interest is equivalent to a special structured sparsity model that consists of all $K$-sparse signals whose nonzeros are arranged in $S$ blocks of size $F$, with the starting locations of consecutive blocks separated by at least $\Delta$ locations. As discussed in Section~\ref{sec:back}, stable CS reconstruction for signals from this model requires at least $M = 2SF = 2K$ measurements; thus, Algorithm~\ref{alg:csdecon1} converges in the general case given that $M$ is proportional to $K$. Thus, in the case of arbitrary pulse shapes, the number of measurements required by Algorithm~\ref{alg:csdecon1} is on the same order as the number of measurements required for conventional structured sparsity-based CS recovery. \subsection{Iterative support estimation} \label{subsec:fa} Algorithm~\ref{alg:csdecon1} involves iteratively solving a combinatorial number of estimation problems. This becomes infeasible for even moderate values of $N$.
A simpler method can be proposed as follows: instead of cycling through every possible support configuration $\sigma_i$ for the spike stream $x$, we retain an {\em estimate} of the support configuration, based on the current estimates of the spike stream $\widehat{\x}$ and impulse response $\widehat{h}$, and update this estimate with each iteration. In order to ensure that the support estimate belongs to $\mathcal{M}_S^\Delta$, we leverage a special CS recovery algorithm for signals belonging to $\mathcal{M}_S^\Delta$ that is based on CoSaMP~\cite{cosamp}. We provide an outline of the algorithm here for completeness; see~\cite{spikemodel} for details. At each iteration, given an estimate of the spike coefficients $x$, we need to solve for the best $\mathcal{M}^\Delta_S$-approximation to $x$. Let $x = (x_1, x_2, \ldots,x_N)^T$. Given any binary vector $s = (s_1, s_2, \ldots,s_N)^T$ of length $N$, let: $$ x_{\vert s} := (s_1 x_1, s_2 x_2, \ldots, s_N x_N) , $$ so that $x_{\vert s}$ is the portion of the signal $x$ lying within the support $s$. Our goal is to solve for the choice of support $s$ so that $x_{\vert s}$ belongs to $\mathcal{M}^\Delta_S$ and $\|x - x_{\vert s} \|_2$ is minimized. The following constraints on the support vector $s$ follow from the definition of $\mathcal{M}_S^\Delta$: \begin{eqnarray} s_1 + s_2 + \ldots + s_N&\leq&S , \label{eq:sps}\\ s_j + s_{j+1} + \ldots + s_{j + \Delta-1} &\leq&1,~\textrm{for}~j = 1,\ldots,N, \label{eq:ineq} \end{eqnarray} where the subscripts are computed modulo $N$. The first inequality (\ref{eq:sps}) specifies that the solution contains at most $S$ nonzeros; the other $N$ inequalities (\ref{eq:ineq}) specify that there is at most one spike within any block of $\Delta$ consecutive coefficients in the solution. It can be shown that minimizing $\|x - x_{\vert s} \|_2$ is equivalent to maximizing $c^T s$ where $c = (x_1^2,x_2^2,\ldots,x_N^2)$, i.e., maximizing the portion of the energy of $x$ that lies within $s$. Define $W \in \mathbb{R}^{(N+1) \times N}$ as a binary indicator matrix that captures the left-hand side of the inequality constraints (\ref{eq:sps}) and (\ref{eq:ineq}). Next, define $u = (S,1,1,\ldots,1) \in \mathbb{R}^{N+1}$; this represents the right-hand side of the constraints (\ref{eq:sps}) and (\ref{eq:ineq}). Thus, we can represent (\ref{eq:sps}) and (\ref{eq:ineq}) by the following binary integer program: \begin{equation*} s^*=\arg \max~c^T s ,~\textrm{subject to}~W s \leq u. \end{equation*} Next, we relax the integer constraints on $s$ to obtain a computationally tractable linear program. Denote this linear program by $\mathbb{D}(\cdot)$. In~\cite{spikemodel}, it is shown that the solutions to the integer program and its relaxed version are identical. Thus, we have a computationally feasible method to obtain an estimate of the support of the best $\mathcal{M}^\Delta_S$-approximation to $x$; a numerical sketch of this program is given below. Once an updated support estimate has been obtained, we repeat Steps 2, 3 and 4 in Algorithm~\ref{alg:csdecon1} to solve for the spike stream $x$ and impulse response $h$. This process is iterated until a suitable halting criterion is met (e.g., convergence in norm of the estimated pulse stream $\widehat z$). The overall algorithm can be viewed as an iterative sparse approximation procedure for the $\mathcal{M}_S^\Delta$ model that continually updates its estimate of the sparsifying dictionary. The procedure is detailed in pseudocode form in Algorithm~\ref{alg:csdecon2}.
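A minimal sketch of the relaxed program $\mathbb{D}(\cdot)$ using scipy is given below; the matrix $W$ and vector $u$ follow (\ref{eq:sps})--(\ref{eq:ineq}), while the solver choice and the rounding threshold used to read off the support are our own assumptions:

\begin{verbatim}
import numpy as np
from scipy.optimize import linprog

def best_delta_support(x, S, Delta):
    # Relaxed LP for the best M_S^Delta-approximation support of x.
    N = len(x)
    c = x ** 2                          # energy captured by each location
    W = np.zeros((N + 1, N))
    W[0, :] = 1.0                       # sum(s) <= S
    for j in range(N):                  # each circular Delta-window <= 1
        W[j + 1, (j + np.arange(Delta)) % N] = 1.0
    u = np.concatenate(([S], np.ones(N)))
    # linprog minimizes, so negate c to maximize the captured energy.
    res = linprog(-c, A_ub=W, b_ub=u, bounds=(0.0, 1.0), method="highs")
    return np.flatnonzero(res.x > 0.5)  # integral at the optimum
\end{verbatim}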
\begin{algorithm*}[!t] \caption{Iterative support estimation} \label{alg:csdecon2} \begin{tabbing} Inputs: Sampling matrix $\Phi$, measurements $y = \Phi z + n$, model parameters $\Delta$, $S$, $F$. \\ Output: $\mathcal{M}(S,F,\Delta)$-sparse approximation $\widehat z$ to true signal $z$ \\ Initialize $\widehat{\x}=0$, $\widehat h = (\mathbf{1}^T_F, 0, \ldots, 0)$, $i = 0$ \hspace{29mm} \\ {\bf while} \= halting criterion false {\bf do} \hspace{30mm}\= \\ \> 1. $i \leftarrow i+1$ \\ \> 2. $\widehat z \leftarrow \widehat{\x} \ast \widehat{h}$ \> \{current pulse stream estimate\}\\ \> \{estimate spike locations and amplitudes\} \\ \> 3. $\widehat H = \mathbb{C}(\widehat{h}), \Phi_h = \Phi \widehat H$ \> \{form dictionary for spike stream\} \\ \> 4. $e \leftarrow \Phi_h^T (y - \Phi_h \widehat{\x}) $ \> \{residual\} \\ \> 5. $\omega \leftarrow \sigma(\mathbb{D}(e))$ \> \{obtain model-approximation of residual\} \\ \> 6. $\sigma \leftarrow \omega \cup \sigma(\widehat{\x}_{i-1})$ \> \{merge supports\} \\ \> 7. $x\vert_\sigma \leftarrow (\Phi_h)_\sigma\pinv y$, $x\vert_{\sigma^C} = 0$ \> \{update spike stream estimate\} \\ \> 8. $\widehat{\x} \leftarrow \mathbb{D}(x)$ \> \{prune spike stream estimate\} \\ \> \{estimate impulse response\} \\ \> 9. $\widehat X = \mathbb{C}(\widehat{\x}), \Phi_x = (\Phi \widehat X)_f $ \> \{form dictionary for impulse response\} \\ \> 10. $\widehat{h} \leftarrow \Phi_x\pinv y$ \> \{update impulse response estimate\} \\ {\bf end while} \\ {\bf return} $\widehat{\z} \leftarrow \widehat{\x} \ast \widehat{h}$ \end{tabbing} \vspace*{-2mm} \end{algorithm*} \subsection{Stability and convergence} As with many other algorithms for blind deconvolution, the analysis of Algorithm~\ref{alg:csdecon2} is not straightforward. The dictionaries $\Phi \widehat X$ and $\Phi \widehat H$ are only approximately known at any intermediate iteration, and hence the proof techniques employed for the analysis of CoSaMP do not apply. In principle, given access to a sufficient number of measurements, we may expect similar convergence behavior for Algorithm~\ref{alg:csdecon2} as for Algorithm~\ref{alg:csdecon1}. Empirically, Algorithm~\ref{alg:csdecon2} can be shown to be stable to small amounts of noise in the signal as well as in the CS measurements and to minor variations in the pulse shape. We demonstrate this with the help of various numerical experiments in Section~\ref{sec:exp}. \subsection{Computational complexity} The primary runtime cost of Algorithm~\ref{alg:csdecon2} is incurred in solving the linear program $\mathbb{D}(\cdot)$. For a length-$N$ signal, the computational complexity of solving a linear program is known to be $\bigo{N^{3.5}}$. The total computational cost also scales linearly in the number of measurements $M$ and the number of iterations $T$ of the outer loop executed until convergence; thus, overall the algorithm runs in polynomial time. \section{Numerical experiments} \label{sec:exp} We now present a number of results that validate the utility of our proposed theory and methods. All numerical experiments reported in this section have been performed using Algorithm~\ref{alg:csdecon2} for recovery of disjoint pulse streams. \subsection{Synthetic 1D pulse streams} Figure~\ref{fig:ex1} demonstrates the considerable advantages that Algorithm~\ref{alg:csdecon2} can offer in terms of the number of compressive measurements required for reliable reconstruction.
The test signal was generated by choosing $S = 8$ spikes with random amplitudes and locations and convolving this spike stream with a randomly chosen impulse response of length $F=11$. The overall sparsity of the signal $K = SF = 88$; thus, standard sparsity-based CS algorithms would require at least $2K = 176$ measurements. Our approach (Algorithm~\ref{alg:csdecon2}) returns an accurate estimate of both the spike stream as well as the impulse response using merely $M = 90$ measurements. Figure~\ref{fig:err} displays the averaged results of a Monte Carlo simulation of Algorithm~\ref{alg:csdecon2} over 200 trials. Each trial was conducted by generating a sample signal belonging to $\mathcal{M}(S,F,\Delta)$, computing $M$ linear random Gaussian measurements, reconstructing with different algorithms, and recording the magnitude of the recovery error for different values of $M/K$. It is clear from the figure that Algorithm~\ref{alg:csdecon2} outperforms both conventional CS recovery (CoSaMP~\cite{cosamp}) with target sparsity $K = SF$ as well as block-based reconstruction~\cite{modelCS} with knowledge of the size and number of blocks (respectively $F$ and $S$). In fact, our algorithm performs nearly as well as the ``oracle decoder'' that possesses perfect prior knowledge of the impulse response coefficients and aims to solve only for the spike stream. We show that Algorithm~\ref{alg:csdecon2} is stable to small amounts of noise in the signal and the measurements. In Figure~\ref{fig:ex6}, we generate a length $N =1024$ signal from a disjoint pulse stream model with $S=9$ and $F=11$; add a small amount of Gaussian noise (SNR = 13.25dB) to all its components, compute $M = 150$ noisy linear measurements, and reconstruct using Algorithm~\ref{alg:csdecon2}. The reconstructed signal is clearly a good approximation of the original signal. \begin{figure}[!t] \centering {\includegraphics[width=0.4\hsize]{decon_vs_others2.eps}} \caption{\small \sl Normalized reconstruction MSE vs.\ $M/K$ for different reconstruction algorithms averaged over 200 sample trials. Signal parameters: $N = 1024$, $S = 8$, $F=11$. Algorithm 2 outperforms standard and structured sparsity-based methods, particularly when $M/K$ is small.} \label{fig:err} \end{figure} \begin{figure}[!t] \centering \begin{tabular}{cc} {\includegraphics[width=0.3\hsize]{ex2_orig_edit.eps}}& {\includegraphics[width=0.3\hsize]{ex2_csdecon_edit.eps}} \\ (a) & (b) \end{tabular} \caption{\small\sl (a) Synthetic noisy signal (SNR = 13.25dB). (b) Recovery from $M=150$ random measurements using Algorithm~\ref{alg:csdecon2}. \label{fig:ex6}} \end{figure} \subsection{Neuronal signals} We test Algorithm~\ref{alg:csdecon2} on a real-world neuronal recording. Figure~\ref{fig:realdata}(a) shows the temporal electrochemical spiking potential of a single neuron. The shape of the pulses is characteristic of the neuron and should ideally be constant across different pulses. However, there exist minor fluctuations in the amplitudes, locations and profiles of the pulses. Despite the apparent model mismatch, our algorithm recovers a good approximation to the original signal (Figure~\ref{fig:realdata}(b)) as well as an estimate of the anchor pulse shape (Figure~\ref{fig:realdata}(c)). 
\begin{figure}[t] \centering \begin{tabular}{ccc} {\includegraphics[width=0.30\hsize]{real_orig_edit.eps}}& {\includegraphics[width=0.30\hsize]{real_rec_edit.eps}} & {\includegraphics[width=0.30\hsize]{real_ps_edit.eps}} \\ (a) & (b) & (c) \end{tabular} \caption{\small\sl CS recovery of a real-world neuronal signal. (a)~Original recording. (b)~Recovered signal using $M = 150$ random measurements. (c)~Estimated anchor pulse shape ($F=11$). \label{fig:realdata}} \end{figure} \subsection{Synthetic 2D pulse streams} Theorem~\ref{thm:zrip} and Algorithm~\ref{alg:csdecon2} can easily be extended to higher-dimensional signals. For instance, suppose that the signals of interest are 2D images that can be modeled by a sparse sum of disjoint 2D pulses. We test Algorithm~\ref{alg:csdecon2} on a synthetic image (Figure~\ref{fig:ex2}(a)) of size $N = 64 \times 64 = 4096$ that comprises $S=7$ spikes blurred by an unknown 2D impulse response of size $F = 5 \times 5 = 25$, so that the overall sparsity $K = SF = 175$. We acquire merely $M = 290$ random Gaussian measurements (approximately 7\% of the size of the image $N$) and reconstruct the image using CoSaMP as well as Algorithm~\ref{alg:csdecon2}. We assume that both algorithms possess an oracular knowledge of the number of spikes $S$ as well as the size of the impulse response $F$. Figure~\ref{fig:ex2} displays the results of the reconstruction procedures using CoSaMP and Algorithm~\ref{alg:csdecon2}. It is evident both perceptually and in terms of the MSE values of the reconstructed images that our proposed approach is superior to traditional CS recovery. \begin{figure*}[t] \centering \begin{tabular}{ccc} {\includegraphics[width=0.3\hsize]{origpulses2.eps}}& {\includegraphics[width=0.3\hsize]{cosamppulses2.eps}} & {\includegraphics[width=0.3\hsize]{cspfpulses2.eps}} \\ (a) & (b) & (c) \end{tabular} \caption{\small\sl Example CS recovery of a sum of 2D pulses. (a) Synthetic test image: $N = 4096$, $S = 7$, $F = 25$. Images are recovered from $M=290$ random Gaussian measurements using (b) CoSaMP (MSE = 16.95), and (c) Algorithm~\ref{alg:csdecon2} (MSE = 0.07). \label{fig:ex2}} \end{figure*} \subsection{Astronomical images} Finally, we test Algorithm~\ref{alg:csdecon2} on a real astronomical image. Our test image is an $N = 64 \times 64$ region of a high-resolution image of V838 Monocerotis (a nova-like variable star) captured by the Hubble Space Telescope~\cite{hubble} (highlighted by the green square in Figure~\ref{fig:ex7}(a)). Note the significant variations in the shapes of the three large pulses in the test image (Figure~\ref{fig:ex7}(b)). We measure this image using $M = 330$ random measurements and reconstruct using both CoSaMP and Algorithm~\ref{alg:csdecon2}. For our reconstruction methods, we assume oracular knowledge of the signal parameters; we use $S = 3, F = 120, K = 360$ and $\Delta = 20$. As indicated by Figure~\ref{fig:ex7}, conventional CS does not provide useful results with this reduced set of measurements. In contrast, Algorithm~\ref{alg:csdecon2} gives us excellent estimates for the locations of the pulses. Further, our algorithm also provides a circular impulse response estimate that can be viewed as the anchor pulse of the three original pulses.
\begin{figure*}[t] \centering \begin{tabular}{cc} {\includegraphics[width=0.30\hsize]{bw0405.eps}}& {\includegraphics[width=0.30\hsize]{realstar_orig.eps}} \\ (a) & (b) \\ {\includegraphics[width=0.30\hsize]{realstar_cosamp.eps}} & {\includegraphics[width=0.30\hsize]{realstar_csdecon.eps}} \\ (c) & (d) \end{tabular} \caption{\small\sl (a) Black-and-white image of V838 Monocerotis, a nova-like star, captured by the Hubble Space Telescope on February 8, 2004~\cite{hubble}. (b) The test image is a zoomed-in version of the region highlighted in green (resolution $N = 64 \times 64 = 4096$). Reconstruction of the test image is performed from $M = 330$ random Gaussian measurements using (c) CoSaMP and (d) Algorithm~\ref{alg:csdecon2}. \label{fig:ex7}} \end{figure*} \section{Discussion and Conclusions} \label{sec:conc} In this paper, we have introduced and analyzed a new framework for the compressive sampling of pulse streams. Our signals of interest are modeled as an infinite union of subspaces that exhibits a particular geometric structure. This structure enables us to quantitatively deduce the number of random linear measurements needed to sample such signals. We have proposed two methods for signal recovery. Our first method (Algorithm~\ref{alg:csdecon1}) is relatively easy to analyze, but suffers from combinatorial complexity. Our second method (Algorithm~\ref{alg:csdecon2}) is a feasible, if suboptimal, algorithm and forms the basis for our numerical experiments. While our framework is applicable to signals defined over domains of arbitrary dimension, we have illustrated its benefits in the context of 1D time signals and 2D images. There are several avenues for future work. We have discussed sparse signals and images as represented in the identity basis; our method can be extended to wavelet-sparse and Fourier-sparse signals. While our results are promising, we still do not possess a complete characterization of the convergence properties of Algorithm~\ref{alg:csdecon2}, or of its sensitivity to factors such as noise and model mismatch under random projections. Additionally, it is unclear how to deal with situations where the pulses in the signal of interest are allowed to overlap. To the best of our knowledge, the issue of robust recovery of signals convolved with an unknown arbitrary impulse response is an open question even for the case of Nyquist-rate samples. We defer these challenging open questions to future research. The framework developed in this paper can be related to various existing concepts in the literature such as best basis compressive sensing~\cite{bestbasis}, simultaneous sparse approximation and dictionary learning~\cite{ybd}, and the classical signal processing problem of blind deconvolution~\cite{bdecon}. Compressive sensing of time-domain pulse streams has been studied by Naini {\em et al.\ }\cite{cspt}. However, in their setting the impulse response is assumed to be known, and hence the CS measurement system can be viewed as a modification of random Fourier subsampling. Our framework is related to recent results on compressed blind deconvolution by Saligrama and Zhao~\cite{cbdfsp}. As opposed to pulse streams, their signals of interest consist of sparse signals driven through an all-pole auto-regressive (AR) linear system. They propose an optimization-based algorithm for recovery of the signal and impulse response from CS measurements.
However, their measurement system is tailored to impulse responses corresponding to AR linear models; our approach can handle arbitrary impulse responses. Further, our main theoretical result indicates that the number of measurements needed to compressively sample a pulse stream is linear only in the number of degrees of freedom of the signal, and thus answers an open question (Remark 3.1) posed by the authors in the affirmative. Finally, the main approach in this paper can be related to recent work by Asif {\em et al.\ }\cite{randomcoding,randcodingbd}, who propose channel coding methods to combat the effect of unknown multipath effects in a communication channel that can be described by a sparse impulse response. Their coding strategy follows the one advocated by Cand\`{e}s and Tao~\cite{Candes04A}: their channel code consists of a random matrix $\Phi \in \mathbb{R} ^{M \times N}$ where $M > N$, so that the linear mapping $y = \Phi x$ is now not undercomplete, but overcomplete. Thus, their observations consist of an unknown sparse channel response $h$ convolved with the transmitted signal $y$, and their objective is to reconstruct the original signal $x$. The main aspects of our theoretical analysis could conceivably be modified to quantify system performance in this setting. \section*{Acknowledgements} The authors would like to thank Dr.\ Volkan Cevher, Dr.\ Marco Duarte, Eva Dyer, and Mona Sheikh for helpful discussions, and Dr.\ Manjit Sanghera of the Dallas Presbyterian Hospital for providing the neuronal data used in Figure~\ref{fig:realdata}. \appendices \section{} \label{app:jl} We prove Theorem~\ref{thm:zrip}. By Definition~\ref{def:psm}, the model $\mathcal{M}_{S,F}^z$ is generated via the convolution operation by the structured sparsity models $\mathcal{M}_{S}$ and $\mathcal{M}_{F}$. Recall that both structured sparsity models are themselves defined in terms of canonical subspaces of $ \mathbb{R} ^N$, and their convolution results in a low-dimensional geometrical structure that is best described by an infinite union of subspaces. Thus, if $x \in \mathcal{M}_S$ lies in a particular subspace $\Omega$ and $h \in \mathcal{M}_F$ lies in a particular subspace $\Lambda$, then every signal $z \in \mathcal{M}_{S,F}^z$ can be identified with at least one infinite union of subspaces $U_{\Omega,\Lambda}$. The overall approach is as follows: we first construct a net of points $Q$ in $ \mathbb{R} ^N$ such that $$ \min_{q \in Q} \| z - q \| < \delta, $$ for all $z \in U_{\Omega,\Lambda}$ with $\| z \| = 1$ and some constant $\delta$. We then apply the Johnson-Lindenstrauss Lemma~\cite{JL} for stable embedding of point clouds to this finite set of points $Q$, and extend the stable embedding to all possible signals $z \in U_{\Omega, \Lambda}$. Finally, we derive our main result through a union bound over all possible choices of subspaces $\Omega$ and $\Lambda$. Consider a fixed vector $h \in \Lambda$. Suppose the coefficients of $h$ are normalized so that $\| h \|= 1$. By virtue of its circulant nature, the spectral norm of the corresponding matrix $H$ satisfies $\| H \| \leq 1$. Now, consider a fixed $S$-dimensional subspace $\Omega \in \mathcal{M}_S$. It is easy to see that $$ \Omega_{h} = \{ z = H x ~|~ x \in \Omega \} $$ also forms an $S$-dimensional subspace in $ \mathbb{R} ^N$.
Thus, by Lemma 5.1 of~\cite{jlcs}, we can find a finite set of points $Q_{\Omega,h} \subset \Omega_h$ with cardinality $|Q_{\Omega,h}| \leq (3/\delta')^S$ such that $$ \min_{q \in Q_{\Omega,h}} \| H x - q \| \leq \delta', ~~\forall ~\|x\| \leq 1, x \in \Omega . $$ This is an upper bound on the size of $Q_{\Omega,h}$; assuming a worst-case scenario, we may list out the points in this set so that $$ Q_{\Omega,h} = \{ q_1, q_2, \ldots, q_{(3/\delta')^S} \} = \{ H x_1, H x_2, \ldots, H x_{(3/\delta')^S} \} . $$ Select any $x_l \in \{x_1, \ldots, x_{(3/\delta')^S} \}$ and an $F$-dimensional subspace $\Lambda \in \mathcal{M}_F$. Form the circulant matrix $X_l$; as above, $\| X_l \| \leq 1$. Therefore, $$ \Omega_{x_l} = \{ z = X_l h ~|~ h \in \Lambda \} $$ forms an $F$-dimensional subspace. Correspondingly, we can find a set of points $Q_{x_l,\Lambda} \subset \Omega_{x_l}$ with cardinality $|Q_{x_l,\Lambda}| \leq (3/\delta')^F$ such that $$ \min_{q \in Q_{x_l,\Lambda}} \| X_l h - q \| \leq \delta', ~~\forall ~\|h\| \leq 1, h \in \Lambda . $$ Using this process, define $Q_{x_l, \Lambda}$ for $l = 1, 2, \ldots, (3/\delta')^S$. Then, we have $$ Q_{\Omega, \Lambda} = \bigcup_l Q_{x_l,\Lambda} . $$ Thus, we have identified a finite set of points $Q_{\Omega, \Lambda}$ in the infinite union of subspaces $U_{\Omega, \Lambda}$. Observe that the cardinality of this set satisfies $|Q_{\Omega,\Lambda}| \leq (3/\delta')^S (3/\delta')^F$. Then, every vector in $U_{\Omega, \Lambda}$ with magnitude no greater than 1 lies `close' to at least one point in $Q_{\Omega, \Lambda}$; indeed, applying the triangle inequality to the two approximation steps above shows that $Q_{\Omega, \Lambda}$ is a $\delta''$-net for $U_{\Omega,\Lambda}$ with $\delta'' = 2\delta'$. Suppose $\delta = 2 \delta''$. By the Johnson-Lindenstrauss Lemma, if $\Phi \in \mathbb{R} ^{M \times N}$ with the elements of $\Phi$ drawn from a random Gaussian distribution, then for every pair of vectors $z_1, z_2 \in U_{\Omega, \Lambda}$, (\ref{eq:zrip}) will hold with failure probability $$ p_{\Omega,\Lambda} = 2 \left(\frac{3}{\delta}\right)^S \left(\frac{3}{\delta}\right)^F e^{-c_0 (\delta/2) M}. $$ This is for a fixed pair of subspaces $(\Omega, \Lambda) \in \mathcal{M}_S \times \mathcal{M}_F$. There are $L_S \times L_F$ such pairs of subspaces. Applying a simple union bound over all possible pairs, we obtain the overall failure probability as \begin{eqnarray*} p&\leq& \sum_{(\Omega,\Lambda)} p_{\Omega,\Lambda} \leq L_S L_F \left(\frac{3}{\delta}\right)^{S + F} e^{-c_0 (\delta/2) M} . \end{eqnarray*} Rearranging terms, we have that for a suitably chosen constant $C$ (that depends on $c_0$) and for any $t > 0$, if $$ M \geq C \left( \log(L_S L_F) + (S + F) \log \left(\frac{3}{\delta}\right) + t \right) , $$ the failure probability for the sampling bound is smaller than $e^{-t}$. The theorem follows easily from this result. \section{} \label{app:alg} We prove Theorem~\ref{thm:alg1}. Let $\widehat{\z}_i = \widehat{\x}_i \ast \widehat{h}_i$ be any intermediate estimate of Algorithm~\ref{alg:csdecon1}. Let $\widehat H_i = \mathbb{C}(\widehat{h}_i)$. Suppose that our candidate configuration for the support of $x$ is given by the sparsity pattern $\sigma$ belonging to the structured sparsity model $\mathcal{M}_S^\Delta$. Then, if $(\cdot)_\sigma$ indicates the submatrix formed by the columns indexed by $\sigma$, the dictionary for the spike stream is given by $(\Phi \widehat H_i )_\sigma = \Phi (\widehat H_i)_\sigma$.
By virtue of the least-squares property of the pseudo-inverse operator, the subsequent estimate $\widehat{\x}_{i+1}$ according to Step 2 is given by \begin{equation} \widehat{\x}_{i+1} = \arg \min_{x} \|y - \Phi (\widehat H_i)_\sigma x\|_2^2 , \label{eq:inf} \end{equation} where $x$ belongs to the $K$-dimensional subspace defined by the support configuration $\sigma$. Since we are minimizing a convex loss function (squared error) on a subspace in $ \mathbb{R} ^N$, the minimum $\widehat{\x}_{i+1}$ is unique. Therefore, we may view Step 2 of the algorithm as a unique-valued {\em infimal map} $f$ from a given $\widehat{h}_i \in \mathcal{M}_F$ to a particular $\widehat{\x}_{i+1} \in \mathcal{M}_S^\Delta$. Similarly, we may view Step 4 of Algorithm~\ref{alg:csdecon1} as another unique-valued infimal map $g$ from $\mathcal{M}_S^\Delta$ to $\mathcal{M}_F$. Therefore, the overall algorithm is a repeated application of the composite map $f \circ g$. From a well-known result on single-valued infimal maps~\cite{fiorothuard,amtropp}, the algorithm is strictly monotonic with respect to the loss function. Thus, the norm of the residual $y - \Phi \widehat{\z}_i$ decreases with increasing iteration count $i$. Further, any intermediate estimate $\widehat{\z}_i$ also belongs to the model $\mathcal{M}(S,F,\Delta)$. We know from (\ref{eq:zrip}) that $$ \| y - \Phi \widehat{\z}_i \|_2^2 = \| \Phi z - \Phi \widehat{\z}_i \|_2^2 \geq (1-\delta) \| z - \widehat{\z}_i \|_2^2 . $$ Therefore, if $\| y - \Phi \widehat{\z}_i \| \leq \epsilon$, then $\|z - \widehat{\z}_i \| \leq \epsilon/\sqrt{1-\delta}$. \section{} \label{app:anchor} We prove Theorem~\ref{thm:anchor}. Suppose the target signal $z$ is composed of $S$ pulses $\{h_1,\ldots,h_S\}$, so that $$ z = \widetilde{H} \widetilde{x}, $$ where $\widetilde{H} = [\mathbb{S}_1(h_1), \ldots, \mathbb{S}_S (h_S)]$ is an $N \times S$ matrix and $\widetilde{x} = (\alpha_1, \alpha_2, \ldots, \alpha_S)$. Assume we are given access to the Nyquist samples $z$, i.e., $\Phi = I_{N \times N}$. Suppose the estimate of the impulse response at an intermediate iteration is given by $\widehat{h}$. Let $\widehat H$ be the matrix formed by the operator $\mathbb{C}(\cdot)$ acting on $\widehat{h}$ and let $\sigma$ be the candidate support configuration for the spike stream, so that the dictionary $\Phi_h$ in this case is given by the submatrix $\widehat{H}_\sigma$. Note that $\widehat{H}_\sigma$ is quasi-Toeplitz, owing to the assumption that the separation $\Delta$ is at least as great as the impulse response length $F$. Thus, Step 2 of Algorithm~\ref{alg:csdecon1} can be represented by the least-squares operation $$ \widehat{\x} = \widehat{H}_\sigma\pinv z . $$ Due to the quasi-Toeplitz nature of $\widehat H_\sigma$, the pseudo-inverse $\widehat{H}_\sigma\pinv = (\widehat{H}_\sigma^\top \widehat{H}_\sigma)^{-1} \widehat{H}_\sigma^\top$ essentially reduces to the transpose of $\widehat{H}_\sigma$ scaled by the reciprocal of the squared norm of $\widehat{h}$. Thus, the spike coefficients are given by $$ \widehat{\x} = \frac{1}{\| \widehat{h} \|^2} \widehat{H}_\sigma^\top \widetilde{H} \widetilde{x} . $$ Simplifying, we obtain the expression for the estimated $i^{\textrm{th}}$ spike coefficient $\widehat{\alpha}_i$ as $$ \widehat{\alpha}_i = \alpha_i \frac{\langle h_i, \widehat{h} \rangle}{\| \widehat{h} \|^2} . $$ If $\widehat{h}$ is normalized, we may write $\widehat{\alpha}_i = c_i \alpha_i$, where $c_i = \langle h_i, \widehat{h} \rangle$.
Once the spike coefficients $\widehat{\x} = (c_1 \alpha_1,\ldots,c_S \alpha_S)$ have been estimated, we can form the dictionary $\Phi_x$ by considering the quasi-Toeplitz matrix $\widehat{X}$ formed by the operation $\mathbb{C}(\widehat{\x})$. In the same manner as above, an updated estimate of the pulse shape $\widehat{\widehat{h}}$ is given by $$ \widehat{\widehat{h}} = \widehat{X}\pinv z = \frac{1}{\sum_{i=1}^{S} c_i^2 \alpha_i^2 } \widehat{X}^{T} z . $$ Writing out $\widehat{X}$ and $z$ in terms of $(h_1,\ldots,h_S)$ and $(\alpha_1,\ldots,\alpha_S)$ and simplifying, we obtain $$ \widehat{\widehat{h}} = \frac{ \sum_{i=1}^S c_i \alpha_i^2 h_i} {\sum_{i=1}^S c_i^2 \alpha_i^2} , $$ where, as above, $c_i = \langle h_i, \widehat{h} \rangle$. Thus, we have a closed-form expression for the updated estimate of the impulse response coefficients $\widehat{\widehat{h}}$ in terms of the previous estimate $\widehat{h}$. In the event that the algorithm converges to a fixed point, we can replace $\widehat{\widehat{h}}$ by $\widehat{h}$, thus proving the theorem.
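The fixed-point characterization above is easy to probe numerically. The following Python sketch iterates the closed-form update $\widehat{h} \mapsto \sum_i c_i \alpha_i^2 h_i / \sum_i c_i^2 \alpha_i^2$ on randomly drawn pulses; the pulses, amplitudes, and stopping rule are illustrative assumptions, not part of the proof. Observe that, since the numerator equals $(\sum_i \alpha_i^2 h_i h_i^\top)\,\widehat{h}$ and the denominator is a scalar, in this toy setting the normalized iteration behaves as a power method on $\sum_i \alpha_i^2 h_i h_i^\top$, whose leading eigenvector is the computed anchor pulse.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(1)
S, F = 3, 11
H = rng.standard_normal((S, F))                 # rows: the S pulses h_i
H /= np.linalg.norm(H, axis=1, keepdims=True)
alpha = rng.uniform(0.5, 2.0, size=S)           # spike amplitudes alpha_i

def update(h_hat):
    """One alternating step: sum_i c_i a_i^2 h_i / sum_i c_i^2 a_i^2."""
    h_hat = h_hat / np.linalg.norm(h_hat)
    c = H @ h_hat                               # c_i = <h_i, h_hat>
    return (c * alpha**2) @ H / np.sum(c**2 * alpha**2)

h_hat = rng.standard_normal(F)
for _ in range(200):
    h_new = update(h_hat)
    h_new /= np.linalg.norm(h_new)
    if np.linalg.norm(h_new - h_hat) < 1e-12:   # fixed point reached
        break
    h_hat = h_new
print(h_hat)                                    # estimated anchor pulse
\end{verbatim}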
\section{Introduction}\label{sect:intro} Minimal models play a vital role in many systems that are dedicated to knowledge representation and reasoning. The concept of minimal model is at the heart of several tasks in Artificial Intelligence including circumscription \cite{circ1,circ2,lif-circ}, default logic \cite{Rei80}, minimal diagnosis \cite{dKMR92}, planning \cite{KautzMS96}, and in answering queries posed on logic programs under the stable model semantics \cite{gl:negation,BiFr87} and deductive databases under the generalized closed-world assumption \cite{minker}. On the more formal side, the task of reasoning with minimal models has been the subject of several studies \cite{Cad92a,Cad92b,KoPa90,EiGo93,CT93,BeDe96b,Be05,KiKo03}. Given a propositional CNF theory $\Pi$, the tasks of {\em Minimal Model Finding} and {\em Minimal Model Checking}, among others, have been considered. The former task consists of computing a minimal model of $\Pi$; the latter is the problem of checking whether a given set of propositional letters is indeed a minimal model for $\Pi$. Findings regarding the complexity of reasoning with minimal models show that these problems are intractable in the general case. Indeed, it turns out that even when the theory is positive (that is, it does not contain constraints), finding a minimal model is $\rm P^{NP[O(\log n)]}$-hard \cite{Cad92a} (note that positive theories always have a minimal model)\footnote{We recall that $\rm P^{NP[O(\log n)]}$ is the class of decision problems that are solved by polynomial-time bounded deterministic Turing machines making at most a logarithmic number of calls to an oracle in $\rm NP$. For a precise characterization of the complexity of model finding, given in terms of complexity classes of functions, see \cite{CT93}.}, and checking whether a model is minimal for a given theory is co-NP-complete \cite{Cad92b}. The formidable complexity characterizing the two above-mentioned problems has motivated several researchers to look for heuristics \cite{LoTr06,BeDe96b,Be05,AvBe07}, since, due to the complexity results listed above and to the still unresolved P vs NP conundrum, all exact algorithms for solving these problems remain exponential in the worst case. One orthogonal direction of research concerns singling out significant fragments of CNF theories for which dealing with minimal models is tractable. The latter approach also has the merit of providing insights that can help improve the efficiency of heuristics for the general case. For instance, algorithms designed for a specific subset of general CNF theories can be incorporated into algorithms for computing minimal models of general CNF theories \cite{BeDe96b,SKC96,GSS03,HHST04}. Within this scenario, in \cite{Ben-Eliyahu-ZoharyP97} efficient algorithms are presented for computing and checking minimal models of a restricted subset of positive CNF theories, called {\em Head Cycle Free} (HCF) theories \cite{BenEliyahuD94}. To illustrate, HCF theories are positive CNF theories satisfying the constraint that there is no cyclic dependence involving two positive literals occurring in the same clause. Head-cycle-freeness can also be checked efficiently \cite{BenEliyahuD94}. These results have then been exploited by other authors to improve model finding algorithms for general theories. For example, the system {\tt dlv} looks for HCF fragments within general disjunctive logic programs in order to process them more efficiently \cite{LRS97,KoLe99}.
The research presented here follows the line of research traced in \cite{Ben-Eliyahu-ZoharyP97}. The central contribution of this work is a polynomial time algorithm for computing a minimal model for (a superset of) the class of positive HEF (Head Elementary-Set Free) CNF theories, the definition of which we adapt from the homonym one given in \cite{GebserLL06} for disjunctive logic programs and which form, in their turn, a strict superset of the class of HCF theories studied in \cite{Ben-Eliyahu-ZoharyP97}. To the best of our knowledge, positive HCF theories form the largest class of CNF theories for which a polynomial time algorithm solving the \textit{Minimal Model Finding} problem has been known so far. Since HCF theories are a strict subset of HEF ones, our main contribution is the enlargement of the tractability frontier for the minimal model finding problem. It is worth noting a relevant difference here: while HCF theories are recognizable in polynomial time, for HEF ones the same task is co-NP-complete \cite{FassettiP10}. Although this undesirable property seems to reduce the applicability of the above result, we will show that our approach leads to techniques to compute a model of any positive CNF theory in polynomial time, while the computed model is guaranteed to be minimal at least for all positive HEF theories. Notice that this latter property holds without the need to recognize whether the input theory is HEF or not. The rest of the paper is organized as follows. In Section \ref{sect:probl_contrib}, we provide preliminary definitions about CNF theories, present the problems and the sub-classes of CNF theories of interest here, describe the contributions of the work, and discuss application examples. In Section \ref{sect:algo}, we introduce the Generalized Elimination Algorithm (GEA), that is the basic algorithm presented in this paper, and the concept of {\em eliminating operator} that it makes use of. Then, in Section \ref{sect:hef}, we formally define HEF CNF theories and then construct an eliminating operator that enables the GEA to compute a minimal model for a positive HEF CNF theory in polynomial time. In Section \ref{sect:beyond}, we study the behavior of the GEA when applied to a general CNF theory and introduce the Incomplete GEA, which is able to compute a minimal model for a positive HEF CNF theory in polynomial time without the need to know in advance whether the input theory is HEF or not. Concluding remarks are provided in Section \ref{sect:conclusions}. For the sake of presentation, the proofs of some intermediate results are reported in the Appendix. \section{Our problems and application scenarios} \label{sect:probl_contrib} In this section, first we define the problems we are dealing with in this paper and then depict some application scenarios.
\begin{table}\small \centering \begin{tabular}[h]{|c||p{0.7\textwidth}|} \hline \bf Symbol & \bf Description \\ \hline $\P$ & A CNF theory \\ \hline $\P^{nd}$ & The non-disjunctive theory obtained from $\P$ by deleting all disjunctive clauses\\ \hline $\atom{\P}$ & The set of atoms appearing in $\P$ \\ \hline $c_X$ & The clause obtained by projecting the clause $c$ on the set of atoms $X$: if $c\equiv H\leftarrow B$ then $c_X \equiv H_X \leftarrow B_X$ with $H_X = H\cap X$ and $B_X = B\cap X$ \\ \hline $c_{X\leftarrow}$ & The clause obtained by projecting the head of the clause $c$ on the set of atoms $X$: if $c\equiv H\leftarrow B$ then $c_{X\leftarrow} \equiv H_X \leftarrow B$ with $H_X = H\cap X$\\ \hline $\P_X$ & The set of all the non-empty head clauses $c_X$ with $c$ in $\P$\\ \hline $\P_{X\leftarrow}$ & The set of all the non-empty head clauses $c_{X\leftarrow}$ with $c$ in $\P$\\ \hline $\simpl{\P}{{\cal M}}$ & The set of all the non-empty clauses $c_{\cal M}$ with $c$ in $\P$\\ \hline $\ssimpl{\P}{{\cal M}}$ & A shortcut for $(\simpl{\P}{{\cal M}})_{{\cal M}\setminus\S}$\\ \hline ${\mathcal{G}}(\P)$ & The dependency graph associated with the theory $\P$\\ \hline $\eG(\P)$ & The elementary graph associated with the non-disjunctive theory $\P$\\ \hline \end{tabular} \caption{Summary of the symbols employed throughout the paper.} \label{table:symbols} \end{table} \subsection{Preliminary definitions}\label{sect:prelim} In this section we recall or adapt the definitions of propositional CNF theories and their subclasses (head-cycle-free, head-elementary-set-free) which are of interest here. An \textit{atom} is a propositional \emph{letter} (aka, \textit{positive} literal). A \textit{clause} (aka, \textit{rule} -- in the following we shall make use of the two terms interchangeably) is an expression of the form $H \leftarrow B$, where $H$ and $B$ are sets of atoms\footnote{We prefer to adopt the implication-based syntax for clauses in the place of the more usual disjunction-based one to slightly ease the forthcoming presentation.}. $H$ and $B$ are referred to as, respectively, the \textit{head} and \textit{body} of the clause; the atoms in $H$ are also called head atoms while the atoms in $B$ are also called body atoms. With a little abuse of terminology, if $|H|>1$, we shall say the clause is \textit{disjunctive}; otherwise it is a \textit{Horn}, or \textit{non-disjunctive}, clause\footnote{We will use the terms \textit{Horn} and \textit{non-disjunctive} interchangeably.}. Moreover, if $|H| = 1$ the clause is called \textit{single-head}. A \textit{fact} is a single-head rule with empty body. A \textit{theory} $\P$ is a finite set of clauses. If there is some disjunctive rule in $\P$ then $\P$ is called \textit{disjunctive}, otherwise it is called \textit{non-disjunctive}. $\atom{\P}$ denotes the set of all the atoms occurring in $\P$. A set $S$ of atoms is called a \textit{disjunctive set} for $\P$ if there exists at least one rule $H \leftarrow B$ in $\P$ such that $|H\cap S|>1$. A \textit{constraint} is an empty-head clause. A theory $\P$ is said to be \textit{positive} if no constraint occurs in $\P$. The semantics of CNF theories relies on the concepts of {\em interpretation} and {\em model}, which are recalled next. An \textit{interpretation} $I$ for the theory $\P$ is a set of atoms from $\P$. An atom $a$ is {\em true} (resp., {\em false}) in the interpretation $I$ if $a \in I$ (resp., $a \not\in I$).
A rule $H \leftarrow B$ is true in $I$ if either at least one atom occurring in $H$ is true in $I$ or at least one atom occurring in $B$ is false in $I$. An interpretation $I$ is a \textit{model} for a theory $\P$ if all clauses occurring in $\P$ are true in $I$. A model $M$ for $\P$ is \textit{minimal} if no proper subset of $M$ is a model for $\P$. A directed graph ${\mathcal{G}}(\P)$, called \emph{positive dependency graph}, can be associated with a theory $\P$. Specifically, nodes in ${\mathcal{G}}(\P)$ are associated with atoms occurring in $\P$ and, moreover, there is a directed edge $(m,n)$ from a node $m$ to a node $n$ in ${\mathcal{G}}(\P)$ if and only if there is a clause $H \leftarrow B$ of $\P$ such that the atom associated with $m$ is in $B$ and the atom associated with $n$ is in $H$. Given a clause $c\equiv H\leftarrow B$ and a set of atoms $X$, $c_{X\leftarrow}$ denotes the clause $H\cap X\leftarrow B$, whereas $c_{X}$ denotes the clause $H\cap X\leftarrow B\cap X$. Given a theory $\P$ and a set of atoms $X$, the theory $\P_{X\leftarrow}$ includes all \textit{non-empty head} clauses $c_{X\leftarrow}$, with $c$ a clause in $\P$. Analogously, the theory $\P_{X}$ includes all \textit{non-empty head} clauses $c_{X}$, with $c$ a clause in $\P$. Given a theory $\P$, the theory $\P^{nd} \subseteq \P$ includes all Horn clauses of $\P$. In the following, we assume that the operators $\cdot_X$ and $\cdot_{X\leftarrow}$ have precedence over the operator $\cdot^{nd}$, so that the expression $\P^{nd}_X$ ($\P^{nd}_{X\leftarrow}$, resp.) is to be read as equivalent to $(\P_X)^{nd}$ ($(\P_{X\leftarrow})^{nd}$, resp.). Table \ref{table:symbols} summarizes some of the symbols used throughout the paper (some of them are defined in subsequent sections). \begin{table}[t] \small \centering \begin{tabular}{|l|c|c|c|c|c|} \hline \it Class of CNF Theory & REC & MFP & MMP & MMCP & MMFP \\ \hline\hline \it General & --- & NP-{\it h} & $\rm P^{NP[O(\log n)]}$-{\it h} & coNP & $\rm \Sigma^P_2$-{\it h} \\ \hline \it Positive general & P & FP & $\rm P^{NP[O(\log n)]}$-{\it h} & coNP & $\rm P^{NP[O(\log n)]}$-{\it h} \\ \hline \it HEF & coNP & NP-{\it h} & \bf FP & \bf P & NP-{\it h} \\ \hline \it Positive HEF & coNP & FP & \bf FP & \bf P & \bf FP \\ \hline \it HCF & P & FP & FP & P & FP \\ \hline \end{tabular} \caption{Problems and their complexity.} \label{table:probl_compl} \end{table} \subsection{Problems}\label{sect:model_finding} Table \ref{table:probl_compl} summarizes the problems and the classes of CNF theories of interest here and reports the associated complexities. As for the classes of CNF theories, other than the general one, here we consider HEF and HCF theories: \begin{itemize} \item[---] \textit{Head Cycle Free} (HCF) theories \cite{BenEliyahuD94} are CNF theories such that no connected component of the associated dependency graph contains two positive literals occurring in the same clause; \item[---] \textit{Head Elementary-Set Free} (HEF) CNF theories, the definition of which we adapt from the homonym one given in \cite{GebserLL06} for disjunctive logic programs (see Section \ref{sect:hef} for the formal definition of HEF theories), form a strict superset of the class of HCF theories.
\end{itemize} The problems (listed in the table) are: \begin{itemize} \item[---] \textit{Recognition Problem} (REC): Given a CNF theory $\P$ and a class $\cal C$ of CNF theories, decide if $\P$ belongs to the class $\cal C$; \item[---] \textit{Model Finding Problem} (MFP): Given a CNF theory $\P$, compute a model ${\cal M}$ for $\P$; \item[---] \textit{Model Minimization Problem} (MMP): Given a CNF theory $\P$ and a model ${\cal M}$ for $\P$, compute a minimal model ${\rm \cal MM}$ for $\P$ contained in ${\cal M}$; \item[---] {\em Minimal Model Checking Problem} (MMCP): Given a CNF theory $\P$ and a model ${\cal M}$ for $\P$, check if ${\cal M}$ is indeed a minimal model for $\P$; \item[---] {\em Minimal Model Finding Problem} (MMFP): Given a CNF theory $\P$, compute a minimal model ${\cal M}$ for $\P$. \end{itemize} The MFP problem is NP-hard unless the theory is positive. Indeed, in the latter case, the set consisting of all the literals occurring in the theory is always a model. In this work we will focus on the MMP, MMCP, and MMFP problems. As for MMFP, it turns out that, over positive CNF theories, this problem is hard to solve. In particular, it is known that on positive theories MMFP is ${\rm P}^{{\rm NP}[O(\log n)]}$-hard \cite{Cad92a} (even though positive CNF theories always have a minimal model!). Given a CNF theory $\P$ and a model ${\cal M}$ for $\P$, it is worth noticing that the theory $\P_{\cal M}$ is always a positive CNF and {that the models of $\P_{\cal M}$ are a subset of those of $\P$}. This explains the fact that the complexity of the MMP and MMCP problems, which take as input a model ${\cal M}$ in addition to the theory $\P$, does not depend on the positiveness of the theory. Moreover, we notice that MMFP is not easier than MMP and MMCP since the latter problems can be reduced to the former one as follows: \begin{itemize} \item[---] As for MMP, return ${\rm MMFP}(\P_{\cal M})$; \item[---] As for MMCP, return true if ${\rm MMFP}(\P_{\cal M}) = {\cal M}$ and false otherwise. \end{itemize} Thus, if for a certain class of theories the MMFP were tractable, then both MMP and MMCP would become tractable as well. Moreover, if attention is restricted to positive theories, the MMP and MMFP problems coincide (since this time MMFP can be reduced to MMP by setting ${\cal M}$ to the set of all the literals occurring in $\P$) and, consequently, MMP on general theories is equivalent to MMFP on positive theories. We notice that, on the other hand, for head-cycle-free CNF theories things are easier than for the general case: indeed it was proved in \cite{Ben-Eliyahu-ZoharyP97} that the MMFP is solvable in polynomial time if the input theory is HCF. All that given, the following section details the contributions of the paper. \subsection{Contributions and algorithms road map} In this work we investigate the MMP and MMCP problems on CNF theories and the MMFP on positive CNF theories. Among the main contributions offered here, we will show that MMP and MMCP are tractable on generic HEF theories, while MMFP is tractable on positive HEF theories. In order to provide a uniform treatment of these problems, we will concentrate on algorithms for the MMP, which can be considered the most general of them since its input consists of both a CNF theory and a (not necessarily minimal) model of the theory.
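For concreteness, the two reductions to MMFP described above can be phrased as follows. The sketch below is schematic Python; the clause representation (pairs of frozensets) and the black-box \texttt{mmfp} solver are our own illustrative choices, not part of the formal development.
\begin{verbatim}
from typing import Callable, FrozenSet, Set, Tuple

Atom = str
Clause = Tuple[FrozenSet[Atom], FrozenSet[Atom]]     # (head, body)
Theory = Set[Clause]

def project(theory: Theory, M: FrozenSet[Atom]) -> Theory:
    """The theory P_M: heads and bodies intersected with M,
    keeping only the clauses whose projected head is non-empty."""
    return {(h & M, b & M) for (h, b) in theory if h & M}

def mmp(theory: Theory, M: FrozenSet[Atom], mmfp: Callable) -> FrozenSet[Atom]:
    """Model Minimization via an MMFP oracle: minimize the model M."""
    return mmfp(project(theory, M))

def mmcp(theory: Theory, M: FrozenSet[Atom], mmfp: Callable) -> bool:
    """Minimal Model Checking via an MMFP oracle: is M minimal?"""
    return mmfp(project(theory, M)) == M
\end{verbatim}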
Specifically, we provide a polynomial time algorithm solving the MMP on general HEF CNF theories which, because of the observations made above, can be directly used to solve in polynomial time the following five problems (see also cells of Table \ref{table:probl_compl} reported in bold): ($i$) MMP on non-positive HEF CNF theories, ($ii$) MMCP on non-positive HEF CNF theories, ($iii$) MMP on positive HEF CNF theories, ($iv$) MMCP on positive HEF CNF theories, and ($v$) MMFP on positive HEF CNF theories. As already noticed, differently from HCF theories, which turn out to be recognizable in polynomial time \cite{BenEliyahuD94}, recognizing HEF theories is an intractable problem \cite{FassettiP10}. This undesirable property may seem to limit the applicability of the above complexity results. However, as better explained next, we show that our MMP algorithm can be fed with any CNF theory $\Pi$ and any model ${\cal M}$ of $\Pi$ and it is guaranteed to correctly minimize ${\cal M}$ at least in the case that the theory $\Pi$ is HEF. Notice that this property holds without the need to recognize whether the input theory is HEF or not. To illustrate, we start by presenting an algorithmic schema, called the {\em Generalized Elimination Algorithm (GEA)}, for model minimization over CNF theories. The GEA invokes a suitable \textit{eliminating operator} in order to converge towards a minimal model of the input theory. Intuitively, an eliminating operator is any function that, given a model as the input, returns a model strictly included therein, if one exists. Therefore, the actual complexity of the GEA depends on the complexity of the specific {eliminating operator} one decides to employ. Clearly, a trivial eliminating operator may enumerate (in exponential time) all the interpretations contained in the given model and check for satisfiability of the theory; we shall consider actually interesting only those eliminating operators that accomplish their task in polynomial time. A specific eliminating operator, denoted by $\xi_{\rm HEF}$, is henceforth defined, by which the GEA computes a minimal model of any HEF theory in polynomial time. However, the intractability of the recognition problem for HEF CNF theories may seem to narrow the applicability of the results sketched above and to reduce their significance to a mere theoretical result. This seemingly relevant limitation can fortunately be overcome by suitably readapting the structure of our algorithm: to this end, we introduce the \textit{Incomplete Generalized Elimination Algorithm} (IGEA) that, once instantiated with a suitable operator, outputs a model of the input theory, which is guaranteed to be minimal at least over HEF theories. The design of IGEA leverages the notion of \textit{fallible eliminating operator}, which is defined later in this paper. Then, by coupling IGEA with the $\xi_{\rm HEF}$ operator (we call this instance of the algorithm $\rm IGEA_{\xi_{\rm HEF}}$), we obtain a polynomial-time algorithm that always minimizes the input model of a HEF CNF theory without the need of knowing in advance whether the input CNF theory is HEF or not. As for non-HEF theories, we show that $\rm IGEA_{\xi_{\rm HEF}}$ always returns a model of the input theory which may be minimal or not, depending on the structure of the input theory. This kind of behavior on non-HEF theories is clearly the expected one since, as already noticed, recognizing HEF theories is co-NP-complete.
Interestingly, this latter characteristic of $\rm IGEA_{\xi_{\rm HEF}}$ further enhances its relevance, since its application is not restricted to the class of HEF CNF theories, but extends to an even broader class of theories. \begin{figure}[t] \centering \begin{tabular}{cc} \fbox{ $\begin{array}{rrll} \P = \{ & g \vee j & \leftarrow & \\ & f \vee h & \leftarrow & \\ & b & \leftarrow a & \\ & c & \leftarrow b & \\ & a & \leftarrow c & \\ & d & \leftarrow a, b & \\ & c & \leftarrow d & \\ & e & \leftarrow b & \\ & h & \leftarrow b & \\ & f & \leftarrow e, i & \\ & i & \leftarrow e, j & \\ & g & \leftarrow f & \\ & e & \leftarrow g & \\ & j & \leftarrow e & \\ & h & \leftarrow j & \\ & j & \leftarrow h & \\ & c & \leftarrow h, e & \} \\ \end{array}$} & \qquad \fbox{ \begin{minipage}{0.45\textwidth} \includegraphics[width=1.0\textwidth]{running.eps} \end{minipage} } \end{tabular} \caption{A positive CNF theory and the associated dependency graph.} \label{fig:ex_poscnf} \end{figure} \subsection{Application scenarios} \label{sect:appl_scen} In this section we consider generic CNF theories without concentrating on the particular class (that is, general, HEF or HCF) they belong to. Later, in Section \ref{sect:appl_scen_hef}, we specialize some of the examples provided next in the context of HEF theories, which is a main focus in our investigation. As already noticed, the minimal model finding problem is a formidable one and remains intractable even when attention is restricted to positive CNFs. The following positive CNF theory will be employed in order to describe the various concepts introduced throughout the paper. \begin{myexample}{Minimal models of positive CNF theories}\rm \label{ex:pos_cnf} Figure \ref{fig:ex_poscnf} reports an example of a positive CNF theory $\P$ (on the left) together with the associated dependency graph ${\mathcal{G}}(\P)$ (on the right). The set $\atom{\P}$ is $\{a$, $b$, $c$, $d$, $e$, $f$, $g$, $h$, $i$, $j\}$ and it is the largest model of $\P$. This theory has several models but only one minimal model, namely $\{h,j\}$. \end{myexample} \begin{figure}[t] \centering \[\begin{array}{cc} \fbox{$\begin{array}{rrll} P = \{ & d & \leftarrow \nbd c & \\ & b & \leftarrow a, e, \nbd d & \\ & a & \leftarrow b, e, \nbd d & \\ & a & \leftarrow c & \\ & b & \leftarrow c & \\ & c & \leftarrow a, b & \\ & a, b & \leftarrow \nbd f & \} \end{array}$} & \qquad \fbox{$\begin{array}{rrll} {\cal M} = \{ & a, d & \} \\ \\ P^{\cal M} = \{ & d & \leftarrow & \\ & a & \leftarrow c & \\ & b & \leftarrow c & \\ & c & \leftarrow a, b & \\ & a, b & \leftarrow & \} \end{array}$} \end{array}\] \caption{A logic program $P$, a model ${\cal M}$ of $P$, and the reduct $P^{\cal M}$.} \label{fig:ex_stable} \end{figure} To illustrate a setting in which positive CNFs naturally arise, consider Logic Programming, a central tool in Knowledge Representation and Reasoning. In the field of Logic Programming, the notion of negation by default poses the problem of defining a proper notion of model of the program. Among the several proposed semantics for logic programs with negation, the Stable Models and Answer Sets semantics are nowadays the reference ones for closed-world scenarios \cite{GelfondL88}. An interesting application of our techniques concerns stable model (or answer set) checking. To illustrate, stable models rely on the concept of the reduct of the program, as clarified in the following definition.
\begin{definition}[Stable Model \cite{GelfondL88}]\rm Given a logic program $P$ and a model ${\cal M}$ of $P$, the \textit{reduct} of $P$ w.r.t.\ ${\cal M}$, also denoted by $P^{\cal M}$, is the program built from $P$ by ($i$) removing all rules that contain a negative literal $not~a$ in the body with $a \in {\cal M}$, and ($ii$) removing all negative literals from the remaining rules. A model ${\cal M}$ of $P$ is {\em stable} if ${\cal M}$ is a minimal model of $P^{\cal M}$. \end{definition} \begin{myexample}{Stable Models of Logic Programs} \label{ex:stable_models} Figure \ref{fig:ex_stable} shows, on the left, a logic program $P$ and, on the right, the reduct $P^{\cal M}$ of $P$ w.r.t. the model ${\cal M}=\{a,d\}$. In this case, ${\cal M}$ is a minimal model of $P^{\cal M}$ and, hence, it is a stable model of $P$. \end{myexample} It is worth noticing that $P^{\cal M}$ is a CNF since, by definition of the reduct, negation by default does not occur in any clause of $P^{\cal M}$. Moreover, ${\cal M}$ is always a model of $P^{\cal M}$ and is given in input. Therefore, by setting $\Pi=P^{\cal M}$ the problem of verifying if a given model ${\cal M}$ for the logic program $P$ is stable fits the minimal model checking problem for positive CNFs and, as such, can be suitably dealt with using the techniques this paper proposes. \begin{figure}[t] \centering \[\begin{array}{cc} \fbox{$\begin{array}{rrll} \P = \{ & b & \leftarrow a & \\ & c & \leftarrow a & \\ & a & \leftarrow b, c & \\ & b, c & \leftarrow & \\ & d & \leftarrow\\ & & \leftarrow b, d & \} \\ ~\\ ~\\ ~\\ ~\\ \end{array}$} & \qquad \fbox{$\begin{array}{rrll} \P^+ = \{ & b & \leftarrow a & \\ & c & \leftarrow a & \\ & a & \leftarrow b, c & \\ & b, c & \leftarrow & \\ & d & \leftarrow\\ & \phi & \leftarrow b, d & \\ & a & \leftarrow \phi & \\ & b & \leftarrow \phi & \\ & c & \leftarrow \phi & \\ & d & \leftarrow \phi & \} \end{array}$} \end{array}\] \caption{A CNF $\P$ and its positive form $\P^+$.} \label{fig:ex_posform1} \end{figure} In order to analyze a different application scenario, let us assume now that a general, possibly non-positive, CNF theory is given. Next we show that such a theory can indeed be reduced to a positive theory whose models have a clear relationship with the models of the original theory. Let us first consider the definition of the positive form of a CNF theory. \begin{definition}[Positive Form of a CNF theory]\label{def:positive_form}\rm The theory $\P^+$, also said the \textit{positive form} of $\P$, is defined as follows: (1) for each clause $H \leftarrow B$ of $\P$, if $H$ is not empty then the clause $H \leftarrow B$ is in $\P^+$; (2) for each clause $\leftarrow B$ of $\P$, the clause $\phi \leftarrow B$ is in $\P^+$; (3) for each atom $a$ occurring in $\P$, the clause $a\leftarrow\phi$ is in $\P^+$. \end{definition} The following result relates models of $\P$ with minimal models of $\P^+$. \begin{proposition} Given a CNF theory $\P$, {if $\phi$ belongs to the (unique) minimal model of $\P^+$ then $\P$ is inconsistent, otherwise the sets of minimal models of $\P$ and $\P^+$ coincide}. \end{proposition} \begin{proof} First of all, we observe that each model of $\P$ is a model of $\P^+$ as well; note that $\phi$ does not occur in $\P$ and hence, for any model ${\cal M}$ of $\P$, $\phi$ is not in ${\cal M}$. \begin{observation}\label{obs:model} Let ${\cal M}$ be a model for $\P$ and consider the theory $\P^+$. All the clauses (1) in $\P^+$ are also in $\P$ and hence are true in ${\cal M}$.
Since ${\cal M}$ is a model for $\P$, none of the empty-head clauses of $\P$ has its body fully contained in ${\cal M}$ and, therefore, ${\cal M}$ satisfies all the clauses (2) of $\P^+$. Finally, since $\phi$ is not in ${\cal M}$, all the clauses (3) are true. \end{observation} Now, let ${\cal M}^+$ be a minimal model of $\P^+$ and $\atom{\P^+}$ be the set of all atoms occurring in $\P^+$. Note that, because of the presence of the set of clauses (3), two cases are possible: either ${\cal M}^+$ contains $\phi$ and hence all the atoms occurring in $\P^+$; or ${\cal M}^+\subset \atom{\P^+}$ and, in particular, $\phi \not \in {\cal M}^+$. \begin{enumerate} \item As for the first case, if $\P$ had a model ${\cal M}$ then, due to Observation \ref{obs:model}, ${\cal M}$ would be a model of $\P^+$ as well and then ${\cal M}^+$ would not be minimal. Thus, $\P$ is inconsistent. \item As for the second case, ${\cal M}^+$ does not contain $\phi$. Consider now the theory $\P$. All the non-empty-head clauses in $\P$ are also in $\P^+$ and, then, are satisfied by ${\cal M}^+$. Consider, now, the empty-head clauses in $\P$. Because of the presence of clauses (2) in $\P^+$, and since ${\cal M}^+$ does not contain $\phi$, it is the case that the body of such clauses is not fully contained in ${\cal M}^+$. Thus, the corresponding clauses in $\P$ are satisfied by ${\cal M}^+$. \end{enumerate} This implies that ${\cal M}^+$ is a model of $\P$ as well. \end{proof} To illustrate, consider the following example. \begin{myexample}{General CNF theories} \label{ex:posform} Consider the CNF $\P$ reported in Figure \ref{fig:ex_posform1} on the left. In the same figure, on the right, the positive form $\P^+$ of $\P$ is reported. $\P$ has only one minimal model, namely $\{c, d\}$, which is precisely the unique minimal model of $\P^+$. \end{myexample} \section{Generalized Elimination Algorithm}\label{sect:algo} In this section, a generalization of the elimination algorithm proposed in \cite{Ben-Eliyahu-ZoharyP97}, called Generalized Elimination Algorithm, is introduced. We begin by providing some preliminary concepts, notably, those of \textit{steady set} and \textit{eliminating operator}. Intuitively, given a model ${\cal M}$ for a theory $\P$, the steady set is the subset of ${\cal M}$ containing atoms which ``cannot'' be erased from ${\cal M}$, for otherwise ${\cal M}$ would no longer be a model for $\P$. As proved next, the steady set can be obtained by computing the model of a certain non-disjunctive theory. \begin{definition}[Steady set] Given a CNF theory $\P$ and a model ${\cal M}$ for $\P$, the minimal model $\S \subseteq {\cal M}$ of the theory $\P^{nd}_{{\cal M}\leftarrow}$ is called the \emph{$\steady$ set} of ${\cal M}$ for $\P$. \end{definition} Note that the steady set $\S$ of ${\cal M}$ for $\P$ always exists and is unique. Indeed, $\P^{nd}_{{\cal M}\leftarrow}$ is a positive Horn CNF and it is known that these kinds of theories have one and only one minimal model (which can be computed in polynomial time) \cite{DowlingG84,Lloyd1987}. \begin{property}[$\mathcal{MM}$-containment]\label{prop:mm_cont} Given a positive CNF theory $\P$, a model ${\cal M}$ for $\P$ and the steady set $\S$ of ${\cal M}$ for $\P$, it holds that each model of $\P$ contained in ${\cal M}$ contains $\S$. \end{property} \begin{proof} First, notice that the models of the positive CNF theory $\P$ which are contained in the model ${\cal M}$ of $\P$ coincide with the models of the positive CNF theory $\P_{{\cal M}\leftarrow}$.
Since $\P^{nd}_{{\cal M}\leftarrow}$ is contained in $\P_{{\cal M}\leftarrow}$, by monotonicity of propositional logic, it follows that all logical consequences of $\P^{nd}_{{\cal M}\leftarrow}$ are also logical consequences of $\P_{{\cal M}\leftarrow}$ and, hence, each model of $\P_{{\cal M}\leftarrow}$ contains the unique minimal model of $\P^{nd}_{{\cal M}\leftarrow}$, which is the steady set of ${\cal M}$ for $\P$. \end{proof} \begin{definition}[Erasable set] Let ${\cal M}$ be a model of a positive CNF theory $\P$. A non-empty subset ${\mathcal{E}}$ of ${\cal M}$ is said to be \emph{erasable} in ${\cal M}$ for $\P$ if ${\cal M} \setminus {\mathcal{E}}$ is a model of $\P$. \end{definition} The following result holds. \begin{proposition} Let ${\cal M}$ be a model of a positive CNF theory $\P$, let $\S$ be the steady set of ${\cal M}$ for $\P$, and let ${\mathcal{E}}$ be a set erasable in ${\cal M}$ for $\P$. Then, ${\mathcal{E}}\subseteq{\cal M}\setminus\S$. \end{proposition} \begin{proof} For the sake of contradiction, assume that ${\mathcal{E}}\cap\S\neq\emptyset$. Then, ${\cal M}\setminus{\mathcal{E}}$ is a model of $\P$ that does not contain $\S$, which contradicts the fact that $\S$ has the $\mathcal{MM}$-containment property in ${\cal M}$ for $\P$ (see Property \ref{prop:mm_cont}). \end{proof} \begin{definition}[Eliminating operator] Let ${\cal M}$ be a model of a positive CNF theory $\P$. An \emph{eliminating operator} $\xi$ is a mapping that, given ${\cal M}$ and $\P$ in input, returns an erasable set in ${\cal M}$ for $\P$, if one exists, and the empty set otherwise. \end{definition} It immediately follows that if $\xi(\P,{\cal M}) = \emptyset$ then ${\cal M}$ is a minimal model of $\P$. This is easily shown by observing that $\xi(\P,{\cal M}) = \emptyset$ implies that there is no erasable set in ${\cal M}$, namely, that no proper subset of ${\cal M}$ is a model for $\P$. We are now ready to present our algorithmic schema, referred to as the Generalized Elimination Algorithm (GEA) throughout the paper, which is summarized in Figure \ref{fig:elimination_algo}. Note that GEA has an operator $\xi$ as its parameter\footnote{The term {\em schema} is used here since actual algorithms are obtained only after instantiating the generic $\xi$ operator invoked in the GEA to a specific operator.}. \newcommand{{\M^\ast}}{{{\cal M}^\ast}} \begin{figure}[t] \begin{algorithm}[H] \footnotesize \KwIn{A CNF theory $\P$ and a model ${\cal M}$ of $\P$} \KwOut{A minimal model ${\M^\ast}$ of $\P$ contained in ${\cal M}$} \BlankLine remove all constraints from $\P$\; $stop$ = $false$\; \Repeat{$stop$ \label{line:end_out_cycle}}{\label{line:start_out_cycle} compute the minimal model $\S$ of $\P^{nd}_{{\cal M}\leftarrow}$\; \If{$\S$ is a model of $\P$}{ ${\M^\ast} = \S$\; $stop$ = $true$\; }\Else{ ${\mathcal{E}} = \xi(\P,{\cal M})$\; \If{$({\mathcal{E}} = \emptyset)$}{ ${\M^\ast} = {\cal M}$\; $stop$ = $true$\; }\Else{ ${\cal M} = {\cal M} \setminus {\mathcal{E}}$\label{line:compute_xi_S}\; } } } \Return ${\M^\ast}$ \caption{Generalized Elimination Algorithm with operator $\xi$, $GEA_{\xi}(\P,{\cal M})$} \label{algo:elimination} \end{algorithm} \caption{Generalized Elimination Algorithm with operator $\xi$, $\rm GEA_{\xi}(\P,{\cal M})$} \label{fig:elimination_algo} \end{figure} Our first result states that GEA is correct under the condition that the operator parameter $\xi$ is an eliminating operator.
\begin{theorem}[GEA correctness]\label{theo:gea_correct} Let $\P$ be a CNF theory and ${\cal M}$ be a model of $\P$. If $\xi$ is an eliminating operator, then the set returned by $\rm GEA_{\xi}$ on input $\P$ and ${\cal M}$ is a minimal model for $\P$ contained in ${\cal M}$. \end{theorem} \begin{proof} First of all, since ${\cal M}$ is a model of $\P$, by definition of model all the constraints (aka empty-head clauses) of $\P$ are true in ${\cal M}$ and are also true in any subset of ${\cal M}$. Hence, they can be disregarded during the subsequent steps (see line 1). Moreover, note that, by definition of steady set, it follows that the set $\S$ computed at the beginning of each iteration of the algorithm (line 4) is a (not necessarily proper) subset of every minimal model contained in ${\cal M}$. Let $n$ be the number of atoms in the model ${\cal M}$ taken as input by the GEA. Three cases are possible, which are discussed next: \begin{enumerate} \item {\em $\S$ is a model of $\P$.} Since $\S$ is the steady set of ${\cal M}$ for $\P$, if $\S$ is a model for $\P$, then it is also minimal; so the algorithm stops and returns a correct solution.\footnote{We recall that if the steady set $\S$ of ${\cal M}$ for $\P$ is a model of $\P$ then it is the unique minimal model of $\P$ contained in ${\cal M}$. Hence, the test at line $4$ serves the purpose of accelerating the termination of the algorithm. However, operations in lines 3-6 could be safely dropped without affecting the correctness of the algorithm.} \item {\em ${{\mathcal{E}}} = \emptyset$}. By definition of eliminating operator, if ${\mathcal{E}}$ is empty, then ${\cal M}$ is a minimal model; so the algorithm stops and returns a correct solution. \item {\em ${\mathcal{E}} \neq \emptyset$.} In this case, a non-empty set of atoms is deleted from ${\cal M}$, letting (by definitions of eliminating operator and erasable set) ${\cal M}$ still be a model for $\P$. Thus, at the next iteration, the algorithm will work with a smaller (possibly not minimal) model ${\cal M}$. Hence, after at most $n$ iterations, either case 1 or case 2 applies. \end{enumerate} \end{proof} The next result states the time complexity of the GEA that, clearly, will depend on the complexity $C_\xi$ associated with the evaluation of the eliminating operator $\xi$. \begin{proposition}\label{prop:gea_cost} Let $n$ and $m$ denote the number of atoms occurring in the heads of $\P$ and, overall, in $\P$, respectively. Then, for any model ${\cal M}$ of $\P$, $\rm GEA_{\xi}(\P,{\cal M})$ runs in time $O(n m + n C_\xi)$. \end{proposition} \begin{proof} Since at each iteration (if the stopping condition is not matched) at least one atom is removed, the total number of iterations is $O(n)$. As for the cost spent at each iteration, the dominant operations are: $(i)$ computing the (unique) minimal model of a non-disjunctive theory (line 4), which can be accomplished in linear time w.r.t. $m$ by the well-known unit propagation procedure \cite{DowlingG84}; $(ii)$ checking if a set of atoms is a model (line 5), which can be accomplished in linear time in $m$ as well; $(iii)$ applying the eliminating operator (line 8), whose cost is $C_\xi$. This closes the proof. \end{proof} In particular, consider the naive operator $\xi_{\exp}$ that enumerates all the $2^n - 1$ non-empty subsets of ${\cal M}$ and either returns one of these, call it ${\mathcal{E}}$, such that ${\cal M} \setminus{\mathcal{E}}$ is a model for $\P$, or the empty set if such a set ${\mathcal{E}}$ does not exist.
The resulting algorithm $\rm GEA_{\xi_{\exp}}$ returns a minimal model of $\P$ but requires exponential running time. \medskip Conversely, as an example of an instance of the GEA having polynomial time complexity on a specific class of CNF theories, consider the Elimination Algorithm presented in \cite{Ben-Eliyahu-ZoharyP97}. This algorithm can be obtained from the GEA by having the operator $\xi_{\rm HCF}$ (described next) as the eliminating operator $\xi$ and the set $\atom{\P}$ as the input model ${\cal M}$. Indeed, as shown in \cite{Ben-Eliyahu-ZoharyP97}, the Elimination Algorithm computes a minimal model of a positive HCF theory in polynomial time. The definition of the $\xi_{\rm HCF}$ operator follows \cite{Ben-Eliyahu-ZoharyP97}. Let $\P$ be a {positive} HCF CNF theory and let ${\cal M}'$ be the set of the heads of the disjunctive rules in $\P_{{\cal M}\leftarrow}$ which are false in ${\cal M}$. Then, $\xi_{\rm HCF}(\P, {\cal M})$ is defined to return a \textit{source} of ${\cal M}'$, where a source of the set of atoms ${\cal M}'$ is a connected component in the subgraph of ${\mathcal{G}}(\P)$ induced by ${\cal M}'$ which does not have incoming arcs. \bigskip Before leaving this section, we provide two further results which will be useful when discussing the MMCP and the MMFP. \begin{lemma}\label{lemma:LemmaCheck} Given a CNF theory $\P$, an eliminating operator $\xi$ and a model ${\cal M}$ of $\P$, ${\cal M}$ is minimal for $\P$ if and only if $\rm GEA_{\xi}(\P,{\cal M})$ outputs ${\cal M}$. \end{lemma} \begin{proof} The proof follows by noticing that GEA always outputs a minimal model of $\P$ that is a (possibly non-proper) subset of the initial model ${\cal M}$. \end{proof} \begin{lemma}\label{lemma:LemmaFind} Given a positive CNF theory $\P$ and an eliminating operator $\xi$, then $\rm GEA_{\xi}(\P,\atom{\P})$ outputs a minimal model of $\P$. \end{lemma} \begin{proof} The proof follows by noticing that $\atom{\P}$ is a model of $\P$, $\P$ being a positive theory, and by Lemma \ref{lemma:LemmaCheck}. \end{proof} \section{Model minimization on HEF CNF theories}\label{sect:hef} We have noticed above that the complexity of the GEA depends on the complexity characterizing, in its turn, the specific eliminating operator it invokes. On the other hand, the MMP being ${\rm P}^{{\rm NP}[O(\log n)]}$-hard \cite{Cad92a} implies that, unless the polynomial hierarchy collapses, the GEA will generally require exponential time to terminate when called on a generic input CNF theory. Therefore, it is sensible to single out significant subclasses of CNF theories for which it is possible to devise a specific eliminating operator guaranteeing a polynomial running time for the GEA. In this respect, it is a simple consequence of the results presented in \cite{Ben-Eliyahu-ZoharyP97} that a model of any head-cycle-free theory can indeed be minimized in polynomial time using the Elimination Algorithm. So, the interesting question of whether we can do better than this remains open. Our answer to this question is affirmative and this section serves the purpose of illustrating this result. In particular, we shall show that, by carefully defining the eliminating operator, we can make the GEA minimize in polynomial time a model of any HEF CNF theory. In Section \ref{sect:beyond}, we shall moreover show that there also exist CNF theories which are not HEF but for which the algorithm, equipped with a proper eliminating operator, efficiently minimizes a model.
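Before turning to HEF theories, it is useful to summarize the GEA schema in executable form. The following Python sketch is illustrative only: the clause representation (pairs of sets), the unit-propagation routine for the steady set, and the use of the naive exponential operator $\xi_{\exp}$ as the default are our own choices for demonstration purposes; a polynomial-time operator such as $\xi_{\rm HCF}$, or the $\xi_{\rm HEF}$ defined below, would be plugged in instead.
\begin{verbatim}
from itertools import combinations

def is_model(theory, I):
    """I satisfies H <- B iff the body is not fully in I or a head atom is."""
    return all(not body <= I or head & I for head, body in theory)

def steady_set(theory, M):
    """Minimal model of the Horn theory P^{nd}_{M<-}, via unit propagation."""
    horn = [(next(iter(h & M)), b) for h, b in theory if len(h & M) == 1]
    S, changed = set(), True
    while changed:
        changed = False
        for a, body in horn:
            if a not in S and body <= S:
                S.add(a)
                changed = True
    return frozenset(S)

def xi_exp(theory, M):
    """Naive exponential eliminating operator: search for an erasable set."""
    for k in range(1, len(M) + 1):
        for E in combinations(sorted(M), k):
            if is_model(theory, M - frozenset(E)):
                return frozenset(E)
    return frozenset()

def gea(theory, M, xi=xi_exp):
    theory = [(h, b) for h, b in theory if h]    # drop constraints
    M = frozenset(M)
    while True:
        S = steady_set(theory, M)
        if is_model(theory, S):
            return S
        E = xi(theory, M)
        if not E:
            return M
        M = M - E

# Demo on a fragment of the running example (Figure: fig:ex_poscnf):
C = lambda h, b="": (frozenset(h), frozenset(b))
P = [C("gj"), C("fh"), C("j", "e"), C("h", "j"), C("j", "h"), C("e", "g")]
print(gea(P, set("efghj")))                      # the minimal model {h, j}
\end{verbatim}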
\subsection{Head-elementary-set-free theories and $\sel$ sets} \label{HEF} Next, we recall the definition of head-cycle-free theories \cite{BenEliyahuD94}, adapt that of head-elementary-set-free theories \cite{GebserLL06} to our propositional context and provide a couple of preliminary results which will be useful in the following. We proceed by introducing the concepts of outbound and elementary set. \begin{definition}[Outbound Set (adapted from \cite{GebserLL06})] Let $\P$ be a CNF theory. For any set $Y$ of atoms occurring in $\P$, a subset $Z$ of $Y$ is \emph{outbound} in $Y$ for $\P$ if there is a clause $H \leftarrow B$ in $\P$ such that: {\rm({\it i})} $H\cap Z\neq \emptyset$; {\rm({\it ii})} $B\cap (Y\backslash Z)\neq \emptyset$; {\rm({\it iii})} $B\cap Z = \emptyset$ and {\rm({\it iv})} $H\cap (Y\backslash Z) = \emptyset$. \end{definition} Intuitively, $Z \subseteq Y$ is outbound in $Y$ for $\P$ if there exists a rule $c$ in $\P$ such that the partition of $Y$ induced by $Z$ (namely, $\langle Z; Y\setminus Z \rangle$) ``separates'' head and body atoms of $c$. \begin{myexample}{Outbound set}\label{ex:elementary_set} Consider the theory \begin{eqnarray*} \begin{array}{rrll} \P_{ex} = \{& b, c & \leftarrow a & \\ & b & \leftarrow c & \\ & c & \leftarrow b & \\ & a & \leftarrow b & \\ & d & \leftarrow b,c & \} \end{array} \end{eqnarray*} and the set $E_{ex}=\{a,b,c\}$. Consider, now, the subset $O=\{a,b\}$ of $E_{ex}$. $O$ is outbound in $E_{ex}$ for $\P_{ex}$ because of the clause $b\leftarrow c$, since $c \in E_{ex} \setminus O$, $c\not\in O$, $b\in O$ and $b\not\in E_{ex} \setminus O$. \end{myexample} Let $O$ be a non-outbound set in $X$ for $\P$. $O$ is \textit{minimal non-outbound} if any non-empty proper subset $O' \subset O$ is outbound in $X$ for $\P$. \begin{definition}[Elementary Set (adapted from \cite{GebserLL06})] Let $\P$ be a CNF theory. For any non-empty set $Y \subseteq \atom{\P}$, $Y$ is \emph{elementary} for $\P$ if all non-empty proper subsets of $Y$ are outbound in $Y$ for $\P$. \end{definition} For example, the set $E_{ex}$ of Example \ref{ex:elementary_set} is elementary for the theory $\P_{ex}$, since each non-empty proper subset of $E_{ex}$ is outbound in $E_{ex}$ for $\P_{ex}$. \begin{definition}[Head-Elementary-Set-Free CNF theory (adapted from \cite{GebserLL07})]\label{def:hef_theory} Let $\P$ be a CNF theory. $\P$ is \emph{head-elementary-set-free (HEF)} if for each clause $H \leftarrow B$ in $\P$, there is no elementary set $E$ for $\P$ such that $|E\cap H|>1$. \end{definition} So, a CNF theory $\P$ is HEF if there is no elementary set containing two or more atoms appearing in the same head of a rule of $\P$. An immediate consequence of Definition \ref{def:hef_theory} is the following property. \begin{property}\label{prop:hef_def} A theory $\P$ is not HEF if and only if there exists a set $X$ of atoms of $\P$ such that $X$ is both a disjunctive and an elementary set for $\P$. \end{property} For instance, the theory $\P_{ex}$ of Example \ref{ex:elementary_set} is not HEF, since for the rule $b,c\leftarrow a$ and the elementary set $E_{ex}$, we have $|E_{ex} \cap \{b,c\}| > 1$. \subsection{Examples of HEF theories} \label{sect:appl_scen_hef} Now the examples already introduced in Section \ref{sect:appl_scen} are discussed in the context of HEF CNF theories. \begin{exampleContinued}{\ref{ex:pos_cnf}}{Minimal models of positive CNF theories}\rm Consider the theory reported in Figure \ref{fig:ex_poscnf}.
This is an HEF CNF theory since no superset of $\{g,j\}$ and no superset of $\{f,h\}$ is an elementary set for $\P$. \end{exampleContinued} \begin{exampleContinued}{\ref{ex:stable_models}}{Stable Models of Logic Programs} A logic program $P$ is HEF if the CNF $\widehat{P}$ obtained by removing all the literals of the form $not~a$ from the body of its rules is HEF \cite{GebserLL07}. Importantly, it holds that if the logic program $P$ is HEF and ${\cal M}$ is a model of $P$, then $P^{\cal M}$ is HEF as well. This follows since, by definition, a logic program $P$ is HEF if and only if the CNF $\widehat{P}$ is HEF, and by Lemma \ref{theo:HEF_monotonicity} (reported in Section \ref{sect:sel_set}) any subset of clauses of a HEF CNF is HEF as well, and $P^{\cal M}$ is precisely a subset of $\widehat{P}$. Notably, even if $P$ is not HEF, it could anyway be the case that $P^{\cal M}$ is HEF, and this broadens the range of applicability of the techniques proposed here. As an example, consider again Figure \ref{fig:ex_stable}. The program $P$ there reported is not HEF, since the set $S=\{a,b,c\}$ is both disjunctive and elementary. Conversely, $P^{\cal M}$ is HEF since the set $S$ is no longer elementary because the subsets $\{a,c\}$ and $\{b,c\}$ of $S$ are not outbound in $S$. Moreover, we notice that the subgraph of ${\mathcal{G}}(P^{\cal M})$ induced by $S$ is a connected component and hence both $P$ and $P^{\cal M}$ are not HCF. \end{exampleContinued} \begin{exampleContinued}{\ref{ex:posform}}{General CNF theories} Given a non-positive CNF $\P$, it holds that if $\P$ is not HEF, then $\P^+$ is not HEF either. Let $\P'$ be the subset of $\P$ obtained by removing the constraints in $\P$. Notice that $\P'$ can be obtained from $\P^+$ by first removing the clauses of the form $a\leftarrow\phi$ for each $a\in\atom{\P}$ (see point (3) of Definition \ref{def:positive_form}) and then projecting it on $\atom{\P}$. Since the HEF property does not depend on the constraints, it follows from Lemma \ref{theo:HEF_monotonicity} (reported in Section \ref{sect:sel_set}) that if $\P$ is not HEF, then $\P^+$ is not HEF as well. Conversely, if $\P$ is HEF, then $\P^+$ can happen to be either HEF or not. As an example, consider the theories displayed in Figure \ref{fig:ex_posform1} of Section \ref{sect:appl_scen}. In this case $\P$ and $\P^+$ are both HEF. Conversely, consider the theories reported in Figure \ref{fig:ex_posform2}. In this case, $\P$ is HEF, whereas $\P^+$ is not. \end{exampleContinued} \begin{figure}[t] \centering \[\begin{array}{cc} \fbox{$\begin{array}{rrll} \P = \{ & b & \leftarrow a & \\ & c & \leftarrow a & \\ & a & \leftarrow b, c & \\ & b & \leftarrow c & \\ & b, c & \leftarrow & \\ & d & \leftarrow\\ & & \leftarrow b, d & \\ & & \leftarrow c, d & \} \\ ~\\ ~\\ ~\\ ~\\ \end{array}$} & \qquad \fbox{$\begin{array}{rrll} \P^+ = \{ & b & \leftarrow a & \\ & c & \leftarrow a & \\ & a & \leftarrow b, c & \\ & b & \leftarrow c & \\ & b, c & \leftarrow & \\ & d & \leftarrow\\ & \phi & \leftarrow b, d & \\ & \phi & \leftarrow c, d & \\ & a & \leftarrow \phi & \\ & b & \leftarrow \phi & \\ & c & \leftarrow \phi & \\ & d & \leftarrow \phi & \} \end{array}$} \end{array}\] \caption{An HEF CNF $\P$ and its positive form $\P^+$ which is not HEF.} \label{fig:ex_posform2} \end{figure} \subsection{$\Sel$ sets} We introduce next the definitions of \emph{simplified theory} and of \emph{\sel} set, which will play a relevant role in the definition of the eliminating operator for HEF theories.
\begin{definition}[Simplified theory] Let $\P$ be a CNF theory and ${\cal M}$ be a model of $\P$. Then the \emph{simplified theory of $\P$ w.r.t. ${\cal M}$}, denoted as $\ssimpl{\P}{{\cal M}}$, is the CNF theory $\left(\simpl{\P}{{\cal M}}\right)_{{\cal M}\setminus\S}$, where $$\simpl{\P}{{\cal M}} = \{ H\leftarrow B \in \P : H\cap\S=\emptyset \mbox{ and } {\cal M}\supseteq B \}$$ and $\S$ is the steady set of ${\cal M}$ in $\P$. \end{definition} The clauses in $\simpl{\P}{{\cal M}}$ are those clauses of $\P$ whose body is fully contained in ${\cal M}$ and whose head is disjoint from $\S$ (hence, since ${\cal M}$ is a model, the head intersects ${\cal M}\setminus\S$). Note that no clause of $\simpl{\P}{{\cal M}}$ can have a head with empty intersection with ${\cal M}$ (in particular, an empty head), since in such a case ${\cal M}$ would not be a model of $\P$. Intuitively, then, $\simpl{\P}{{\cal M}}$ contains the subset of the clauses of $\P$ which could be falsified if atoms were eliminated from the model ${\cal M}$, so that ${\cal M}$ would no longer yield a model of $\P$. Note that we do not consider eliminating atoms of $\S$ from ${\cal M}$ since, by definition of steady set, if any atom of $\S$ were eliminated, no model of $\P$ contained in ${\cal M}$ would be left. Simplified theories enjoy two useful properties. As for the first, we observe that, for any CNF theory $\P$ and model ${\cal M}$ of $\P$, $\simpl{\P}{{\cal M}}$ is positive. The second one, summarized in the following lemma, tells that $\ssimpl{\P}{{\cal M}}$ contains no facts. \begin{lemma}\label{prop:equiv_th_atom} Let $\P$ be a CNF theory, let ${\cal M}$ be a model of $\P$, and let $\S$ be the steady set of ${\cal M}$ for $\P$. Then no clause of the form $h\leftarrow$, with $h$ a single atom, occurs in the theory $\ssimpl{\P}{{\cal M}}$. \end{lemma} Next, we introduce the notion of $\sel$ set, which will be used for defining the eliminating operator for HEF theories. \begin{definition}[$\Sel$ set] Given a CNF theory $\P$ and a set $X \subseteq \atom{\P}$, $X$ is {\em $\sel$} for $\P$ if $X$ is both an elementary set for $\P$ and a non-outbound set in $\atom{\P}$ for $\P$. \end{definition} Intuitively, a {\em $\sel$} set $X$ for $\P$ is a set of atoms such that there is no clause $c$ in $\P$ whose body is satisfied by atoms not occurring in $X$ and whose head is contained in $X$ (as will become clear in the proof of Theorem \ref{theo:eliminable_set}). Notice that, as a consequence, no clause may become unsatisfied by removing a {\em $\sel$} set $X$ from a model.
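As an executable restatement of the two definitions, the following Python sketch builds $\ssimpl{\P}{{\cal M}}$ from a clause list, with clauses encoded as pairs of sets $(H,B)$; the steady set $\S$ is taken as an input, and we assume here that the projection operator $(\cdot)_X$ restricts heads and bodies to $X$ and drops clauses whose head becomes empty (its formal definition is given earlier in the paper).

\begin{verbatim}
def project(clauses, X):
    # Projection (.)_X: restrict heads and bodies to the atoms in X;
    # clauses whose head becomes empty are assumed to be dropped.
    return [(H & X, B & X) for (H, B) in clauses if H & X]

def simplified_theory(clauses, M, S):
    # ssimpl(P, M): keep the clauses whose head avoids the steady set S
    # and whose body is contained in M, then project them on M \ S.
    kept = [(H, B) for (H, B) in clauses if not (H & S) and B <= M]
    return project(kept, M - S)
\end{verbatim}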
\subsection{On the erasability properties of $\Sel$ sets}\label{sect:sel_set} Next, we are going to show that, given any theory $\P$ and model ${\cal M}$ of $\P$, any {\em $\sel$} set is erasable in ${\cal M}$ for $\P$. In order to do that, we shall: \begin{enumerate} \item demonstrate a one-to-one correspondence between the erasable sets in ${\cal M}$ for $\P$ and the erasable sets in ${\cal M}\setminus\S$ for $\ssimpl{\P}{{\cal M}}$, where $\S$ is the steady set of ${\cal M}$ for $\P$ (Lemma \ref{th:equiv_theories}), \item show that the property of a theory $\P$ being HEF is retained by the subsets of $\P$ (Lemma \ref{theo:HEF_monotonicity}): this implies that if a theory $\P$ is HEF then, for each model ${\cal M}$ of $\P$, $\ssimpl{\P}{{\cal M}}$ is HEF as well, \item prove that, for any HEF theory $\P$ and any model ${\cal M}$ of $\P$, any $\sel$ set for $\ssimpl{\P}{{\cal M}}$ is erasable in ${\cal M}$ for $\P$ (Theorem \ref{theo:eliminable_set}), whereby the sought result is obtained. \end{enumerate} The following results are conducive to the achievement of the aforementioned objectives. To ease readability, some of the proofs are reported in the appendix. \begin{lemma}\label{th:equiv_theories} Let $\P$ be a CNF theory, let ${\cal M}$ be a model of $\P$ and let $\S$ be the steady set of ${\cal M}$ for $\P$. A set of atoms ${\mathcal{E}}$ is erasable in ${\cal M}$ for $\P$ if and only if ${\mathcal{E}}$ is erasable in $({\cal M}\setminus\S)\supseteq\atom{\ssimpl{\P}{{\cal M}}}$ for $\ssimpl{\P}{{\cal M}}$. \end{lemma} \begin{lemma}\label{theo:HEF_monotonicity} Let $\P$ be an HEF CNF theory. For each set of clauses $\P'\subseteq\P$ and for each set of atoms $X$, the theory $\P'_X$ is HEF. \end{lemma} \begin{theorem}\label{theo:eliminable_set} Let $\P$ be an HEF CNF theory, let ${\cal M}$ be a model of $\P$ and let $\S$ be the steady set of ${\cal M}$ for $\P$. If ${\mathcal{E}} \subseteq \atom{\ssimpl{\P}{{\cal M}}}$ is $\sel$ for $\ssimpl{\P}{{\cal M}}$ then ${\mathcal{E}}$ is erasable in ${\cal M}$ for $\P$. \end{theorem} \begin{proof} By Lemma \ref{th:equiv_theories}, it suffices to prove that ${\mathcal{E}}$ is erasable in $\atom{\ssimpl{\P}{{\cal M}}}$ for $\ssimpl{\P}{{\cal M}}$, which is accounted for next. First of all, recall that $\ssimpl{\P}{{\cal M}}$ is a positive theory. Moreover, by Lemma \ref{theo:HEF_monotonicity}, $\ssimpl{\P}{{\cal M}}$ is HEF, since $\P$ is HEF. Clearly, $\atom{\ssimpl{\P}{{\cal M}}}$ is a model of $\ssimpl{\P}{{\cal M}}$. It must be proved that each clause of $\ssimpl{\P}{{\cal M}}$ is true in $\atom{\ssimpl{\P}{{\cal M}}}\setminus{\mathcal{E}}$. Let $H \leftarrow B$ be a generic clause of $\ssimpl{\P}{{\cal M}}$ such that ${\mathcal{E}}$ contains $H$: this is the only kind of clause that might become false in $\atom{\ssimpl{\P}{{\cal M}}}\setminus{\mathcal{E}}$. Next, it is proved that $H\leftarrow B$ is true in $\atom{\ssimpl{\P}{{\cal M}}}\setminus{\mathcal{E}}$. First notice that, by definition of $\ssimpl{\P}{{\cal M}}$, it cannot be the case that $H$ is empty and $|B|\ge 1$. Thus, the following three cases have to be considered: \begin{enumerate} \item {\em $B$ is empty and $|H|=1$.} By Lemma \ref{prop:equiv_th_atom}, such a clause cannot exist. \item {\em $B$ is empty and $|H|>1$.} Notice that, since ${\mathcal{E}}$ is an elementary set for $\ssimpl{\P}{{\cal M}}$ and the theory $\ssimpl{\P}{{\cal M}}$ is HEF, it cannot be the case that $|H\cap{\mathcal{E}}|>1$ and, in particular, that ${\mathcal{E}}\supseteq H$. Hence, the clause $H\leftarrow B$ is true also in $\atom{\ssimpl{\P}{{\cal M}}}\setminus{\mathcal{E}}$; \item {\em $B$ is not empty.} By contradiction, assume that $H\leftarrow B$ is false in $\atom{\ssimpl{\P}{{\cal M}}}\setminus{\mathcal{E}}$. Then, $H\leftarrow B$ is such that $H \subseteq {\mathcal{E}}$ and $B \subseteq \atom{\ssimpl{\P}{{\cal M}}} \setminus {\mathcal{E}}$, namely, none of the atoms in $B$ occurs in ${\mathcal{E}}$. But such a clause cannot exist, since ${\mathcal{E}}$ is non-outbound in $\atom{\ssimpl{\P}{{\cal M}}}$ for $\ssimpl{\P}{{\cal M}}$. \end{enumerate} \end{proof}
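Operationally, the guarantee granted by Theorem \ref{theo:eliminable_set} can be tested directly on any candidate set: removing the set from the model must leave a model of the theory (this is also the check performed at each iteration by the IGEA variant of Section \ref{sect:beyond}). A minimal Python sketch, with clauses encoded as pairs of sets $(H,B)$:

\begin{verbatim}
def is_model(clauses, M):
    # M satisfies a clause H <- B iff the body is not contained in M
    # or some head atom belongs to M (constraints have empty heads).
    return all(H & M or not (B <= M) for (H, B) in clauses)

def erases_safely(clauses, M, E):
    # Operational reading of erasability used by the algorithms below:
    # removing E from the model M must leave a model of the theory.
    return E <= M and is_model(clauses, M - E)
\end{verbatim}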
\subsection{On the existence of a $\sel$ set in an HEF theory} \label{sect:sel_exists} Next, we are going to show that, under the condition that $\atom{\P}=\atom{\P^{nd}}$, any HEF theory $\P$ has a $\sel$ set. This result, stated as Theorem \ref{theo:existence} below, shall be attained by preliminarily proving that: \begin{enumerate} \item a $\sel$ set for the non-disjunctive fragment of an HEF CNF theory is also $\sel$ for the whole theory (Lemma \ref{theo:pnd->p}), \item each minimal non-outbound set of a non-disjunctive CNF theory is $\sel$ for it (Lemma \ref{theo:minimal->elem}). \end{enumerate} \begin{lemma}\label{theo:pnd->p} Let $\P$ be an HEF CNF theory. If $O\subseteq\atom{\P^{nd}}$ is $\sel$ for $\P^{nd}$ then $O$ is $\sel$ for $\P$. \end{lemma} \begin{lemma}\label{theo:minimal->elem} Let $\P$ be a non-disjunctive CNF theory. Each minimal non-outbound set in $\atom{\P}$ for $\P$ is $\sel$ for $\P$. \end{lemma} The following result eventually states the key existence property of $\sel$ sets in HEF CNF theories. \begin{theorem}\label{theo:existence} Let $\P$ be a disjunctive HEF CNF theory such that $\atom{\P}=\atom{\P^{nd}}$. Then, there exists a non-empty set of atoms $O\subseteq\atom{\P}$ such that $O$ is $\sel$ for $\P$. \end{theorem} \begin{proof} Since $\P$ is a disjunctive HEF CNF theory, it cannot be the case that $\atom{\P}=\atom{\P^{nd}}$ is elementary for $\P^{nd}$, for otherwise $\atom{\P}$ would be elementary also for $\P$, implying that $\P$ is not HEF. Since $\atom{\P^{nd}}$ is not elementary for $\P^{nd}$, by definition, there exists a set of atoms which is non-outbound in $\atom{\P^{nd}}$ for $\P^{nd}$ and, in particular, there exists a minimal non-outbound set $O\subset\atom{\P^{nd}}$ in $\atom{\P^{nd}}$ for $\P^{nd}$. To conclude, $O$ is $\sel$ for $\P^{nd}$ by Lemma \ref{theo:minimal->elem} and, since $O\subset\atom{\P^{nd}}=\atom{\P}$, $O$ is $\sel$ for $\P$ by Lemma \ref{theo:pnd->p}. \end{proof} \subsection{Computing a $\sel$ set}\label{sect:comp_sel_set} This section is devoted to proving that a $\sel$ set of an HEF CNF theory can be, in fact, computed in polynomial time. \begin{figure}[t] \begin{function}[H] \footnotesize \KwIn{A non-disjunctive CNF theory $\P$ \newline a set of atoms $X\subseteq\atom{\P}$} \KwOut{The elementary subgraph $\eG(\P,X)$ of $X$ for $\P$} \BlankLine $i = 0$\; $E_i = \emptyset$\; $\eG_i = \langle X, E_i\rangle$\; \Repeat{$C=\emptyset$}{ let $C$ be the set of clauses $h \leftarrow B$ in $\P$ s.t. the subgraph of $\eG_i$ induced by $B$ is strongly connected\; $E_{i+1} = E_i\cup\{(b,h) \mid b\in B \mbox{ and } h\leftarrow B\in C\}$\; $\eG_{i+1} = \langle X, E_{i+1}\rangle$\; remove $C$ from $\P$\; $i = i + 1$\; } \Return $\eG_i$ \label{fig:procPE} \caption{compute\_elementary\_subgraph()} \label{proc:compute_elem_set} \end{function} \caption{The \ref{proc:compute_elem_set} function} \label{fig:compute_elem_set} \end{figure} The task of computing a $\sel$ set is accomplished by the function \ref{funct:find_elem_nonout_set} shown in Figure \ref{fig:funct_find_sel}. At each iteration, the function \ref{funct:find_elem_nonout_set} makes use of the function \textit{\ref{proc:compute_elem_set}}, which is detailed in Figure \ref{fig:compute_elem_set}. The latter function receives as input a theory $\P$ and a set of atoms $X$, and returns a graph, denoted by $\eG(\P,X)$, called the \textit{elementary subgraph of $X$ for $\P$} \cite{GebserLL06}. The function reported in Figure \ref{fig:compute_elem_set} is substantially the same as that described on page 4 of \cite{GebserLL06}. In the pseudo-code, $\langle X, E\rangle$ denotes a graph where $X$ is the set of nodes and $E$ is the set of arcs.
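Before turning to an example, the construction can be transcribed compactly in Python on top of the networkx library (an illustrative rendering of the pseudo-code, not the authors' implementation; restricting attention to clauses whose atoms all lie in $X$ and excluding empty bodies are our simplifying assumptions).

\begin{verbatim}
import networkx as nx

def elementary_subgraph(clauses, X):
    # eG(P, X) for a non-disjunctive theory P; clauses are pairs (h, B)
    # with a single head atom h and a non-empty body B.
    G = nx.DiGraph()
    G.add_nodes_from(X)
    pending = [(h, B) for (h, B) in clauses if {h} | B <= X and B]
    while True:
        # clauses whose body induces a strongly connected subgraph of G
        fired = [(h, B) for (h, B) in pending
                 if nx.is_strongly_connected(G.subgraph(B))]
        if not fired:
            return G
        for h, B in fired:
            G.add_edges_from((b, h) for b in B)
        pending = [c for c in pending if c not in fired]
\end{verbatim}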
\begin{myexample}{Elementary subgraph} Figure \ref{fig:elem_graph_example} reports an example of the computation of an elementary subgraph. Since $E_0 = \emptyset$, $\eG_0$ is a graph including nodes but no arcs (see Figure \ref{fig:elem_graph_example}(c)). The clauses in $\P$ whose body is fully contained in one strongly connected component of $\eG_0$ are all the clauses with just one atom in the body, namely $C=\{ c_1$, $c_2$, $c_3$, $c_5 \}$. Thus, $E_1$ consists of the set of arcs $\{(a, b), (c, a), (a, c), (d,a)\}$, and the clauses $c_1$, $c_2$, $c_3$ and $c_5$ are removed from $\P$. The graph $\eG_1$ is shown in Figure \ref{fig:elem_graph_example}(d). The unique clause left in $\P$ whose body is fully contained in a strongly connected component is $c_4$; then $C=\{c_4\}$, $E_2 = E_1 \cup \{(a, d), (c, d)\}$, and $c_4$ is removed from $\P$. Figure \ref{fig:elem_graph_example}(e) reports the graph $\eG_2$. Since the subgraph of $\eG_2$ induced by the body of $c_6$ is not strongly connected, the procedure stops, returning $\eG_2$ as the elementary subgraph $\eG(\P,X)$ of $X$ for $\P$. \begin{figure}[h] \centering \subfloat[The program $\P$ and the set of atoms $X$]{ \begin{minipage}{0.5\textwidth} \small \begin{equation*} \begin{array}{l@{\quad}l} \begin{split} \P=\{ c_1 \equiv & ~b \leftarrow a\\ c_2 \equiv & ~a \leftarrow c\\ c_3 \equiv & ~c \leftarrow a\\ c_4 \equiv & ~d \leftarrow a, c\\ c_5 \equiv & ~a \leftarrow d\\ c_6 \equiv & ~e \leftarrow a, b\}\\ \end{split} & \begin{split} X = \{a, b, c, d, e\} \end{split} \end{array} \end{equation*} \end{minipage} }\qquad \subfloat[The dependency graph of $\P$]{ \begin{minipage}{0.33\textwidth} \centering \includegraphics[width=0.75\textwidth]{elem_graph.eps} \end{minipage} }\\ \subfloat[$\eG_{0}=\langle X, E_0\rangle$]{ \begin{minipage}{0.33\textwidth} \centering \includegraphics[width=0.75\textwidth]{elem_graph_0.eps} \end{minipage} } \subfloat[$\eG_{1}=\langle X, E_1\rangle$]{ \begin{minipage}{0.33\textwidth} \centering \includegraphics[width=0.75\textwidth]{elem_graph_1.eps} \end{minipage} } \subfloat[$\eG_{2}=\langle X, E_2\rangle$]{ \begin{minipage}{0.33\textwidth} \centering \includegraphics[width=0.75\textwidth]{elem_graph_2.eps} \end{minipage} } \caption{An example of elementary subgraph construction} \label{fig:elem_graph_example} \end{figure} \end{myexample} Next, we recall the main result stated in \cite{GebserLL06} concerning elementary subgraphs. \begin{proposition}[Theorem 2 of \cite{GebserLL06}]\label{prop:elem_subgraph_connected} For any non-disjunctive theory $\P$ and any set $X$ of atoms occurring in $\P$, $X$ is an elementary set for $\P$ if and only if the elementary subgraph of $X$ for $\P$ is strongly connected. \end{proposition} Moreover, as also proved in \cite{GebserLL06}, the following proposition holds. \begin{proposition}[\cite{GebserLL06}]\label{prop:elem_subgraph_cost} The procedure \ref{proc:compute_elem_set} terminates in polynomial time. \end{proposition} Indeed, at each iteration, a non-empty set of clauses is taken into account (for otherwise the algorithm would stop) and each clause of the theory is considered at most once. Thus, the number of iterations is at most linear in the number of clauses of the theory. As for the cost of a single iteration, we first have to find the clauses $c$ such that the subgraph of $\eG_i$ induced by the body of $c$ is strongly connected. This task can clearly be accomplished in polynomial time.
Second, we have to build the new graph $\eG_{i+1}$ by adding new arcs to $\eG_i$, a task that can also be accomplished in polynomial time. \medskip Let us now turn to the function \ref{funct:find_elem_nonout_set} (see Figure \ref{fig:funct_find_sel}). Assume that the set $\atom{\P^{nd}}$ is not elementary for $\P^{nd}$. Then, the elementary subgraph $\eG\left(\P^{nd}, \atom{\P^{nd}}\right)$ is not strongly connected (by Proposition \ref{prop:elem_subgraph_connected}). Therefore, this graph can be partitioned into the sets ${\mathcal{C}}_1,\dots,{\mathcal{C}}_k$ of its maximal strongly connected components, organized into $m\ge 1$ levels such that, if there is an arc from a node in a connected component ${\mathcal{C}}_i$ to a node in a connected component ${\mathcal{C}}_j$, then the level of ${\mathcal{C}}_i$ precedes the level of ${\mathcal{C}}_j$. Isolated connected components possibly occurring in the graph are assumed to be part of the last level $m$. \begin{figure} \begin{function}[H] \KwIn{An HEF CNF theory $\P$ such that $\atom{\P}=\atom{\P^{nd}}$} \KwOut{A $\sel$ set in $\atom{\P}$ for $\P$} \BlankLine $X_0 = \atom{\P}$\; $i = 0$\; $stop = false$\; \Repeat{$stop$}{ compute the elementary subgraph ${\mathcal{G}}_i = \eG(\P^{nd}, X_{i})$\; \If{${\mathcal{G}}_i$ is strongly connected}{ $stop = true$\; }\Else{ select a connected component ${\mathcal{C}}$ in the last level of ${\mathcal{G}}_i$\; $X_{i+1} = X_{i} \setminus{{\mathcal{C}}}$\; $i = i+1$\; } } \Return $X_i$ \caption{\textit{find\_\sel\_set}()} \label{funct:find_elem_nonout_set} \end{function} \caption{The \ref{funct:find_elem_nonout_set} function.} \label{fig:funct_find_sel} \end{figure} The following theorem states the correctness of the function \ref{funct:find_elem_nonout_set}. \begin{theorem}\label{theo:correct_xiHEF} Let $\P$ be a disjunctive HEF CNF theory such that $\atom{\P}=\atom{\P^{nd}}$. Then, the function \ref{funct:find_elem_nonout_set}{\rm (}$\P${\rm )} computes a $\sel$ set for $\P$. \end{theorem} In order to prove the theorem, the following result is useful. \begin{claim}\label{claim:non_outbound} For each $i\ge 0$, $X_i$ is a non-empty non-outbound set in $\atom{\P^{nd}}$ for $\P^{nd}$. \end{claim} \begin{proofOf}{Claim \ref{claim:non_outbound}} The proof is by induction. We start by noticing that the non-empty set $X_0=\atom{\P}=\atom{\P^{nd}}$ is non-outbound in $\atom{\P^{nd}}$ for $\P^{nd}$, by definition of outbound set. Moreover, consider the graph ${\mathcal{G}}_0$, namely, the elementary subgraph associated with the set of atoms $X_0 = \atom{\P^{nd}}$ and the theory $\P^{nd}$. Note that this graph is not strongly connected since $\P$ is, by hypothesis, a disjunctive HEF theory such that $\atom{\P}=\atom{\P^{nd}}$, and then $\atom{\P}$ is not elementary for $\P^{nd}$. Now, for $i\ge 0$, assume by induction hypothesis that $X_{i}$ is non-outbound in $\atom{\P^{nd}}$ for $\P^{nd}$ and that the graph ${\mathcal{G}}_i$ is not strongly connected (for otherwise the algorithm would have stopped). Consider a strongly connected component ${\mathcal{C}}$ of the last level of ${\mathcal{G}}_i$ and the set $X_{i+1} = X_{i}\setminus{\mathcal{C}}$. Note that $X_{i+1}$ is non-empty since, by the induction hypothesis, ${\mathcal{G}}_i$ is not strongly connected, and note, moreover, that ${\mathcal{C}}$ is not empty either.
Next, it is shown that $X_{i+1}$ is non-outbound in $\atom{\P^{nd}}$ for $\P^{nd}$ or, in other words, that there exists no clause $c\equiv h\leftarrow B$ in $\P^{nd}$ such that $B \subseteq X_0 \setminus X_{i+1}$ and $h \in X_{i+1}$ (note that, since $\P^{nd}$ is non-disjunctive, we can restrict our attention, without loss of generality, to single-head clauses whose head atom belongs to $X_{i+1}$ and whose body lies in $X_0 \setminus X_{i+1}$). So, assume by contradiction that one such clause $c$ indeed exists. Two cases are possible. \begin{description} \item $B \cap {\mathcal{C}} = \emptyset$. In this case, $B \subseteq X_0\setminus X_{i}$. Therefore, $c$ cannot occur in $\P^{nd}$, since $X_{i}$ is non-outbound in $\atom{\P^{nd}}$ for $\P^{nd}$. \item $B\cap {\mathcal{C}}\neq\emptyset$. Also in this case, the clause $c$ cannot occur in $\P^{nd}$. Indeed, the clause $c_{X_{i}}$, obtained by projecting $c$ on $X_{i}$, has its body contained in ${\mathcal{C}}$ and its head in $X_{i+1}$. Since this clause would belong to $\P^{nd}_{X_{i}}$, the component ${\mathcal{C}}$ would not belong to the last level of ${\mathcal{G}}_i$. \end{description} This concludes the proof of Claim \ref{claim:non_outbound}. \end{proofOf} Using Claim \ref{claim:non_outbound}, the statement of Theorem \ref{theo:correct_xiHEF} easily follows, as shown next. \begin{proofOf}{Theorem \ref{theo:correct_xiHEF}} When the algorithm \ref{funct:find_elem_nonout_set} stops, the last set $X_i$ is elementary for $\P^{nd}$, since the graph ${\mathcal{G}}_i$ is strongly connected. By Claim \ref{claim:non_outbound}, the set $X_i$ is also non-empty and non-outbound in $\atom{\P^{nd}}$ for $\P^{nd}$. To conclude, by Lemma \ref{theo:pnd->p}, the set $X_i$ is $\sel$ for $\P$. \end{proofOf} \begin{figure}[t] \centering \begin{tabular}{cc} \fbox{ \begin{minipage}{0.4\textwidth} \includegraphics[width=1.0\textwidth]{running2.eps} \end{minipage} } & \qquad \fbox{ \begin{minipage}{0.4\textwidth} \centering \includegraphics[width=0.8\textwidth]{running3.eps} \end{minipage} } \end{tabular} \caption{Example of execution of the function \ref{funct:find_elem_nonout_set}.} \label{fig:ex_poscnf_supelem} \end{figure} \begin{exampleContinued}{\ref{ex:pos_cnf}}{Minimal models of positive CNF theories}\rm Consider again the theory $\P$ reported in Figure \ref{fig:ex_poscnf} and the function \ref{funct:find_elem_nonout_set}($\P$). The connected components of the elementary subgraph ${\mathcal{G}}_0$ are shown in Figure \ref{fig:ex_poscnf_supelem} on the left. Thus, there is a unique connected component in the last level of ${\mathcal{G}}_0$, namely ${\mathcal{C}}=\{j,h\}$, and $X_1$ is set to $\{a,b,c,d,e,f,g,i\}$. Notice that the connected components of the elementary subgraph ${\mathcal{G}}_1$, reported in Figure \ref{fig:ex_poscnf_supelem} on the right, are not a subset of those of ${\mathcal{G}}_0$. The set $X_2$ is then $\{a,b,c,d\}$, and it is the $\sel$ set returned by the function. \end{exampleContinued}
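The loop just traced can likewise be transcribed in Python, reusing the elementary\_subgraph() sketch given after Figure \ref{fig:compute_elem_set} (again illustrative only; selecting a sink component of the condensation is one way of picking a component in the last level).

\begin{verbatim}
import networkx as nx  # elementary_subgraph() as sketched earlier

def find_sel_set(nd_clauses, atoms):
    # P is assumed to be a disjunctive HEF theory with atom(P) = atom(P^nd);
    # nd_clauses is its non-disjunctive fragment, atoms is atom(P).
    X = set(atoms)
    while True:
        G = elementary_subgraph(nd_clauses, X)
        if nx.is_strongly_connected(G):
            return X
        # the condensation is the DAG of the maximal strongly connected
        # components; a sink component necessarily lies in the last level
        cond = nx.condensation(G)
        sink = next(n for n in cond.nodes if cond.out_degree(n) == 0)
        X -= set(cond.nodes[sink]['members'])
\end{verbatim}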
The next theorem accounts for the complexity of the function \ref{funct:find_elem_nonout_set}. \begin{theorem}\label{theo:ptime_xiHEF} For any CNF theory $\P$, the function \ref{funct:find_elem_nonout_set}{\rm (}$\P${\rm )} terminates in polynomial time in the size of the theory. \end{theorem} \begin{proof} Initially, $X_0$ contains all the atoms occurring in the input theory. Then, at each iteration, either the graph ${\mathcal{G}}_i$ is strongly connected, in which case the function stops and returns $X_i$, or ${\mathcal{G}}_i$ is not strongly connected, in which case some nodes are removed from $X_i$. In the latter case, there exist at least two strongly connected components in the graph ${\mathcal{G}}_i$, and ${\mathcal{C}}$ is one of them, with $X_i \supset{\mathcal{C}}\supset\emptyset$. Thus, $X_{i+1}$ is always non-empty. As for termination, it is ensured by the fact that any singleton set of atoms induces a (trivially) strongly connected elementary subgraph, so that the loop stops at the latest when $X_i$ becomes a singleton. The number of iterations executed by the \ref{funct:find_elem_nonout_set} function is at most equal to the number of atoms occurring in the input theory, since in the worst case ${\mathcal{C}}$ consists of a single atom at each iteration. The statement follows by the fact that each iteration can be accomplished in polynomial time. \end{proof} \subsection{Defining an eliminating operator for HEF CNF theories} In the previous sections, we showed that: \begin{itemize} \item[--] given an HEF CNF theory $\P$ and a model ${\cal M}$ for $\P$, a $\sel$ set for $\ssimpl{\P}{{\cal M}}$ is erasable in ${\cal M}$ for $\P$ (Theorem \ref{theo:eliminable_set} in Section \ref{sect:sel_set}), \item[--] given an HEF CNF theory $\P$, if the set of atoms of $\P$ coincides with that of its non-disjunctive fragment, then a $\sel$ set always exists (see Theorem \ref{theo:existence} in Section \ref{sect:sel_exists}) and can indeed be computed in polynomial time (see Theorems \ref{theo:correct_xiHEF} and \ref{theo:ptime_xiHEF} in Section \ref{sect:comp_sel_set}). \end{itemize} Putting things together, given an HEF CNF theory $\P$ and a model ${\cal M}$ of $\P$, it can be concluded that, if $\atom{\ssimpl{\P}{{\cal M}}}$ coincides with $\atom{\ssimplnd{\P}{{\cal M}}}$, an erasable set ${\mathcal{E}}$ in ${\cal M}$ for $\P$ can be obtained by computing a $\sel$ set for $\ssimpl{\P}{{\cal M}}$ (as detailed in Section \ref{sect:comp_sel_set}). In order to build a suitable eliminating operator for HEF theories, it remains to prove that, if $\atom{\ssimpl{\P}{{\cal M}}}$ is a strict superset of $\atom{\ssimplnd{\P}{{\cal M}}}$, then it is always possible to find in polynomial time a model ${\cal M}'\subseteq{\cal M}$ such that $\atom{\ssimpl{\P}{{\cal M}'}}$ coincides with $\atom{\ssimplnd{\P}{{\cal M}'}}$. \begin{proposition}\label{th:cnf_transf} Given a CNF theory $\P$ and a model ${\cal M}$ for $\P$, a model ${\cal M}'\subseteq{\cal M}$ such that $\atom{\ssimpl{\P}{{\cal M}'}}$ coincides with $\atom{\ssimplnd{\P}{{\cal M}'}}$ can be computed in polynomial time. \end{proposition} The above result, which holds not only for HEF CNF theories but, rather, for any CNF theory, makes the strategy depicted above applicable to any HEF CNF theory. In order to prove Proposition \ref{th:cnf_transf}, the intermediate results stated in the technical Lemmas \ref{th:worthless_atoms} and \ref{theo:singleton_erasable} are preliminarily needed. \begin{lemma}\label{th:worthless_atoms} Let $\P$ be a CNF theory, let ${\cal M}$ be a model of $\P$ and let $\S$ be the steady set of ${\cal M}$ for $\P$. Then, ${\mathcal{E}} = ({\cal M}\setminus\S) \setminus \atom{\ssimpl{\P}{{\cal M}}}$ is erasable in ${\cal M}$ for $\P$. \end{lemma} \begin{lemma}\label{theo:singleton_erasable} Let $\P$ be a CNF theory and let ${\cal M}$ be a model of $\P$. If there exists an atom $a$ such that $a\in{\cal M}\setminus\atom{\P^{nd}_{{\cal M}\leftarrow}}$ then $\{a\}$ is erasable in ${\cal M}$ for $\P$.
\end{lemma} We are now in a position to prove Proposition \ref{th:cnf_transf}. \begin{proofOf}{Proposition \ref{th:cnf_transf}} Let $\S$ denote the steady set of ${\cal M}$ in $\P$. The two following transformations (points 1--2) can be iteratively applied until the condition $\atom{\ssimpl{\P}{{\cal M}}}=\atom{\ssimplnd{\P}{{\cal M}}}$ is met: \begin{enumerate} \item \underline{\em If ${\cal M}\setminus\S$ is a strict superset of $\atom{\ssimpl{\P}{{\cal M}}}$ then}, by Lemma \ref{th:worthless_atoms}, the atoms in the non-empty set ${\mathcal{E}} = ({\cal M}\setminus\S)\setminus\atom{\ssimpl{\P}{{\cal M}}}$ are erasable in ${\cal M}$ for $\P$ and ${\cal M}'$ can be set to ${\cal M}\setminus{\mathcal{E}}$; \item \underline{\em Else if ${\cal M}\setminus\S$ is a strict superset of $\atom{\ssimplnd{\P}{{\cal M}}}$ then} any atom $a \in ({\cal M}\setminus\S)\setminus\atom{\ssimplnd{\P}{{\cal M}}}$ is such that $\{a\}$ is erasable in ${\cal M}\setminus\S$ for $\ssimpl{\P}{{\cal M}}$ (by Lemma \ref{theo:singleton_erasable}, since ${\cal M}\setminus\S$ is a model for $\ssimplnd{\P}{{\cal M}}$) and also erasable in ${\cal M}$ for $\P$ (by Lemma \ref{th:equiv_theories}); hence, letting ${\mathcal{E}}=\{a\}$ for an arbitrarily chosen atom $a$ in $({\cal M}\setminus\S)\setminus\atom{\ssimplnd{\P}{{\cal M}}}$, ${\cal M}'$ can be set to ${\cal M}\setminus{\mathcal{E}}$; \item \underline{\em Else} it is the case that $\atom{\ssimpl{\P}{{\cal M}}}=\atom{\ssimplnd{\P}{{\cal M}}}$. \end{enumerate} The whole process can be completed in polynomial time. \end{proofOf} \begin{figure} \begin{function}[H] \footnotesize \KwIn{An HEF CNF theory $\P$ and a model ${\cal M}$ of $\P$} \KwOut{An erasable set ${\mathcal{E}}$ in ${\cal M}$ for $\P$} \BlankLine ${\mathcal{E}}'=\emptyset$\; \Repeat{$\Delta{\mathcal{E}}=\emptyset$}{ ${\cal M}={\cal M}\setminus{\mathcal{E}}'$\; Compute the steady set $\S$ of ${\cal M}$ for $\P$\; $\Delta{\mathcal{E}}=\emptyset$\; \If{$({\cal M}\setminus\S)\supset\atom{\ssimpl{\P}{{\cal M}}}$}{ $\Delta{\mathcal{E}} = ({\cal M}\setminus\S)\setminus\atom{\ssimpl{\P}{{\cal M}}}$ }\ElseIf{$({\cal M}\setminus\S)\supset\atom{\ssimplnd{\P}{{\cal M}}}$}{ Select an atom $a$ in $({\cal M}\setminus\S)\setminus\atom{\ssimplnd{\P}{{\cal M}}}$\; $\Delta{\mathcal{E}}=\{a\}$\; } ${\mathcal{E}}'={\mathcal{E}}'\cup\Delta{\mathcal{E}}$\; } \If{$\ssimpl{\P}{{\cal M}}$ is non-disjunctive}{ ${\mathcal{E}}'' = {\cal M}\setminus\S$\; }\Else{ ${\mathcal{E}}'' = {}$\ref{funct:find_elem_nonout_set}{\rm(}$\ssimpl{\P}{{\cal M}}${\rm)}\; } ${\mathcal{E}}={\mathcal{E}}'\cup{\mathcal{E}}''$\; \Return ${\mathcal{E}}$\; \caption{$\xi_{HEF}$()} \label{proc:hef_elim} \end{function} \caption{The $\xi_{HEF}$ eliminating operator.} \label{fig:hef_elim} \end{figure} Before describing the $\xi_{HEF}$ eliminating operator, the following technical result is needed. \begin{lemma}\label{theo:erasable_nd} Let $\P$ be a CNF theory, let ${\cal M}$ be a model of $\P$, and let $\S$ be the steady set of ${\cal M}$ for $\P$. If the theory $\ssimpl{\P}{{\cal M}}$ is non-disjunctive, then $\emptyset$ is its minimal model. \end{lemma} Figure \ref{fig:hef_elim} shows a realization of the $\xi_{HEF}$ eliminating operator. The following theorem asserts the most relevant result of this section, that is, that a minimal model of an HEF CNF theory can indeed be computed in polynomial time. \begin{theorem} \label{theo:GEApolynomial} Let $\P$ be an HEF CNF theory and ${\cal M}$ be a model of $\P$.
Then, $\rm GEA_{\xi_{\rm HEF}}(\P,{\cal M})$ computes, in polynomial time, a minimal model of $\P$ contained in ${\cal M}$. \end{theorem} \begin{proof} Because of Theorem \ref{theo:gea_correct} and Proposition \ref{prop:gea_cost}, in order to prove the statement it is sufficient to show that $(i)$ $\xi_{\rm HEF}$ returns an erasable set, if such a set exists, and an empty one otherwise (namely, that $\xi_{\rm HEF}$ is, in fact, an eliminating operator) and that $(ii)$ $\xi_{\rm HEF}$ runs in polynomial time. Let us consider point ($i$) first. Lines 2--12 in Figure \ref{fig:hef_elim} serve the purpose of finding a subset ${\cal M}'\subseteq{\cal M}$ such that $\atom{\ssimpl{\P}{{\cal M}'}}$ coincides with $\atom{\ssimplnd{\P}{{\cal M}'}}$, according to the strategy depicted in the proof of Proposition \ref{th:cnf_transf}. Notice that the set ${\mathcal{E}}'={\cal M}\setminus{\cal M}'$ is an erasable set. We can now assume that $\atom{\ssimpl{\P}{{\cal M}}}$ coincides with $\atom{\ssimplnd{\P}{{\cal M}}}$. If the theory $\ssimpl{\P}{{\cal M}}$ is non-disjunctive, then, by Lemmata \ref{theo:erasable_nd} and \ref{th:equiv_theories}, the set ${\mathcal{E}}''={\cal M}\setminus\S$ is an erasable set in ${\cal M}$ for $\P$ and the operator returns ${\mathcal{E}}'\cup{\mathcal{E}}''$ (see lines 13--14). Otherwise, $\ssimpl{\P}{{\cal M}}$ is disjunctive. Then, by Theorem \ref{theo:existence}, there exists a non-empty set of atoms ${\mathcal{E}}''\subseteq({\cal M}\setminus\S)$ such that ${\mathcal{E}}''$ is $\sel$ for $\ssimpl{\P}{{\cal M}}$ and, by Theorem \ref{theo:eliminable_set}, the set ${\mathcal{E}}''$ is erasable in ${\cal M}$ for $\P$. In this case, the operator returns the erasable set ${\mathcal{E}}'\cup{\mathcal{E}}''$. As far as point $(ii)$ is concerned, it is a direct consequence of Theorem \ref{theo:ptime_xiHEF}, and this concludes the proof. \end{proof} As for minimal model checking, we have the following result. \begin{theorem} Given a positive HEF CNF theory $\P$ and a set of atoms ${\cal N}\subseteq\atom{\P}$, checking if ${\cal N}$ is a minimal model of $\P$ can be accomplished in polynomial time. \end{theorem} \begin{proof} The proof follows immediately from Theorem \ref{lemma:LemmaCheck} and Theorem \ref{theo:GEApolynomial}. \end{proof} \begin{exampleContinued}{\ref{ex:pos_cnf}}{Minimal models of positive CNF theories}\rm Let us consider the execution of $\rm GEA_{\xi_{\rm HEF}}(\P,{\cal M})$, where $\P$ is the HEF theory reported in Figure \ref{fig:ex_poscnf} and ${\cal M}=\atom{\P}$. During the first main iteration, the eliminating operator $\xi_{\rm HEF}$ returns the $\sel$ set $\{a,b,c,d\}$, as shown in the example of Section \ref{sect:comp_sel_set}, and ${\cal M}$ is set to $\{e, f, g, h, i, j\}$. At the next iteration, the output of $\xi_{\rm HEF}$ is $\{e, f, g, i\}$ and ${\cal M}$ becomes $\{j,h\}$. Since now ${\cal M}$ coincides with the steady set of $\P_{\cal M}=\{ j\leftarrow; h\leftarrow; h\leftarrow j; j\leftarrow h\}$, the algorithm stops, returning $\{j,h\}$ as a minimal model of $\P$. \end{exampleContinued} \section{Beyond HEF}\label{sect:beyond} In the previous section, we have shown that GEA($\xi_{\rm HEF}$) computes a minimal model of a positive HEF CNF theory in polynomial time. Unfortunately, however, deciding whether a given theory is head-elementary-set-free is a ${\rm coNP}$-complete problem \cite{FassettiP10}\footnote{We note that the reduction therein presented is still valid for positive HEF CNFs.}.
In other words, while a minimal model of an input HEF CNF theory $\P$ can indeed be computed in polynomial time, checking whether $\P$ is actually HEF is intractable. Thus, it is sensible to study the behavior of GEA($\xi_{\rm HEF}$) when applied to a general CNF theory, which is the subject of this section. Recall that, by Theorem \ref{theo:ptime_xiHEF}, the \ref{funct:find_elem_nonout_set} function runs in polynomial time independently of the kind of theory it is applied to. Next, we will show that there are non-HEF theories for which GEA($\xi_{\rm HEF}$) successfully returns a minimal model and others for which GEA($\xi_{\rm HEF}$) fails to construct a correct output\footnote{That the algorithm does not always return the correct answer is indeed the expected behavior, due to the intractability of the general problem and to the fact that $\xi_{\rm HEF}$ runs in polynomial time (under the assumption that ${\rm P} \neq {\rm coNP}$).} (recall that, on the basis of the results of the previous section, GEA always returns a correct solution on HEF theories). The following example should help in clarifying this latter issue. \begin{myexample}{Behavior on non-HEF theories}\rm Consider the following two theories: \begin{eqnarray*} \begin{array}{rrll} {\cal P} = \{& a & \leftarrow & \\ & b,c & \leftarrow a & \\ & c & \leftarrow b & \\ & b & \leftarrow c & \} \end{array}\qquad\qquad \begin{array}{rrll} {\cal Q} = \{& a & \leftarrow & \\ & b,c,d & \leftarrow a & \\ & c & \leftarrow b & \\ & b & \leftarrow c & \\ & d & \leftarrow c & \} \end{array} \end{eqnarray*} Neither theory is HEF. Indeed, the set $\{b,c\}$ is a disjunctive elementary set both for ${\cal P}$ and for ${\cal Q}$. However, while GEA($\xi_{\rm HEF}$) does not return a minimal model of ${\cal P}$, it does correctly compute a minimal model of ${\cal Q}$. To show this, consider first running GEA($\xi_{\rm HEF}$) on ${\cal P}$. Let ${\cal M}$ be $\{a,b,c\}$ (this is the model obtained by taking the union of all the heads). At line 3 of GEA, $\S$ is set to $\{a\}$, which is not a model of ${\cal P}$, and, then, $\xi_{\rm HEF}$ is invoked. In particular, the \ref{funct:find_elem_nonout_set} function is executed on the theory ${\cal P}' = \{b,c \leftarrow; c \leftarrow b; b \leftarrow c\}$. In the execution of the function, $X_0$ is $\{b,c\}$. Since the elementary graph associated with ${{\cal P}'}^{nd}_{X_0}$ is strongly connected, the function stops and returns $\{b,c\}$. As a consequence, the set ${\mathcal{E}}$ is $\{b,c\}$ and the new set ${\cal M}$ is $\{a\}$. It turns out that, since the set ${\cal M}$ is no longer a model of ${\cal P}$, GEA($\xi_{\rm HEF}$) is not able to return a minimal model of ${\cal P}$. Specifically, at the second iteration of GEA($\xi_{\rm HEF}$), $\xi_{\rm HEF}$ is invoked on the theory ${\cal P}$ and on the set ${\cal M}=\{a\}$. The steady set $\S$ of ${\cal M}$ is $\{a\}$ {\rm(}since $\P^{nd}_{{\cal M}\leftarrow}$ is the theory $\{a\leftarrow\}${\rm)}, so that ${\cal M}\setminus\S$ is empty and no atom is added to ${\mathcal{E}}'$ in the loop of Figure \ref{fig:hef_elim}; moreover, since $\ssimpl{\P}{{\cal M}}$ contains no disjunctive clause, ${\mathcal{E}}''={\cal M}\setminus\S=\emptyset$ as well, and $\xi_{\rm HEF}$ returns the empty set. Thus, at the second iteration of GEA($\xi_{\rm HEF}$), ${\mathcal{E}}$ is empty and, then, $\S$ is set to ${\cal M} = \{a\}$ and returned. Concluding, GEA($\xi_{\rm HEF}$) on $\cal P$ ends up returning $\{a\}$, which is not a minimal model of $\P$.
Consider now the theory ${\cal Q}$. Let ${\cal M}$ be $\{a,b,c,d\}$, which is the model obtained by taking the union of all the heads. The set $\S$ is set to $\{a\}$, which is not a model of ${\cal Q}$, and, then, $\xi_{\rm HEF}$ is invoked. In particular, the \ref{funct:find_elem_nonout_set} function is executed on the theory ${\cal Q}' = \{b,c,d \leftarrow; \ c\leftarrow b; \ b \leftarrow c; \ d\leftarrow c\}$. In the execution of the function, $X_0$ is $\{b,c,d\}$. The elementary graph associated with ${{\cal Q}'}^{nd}_{X_0}$ is not strongly connected; actually, it includes the strongly connected component $C_1$, containing $b$ and $c$, and the strongly connected component $C_2$, containing $d$. Moreover, there is an arc from $C_1$ to $C_2$ but not vice versa. Then, $C_2$ belongs to the last level of the graph, and $X_1$ is set to $X_0 \setminus \{d\} = \{b,c\}$. The elementary subgraph associated with ${{\cal Q}'}^{nd}_{X_1}$ is strongly connected; therefore, the function stops and returns the set $X_1 = \{b,c\}$. As a consequence, the set ${\mathcal{E}}$ is $\{b,c\}$, the set ${\cal M}$ becomes $\{a,d\}$, and the theory ${\cal Q}^{nd}_{{\cal M}\leftarrow}$ is $\{a\leftarrow; d\leftarrow a\}$, whose minimal model is $\S = \{a,d\}$. Since $\S$ is a model of ${\cal Q}$, GEA($\xi_{\rm HEF}$) stops by returning $\S$ as the result, which is indeed a minimal model of ${\cal Q}$. \end{myexample} Summarizing, the algorithm GEA($\xi_{\rm HEF}$) always runs in polynomial time and correctly returns a minimal model on HEF CNF theories, but its correctness on non-HEF theories is seemingly unpredictable: the rest of this section is devoted to devising a suitable variant of GEA able to tell whether the result it returns is correct. In order to proceed, some further definitions and results are needed. \begin{definition}[Fallible eliminating operator] Let ${\cal M}$ be a model of a positive CNF theory $\P$. A \emph{fallible eliminating operator} $\xi_f$ is a polynomial time computable function that returns a subset of ${\cal M}\setminus\S$, with $\S$ the steady set of ${\cal M}$ for $\P$, with the constraint that if $\xi_f$ returns the empty set, then ${\cal M}$ is a minimal model of $\P$. \end{definition} \begin{proposition} Let $\P$ be a positive CNF theory and $\xi_f$ be a fallible eliminating operator. Checking if the set returned by running GEA($\xi_f$) over $\P$ is a minimal model for $\P$ is attainable in polynomial time. \end{proposition} \begin{proof} By Theorem \ref{theo:gea_correct}, we know that if the set returned by the operator employed in GEA is an erasable set, then the algorithm returns a minimal model. Thus, it is sufficient to check, at each iteration, whether ${\mathcal{E}}$ is an erasable set, namely whether ${\cal M}\setminus{\mathcal{E}}$ is a model of $\P$. Since this latter operation can be done in polynomial time, the statement follows. \end{proof} As a consequence of our previous results, we are now able to present the modified GEA, called the \textit{Incomplete Generalized Elimination Algorithm} (\emph{IGEA} for short), which is reported in Figure \ref{fig:incomplete_elimination_algo}.
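Before stating its formal properties, the control flow of IGEA can be summarized by the following Python skeleton (a sketch of the algorithm of Figure \ref{fig:incomplete_elimination_algo} only: the steady-set computation and the fallible eliminating operator are passed in as black boxes, and is\_model() is the model check sketched in Section \ref{sect:sel_set}).

\begin{verbatim}
def igea(clauses, xi, steady_set):
    # steady_set(clauses, M) computes the minimal model of P^nd_{M<-};
    # xi(clauses, M) is a fallible eliminating operator.
    M = set().union(*(H for (H, B) in clauses))  # union of all heads
    while True:
        S = steady_set(clauses, M)
        if is_model(clauses, S):
            return S, 'success'
        E = xi(clauses, M)
        if not E:
            # a fallible operator returns the empty set only if
            # M is already a minimal model
            return M, 'success'
        if not is_model(clauses, M - E):
            return M, 'failure'  # non-erasable set: non-HEF input detected
        M = M - E
\end{verbatim}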
\begin{figure}[t] \begin{algorithm}[H] \footnotesize \KwIn{A positive CNF theory $\P$} \KwOut{A minimal model for $\P$ and an indication of ``{\it success}'', or a model for $\P$ and an indication of ``{\it failure}''} \BlankLine ${\cal M} = \{h \mid H \leftarrow B \in \P \mbox{ and } h \in H\}$ \tcp{ ${\cal M}$ is a (possibly non-minimal) model of $\P$} $stop$ = $false$\; \Repeat{$stop$ \label{line:end_out_cycle_inc}}{\label{line:start_out_cycle_inc} compute the minimal model $\S$ of $\P^{nd}_{{\cal M}\leftarrow}$\; \If{$\S$ is a model of $\P$}{ $stop$ = $true$\; }\Else{ ${\mathcal{E}} = \xi_f(\P,{\cal M})$\; \If{(${\mathcal{E}} = \emptyset$)}{ $\S = {\cal M}$\; $stop$ = $true$\; }\Else{ \If{(${\cal M}\setminus{\mathcal{E}}$ is not a model of $\P$)}{ \Return ${\cal M}$ and ``Failure'' } ${\cal M} = {\cal M} \setminus {\mathcal{E}}$\label{line:compute_xi_S_inc}\; } } } \Return $\S$ and ``Success'' \caption{Incomplete Generalized Elimination Algorithm, IGEA($\xi_f$)} \label{algo:elimination_incomplete} \end{algorithm} \caption{Incomplete Generalized Elimination Algorithm, IGEA($\xi_f$)} \label{fig:incomplete_elimination_algo} \end{figure} The following theorem states the correctness of IGEA as well as its computational complexity. \begin{theorem}\label{theo:igea} For any fallible eliminating operator $\xi_f$, IGEA{\rm(}$\xi_f${\rm)} always terminates (with either success or failure) in polynomial time, returning a model of the input theory. If it succeeds, then the returned model is a minimal one. \end{theorem} \begin{proof} If the \textit{if} branch at line 12 is never taken, then ${\cal M}$ is, at each iteration, a model of $\P$ and ${\mathcal{E}}$ is an erasable set. In this case, the fallible eliminating operator $\xi_f$ is indeed an eliminating operator, whereby IGEA($\xi_f$) behaves as GEA($\xi_f$) does. This immediately implies that, if IGEA($\xi_f$) does not report a ``failure'', then it returns a minimal model of $\P$. As far as the time complexity of the algorithm is concerned, following the same line of reasoning as before, if the \textit{if} branch at line 12 is never taken, then IGEA($\xi_f$) requires exactly the same number of iterations as GEA($\xi_f$). Conversely, if the \textit{if} branch at line 12 is taken, the algorithm ends. Thus, IGEA($\xi_f$) does not require more iterations than GEA($\xi_f$). As for the cost of a single iteration, IGEA($\xi_f$) performs only one operation more than GEA($\xi_f$), namely checking whether ${\cal M} \setminus {\mathcal{E}}$ is a model of $\P$ (line 12). Since this operation is the same as that accomplished at line 4, the asymptotic temporal cost of the algorithm is not affected. Thus, the cost of IGEA($\xi_f$) is exactly that reported in Proposition \ref{prop:gea_cost} for the GEA, where $C_{\xi_f}$ is polynomial by definition of fallible eliminating operator. \end{proof} To conclude this section, we show that $\xi_{\rm HEF}$ can, in fact, be safely adopted as a fallible eliminating operator in IGEA. The following preliminary proposition is useful. \begin{proposition}\label{prop:xi_hef_nonempty} Let $\P$ be a CNF theory, ${\cal M}$ be a model of $\P$ and $\S$ be the steady set of ${\cal M}$ for $\P$. If $\S$ is not a model of $\P$ then, on input $\P$ and ${\cal M}$, the operator $\xi_{\rm HEF}$ returns a non-empty set. \end{proposition} \begin{proof} Consider the theory $(\simpl{\P}{{\cal M}})_{{\cal M}\setminus\S}$.
Since $\S$ is not a model of $\P$, there are clauses of $\P$ which are true in ${\cal M}$ but not in $\S$, so that $(\simpl{\P}{{\cal M}})_{{\cal M}\setminus\S}$ is not empty. Therefore, it suffices to prove that, whenever the function \ref{funct:find_elem_nonout_set} is run over a non-empty theory, it returns a non-empty set. Consider the function \ref{funct:find_elem_nonout_set} reported in Figure \ref{fig:funct_find_sel}. First of all, note that if $\P$ is a non-empty theory and $X$ is a non-empty set of atoms occurring in $\P$, then the elementary graph $\eG(\P^{nd}, X)$ is non-empty as well. Thus, in the function, if $X_i$ is non-empty, then ${\mathcal{G}}_i$ is non-empty. The set $X_0$ at line 1 is non-empty, since the function is invoked over a non-empty theory. By induction, assuming that $X_i$ is non-empty, we prove that $X_{i+1}$ is non-empty as well. Consider the $(i+1)$-th iteration. Two cases are possible: \begin{description} \item [$(i)$] ${\mathcal{G}}_i$ is strongly connected, and the function ends returning the set $X_i$, which is non-empty by the induction hypothesis. \item [$(ii)$] ${\mathcal{G}}_i$ is not strongly connected and, hence, includes at least two strongly connected components. In such a case, only the atoms of one of these components, call it ${\mathcal{C}}$, are removed from $X_i$. Then, $X_{i+1} = X_i \setminus {\mathcal{C}}$ is not empty. \end{description} \end{proof} \begin{proposition}\label{prop:hef_fallible_operator} The operator $\xi_{\rm HEF}$ is a fallible eliminating operator. \end{proposition} \begin{proof} Let $\P$ be a general positive CNF theory, ${\cal M}$ be a model of $\P$ and $\S$ be the steady set of ${\cal M}$ for $\P$. The proposition is an immediate consequence of the following facts: $(i)$ $\xi_{\rm HEF}(\P,{\cal M})$ runs in polynomial time (by Theorem \ref{theo:ptime_xiHEF}); $(ii)$ the set returned by the operator $\xi_{\rm HEF}(\P,{\cal M})$ is a subset of ${\cal M}\setminus\S$; $(iii)$ whenever $\S$ is not a model of $\P$, the set returned by $\xi_{\rm HEF}(\P,{\cal M})$ is not empty (by Proposition \ref{prop:xi_hef_nonempty}). \end{proof} Concluding, since $\xi_{\rm HEF}$ is a fallible eliminating operator, for any CNF theory $\P$, IGEA($\xi_{\rm HEF}$) runs in polynomial time returning a model and, on HEF theories, we are guaranteed that the returned model is minimal. Thus, the successful termination of IGEA($\xi_{\rm HEF}$) can also be seen as a necessary condition for a theory to be HEF (but not a sufficient one, unless ${\rm coNP}$ collapses onto ${\rm P}$). The next theorem, finally, summarizes the results of this section. \begin{theorem} The algorithm IGEA($\xi_{\rm HEF}$) terminates in polynomial time for any input positive CNF theory. Moreover, if the input theory $\P$ is HEF, then {\rm IGEA(}$\xi_{\rm HEF}${\rm)} succeeds, returning a minimal model of $\P$; otherwise, either the algorithm declares success, returning a minimal model of $\P$, or it declares failure, returning a model of $\P$. \end{theorem} \begin{proof} The proof immediately follows from Theorem \ref{theo:igea} and Proposition \ref{prop:hef_fallible_operator}. \end{proof} \section{Conclusions}\label{sect:conclusions} Tasks related to computing with minimal models are relevant to several AI applications. The focus of this paper has been devising efficient algorithms to deal with minimal models of CNF theories. In particular, three problems have been considered, namely minimal model checking, minimal model finding, and model minimization.
All these problems are intractable for general CNF theories, while it was known that they become tractable for the class of head-cycle-free theories \cite{Ben-Eliyahu-ZoharyP97} and, in fact, to the best of our knowledge, positive HCF theories have formed so far the largest class of CNFs for which polynomial time algorithms solving minimal model finding and minimal model checking are known. The research presented here follows the same research target as that of \cite{Ben-Eliyahu-ZoharyP97}, and the main contribution of this work has been the design of a polynomial time algorithm for computing a minimal model for (a superset of) the class of positive HEF (Head-Elementary-set-Free) CNF theories, a strict superset of the class of HCF theories, whose definition naturally stems from the analogous one given in the context of logic programming in \cite{GebserLL07}. This contribution thus broadens the tractability frontier associated with minimal model computation problems. In more detail, we have introduced the {\em Generalized Elimination Algorithm (GEA)}, which computes a minimal model of {\em any} positive CNF and whose complexity depends on the complexity of the specific {\em eliminating operator} it invokes. In order to attain tractability, a specific eliminating operator has been defined which allows the algorithm to compute in polynomial time a minimal model for a class of CNFs that strictly includes HEF theories. Unfortunately, however, it is already known that recognizing HEF theories is per se an intractable problem, which may apparently limit the applicability range of our algorithmic schema. In order to overcome this problem, an ``incomplete'' variant of the GEA (called IGEA) has been proposed: the resulting schema, once instantiated with an appropriate eliminating operator, always constructs a model of the input CNF, which is guaranteed to be minimal at least if the input theory is HEF. We note that this latter algorithm is able to ``declare'' whether the returned model is indeed minimal. The research work presented here can be continued along some interesting directions. As a major research direction, since IGEA is capable of dealing also with theories that are not HEF, it would be relevant to define, via a syntactic specification such as those pinpointing HCF and HEF theories, a superset of HEF theories coinciding with those on which IGEA stops returning success. While it is not at all clear whether this can be reasonably attained, we might consider it enough to get close (from below) to this class of theories. Closely related to the above line of research is the assessment of the practical occurrence of theories enjoying the HEF property, or the property of guaranteeing success to IGEA, as well as the assessment of the success rate of IGEA on generic CNF theories. Moreover, enhancing stable model and answer set engines for logic programs with IGEA appears a potentially fruitful line of investigation. \bibliographystyle{plain}
\section{Introduction} Word embeddings have been widely used in Natural Language Processing (NLP) applications, since the vector representations of words, learned with deep neural networks, capture useful semantic properties of words and linguistic relationships between them \cite{mikolov2013linguistic, liu2016learning, levy2014dependency}. Word embeddings are commonly utilized as feature input to machine learning models, which enables machine learning techniques to process raw text data. There has been an increasing number of studies applying word embeddings to common NLP tasks, such as information extraction (IE) \cite{wang2017clinical, zeng2014relation, nguyen2014employing}, information retrieval (IR) \cite{ganguly2015word}, sentiment analysis \cite{tang2014learning,maas2011learning}, question answering \cite{ren2015exploring,dong2015question}, and text summarization \cite{yogatama2015extractive, rush2015neural}. Recently, in the biomedical domain, word embeddings have been utilized in applications such as biomedical named entity recognition (NER) \cite{tang2014evaluating, liu2015effects}, medical synonym extraction \cite{jagannatha2015mining}, relation extraction (RE) (e.g., chemical-disease relation \cite{jiang2015crd}, drug-drug interaction \cite{liu2016drug,wang2017dependency} and protein-protein interaction \cite{jiang2016general}), biomedical IR \cite{jo2016cbnu, wang2016ensemble} and medical abbreviation disambiguation \cite{wu2015clinical}. There are two main text resources utilized to train word embeddings for biomedical NLP applications: internal task corpora (e.g., training data) \cite{jo2016cbnu} and external data resources (e.g., Wikipedia) \cite{gurulingappa2016semi}. The use of the former resource is straightforward, as internal corpora capture the nuances of language specific to the topic of the task \cite{diaz2016query}. Exploiting external data resources is based on the implicit assumption that these resources contain knowledge that could be used to enhance domain tasks \cite{shen2016knowledge,shen2016predicate,shen2015bmqgen}. In addition, a number of pre-trained word embeddings are publicly available, such as the embeddings of Google News\footnote{https://drive.google.com/file/d/0B7XkCwpI5KDYNlNUTTlSS21pQmM/edit} and GloVe\footnote{https://nlp.stanford.edu/projects/glove/}. These embeddings capture the semantics of general English words learned from large corpora. However, one question remains unanswered: do we need to train word embeddings for a specific NLP task, given the availability of public pre-trained word embeddings? This question is especially significant for specialized domains, and particularly for the clinical domain. The reason is that little electronic health record (EHR) data is publicly available due to the Health Insurance Portability and Accountability Act (HIPAA) requirements, while biomedical literature is more widely available through resources such as PubMed\footnote{https://www.ncbi.nlm.nih.gov/pubmed/}. However, to the best of our knowledge, little work has been done to evaluate word embeddings trained from these textual resources for biomedical NLP applications. In this study, we empirically evaluated word embeddings trained from four different corpora, namely clinical notes, biomedical publications, Wikipedia, and news.
For the former two resources, we utilized clinical notes from the EHR system at Mayo Clinic and articles from PubMed Central (PMC)\footnote{http://www.ncbi.nlm.nih.gov/pmc/tools/openftlist/} to train word embeddings. For the latter two resources, we used publicly available pre-trained word embeddings, GloVe and Google News. We performed the evaluation qualitatively and quantitatively. For the qualitative evaluation, we adopted the method used in Levy and Goldberg's study \cite{levy2014dependency}, and manually inspected the five most similar medical words to each of a number of arbitrarily selected medical words from three medical categories (disorder, symptom, and drug). In addition, we analyzed the word embeddings through a 2-dimensional visualization plot of 377 medical words. For the quantitative evaluation, we conducted both an intrinsic and an extrinsic evaluation. The intrinsic evaluation directly tested the semantic relationships between medical words using four published datasets for measuring semantic similarity between medical terms, i.e., Pedersen \cite{pedersen2007measures}, Hliaoutakis \cite{hliaoutakis2005semantic}, MayoSRS \cite{pakhomov2011towards}, and UMNSRS \cite{pakhomov2010semantic,pakhomov2016corpus}. For the extrinsic evaluation, we applied the word embeddings to downstream NLP applications in the biomedical domain, including clinical IE, biomedical IR, and RE, and measured the performance of the word embeddings. \section{Related Work} Due to the success of word embeddings in a variety of NLP applications, some existing studies have evaluated quantitatively how well word embeddings represent word semantics. Most of them focus on evaluating the word embeddings generated by different approaches. Baroni et al. \cite{baroni2014don} presented the first systematic evaluation of word embeddings generated by four models, i.e., DISSECT\footnote{http://clic.cimec.unitn.it/composes/}, CBOW \cite{mikolov2013linguistic} using word2vec\footnote{https://code.google.com/p/word2vec/}, the Distributional Memory model\footnote{http://clic.cimec.unitn.it/dm/}, and the Collobert and Weston model\footnote{http://ronan.collobert.com/senna/}, using a corpus of 2.8 billion tokens in the general English domain. They tested these models on fourteen benchmark datasets in five categories, including semantic relatedness, synonym detection, concept categorization, selectional preferences, and analogy. They found that the word2vec model, CBOW, performed the best for almost all the tasks. Schnabel et al. \cite{schnabel2015evaluation} trained the CBOW model of word2vec \cite{mikolov2013linguistic}, C\&W embeddings \cite{collobert2011natural}, Hellinger PCA \cite{lebret2013word}, GloVe \cite{pennington2014glove}, TSCCA \cite{dhillon2012two}, and Sparse Random Projections \cite{li2006very} on a 2008 Wikipedia dump, and tested them on the same fourteen datasets. They found that CBOW outperformed the other embeddings on 10 datasets. They also conducted an extrinsic evaluation by using the embeddings as input features to two downstream tasks, namely noun phrase chunking and sentiment classification. They found that the results of CBOW were also among the best. Ghannay et al.
\cite{ghannay2016word} conducted a similar intrinsic evaluation; in addition, they evaluated the skip-gram models of word2vec \cite{mikolov2013linguistic}, CSLM word embeddings \cite{schwenk2013cslm}, dependency-based word embeddings \cite{levy2014dependency}, and combined word embeddings on four NLP tasks (Part-Of-Speech tagging, chunking, named entity recognition, and mention detection) and two linguistic tasks. They trained these word embeddings on the Gigaword corpus, composed of 4 billion words, and found that the dependency-based word embeddings gave the best performance on the NLP tasks and that the combination of embeddings yielded significant improvements. Nayak et al.'s study \cite{nayak2016evaluating} recommended that the evaluation of word embeddings should test both syntactic and semantic properties, and that the evaluation tasks should be closer to real-world applications. However, few of these studies evaluated word embeddings for tasks in the biomedical domain. While most of the aforementioned studies evaluate word embeddings in the general (i.e., non-biomedical) NLP domain, only one recent study, by Pakhomov et al. \cite{pakhomov2016corpus}, evaluates word embeddings in the biomedical domain, to the best of our knowledge. They trained the CBOW model on two biomedical corpora, namely clinical notes and biomedical publications, and on one general English corpus, namely that of GloVe. The word embeddings were evaluated on subsets of the UMNSRS dataset, which consists of pairs of medical terms with the similarity of each pair assessed by medical experts, as well as on a document retrieval task and a word sense disambiguation task. They found that the semantics captured by the embeddings computed from biomedical publications were on par with those from clinical notes. We extended their evaluation of word embeddings by: 1) utilizing four datasets to evaluate the ability of word embeddings to capture the semantics of medical terms; 2) conducting a qualitative evaluation; and 3) examining word embeddings on more downstream applications with data provided by shared biomedical NLP tasks. \section{Word Embeddings and Parameter Settings} We utilized word2vec in this study, as it has been shown that word2vec generates better word embeddings for most general NLP tasks than other approaches \cite{baroni2014don,schnabel2015evaluation}. Since no evidence shows that the CBOW architecture outperforms the skip-gram architecture, or vice versa, we arbitrarily chose the skip-gram architecture for word2vec. Word embeddings can be represented as a mapping $V\rightarrow \mathbb{R}^D: w\mapsto \theta$, which maps a word $w$ from a vocabulary $V$ to a real-valued vector $\theta$ in an embedding space of dimension $D$. The skip-gram architecture, proposed by Mikolov et al. \cite{mikolov2013linguistic}, uses the focus word as the single input layer and the target contextual words as the output prediction layer. To avoid expensive computation over every word in $V$, Mikolov et al. \cite{mikolov2013linguistic} proposed a technique called ``negative sampling'', which samples a few output words and updates the embeddings for this small sample in each iteration. We formulate the model mathematically in the following. Given a sequence of target words $w_1, w_2, \dots, w_T$ and their contextual words $h_1, h_2, \dots, h_T$, the training objective is to maximize the conditional log probability of observing the actual output contextual words given the input target words, i.e., \begin{equation}\label{equ.obj} \max J = \max \frac{1}{T}\sum_{t=1}^T \log P(h_t|w_t).
\end{equation} where $J$ is the objective function, and $P(h|w)$ is the conditional probability in the neural probabilistic language model, usually defined by \begin{equation} P(h|w)=\frac{e^{\theta_h^\mathsf{T}\theta'_w}}{\sum_{h'\in V} e^{\theta_{h'}^\mathsf{T}\theta'_w}}, \end{equation} where $\theta'$ and $\theta$ are the input and output word embeddings, respectively. Accordingly, the log probability can be written as: \begin{equation}\label{equ.log} \log P(h|w) = \theta_h^\mathsf{T}\theta'_w-\log\Big(\sum_{h'\in V}e^{\theta_{h'}^\mathsf{T}\theta'_w}\Big). \end{equation} We can take the derivative of $J$ and update the embeddings iteratively. However, the computation is extremely expensive since, in each iteration, the algorithm needs to go through the whole vocabulary $V$. Using negative sampling, Mikolov et al. \cite{mikolov2013linguistic} defined an empirical log probability $\log P'(h|w)$ to approximate $\log P(h|w)$: \begin{equation}\label{equ.log.mod} \log P'(h|w) =\log \sigma(\theta_h^\mathsf{T}\theta'_w)+\sum_{i=1}^k \mathbb{E}_{h_i\sim P_n(h)} [ \log\sigma(-\theta_{h_i}^\mathsf{T}\theta'_w)], \end{equation} where $\sigma(x)=1/(1+\exp(-x))$ is the sigmoid (logistic) function, and $P_n(h_i)=\frac{f(h_i)^{3/4}}{\sum_{j=1}^{|V|} f(h_j)^{3/4}}$ is the empirical distribution from which the $k$ negative samples are drawn, with $f(h_i)$ being the frequency of term $h_i$. The word embeddings $\theta$ can then be computed by maximizing the objective function in Equation (\ref{equ.obj}) with $\log P(h|w)$ replaced by $\log P'(h|w)$. We tested different vector dimensions $D$ (i.e., 20, 60, 100) for the vector representations trained on EHR and MedLit and chose 100 for EHR and 60 for MedLit according to the performance in our intrinsic evaluation. Similarly, we chose the dimension of 100 for GloVe, and 300 for Google News since only 300 was publicly available for Google News. The experimental results of using different vector dimensions for the word embeddings are provided in Appendix \ref{ap.1}. For training word embeddings on EHR and MedLit, we set the window size to 5, the minimum word frequency to 7 (i.e., words that occurred fewer than 7 times in the corpus were ignored), and the negative sampling parameter to 5. These parameters were selected based on previous studies \cite{mikolov2013linguistic,levy2014dependency,wang2017dependency}.
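For concreteness, the following is a minimal sketch of this training configuration using the gensim implementation of word2vec; the choice of gensim, the corpus file name, and the one-sentence-per-line input format are illustrative assumptions rather than part of the original pipeline.

\begin{verbatim}
# Minimal sketch: skip-gram with negative sampling via gensim
# (hypothetical corpus file; one pre-processed sentence per line).
from gensim.models import Word2Vec

with open("ehr_corpus.txt") as f:
    sentences = [line.split() for line in f]

model = Word2Vec(
    sentences,
    vector_size=100,  # D = 100 for EHR (60 for MedLit); `size` in gensim < 4
    window=5,         # context window size
    min_count=7,      # ignore words occurring fewer than 7 times
    sg=1,             # skip-gram architecture
    negative=5,       # number of negative samples k
)
model.wv.save_word2vec_format("ehr_vectors.txt")
\end{verbatim}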
\section{Data and Text Pre-processing} The first corpus, denoted as EHR, contains textual clinical notes for a cohort of 113k patients receiving their primary care at Mayo Clinic, spanning a period of 15 years from 1998 to 2013. The vocabulary size of this corpus is 103k. The second corpus, denoted as MedLit, is obtained from a snapshot of the Open Access Subset\footnote{http://www.ncbi.nlm.nih.gov/pmc/tools/openftlist/} of PubMed Central (PMC)\footnote{http://www.ncbi.nlm.nih.gov/pmc/} in March 2016, which is an online digital database of freely available full-text biomedical literature. It contains 1.25 million biomedical articles and 2 million distinct words in the vocabulary. As a comparison, additional public pre-trained word embeddings from two general English resources, i.e., Google News\footnote{https://drive.google.com/file/d/0B7XkCwpI5KDYNlNUTTlSS21pQmM/edit?usp=sharing} and GloVe\footnote{http://nlp.stanford.edu/data/glove.6B.zip}, were utilized in the evaluation. The Google News embeddings have vector representations for 3 million words from Google News, trained with word2vec \cite{mikolov2013linguistic}. The GloVe embeddings were trained with the GloVe model \cite{pennington2014glove} on a snapshot of Wikipedia in 2014 and Gigaword Fifth Edition\footnote{https://catalog.ldc.upenn.edu/LDC2011T07}, and have 400k unique words in the vocabulary. The MedLit and EHR corpora were pre-processed minimally by removing punctuation, lowercasing, and replacing all digits with ``7''. One exception is that we replaced `-' with `\_' if two or more words were connected by `-' and treated the connected words as one token. For the MedLit corpus, we additionally removed website URLs, email addresses, and Twitter handles. The clinical narratives in the EHR corpus are written by medical practitioners and thus contain more incomplete sentences than research articles. Therefore, we conducted additional pre-processing specific to the clinical notes from Mayo Clinic. Specifically, the ``Family history'' section was removed if it was semi-structured \cite{wang2017systematic}; as shown by the example in Table \ref{tab.family}, the semi-structured ``Family history'' section does not provide much valuable semantic information. The ``Vital Signs'' section was removed entirely since it does not contain contextual information useful for training word embeddings; Table \ref{tab.vital} shows an example of this section. Moreover, we replaced all text contractions with their respective complete text (e.g., ``can't'' is replaced with ``can not''), and removed all clinical notes metadata, note section headers, dates, phone numbers, weight and height information, and punctuation. \begin{table}[!h] \centering \footnotesize \caption{An example of the semi-structured ``Family history'' section from the EHR corpus.} \label{tab.family} \begin{tabular}{l} \hline \textit{MOTHER} \\ \textit{Stroke/TIA} \\ \textit{BROTHERS} \\ \textit{4 brothers alive 1 brother deceased} \\ \textit{SISTERS} \\ \textit{2 sisters alive} \\ \textit{DAUGHTERS} \\ \textit{1 daughter alive} \\ \textit{Heart disease } \\ \hline \end{tabular} \end{table} \begin{table}[!h] \centering \footnotesize \caption{An example of the ``Vital Signs'' section from the EHR corpus.} \label{tab.vital} \begin{tabular}{c} \hline \textit{Height: 149.1 cm. Weight: 44.5 kg. BSA(G): 1.3573 M2. BMI: 20.02 KG/M2.} \\ \hline \end{tabular} \end{table} \section{Qualitative Evaluation} We arbitrarily selected medical words from three medical semantic categories, namely disorder, symptom, and drug. Word embeddings trained from the four different corpora were utilized to compute the five most similar words to each selected medical word according to the cosine similarity. We then adopted the method used in Levy and Goldberg's study \cite{levy2014dependency} and manually inspected the conceptual similarity between the target word and the most similar words. Suppose $w_1$ and $w_2$ are two words; the similarity between $w_1$ and $w_2$ is defined as \begin{equation}\label{equ.simi1} \text{similarity}(w_1, w_2)=\frac{\theta_1\cdot \theta_2}{\|\theta_1\|\|\theta_2\|}, \end{equation} where $\theta_1$ and $\theta_2$ are the vector representations of $w_1$ and $w_2$ in the embedding space, respectively. If the target term is a medical phrase $s_1$ consisting of multiple words, i.e., $s_1=\{w_1, w_2, ..., w_n\}$, the similarity function becomes \begin{equation}\label{equ.simi2} \text{similarity}(s_1, w_2)=\frac{\Theta_1\cdot \theta_2}{\|\Theta_1\|\|\theta_2\|}, \end{equation} where $\Theta_1=\frac{1}{n}\sum_{i=1}^n \theta_i$ is the representation of $s_1$ in the embedding space. This is different from Pakhomov et al.'s study \cite{pakhomov2016corpus} where only single-word terms were considered. We ranked the words in the vocabulary by their similarity to the target word and chose the five top-ranked words, as in the sketch below.
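A minimal sketch of these similarity computations follows; the in-memory dictionary \texttt{vectors} (word to numpy array) is an illustrative stand-in for the trained embeddings.

\begin{verbatim}
# Minimal sketch of the two similarity equations above and of the
# top-5 neighbour ranking; `vectors` maps words to numpy arrays.
import numpy as np

def phrase_vector(term, vectors):
    # Average the vectors of a (possibly multi-word) term.
    return np.mean([vectors[w] for w in term.split()], axis=0)

def similarity(term1, term2, vectors):
    v1 = phrase_vector(term1, vectors)
    v2 = phrase_vector(term2, vectors)
    return float(v1 @ v2 / (np.linalg.norm(v1) * np.linalg.norm(v2)))

def most_similar(term, vectors, topn=5):
    scored = [(w, similarity(term, w, vectors))
              for w in vectors if w not in term.split()]
    return sorted(scored, key=lambda p: p[1], reverse=True)[:topn]
\end{verbatim}

For embeddings loaded with gensim, the built-in \texttt{most\_similar} method provides the same ranking for single-word terms.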
Table \ref{tab.words} lists eight target words from the three medical categories, and the corresponding five most similar words computed using the word embeddings trained from different resources. For the first target word describing a disorder, \textit{diabetes}, EHR and MedLit find its synonym, \textit{mellitus}, among the most similar words while GloVe and Google News fail to find it. EHR finds two terms related to co-morbidities of \textit{diabetes}, \textit{cholesterolemia} and \textit{dyslipidemia}, and a common adjective modifier, \textit{uncontrolled}. MedLit finds terms relevant to co-existing conditions for \textit{diabetes}, such as \textit{cardiovascular} (possibly from \textit{cardiovascular disease}), \textit{nonalcoholic} (possibly from \textit{nonalcoholic fatty liver disease}), \textit{obesity}, and \textit{polycystic} (possibly from \textit{polycystic ovary syndrome}, a hyperandrogenic disorder associated with a high risk of developing Type 2 diabetes). Most of these terms are related to medical research topics and occur frequently in biomedical research articles. GloVe finds two related terms, \textit{hypertension} and \textit{obesity}, while the three other terms, i.e., \textit{arthritis}, \textit{cancer} and \textit{alzheimer}, are less relevant disease names. Google News finds two morphological variants, \textit{diabetics} and \textit{diabetic}, relevant to the target word, one synonym, \textit{diabetes\_mellitus}, and one related disease name, \textit{heart\_disease}. We can draw similar conclusions for the second and third disorder words. The \textit{dyspnea} example in the symptom category demonstrates the advantage of EHR and MedLit. EHR finds \textit{palpitations}, a common cause of \textit{dyspnea}; \textit{orthopnea}, \textit{exertional}, and \textit{doe} (dyspnea on exertion) are synonyms or specific conditions for \textit{dyspnea}. MedLit finds related symptoms, \textit{sweats} and \textit{orthopnea}, a synonym \textit{breathlessness}, a relevant disorder \textit{hypotension}, and a term relevant to the symptom, \textit{rhonchi}. GloVe finds the synonyms \textit{shortness} and \textit{breathlessness}, and the less relevant symptoms \textit{cyanosis} and \textit{photophobia}. Google News finds the less relevant symptoms \textit{pruritus} and \textit{rhinorrhea} and the less relevant disease \textit{nasopharyngitis}. Similar observations can be made for \textit{sore throat} and \textit{low blood pressure} as well. We can further observe that the semantics captured by the word embeddings trained from different corpora are disparate for the medical terms in the drug category. For \textit{opioid}, EHR finds \textit{opiate}, \textit{benzodiazepine}, \textit{sedative}, \textit{polypharmacy}, which are very relevant medications.
MedLit finds \textit{nmda\_receptor}, \textit{affective\_motivational}, \textit{naloxone\_precipitated}, and \textit{hyperlocomotion}, which are related to the mechanism of action of \textit{opioid}. GloVe finds \textit{analgesic} and the less relevant \textit{anti-inflammatory}, and Google News finds \textit{opioid}-related phrases and the relevant term \textit{antipsychotics}. For the target term \textit{aspirin}, EHR again finds clinically very relevant terms and MedLit finds terms relevant in research articles, while GloVe and Google News only find medication names. These target words and their most similar words clearly show that EHR and MedLit capture the semantics of medical terms better than GloVe and Google News and find more relevant similar medical terms. However, EHR and MedLit find similar medical terms from different perspectives, owing to their different foci. EHR contains clinical narratives and thus is closer to clinical language; it contains terms with different morphologies and even typos, such as \textit{melitis}, \textit{caner} and \textit{thraot} as listed in Table \ref{tab.words}. In contrast, MedLit contains more of the medical terminology used in research articles, and finds similar words mostly from a biomedical research perspective. \begin{table} \centering \footnotesize \caption{Selected medical words from three medical semantic categories (i.e., disorder, symptom, and drug) and the corresponding five most similar words induced by the word embeddings trained from different resources.} \label{tab.words} \begin{tabular}{p{1.5cm}|p{2.3cm}p{2.2cm}p{2.2cm}p{2.2cm}p{2.2cm}} \hline Semantic Category & Target Word & EHR & MedLit & GloVe & Google News \\ \hline \multirow{3}{*}{Disorder} & diabetes & mellitus, \newline uncontrolled, \newline cholesterolemia, \newline dyslipidemia, \newline melitis & cardiovascular, \newline nonalcoholic, \newline obesity, \newline mellitus, \newline polycystic & hypertension, \newline obesity, \newline arthritis, \newline cancer, \newline alzheimer & diabetics, \newline hypertension, \newline diabetic, \newline diabetes\_mellitus, \newline heart\_disease \\ \cline{2-6} & peptic ulcer disease & scleroderma, \newline duodenal, \newline crohn, \newline gastroduodenal, \newline diverticular & gastritis, \newline alcoholism, \newline rheumatic, \newline ischaemic, \newline nephropathy & ulcers, \newline arthritis, \newline diseases, \newline diabetes, \newline stomach & lichen\_planus, \newline Candida\_infection, \newline vaginal\_yeast\_infections, \newline oral\_thrush, \newline dermopathy \\ \cline{2-6} & colon cancer & breast, \newline ovarian, \newline prostate, \newline postmenopausally, \newline caner & breast, \newline mcf, \newline cancers, \newline tumor\_suppressing, \newline downregulation & breast, \newline prostate, \newline cancers, \newline tumor, \newline liver & breast, \newline prostate, \newline tumor, \newline pre\_cancerous\_lesion, \newline cancerous\_polyp \\ \hline \multirow{3}{*}{Symptom} & dyspnea & palpitations, \newline orthopnea, \newline exertional, \newline doe, \newline dyspnoea & sweats, \newline orthopnea, \newline breathlessness, \newline hypotension, \newline rhonchi & shortness, \newline breathlessness, \newline cyanosis, \newline photophobia, \newline faintness & dyspnoea, \newline pruritus, \newline nasopharyngitis, \newline symptom\_severity, \newline rhinorrhea \\ \cline{2-6} & sore throat & scratchy, \newline thoat, \newline cough, \newline runny, \newline thraot & runny, \newline rhinorrhea,
\newline myalgia, \newline swab\_fecal, \newline nose & shoulder, \newline stomach, \newline nose, \newline chest, \newline neck & soreness, \newline bruised, \newline inflammed, \newline contusion, \newline sore\_triceps \\ \cline{2-6} & low blood pressure & readings, \newline pressue, \newline presssure, \newline bptru, \newline systolically & dose, \newline cardio\_ankle, \newline ncbav, \newline preload, \newline gr & because, \newline result, \newline high, \newline enough, \newline higher & splattering\_tombstones, \newline Zapping\_nerves\_helps, \newline pressue, \newline Marblehead\_Swampscott\_VNA, \newline pill\_Norvasc \\ \hline \multirow{2}{*}{Drug} & opioid & opiate, \newline benzodiazepine, \newline opioids, \newline sedative, \newline polypharmacy & opioids, \newline nmda\_receptor, \newline affective\_motivational, \newline naloxone\_precipitated, \newline hyperlocomotion & analgesic, \newline opiate, \newline opioids, \newline anti-inflammatory, \newline analgesics & opioids, \newline opioid\_analgesics, \newline opioid\_painkillers, \newline antipsychotics, \newline tricyclic\_antidepressants \\ \cline{2-6} & aspirin & ecotrin, \newline uncoated, \newline nonenteric, \newline effient, \newline onk & chads, \newline vasc, \newline newer, \newline cha, \newline angina & ibuprofen, \newline tamoxifen, \newline pills, \newline statins, \newline medication & dose\_aspirin, \newline ibuprofen, \newline statins, \newline statin, \newline calcium\_supplements \\ \hline \end{tabular} \end{table} In order to show different aspects of medical concepts captured by word embeddings trained from different corpora, we extracted 377 medical terms from the UMNSRS dataset \cite{pakhomov2010semantic,pakhomov2016corpus} and visualized the word embeddings for these medical terms in a two-dimensional plot using t-distributed stochastic neighbor embedding (t-SNE) \cite{maaten2008visualizing}. Example clusters of medical terms in the word embeddings are shown in Figure \ref{fig.clusters}. Figure \ref{fig.clusters.a} depicts a cluster of symptoms, such as \textit{heartburn}, \textit{vomiting} and \textit{nausea}, from the word embeddings trained on EHR. Figure \ref{fig.clusters.b} shows a cluster of antibiotic medications, such as \textit{bacitracin}, \textit{cefoxitin}, and \textit{chloramphenicol}, based on MedLit embeddings. Figures \ref{fig.clusters.c} and \ref{fig.clusters.d} illustrate clusters of symptoms from the GloVe and Google News embeddings, respectively. Since we did not employ any clustering method, these clusters were intuitively observed from the two-dimensional plot. The visualization of the entire set of 377 medical terms using word embeddings trained from four different corpora is provided in the supplementary file. 
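The visualization step can be reproduced with a short script along the following lines; the \texttt{terms} list, the \texttt{vectors} dictionary, and the \texttt{phrase\_vector} helper from the earlier sketch are assumed.

\begin{verbatim}
# Minimal sketch of the two-dimensional t-SNE plot of the 377 UMNSRS
# terms; `terms` and `vectors` come from the previous steps.
import numpy as np
import matplotlib.pyplot as plt
from sklearn.manifold import TSNE

X = np.stack([phrase_vector(t, vectors) for t in terms])
coords = TSNE(n_components=2, random_state=0).fit_transform(X)

plt.figure(figsize=(10, 10))
plt.scatter(coords[:, 0], coords[:, 1], s=5)
for (x, y), t in zip(coords, terms):
    plt.annotate(t, (x, y), fontsize=6)
plt.savefig("tsne_terms.pdf")
\end{verbatim}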
\begin{figure}[!h] \centering \begin{tabular}{|c|c|} \hline \subfloat[EHR]{\label{fig.clusters.a}\includegraphics[width=0.4\textwidth]{echcn1_sub.pdf}} & \subfloat[MedLit]{\label{fig.clusters.b}\includegraphics[width=0.23\textwidth]{pmc_sub.pdf}} \\ \hline \subfloat[GloVe]{\label{fig.clusters.c}\includegraphics[width=0.4\textwidth]{wiki_sub.pdf}} & \subfloat[Google News]{\label{fig.clusters.d}\includegraphics[width=0.4\textwidth]{google_sub.pdf}} \\ \hline \end{tabular} \caption{Examples of word clusters in the visualization of word embeddings trained from four corpora using t-SNE.} \label{fig.clusters} \end{figure} \section{Quantitative Evaluation} We conducted both intrinsic and extrinsic quantitative evaluation: the former uses four published datasets for measuring semantic similarity between medical terms, while the latter uses downstream biomedical NLP tasks to evaluate the word embeddings. \subsection{Intrinsic Evaluation} We tested the word embeddings on four published biomedical measurement datasets commonly used to measure semantic similarity between medical terms. The first is Pedersen's dataset \cite{pedersen2007measures}, which consists of 30 medical term pairs scored by physician experts according to their relatedness. The second is Hliaoutakis's dataset \cite{hliaoutakis2005semantic}, consisting of 34 medical term pairs with similarity scores obtained by human judgments. The third, the MayoSRS dataset developed by Pakhomov et al. \cite{pakhomov2011towards}, consists of 101 clinical term pairs whose relatedness was determined by nine medical coders and three physicians from Mayo Clinic. The relatedness of each term pair was assessed on a four-point scale: (4.0) practically synonymous, (3.0) related, (2.0) marginally related, and (1.0) unrelated. We evaluated the word embeddings using the mean score of the physicians and medical coders. The fourth, the UMNSRS similarity dataset developed by Pakhomov et al. \cite{pakhomov2010semantic}, consists of 566 medical term pairs whose semantic similarity was determined independently by eight medical residents from the University of Minnesota Medical School. The similarity and relatedness of each term pair was annotated on a continuous scale by having the resident touch a bar on a touch-sensitive computer screen to indicate the degree of similarity or relatedness. We used Equations (\ref{equ.simi1}) and (\ref{equ.simi2}) to calculate the semantic similarity for each pair of medical terms in the testing datasets. Since some medical terms may not exist in the vocabulary of the word embeddings, we used fastText \cite{bojanowski2016enriching} to compute word vectors for these out-of-vocabulary (OOV) medical terms. Specifically, we built character n-gram vectors analogous to fastText's output by converting each word (e.g., ``abcdef'') in the word embeddings to 3-gram (i.e., trigram) format (i.e., ``abc'', ``bcd'', ``cde'', ``def'') with the vector representation of each trigram initially the same as that of the original word. After converting all the words, we used the averaged vector for each trigram extracted from different words (e.g., if the vector for ``abcdef'' is $\theta_1$ and that for ``defg'' is $\theta_2$, the final vector for the trigram ``def'' is $\frac{1}{2}(\theta_1+\theta_2)$ since ``def'' is a trigram shared between the two words). Since each word with at least 3 characters can be represented as a bag of character trigrams, fastText represents an OOV medical term as the normalized sum of the vector representations of its trigrams \cite{bojanowski2016enriching}; a minimal sketch of this back-off is given below.
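\begin{verbatim}
# Minimal sketch of the character-trigram back-off for OOV terms
# described above; `vectors` again maps in-vocabulary words to arrays.
import numpy as np
from collections import defaultdict

def build_trigram_vectors(vectors):
    # Average, over all in-vocabulary words, the word vectors assigned
    # to each character trigram occurring in those words.
    acc = defaultdict(list)
    for word, vec in vectors.items():
        for i in range(len(word) - 2):
            acc[word[i:i + 3]].append(vec)
    return {tri: np.mean(vs, axis=0) for tri, vs in acc.items()}

def oov_vector(word, trigram_vectors):
    # An OOV word is the normalized sum of its trigram vectors.
    tris = [word[i:i + 3] for i in range(len(word) - 2)]
    s = np.sum([trigram_vectors[t] for t in tris
                if t in trigram_vectors], axis=0)
    return s / np.linalg.norm(s)
\end{verbatim}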
The Pearson correlation coefficient was employed to calculate the correlation between similarity scores from human judgments and those from the word embeddings. Table \ref{tab.similarity} lists the Pearson correlation coefficient results for the four datasets. Overall, the semantic similarities captured by the word embeddings trained on EHR are closest to human experts' judgments. MedLit performs worse than EHR but has a comparable result on the UMNSRS dataset. GloVe and Google News are inferior to EHR and MedLit, and perform similarly to each other in representing medical semantics. Note that the four datasets and the corresponding semantic similarity scores from both human experts and word embeddings are provided in the supplementary Excel file. \begin{table}[!h] \centering \footnotesize \caption{Pearson correlation coefficient between similarity scores from human judgments and those from word embeddings on four measurement datasets. The asterisk indicates that the difference between word embeddings trained on EHR and those on other resources is statistically significant using t-test (p$<$0.01).} \label{tab.similarity} \begin{tabular}{ccccc} \hline Dataset & EHR & MedLit & GloVe & Google News \\ \hline Pedersen's & 0.632* & 0.569 & 0.403 & 0.357 \\ Hliaoutakis's & 0.482* & 0.311 & 0.247 & 0.243 \\ MayoSRS & 0.412* & 0.300 & 0.082 & 0.084 \\ UMNSRS & 0.440* & 0.404 & 0.177 & 0.154 \\ \hline \end{tabular} \end{table} \subsection{Extrinsic Evaluation} Extrinsic evaluations measure the impact of word embeddings on specific biomedical NLP tasks. In this evaluation, we tested the word embeddings on three biomedical NLP tasks, namely clinical IE, biomedical IR, and RE. \subsubsection{Clinical Information Extraction} Two clinical IE tasks were utilized to evaluate the word embeddings; the first is an institutional task while the second is a shared task. With the first task, we examine whether the word embeddings trained on our institutional corpus perform better than external pre-trained word embeddings on a local institutional IE task; with the second, we investigate whether the results are consistent on a global shared task. In the first experiment, we evaluated the word embeddings on an institutional IE task at Mayo Clinic. In this task, a set of 1000 radiology reports was examined to detect whether a hand and finger/wrist fracture could be identified. Reports were drawn from a cohort of residents of Olmsted County, aged 18 or older, who experienced fractures in 2009-2011. Each report was annotated by a medical expert with multiple years of experience abstracting fractures, by assigning ``1'' if a hand and finger/wrist fracture was found, or ``0'' otherwise. In our experiment, the word embeddings were employed as features for machine learning models and evaluated by precision, recall, and F1 score \cite{wang2018distant}. For a clinical document $d=\{w_1,w_2,...,w_M\}$, where $w_i$, $i=1,2,...,M$, is the $i$th word and $M$ is the total number of words in the document, the feature vector $\mathbf{x}$ of document $d$ is defined by $$\mathbf{x}=\frac{1}{M} \sum_{i=1}^{M}\mathbf{x}_i,$$ where $\mathbf{x}_i$ is the embedding vector for word $w_i$ from the word embedding matrix; a minimal sketch of this averaging step is given below.
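\begin{verbatim}
# Minimal sketch of the document feature vector: the average of the
# embedding vectors of the document's words (OOV words skipped here;
# whitespace tokenization is a simplifying assumption).
import numpy as np

def document_vector(document, vectors):
    words = [w for w in document.split() if w in vectors]
    return np.mean([vectors[w] for w in words], axis=0)
\end{verbatim}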
Then $\mathbf{x}$ was utilized as input to a conventional machine learning model, in this experiment a Support Vector Machine (SVM). We performed 10-fold cross validation on the dataset and report the means of precision, recall, and F1 score over the folds, defined below: $$Precision = \frac{1}{10}\sum_i Precision_i=\frac{1}{10}\sum_i \frac{TP_i}{TP_i+FP_i},$$ $$Recall=\frac{1}{10}\sum_i Recall_i=\frac{1}{10}\sum_i \frac{TP_i}{TP_i+FN_i},$$ $$F1\text{ }score=\frac{1}{10}\sum_i F1\text{ }score_i=\frac{1}{10}\sum_i \frac{2TP_i}{2TP_i+FP_i+FN_i},$$ where $TP_i$, $FP_i$, and $FN_i$ represent the true positives, false positives, and false negatives in the $i$th fold, and $i=1,2,...,10$ indexes the folds of the cross validation.
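A minimal sketch of this evaluation protocol (reused for the shared task below) might look as follows; \texttt{X} is assumed to stack the document vectors and \texttt{y} to hold the binary labels.

\begin{verbatim}
# Minimal sketch of the 10-fold cross-validated SVM evaluation;
# X stacks the document vectors, y holds the 0/1 labels (assumed given).
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import cross_validate

scores = cross_validate(SVC(), X, y, cv=10,
                        scoring=("precision", "recall", "f1"))
for m in ("precision", "recall", "f1"):
    print(m, np.mean(scores["test_" + m]))
\end{verbatim}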
As a comparison, the baseline method used term frequency features as input. The experimental results are listed in Table \ref{tab.localie}. The word embeddings trained on EHR are superior to the other word embeddings on all metrics (precision: 0.974, recall: 0.972, F1 score: 0.972), with statistical significance under a t-test (p$<$0.01). The fracture dataset in this experiment is curated from the same EHR system as the EHR corpus used to train the word embeddings, and thus the two share identical sublanguage characteristics. The word embeddings trained on MedLit also have comparable results (precision: 0.946, recall: 0.943, F1 score: 0.942). Since this is a medical task with specific medical terminology, the word embeddings trained on Google News have the worst performance. However, the word embeddings trained on GloVe are close to those trained on EHR, with a difference of 0.02 in F1 score that is not statistically significant (p$<$0.01). This experiment shows that word embeddings trained on a local corpus have the best performance for a local task, but that those trained on an external Wikipedia corpus can also have comparable performance. \begin{table}[!h] \centering \footnotesize \caption{Results of the institutional fracture extraction task using word embeddings trained from four different corpora. The asterisk indicates that the difference between word embeddings trained on EHR and those on other resources is statistically significant using t-test (p$<$0.01).} \label{tab.localie} \begin{tabular}{cccccc} \hline Metric & baseline & EHR & MedLit & GloVe & Google News \\ \hline Precision & 0.612 & 0.974* & 0.946 & 0.951 & 0.809 \\ Recall & 0.612 & 0.972* & 0.943 & 0.950 & 0.856 \\ F1 score & 0.609 & 0.972* & 0.942 & 0.950 & 0.823 \\ \hline \end{tabular} \end{table} Secondly, we tested the word embeddings on the 2006 i2b2 (Informatics for Integrating Biology to the Bedside) smoking status extraction shared task \cite{uzuner2008identifying}. Participants in this task were asked to develop automatic NLP systems to determine the smoking status of patients from their discharge records in Partners HealthCare. For each discharge record, an automatic system should be able to categorize it into one of five pre-determined smoking status categories: past smoker, current smoker, smoker, non-smoker, and unknown, where a past and a current smoker are distinguished based on temporal expressions in the patient's medical records. The dataset contains a total of 389 documents, including 35 documents of current smoker, 66 of non-smoker, 36 of past smoker, and 252 of unknown. The settings for this shared task were identical to those of the institutional IE task: SVM was utilized as the machine learning model; 10-fold cross validation was performed; term frequency features were used as input in the baseline; and the means of precision, recall, and F1 score were used as metrics. The experimental results are shown in Table \ref{tab.ie}. First, it is clear that the word embedding features perform better than the term frequency features, owing to the semantics encoded in the word embeddings; this is consistent with the institutional IE task. The word embeddings trained on EHR produced the best performance, with an F1 score of 0.900. The reason might be that the smoking dataset has sublanguage characteristics similar to those of the EHR corpus. This result indicates that effective word embeddings can be shared across institutions for clinical IE tasks. Another interesting observation is that the performance of the word embeddings trained on Google News is close to that of the embeddings trained on the EHR corpus, with a comparable F1 score and a better recall; the difference is not statistically significant (p$<$0.01). This implies that word embeddings trained on a public dataset are not necessarily inferior to those trained on a medically specific dataset for a medical IE task. The likely cause is that the terminology used in the smoking status extraction task, such as medications and advice for smokers, also appears frequently in the news. \begin{table}[!h] \centering \footnotesize \caption{Results of the i2b2 2006 smoking status extraction task using word embeddings trained from four different corpora.} \label{tab.ie} \begin{tabular}{cccccc} \hline Metric & baseline & EHR & MedLit & GloVe & Google News \\ \hline Precision & 0.692 & \textbf{0.919} & 0.878 & 0.893 & 0.910 \\ Recall & 0.486 & 0.903 & 0.871 & 0.889 & \textbf{0.905} \\ F1 score & 0.539 & \textbf{0.900} & 0.867 & 0.884 & 0.897 \\ \hline \end{tabular} \end{table} \subsubsection{Biomedical Information Retrieval} To evaluate word embeddings for biomedical IR, we utilized the dataset provided by the Text REtrieval Conference 2016 Clinical Decision Support (TREC 2016 CDS) track. The TREC 2016 CDS track focuses on biomedical literature retrieval that helps physicians find precise literature information and make the best clinical decisions at the point of care \cite{roberts2016overview}. The query topics were generated from EHRs in the MIMIC-III dataset \cite{johnson2016mimic}. The topics were categorized into the three most common types, \textit{Diagnosis}, \textit{Test} and \textit{Treatment}, according to physicians' information needs, and 10 topics were provided for each type. Each topic is comprised of a \textit{note} field (the admission note), a \textit{description} field (with jargon and clinical abbreviations removed) and a \textit{summary} field (a simplified version of the description). Participants were required to use only one of these three fields in each submission, and at least one submission had to utilize the \textit{note} field. Submitted systems should retrieve relevant biomedical articles from a given PMC article collection for each query topic to answer three corresponding clinical questions: \textit{What is the patient's diagnosis? What tests should the patient receive? How should the patient be treated?} Each IR system could retrieve up to 1000 documents per query. In order to make the comparison as fair as possible, we first implemented a simple IR system as the baseline, using the original queries and following the study in \cite{wang2016ensemble}, and then employed the simplest query expansion method using the word embeddings. We used the \textit{summary} field of each query, removed the stop words, and expanded each of the remaining query terms with its five most similar terms from the word embeddings. Taking the query ``A 78 year old male presents with stools and melena'' as an example, the term ``male'' was expanded by ``female gentleman lady caucasian man'', ``stools'' by ``stooling liquidy voluminous semiformed tenesmus'', and ``melena'' by ``hematemesis hematochezia melana brbpr hematemasis''. We assigned weight 0.8 to the original query and 0.2 to the expanded query; a minimal sketch of this expansion step is given below.
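\begin{verbatim}
# Minimal sketch of the query expansion step, reusing most_similar()
# from the earlier sketch; the stopword list is assumed given.
def expand_query(query, vectors, stopwords, topn=5):
    terms = [t for t in query.lower().split() if t not in stopwords]
    expansion = []
    for t in terms:
        expansion += [w for w, _ in most_similar(t, vectors, topn)]
    return terms, expansion

# The two parts are then weighted 0.8 / 0.2, e.g. with Indri's
# #weight( 0.8 #combine(...) 0.2 #combine(...) ) query syntax.
\end{verbatim}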
Indri \cite{strohman2005indri} was utilized as our indexing and retrieval tool. The pre-processing of the corpus included stopword removal and Porter stemming; the stopword list was based on the PubMed stopwords\footnote{http://www.ncbi.nlm.nih.gov/books/NBK3827/table/pubmedhelp.T.stopwords/}. The \textit{article-id}, \textit{title}, \textit{abstract}, and \textit{body} fields of each document in the corpus were indexed. Language models with two-stage smoothing \cite{zhai2002two} were used to obtain all the retrieval results. Four official metrics, namely Inferred Normalized Discounted Cumulated Gain (infNDCG) \cite{yilmaz2008simple}, Inferred Average Precision (infAP) \cite{yilmaz2008simple}, Precision at 10 (P@10), and Mean Average Precision (MAP), were utilized to measure IR performance. infNDCG measures the document ranking quality of an IR system; infAP measures retrieval effectiveness given incomplete judgments; P@10 is the number of relevant documents among the top 10 retrieved; and MAP is the mean of the average precision scores over a set of queries. Table \ref{tab.ir} lists the results of using the word embeddings trained from different resources for query expansion on the TREC 2016 CDS track. Interestingly, the word-embedding-based query expansion method failed to improve the retrieval performance, and even worsened it in terms of infAP and MAP. Comparing the retrieval performance, we observe that EHR and MedLit perform slightly better than GloVe and Google News, without statistical significance (p$<$0.01). This result implies that applying word embeddings trained from different resources brings no significant improvement for this biomedical IR task. \begin{table}[!h] \centering \footnotesize \caption{Information retrieval results of using word embeddings trained from four different corpora for query expansion on the TREC 2016 CDS track.} \label{tab.ir} \begin{tabular}{cccccc} \hline Metric & baseline & EHR & MedLit & GloVe & Google News \\ \hline infNDCG & 0.249 & \textbf{0.250} & 0.249 & 0.249 & 0.238 \\ infAP & \textbf{0.058} & 0.056 & 0.055 & 0.051 & 0.052 \\ P@10 & 0.247 & 0.243 & \textbf{0.248} & 0.233 & 0.243 \\ MAP & \textbf{0.067} & 0.063 & 0.065 & 0.063 & 0.059 \\ \hline \end{tabular} \end{table} \subsubsection{Relation Extraction} For the RE task, we considered drug-drug interaction (DDI) extraction, a specific RE task in the biomedical domain. A DDI is an unexpected change in a drug's effect on the human body when the drug and a second drug are co-prescribed and taken together.
Automatically extracting DDI information from the literature is a challenging and important research topic, since the volume of published literature grows rapidly. In this experiment, we evaluated the word embeddings on the DDIExtraction 2013 challenge corpus \cite{segura2013semeval}, which is composed of sentences describing DDIs from the DrugBank database and MedLine abstracts. In this dataset, drug entities and DDIs were annotated at the sentence level, and each sentence could contain two or more drugs. An RE system should be able to automatically extract DDI drug pairs from a sentence. We exploited the baseline system introduced in \cite{wang2017dependency}, whose features include words and word bigrams with binary values indicating their presence or absence in a sentence, the cosine similarity between the centroid vector of each class and the instance, and negation (three features indicating negation before the first main drug, between the two main drugs, and after the two main drugs). We concatenated the word embeddings to the baseline features and tested the performance. Since Random Forest \cite{breiman2001random} had the best performance in \cite{wang2017dependency}, we utilized it as the classifier, with 10-fold cross validation. Table \ref{tab.re} shows the F1 scores of Random Forest using word embeddings trained from different resources on the DDIExtraction 2013 challenge. We can see that the overall performance of the word embeddings trained on Google News is the best. The reason is that the semantics of general English terms in the context of drug mentions are more important for determining drug interactions. For example, in the sentence ``Acarbose may interact with metformin'', the term ``interact'' is crucial for classifying the relation. Since such crucial terms are generally not medical terminology, word embeddings trained on Google News, whose corpus represents general English, are able to capture their semantics. However, the advantage of Google News over the other resources is not statistically significant under a t-test (p$<$0.01). Another interesting observation is that the word embeddings trained on MedLit have the best performance on the DrugBank corpus while those trained on Google News perform best on the MedLine corpus. Though the MedLine abstracts, like the MedLit corpus, come from the biomedical literature, this result shows that word embeddings trained on a closely related corpus are not necessarily superior to other embeddings. \begin{table}[!h] \centering \footnotesize \caption{F1 scores of the DDIExtraction 2013 challenge using word embeddings trained from four different corpora.} \label{tab.re} \begin{tabular}{cccccc} \hline Category & baseline & EHR & MedLit & GloVe & Google News \\ \hline DrugBank (5265 pairs) & 0.590 & 0.708 & \textbf{0.715} & 0.714 & 0.705 \\ MedLine (451 pairs) & 0.690 & 0.696 & 0.690 & 0.699 & \textbf{0.708} \\ Total (5716 pairs) & 0.760 & 0.789 & 0.788 & 0.787 & \textbf{0.790} \\ \hline \end{tabular} \end{table} \section{Conclusion and Discussion}\label{sec.con} In this study, we provide an empirical evaluation of word embeddings trained from four different corpora, namely clinical notes, biomedical publications, Wikipedia, and news. We performed the evaluation qualitatively and quantitatively. For the qualitative evaluation, we selected a set of medical words and manually inspected the five most similar words induced by each set of embeddings. We then analyzed the word embeddings through visualization.
We conducted both intrinsic and extrinsic quantitative evaluation. The intrinsic evaluation directly tested the semantic relationships between medical words using four published datasets for measuring semantic similarity between medical terms, while the extrinsic evaluation assessed the word embeddings in three downstream biomedical NLP applications, i.e., clinical IE, biomedical IR, and RE. Based on the evaluation results, we draw the following conclusions. First, the word embeddings trained on EHR and MedLit capture the semantics of medical terms better than those trained on GloVe and Google News, and find more relevant similar medical terms. However, EHR finds similar terms with respect to clinical language, while MedLit, which contains more of the medical terminology used in research articles, finds similar words mostly from a medical research perspective. Second, the medical semantic similarity captured by the word embeddings trained on EHR and MedLit is closer to human experts' judgments than that captured by the embeddings trained on GloVe and Google News. Third, there is no consistent global ranking of word embeddings across the downstream biomedical NLP applications; however, adding word embeddings as extra features improved the results on most downstream tasks. Finally, word embeddings trained on biomedical domain corpora do not necessarily perform better than those trained on general domain corpora. That is, there might be no significant difference when word embeddings trained on an out-of-domain corpus are employed for a biomedical NLP application, although word embeddings trained on a local institutional corpus might perform better for local institutional NLP tasks. Our experiments implicitly show that applying word embeddings trained on general domain corpora, such as Wikipedia and news, is not significantly inferior to applying those obtained from biomedical or clinical corpora, which are usually difficult to access due to privacy concerns. This result is consistent with, but more general than, the conclusion drawn in \cite{pakhomov2016corpus}. Thus, a lack of access to a domain-specific corpus is not necessarily a barrier to the use of word embeddings in practical implementations. As a future direction, we would like to evaluate word embeddings on more downstream biomedical NLP applications, such as medical named entity recognition and clinical note summarization. We will investigate whether word embeddings trained from different resources represent the language characteristics of a corpus, such as term frequencies and medical concepts, differently. We also want to assess word embeddings across health care institutions using different EHR systems and investigate how sublanguage characteristics affect the portability of word embeddings. Moreover, we want to apply clustering methods to word embeddings and compare the word-level and concept-level differences between clusters of medical terms. There are a few limitations in this study. First, we only examined word embeddings trained on the EHR from Mayo Clinic, which might have introduced bias into the conclusions as EHR quality may vary across institutions. However, it is challenging to obtain word embeddings trained on EHR data from multiple sites. We are currently exploring the use of privacy-preserving techniques, leveraging our prior work \cite{huang2018privacy}, to obtain more generalizable embeddings from multiple sites.
Second, we tested only two widely used public pre-trained word embeddings, although a number of other pre-trained word embeddings are publicly available\footnote{https://github.com/3Top/word2vec-api}. Third, the generalizability of the results for the biomedical IR and RE tasks may be limited since we used only one shared task dataset per task to evaluate the word embeddings. \section{Acknowledgement}\label{sec.ack} This work has been supported by the National Institutes of Health (NIH) grants R01LM011934, R01GM102282, and U01TR002062. \bibliographystyle{IEEEtran} \appendix \section{Visualization of word embeddings}\label{ap.2} \begin{figure} \centering \includegraphics[width=0.9\textwidth]{echcn1.pdf} \caption{Visualization of word embeddings trained on EHR.} \label{fig.ehr} \end{figure} \begin{figure} \centering \includegraphics[width=0.9\textwidth]{pmc.pdf} \caption{Visualization of word embeddings trained on MedLit.} \label{fig.pmc} \end{figure} \begin{figure} \centering \includegraphics[width=0.9\textwidth]{wiki.pdf} \caption{Visualization of word embeddings trained on GloVe.} \label{fig.wiki} \end{figure} \begin{figure} \centering \includegraphics[width=0.9\textwidth]{google.pdf} \caption{Visualization of word embeddings trained on Google News.} \label{fig.google} \end{figure} \end{document}
\section{Introduction} \label{s:intro} Interacting particle systems in driven steady states are typically characterized by non-zero currents; among the recent advances in non-equilibrium statistical mechanics has been a considerable body of work on understanding the fluctuations of such currents. Indeed it is now well-established that, for \emph{Markovian} dynamics, the probability of seeing a time-averaged current away from the mean is generically captured by a large deviation principle with ``speed'' $t$~\cite{Touchette09b,Derrida07b,Bertini06b}. However, models with some form of \emph{non-Markovian} dynamics arguably describe better the long-range temporal correlations in many real scenarios~\cite{Mantegna99,Rangarajan03,Hoefling13}. In this direction, there is topical interest in both the typical behaviour and fluctuations for particle systems with memory. In particular, statistical physicists have recently studied a variety of memory-dependent random walkers in classical and quantum contexts, see e.g.,~\cite{Schutz04,Cressoni07,Serva13b,Rohde13} -- some of these can be related to the reinforced random walks and P\'olya urn models found in earlier mathematical literature and reviewed, for instance, in~\cite{Pemantle07}. Much less is known about non-Markovian \emph{many}-particle systems but some aspects of the stationary-state properties (e.g., mean current as a function of density, conditions for a condensation transition) have been investigated for models with internal states or non-exponential waiting times~\cite{Hirschberg09,Concannon14,Khoromskaia14}. Going beyond the typical behaviour, the current fluctuations in a temporally-correlated zero-range process have also recently been explored (and compared to the equivalent memoryless model) although exact analytical calculations proved possible only for a single site~\cite{Cavallaro15}. In the present contribution we build on earlier work in~\cite{Me09} to show how an expansion about fixed points of the dynamics can yield valuable information about the fluctuations in a particular class of non-Markovian interacting particle systems, even when full solution appears a formidable task. Specifically, this enables us to predict the speed of the current large deviation principle and hence the long-time scaling behaviour of fluctuations. We demonstrate this approach with perhaps one of the most famous models in non-equilibrium statistical mechanics: the totally asymmetric simple exclusion process (TASEP). Here we show how a current-dependent input rate leads to a modified phase diagram including a superdiffusive regime and we check our theoretical approximations against simulations and exact numerics. The remainder of the paper is structured as follows. In section~\ref{s:frame} we introduce the framework of systems with current-dependent rates and indicate the connections to other recent works, including some of those mentioned above. In section~\ref{s:fixed} we perform a stability analysis of fixed points and make a Gaussian expansion to study the fluctuations. The power of this approach is then illustrated by treatment of the TASEP in section~\ref{s:ASEP} before a concluding discussion and wider perspective in section~\ref{s:dis}. Finally, a short appendix provides a pedagogical treatment of a single-particle problem in order to demonstrate the formalism. 
\section{Interacting particle systems with current-dependent rates} \label{s:frame} We work within a discrete-space and continuous-time framework with the particle configuration at time $t$ labelled by $\sigma(t)$ and transition rate from state $\sigma$ to $\sigma'$ given by the matrix element $w_{\sigma',\sigma}$. Classical lattice-based many-particle models described in this way include exclusion processes (to which we will return later)~\cite{Derrida98c,Spitzer70}, zero-range models~\cite{Spitzer70,Evans05}, and inclusion processes~\cite{Giardina07}. In systems of this type, a time-integrated particle current $\mathcal{J}(t)$ can be defined as the net number of jumps across a given bond (or subset of bonds) from time zero up to time $t$. We choose the script style to indicate that the current is a functional of the stochastic history $\{\sigma(\tau),0\leq\tau\leq t\}$ but will suppress the explicit dependence on $t$ where no confusion should arise. It is well-known that $\mathcal{J}$ generically obeys a large deviation principle which can be loosely stated as \begin{equation} \mathrm{Prob}\left(\frac{\mathcal{J}}{t}=j\right) \sim \rme^{-I_w(j) t} \label{e:ldp} \end{equation} where $\sim$ denotes logarithmic equivalence in the long-time limit. $I_w(j)$ is known as the rate function and $t$ is, somewhat misleadingly, referred to as the speed. The $w$ subscript emphasizes that the rate function depends in some non-trivial way on the set of transition rates. Much recent industry has been devoted to calculating $I_w(j)$ for various models both within a ``microscopic'' lattice-based approach~\cite{Derrida07b} and in the ``macroscopic'' hydrodynamic limit~\cite{Bertini06b}. Here, following~\cite{Me09}, we introduce an element of memory by considering a class of models in which the rates at time $t$ depend on the current up to time $t$. To be precise, the rates $w_{\sigma',\sigma}$ now depend on the time average $\mathcal{J}/t$ of some specified particle current and will be denoted by $w_{\sigma',\sigma}(j)$. Obviously, the functional dependence on the current must be chosen so that the rates always remain positive. To avoid singularities at time zero we also assume that the time-averaged current starts at some fixed value $j_0$ at a time $t_0$ which is small compared to the overall measurement time $t$. Physically, taking the condition $0 \ll t_0 \ll t$ excludes any initial transient behaviour which could be governed by different rates. A conceptually simple generalization is to the case where rates depend on multiple currents (e.g., currents measured separately across different bonds or in different directions) -- much of the following analysis can be extended to that situation although we shall not consider it in detail. We emphasize that we specialize here to models with a functional dependence on the single variable $\mathcal{J}/t$ (\emph{time-averaged} current), rather than a more general dependence on $\mathcal{J}$ and $t$ separately. The closely related scenario of feedback depending on the \emph{time-integrated} current has also recently been explored, for example, in a quantum context~\cite{Brandes10}. For illustrative purposes we now consider a single particle, i.e., a random walker, with this type of memory and endeavour to describe its connection to a range of models in the literature which may, at first sight, appear rather disparate.
The natural ``current'' for a particle on a one-dimensional lattice is just the net number of steps made in one direction, say towards the right, and we now assume left and right hopping rates which depend on the time average of this quantity (in other words, on the particle velocity). In the discrete-time version of this picture, the dependence is thus on the particle's position divided by the number of time steps elapsed. Dynamics in this category includes the ``elephant'' random walker of~\cite{Schutz04} as well as several other recent random walk scenarios~\cite{Hod04,Huillet08b,Kumar10c}, the voting model of~\cite{Hisakado10}, and some aspects of the behaviour in a discrete-choice model with dependence on the peak of past experience~\cite{Me15}. In fact, mathematically, these are all essentially equivalent to the much older P\'olya urn problem~\cite{Polya31} in which the probability for picking a black or white ball depends on the relative number (fraction) of such balls chosen in the past. If the functional form of the dependence is non-linear, then one has a generalized P\'olya process, for overviews see, e.g.,~\cite{Pemantle07,Hill80}. In passing, we remark that such models can also be considered as a limiting case of binary Markov chains with memory of a finite number of steps; see e.g.,~\cite{Usatenko03,Hisakado15} and note that the latter reference illustrates further connections to Kirman's ant colony model~\cite{Kirman93} (which has potential relevance to economic markets) and even the kinetic Ising model~\cite{Kawasaki72}. Our focus here is on continuous-time models with dependence on the current over the whole history. In the single-particle case we note that even when the particle remains stationary, the time-averaged current changes due to the continuous increase of time $t$ in the denominator of $\mathcal{J}/t$. The dynamics of the particle can be thought of as a type of continuous-time random walk (CTRW) or ``semi-Markov'' process with a complicated non-exponential distribution of waiting times which, in general, also depends on the time of the last jump (so that successive waiting times are not identically distributed). This correspondence is particularly clear in the case of a random walker moving only in one direction (see~\ref{A:single}) and provides a possible route to genuine continuous-time numerical simulations rather than the brute-force approach of using a discrete-time update rule with very short time steps. However, the situation is more complicated for many-particle systems, or those with a dependence on multiple currents, and it may be practically difficult to obtain explicit forms for the relevant waiting time distributions. There is a vast body of work on CTRWs with identically distributed, typically power law, waiting times (see~\cite{Burioni14b,Krusemann15} for just a couple of recent examples, discussing different scaling regimes and the effects of bias) as well as on more general time-homogeneous semi-Markov processes~\cite{Maes09b} and applications~\cite{Gorissen12c}. Helpful explanations of the connections between different commonly-employed formulations can be found in~\cite{Goychuk04} and~\cite{Qian06}. Feedback based on the time-averaged current clearly has the potential to introduce long-range temporal correlations and one might ask how these modify the current large deviation principle~\eref{e:ldp}, if indeed such a relationship still exists. 
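As a concrete illustration of the brute-force discrete-time approach just mentioned, the following minimal Python sketch simulates a single memory-dependent walker; the particular rate functions in the usage example are illustrative assumptions only, chosen so that the rates remain positive in the regime explored.

\begin{verbatim}
# Minimal sketch: discrete-time simulation (small time step dt) of a
# walker whose hop rates depend on the time-averaged current j = J/t.
import random

def simulate(v_right, v_left, t_final, j0=1.0, t0=1.0, dt=1e-3):
    t, J = t0, j0 * t0          # current starts at value j0 at time t0
    while t < t_final:
        j = J / t               # time-averaged current so far
        r = random.random()
        if r < v_right(j) * dt:
            J += 1
        elif r < (v_right(j) + v_left(j)) * dt:
            J -= 1
        t += dt
    return J / t_final

# Illustrative rates with positive feedback: f(j) = 0.5 + 0.4j, giving a
# stable fixed point j* = 5/6 with slope A* = 0.4.
print(simulate(v_right=lambda j: 1.0 + 0.4 * j,
               v_left=lambda j: 0.5, t_final=100.0))
\end{verbatim}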
The chief result of~\cite{Me09} was that if, for some $\gamma$, the limit \begin{equation} \tilde{I}(j) = \lim_{t \to \infty} \min_{q(\tau)} \frac{1}{t^\gamma} \int_{t_0}^t {I}_{w(q)}(q+\tau q') \, d\tau \label{e:mldp} \end{equation} exists (and is not everywhere zero), then it is the rate function for a modified large deviation principle with speed $t^\gamma$. In other words, we now have \begin{equation} \mathrm{Prob}\left(\frac{\mathcal{J}}{t} =j\right)\sim \rme^{-\tilde{I}(j) t^\gamma}, \label{e:mldp2} \end{equation} where $\gamma$ is not necessarily equal to unity. Note that, in~\eref{e:mldp}, $I_{w(q)}$ is the \emph{Markovian} rate function evaluated with transition rates $w(q)$, and $q(\tau)$ is a trajectory in the space of time-averaged currents with fixed initial condition ($q(t_0)=j_0$) and final condition ($q(t)=j$). This result can be derived heuristically by what has been dubbed a ``temporal additivity principle'' (a time-based analogue of the spatial additivity principle of Bodineau and Derrida~\cite{Bodineau04}) in which one notes that the time-averaged current changes very slowly for large times so it can be approximated as constant over time slices long compared with the dynamics. Carefully taking the limit $t \to \infty$ such that both the length and the number of the time slices become infinite, this quasistatic (or adiabatic) argument gives an integral form for the probability of seeing a given path in current space. Furthermore, in the long-time limit a particular current fluctuation is overwhelmingly likely to be realised by the optimal (or typical) path which is found by minimizing over all $q(\tau)$ consistent with the required initial and final current conditions. For further details of this analysis we refer the interested reader to~\cite{Me09}.\footnote{Note that the technical assumptions involved may break down in models, such as the zero-range process, with infinite state space and dynamical phase transitions~\cite{Me05,Me06b}; particular care should be taken in such cases.} A proof also appears possible at a more rigorous mathematical level by employing older sample path large deviation results of Mogul'skii~\cite{Mogulskii76}. A natural assumption is that the optimal path minimizing the integral in~\eref{e:mldp} is arranged so that $q(\tau)$ is, in some sense, as close as possible to the temporally local mean current $\bar{j}_{w(q)}$, i.e., the expected current for fixed rates $w(q)$. Expanding about $\bar{j}_{w(q)}$ and substituting in~\eref{e:mldp} we then have \begin{equation} \tilde{I}(j) \approx \lim_{t \to \infty} \min_{q(\tau)} \frac{1}{t^\gamma} \int_{t_0}^t \frac{\left[ q + \tau q' - \bar{j}_{w(q)} \right]^2}{2D_{w(q)}} \, d\tau \label{e:mldp3} \end{equation} where $D_{w(q)}$ is the diffusion constant corresponding to rates $w(q)$. This form clearly reveals the similarity with the spatial additivity result of~\cite{Bodineau04} but it is worth emphasizing that, in the present context, it is only an approximation. For general final current $j$ it is impossible to find a minimizing path $q(\tau)$ which asymptotically converges to $\bar{j}_{w(q)}$. Indeed this is already obvious for fluctuations far from the mean in the standard Markovian case.
Whilst~\eref{e:mldp} and~\eref{e:mldp2} may seem to be a powerful general result, their direct application is somewhat limited in practice since, even for those models in which the corresponding Markovian rate function is known, the Euler-Lagrange equations involved in the minimization are typically too complicated to be solved analytically whether or not the form~\eref{e:mldp3} is used. The known exceptions~\cite{Me09} include various types of history-dependent random walk, including those where the current for left and right jumps is counted separately. One particularly simple case discussed in~\ref{A:single} is the unidirectional model with rate $v(j)=aj$ which already demonstrates the existence of a large deviation principle with $\gamma$ smaller than unity for a range of ``strong'' memory dependence ($1/2<a<1$). Physically, this corresponds to a transition to superdiffusive behaviour where the fluctuations of integrated current (equivalently, the position of the random walker) scale faster than linearly with time. Such a transition was already seen in the elephant random walk and related models~\cite{Schutz04,Hod04,Huillet08b}. In the next section, we show how these features emerge from a more general approximate analysis which involves an expansion about the fixed points of the dynamics and can easily be applied to complicated many-particle systems. \section{Fixed point analysis} \label{s:fixed} Lightening the notation by defining $f(q):=\bar{j}_{w(q)}$, it is intuitively clear that a fixed point of the current must obey \begin{equation} q=f(q). \label{e:fp} \end{equation} In other words, the expected time-averaged current flowing in the next infinitesimal time interval must be the same as that observed in the past. We denote a fixed point value satisfying~\eref{e:fp} by $j^*$ and now turn to examine its stability which, as illustrated in figure~\ref{f:stab}, \begin{figure} \centering \psfrag{j}[][]{\textcolor{blue}{$q$}} \psfrag{w2}[Cr][Cr]{\textcolor{red}{$f(q)$}} \psfrag{w1}[Tr][Br]{\textcolor{red}{$f(q)$}} \psfrag{stable}[][]{} \psfrag{unstable}[][]{} \includegraphics[width=0.7\columnwidth]{Stability.eps} \caption{Sketch of current fixed points given by the intersection of the function $f(q):=\bar{j}_{w(q)}$ with the diagonal $q$: stable (left) and unstable (right) cases.} \label{f:stab} \end{figure} is determined by the slope \begin{equation} A^*:=\left. \frac{df}{dq} \right|_{q=j^*}. \end{equation} Specifically, if $A^*<1$ (left panel of figure~\ref{f:stab}) then fluctuations above the fixed point yield on average an instantaneous current $f(q)$ which is smaller than the historically-averaged current $q$ and thus there is a reduction back towards the fixed point. Similarly, a fluctuation below the fixed point has $f(q)>q$ so on average the current increases back towards the fixed point. Hence, a fixed point with $A^*<1$ is stable and, by the reverse argument, one with $A^*>1$ is unstable (right panel of figure~\ref{f:stab}).\footnote{This argument implicitly assumes that the system decays to stationarity on a timescale which is short compared with the rate of change of the time-averaged current, so that the instantaneous current is well described by $f(j)$. 
This is equivalent to the quasistatic assumption of the temporal additivity principle and, at least for finite state space, should always be true for long enough times.} This heuristic picture, which is essentially the continuous-time version of a ``cobweb'' stability analysis for a discrete mapping, can be made more precise by considering the differential equation for the time-dependence of the expected current. This latter confirms that decay towards, or growth away from, a fixed point is typically power law in nature, which is physically related to the fact that the time-averaged current changes more slowly as time increases. It is relatively easy to construct models with multiple stable fixed points whose selection is influenced by the early-time behaviour, cf., e.g.,~\cite{Hill80,Mori15,Me15} for the discrete-time case. This is especially true in the case of non-monotonic current dependence or multiple currents.\footnote{For example, a bidirectional continuous-time random walk in which the hopping rates right and left depend separately on the time-averaged number of jumps right and left as $v_R(j_R,j_L)=aj_R/(j_R+j_L)$ and $v_L(j_R,j_L)=aj_L/(j_R+j_L)$, respectively, has fixed points for $j_R$ and $j_L$ satisfying $-a<j_R-j_L<a$ (with $j_R+j_L=a$). It can readily be checked, via exact minimization, that the rate function for the net current $j=j_R-j_L$ is zero for the corresponding range of values.} However, in this paper, we specialize to systems in which there is a unique stationary state corresponding to a stable fixed point of the dynamics with some current $j^*$ and slope $A^*$ less than unity. In this case we can expand $q(\tau)$ about $j^*$ in the numerator and denominator of~\eref{e:mldp3} and keep terms to leading order to obtain \begin{equation} \tilde{I}(j) \approx \lim_{t \to \infty} \min_{q(\tau)} \frac{1}{t^\gamma} \int_{t_0}^t \frac{\left[(1-A^*)(q-j^*)+\tau q' \right]^2}{2D^*} \, d\tau \label{e:gauss} \end{equation} where $D^*:=D_{w(j^*)}=(I''_{w(j^*)}(j)|_{j=j^*})^{-1}$ is assumed non-zero. Although we are now guaranteed to get a Gaussian form for $\tilde{I}(j)$, this approach should still correctly capture the scaling behaviour of small fluctuations and, in particular, the dependence on $A^*$. The minimization in~\eref{e:gauss} is straightforwardly carried out; the corresponding Euler-Lagrange equations are linear and yield an optimal current path of the form \begin{equation} q(\tau)=j^*+K_1 \tau^{-A^*} + K_2 \tau^{A^*-1}. \label{e:ELsol} \end{equation} Here the integration constants are determined by the boundary conditions ($q(t_0)=j_0$, $q(t)=j$) as \begin{eqnarray} K_1&=&\frac{(j_0-j^*)t_0^{1-A^*}-(j-j^*)t^{1-A^*}}{t_0^{1-2A^*}-t^{1-2A^*}} \label{e:bc1} \\ K_2&=&\frac{(j_0-j^*)t_0^{A^*}-(j-j^*)t^{A^*}}{t_0^{2A^*-1}-t^{2A^*-1}}. \label{e:bc2} \end{eqnarray} We now substitute~\eref{e:ELsol} into the integrand of~\eref{e:gauss} and carry out the integration to find \begin{equation} \int_{t_0}^t {I}_{w(q)}(q+\tau q') \, d\tau = \frac{(1-2A^*)}{2D^*} (K_1)^2 (t^{1-2A^*}-t_0^{1-2A^*}). \label{e:int} \end{equation} Finally, inserting the form \eref{e:bc1} for $K_1$ reveals that the right-hand side of~\eref{e:int} scales asymptotically linearly with $t$ (so we need $\gamma=1$ for a non-zero limit) for $A^*< {1}/{2}$, and as $t^{2-2A^*}$ (so $\gamma=2-2A^*$) for $A^* > {1}/{2}$.
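The cancellation behind~\eref{e:int} is easily verified: substituting~\eref{e:ELsol} into the integrand of~\eref{e:gauss}, all terms involving $K_2$ drop out and a pure power law in $\tau$ remains, as the following symbolic sketch (using Python's sympy, purely for illustration) confirms.
\begin{verbatim}
# Symbolic check that the optimal path q = j* + K1*tau**(-A) + K2*tau**(A-1)
# reduces (1-A)*(q - j*) + tau*q' to a pure power law in tau.
import sympy as sp

tau, A, K1, K2, js = sp.symbols('tau A K1 K2 js', positive=True)
q = js + K1*tau**(-A) + K2*tau**(A - 1)
print(sp.simplify((1 - A)*(q - js) + tau*sp.diff(q, tau)))
# output is equivalent to (1 - 2*A)*K1*tau**(-A);
# squaring gives an integrand ~ tau**(-2*A), whence (e:int) follows.
\end{verbatim}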
To be precise, we end up with a modified large deviation principle of the form \begin{equation} \fl \mathrm{Prob}\left(\frac{\mathcal{J}_t}{t}=j\right)\sim \cases{ \exp\left[-\frac{(1-2A^*) (j-j^*)^2 }{2D^*}t \right] & for $A^*< \frac{1}{2}$ \\ \exp\left[-\frac{(2A^*-1) (j-j^*)^2}{2D^*}t_0^{2A^*-1}t^{2-2A^*} \right] & for $A^* > \frac{1}{2}$. \\ } \label{e:scale} \end{equation} Physically, for $A^* < {1}/{2}$, there is diffusive behaviour with a modified diffusion coefficient $D^*/(1-2A^*)$. We see clearly here that $A^*$ quantifies the effective strength of the feedback -- for $A^*$ negative, fluctuations are suppressed whilst, for $A^*$ positive, they are enhanced. For $A^* > {1}/{2}$, there is superdiffusive behaviour which retains an ageing-type dependence on the initial time $t_0$. At $A^*=1/2$ one expects logarithmic corrections whose analysis is beyond the scope of the current paper. As mentioned earlier, this transition is consistent with that already observed in the single-particle example of~\ref{A:single} and other random walk models~\cite{Schutz04,Hod04,Huillet08b,Me15}. In the next section we will illustrate the power of the general approach by appeal to a specific many-particle system.

\section{Exclusion process with current-dependent memory} \label{s:ASEP} \subsection{Model}

The totally asymmetric simple exclusion process (TASEP) was first introduced in 1968 to describe protein synthesis~\cite{MacDonald68} and, since then, has enjoyed widespread success both as a base model for various transport processes~\cite{Chowdhury00,Chowdhury05b} and as a vehicle for advancing theoretical understanding of non-equilibrium systems; see, e.g.,~\cite{Derrida98c,Golinelli06b,Chou11} and references therein. We here start from the standard continuous-time version of this model on a one-dimensional lattice with open boundaries and modify it to include a current-dependent input rate. Obviously, many other forms of current dependence could be envisaged but this is a natural choice as a form of feedback -- the reader is invited to imagine controlling the arrival of cars onto a stretch of road. To be more concrete, our model is defined in the following manner (see also figure~\ref{f:ASEPj}). Each of the $L$ lattice sites has only two possible configurations: occupied (particle) or vacant (hole). A particle on site $l$ hops to site $l+1$, after an exponentially distributed waiting time with mean $1/p$, if, and only if, that site is vacant. Without loss of generality, we set the rate $p=1$ in the following. Particles are removed at the right-hand boundary (site $L$) with rate $\beta$ and injected subject to the exclusion rule at the left-hand boundary (site $1$) with a rate $\alpha(j)$ which, crucially, is a function of the time-averaged input current over the whole previous history. In fact, it is obvious from the continuity equation that, for a finite chain in the long-time limit, the time-averaged current must be the same between any pair of nearest-neighbour sites.
\begin{figure} \centering \psfrag{a}{\textcolor{red}{$\alpha(j)$}} \psfrag{b}{$\beta$} \psfrag{1}[][]{1} \psfrag{2}[][]{2} \psfrag{3}[][]{3} \psfrag{l-1}[][]{$l\!-\!1$} \psfrag{l}[][]{$l$} \psfrag{l+1}[][]{$l\!+\!1$} \psfrag{L}[][]{$L$} \psfrag{p}[][]{$p$} \psfrag{ql}[][]{$q$} \includegraphics*[width=0.8\textwidth]{TASEPnew.eps} \caption{Schematic of one-dimensional TASEP with input rate depending on time-averaged current $j$ over the whole past history.} \label{f:ASEPj} \end{figure} For illustrative purposes (and in analogy with the single-particle analysis of~\ref{A:single}) we mainly consider a linear current dependence of the form \begin{equation} \alpha(j)=\alpha_0 + aj, \label{e:lin} \end{equation} where $0 \leq \alpha_0 \leq 1$ and $a>0$, before later touching on some other choices. An input rate of apparently similar form to~\eref{e:lin} was independently proposed by Sharma and Chowdhury~\cite{Sharma11b} to model the recycling of ribosomes in protein synthesis and implemented for the more general case of the $l$-TASEP with extended objects. We remark here that in~\cite{Sharma11b} one has the restriction $a \leq 1$ (as befits the biological context) and also, significantly, $j$ is the instantaneous mean (output) current rather than the average over the whole previous history. The relevance of these distinctions should become apparent in the discussion of phase diagrams and fluctuations below.

\subsection{Mean current}

It is well known (see, e.g.,~\cite{Derrida98c}) that, in the thermodynamic limit, the standard Markovian TASEP has the following three regimes. \begin{itemize} \item For $\alpha<1/2$, $\beta>\alpha$ there is a low-density (LD) phase in which the mean current is controlled by the input rate and given by $\alpha(1-\alpha)$. \item For $\alpha>\beta$, $\beta<1/2$ there is a corresponding high-density (HD) phase in which the mean current is controlled by the output rate and given by $\beta(1-\beta)$. \item For $\alpha>1/2$, $\beta>1/2$ the system is in the maximal current (MC) phase where the mean current is limited by the bulk hopping rate and given simply by 1/4. \end{itemize} We now seek to determine the effect of the current-dependent memory on the phase boundaries and the mean current in each phase. Following the approach of the previous section, we argue that the mean current in the long-time limit is given by the fixed points in the three different regimes: \begin{equation} j^* = \cases{ \alpha(j^*)(1-\alpha(j^*)) & for $\alpha(j^*) < \frac{1}{2}, \beta > \alpha(j^*)$ [LD]\\ \beta(1-\beta) & for $\alpha(j^*) > \beta, \beta < \frac{1}{2}$ [HD] \\ \frac{1}{4} & for $\alpha(j^*) > \frac{1}{2}, \beta > \frac{1}{2}$ [MC]. \\ } \label{e:TASEPfp} \end{equation} Unsurprisingly, since the current dependence is in the input rate, the fixed point is unchanged in the HD and MC phases. In the LD phase, however, some simple algebra yields \begin{equation} j^*=\frac{-(2\alpha_0a+1-a)+\sqrt{4\alpha_0 a+(1-a)^2}}{2 a^2} \label{e:fpld} \end{equation} with the other solution to the quadratic corresponding to an unphysical negative current. We can readily show that at the value of $j^*$ given by~\eref{e:fpld} \begin{equation} A^*=\left.\frac{d}{dj}\left[\alpha(j)(1-\alpha(j))\right]\right|_{j=j^*}=1-\sqrt{4\alpha_0 a+(1-a)^2}. \label{e:Astar} \end{equation} For $0<\alpha_0<{1}/{2}-{a}/{4}$ we have $0<A^*<1$, so this is a stable fixed point with ``positive'' feedback.
Here the upper bound \begin{equation} \alpha_0=\frac{1}{2}-\frac{a}{4} \end{equation} corresponds to the LD-MC phase transition (determined by $\alpha(j^*)=1/2$). Furthermore, the LD-HD transition line ($\beta=\alpha(j^*)$) becomes curved rather than straight and is given by \begin{equation} \beta=\frac{-(1-a)+\sqrt{4\alpha_0 a+(1-a)^2}}{2a}. \end{equation} As might be intuitively expected, the general effect of the positive feedback resulting from the $aj$ term is to increase the size of the maximal current phase. However, as exemplified by the representative cases in figure~\ref{f:pd}, \begin{figure} \centering \psfrag{a}[Tc][Tc]{$\alpha_0$} \psfrag{b}[Cr][Cr]{$\beta$} \psfrag{0b}[Tc][Tc]{\scriptsize{0}} \psfrag{1b}[Tc][Tc]{\scriptsize{1}} \psfrag{0}[Cr][Cr]{\scriptsize{0}} \psfrag{1}[Cr][Cr]{\scriptsize{1}} \psfrag{LD}[Cc][Cc][1][90]{\scriptsize{LD}} \psfrag{HD}[Cc][Cc]{\scriptsize{HD}} \psfrag{MC}[Cc][Cc]{\scriptsize{MC}} \subfigure[$a=0.8$]{\includegraphics[width=0.32\textwidth]{a08.eps}} \hfill \subfigure[$a=1.6$]{\includegraphics[width=0.32\textwidth]{a16.eps}} \hfill \subfigure[$a=2.4$]{\includegraphics[width=0.32\textwidth]{a24.eps}} \hfill \caption{Phase diagrams for current-dependent TASEP with $\alpha(j)=\alpha_0+aj$ and different values of $a$. Note that the picture in (c) is unchanged for all $a \geq 2$.} \label{f:pd} \end{figure} we predict the following three qualitatively different forms of phase diagram depending on the value of $a$. \begin{itemize} \item For $0 \leq a \leq 1$, the phase diagram reproduces that given in~\cite{Sharma11b} -- the distinction between dependence on historically-averaged and instantaneous current is irrelevant for calculation of the fixed point although not for the fluctuations (next subsection). The LD-HD transition line always passes through the origin and the phase diagram reduces to the Markovian case when $a=0$. \item For $1<a<2$ there is a qualitative difference in that the LD-HD phase transition line intersects the $\beta$ axis at $\beta>0$. The feedback is strong enough to ensure a non-zero mean current in the LD phase even for $\alpha_0 \to 0$; at $\alpha_0=0$ there is an unstable fixed point at zero and a stable fixed point at $j^*=(a-1)/a^2$. \item For $a \geq 2$ there is no LD phase. In other words, $\alpha_0$ never controls the current -- for $\beta<1/2$ it is determined by the output rate and for $\beta>1/2$ by the bulk hopping rate. \end{itemize} The fixed points in the different regimes of these phase diagrams are confirmed by Monte Carlo simulations. For example, in figure~\ref{f:pdsim} we show a three-dimensional plot of the final current for a single long trajectory (as a function of boundary rates $\alpha_0$ and $\beta$) in the model with $a=0.8$ and $L=1000$; the accord with the theoretically predicted phase boundaries is self-evident.
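These predictions are elementary to evaluate numerically; the short sketch below (illustrative only, assuming $a>0$) classifies the phase and returns the predicted mean current using~\eref{e:TASEPfp}--\eref{e:fpld} and the boundaries just derived.
\begin{verbatim}
# Illustrative phase classification for alpha(j) = alpha0 + a*j, a > 0,
# using the fixed-point equation and the boundaries derived above.
import numpy as np

def phase(alpha0, a, beta):
    """Return (phase, predicted long-time mean current j*)."""
    if alpha0 < 0.5 - a/4.0:                  # an LD fixed point exists
        disc = np.sqrt(4*alpha0*a + (1 - a)**2)
        a_star = (-(1 - a) + disc)/(2*a)      # alpha(j*) on the LD branch
        if beta > a_star:                     # below the curved LD-HD line
            return "LD", a_star*(1 - a_star)  # LD value of j*; A* = 1 - disc
    if beta < 0.5:
        return "HD", beta*(1 - beta)
    return "MC", 0.25

print(phase(0.2, 0.8, 0.6))                   # ('LD', ~0.238)
\end{verbatim}
Note that direct iteration of the map $j \mapsto \alpha(j)(1-\alpha(j))$ converges to the same LD value whenever $0<A^*<1$, consistent with the stability analysis of section~\ref{s:fixed}.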
\begin{figure} \centering \psfrag{a}[Cc][Cc]{$\alpha_0$} \psfrag{b}[Cc][Cc]{$\beta$} \psfrag{j}[Cc][Cc]{$\mathcal{J}/t$} \psfrag{0.0}[Cr][Cr]{\scriptsize{0.0}} \psfrag{0.1}[Cr][Cr]{\scriptsize{0.1}} \psfrag{0.2}[Cr][Cr]{\scriptsize{0.2}} \psfrag{0.3}[Cr][Cr]{\scriptsize{0.3}} \psfrag{0.4}[Cr][Cr]{\scriptsize{0.4}} \psfrag{0.5}[Cr][Cr]{\scriptsize{0.5}} \psfrag{0.6}[Cr][Cr]{\scriptsize{0.6}} \psfrag{0.7}[Cr][Cr]{\scriptsize{0.7}} \psfrag{0.8}[Cr][Cr]{\scriptsize{0.8}} \psfrag{0.9}[Cr][Cr]{\scriptsize{0.9}} \psfrag{1.0}[Cr][Cr]{\scriptsize{1.0}} \psfrag{0.00}[Cl][Cl]{\scriptsize{0.00}} \psfrag{0.05}[Cl][Cl]{\scriptsize{0.05}} \psfrag{0.10}[Cl][Cl]{\scriptsize{0.10}} \psfrag{0.15}[Cl][Cl]{\scriptsize{0.15}} \psfrag{0.20}[Cl][Cl]{\scriptsize{0.20}} \psfrag{0.25}[Cl][Cl]{\scriptsize{0.25}} \includegraphics[width=0.8\textwidth]{pd08.eps} \caption{Monte Carlo simulation data for final value of time-averaged current $\mathcal{J}/t$ as a function of rates $\alpha_0$ and $\beta$ for a single trajectory of length $t=10^6$ in a system of size $L=1000$ with $\alpha(j)=\alpha_0+0.8j$. Initial condition used was $t_0=1$, $j_0=0$, and each site independently occupied by a particle with probability corresponding to the bulk density of a Markovian TASEP with the same input and output rates; for times $\tau>t_0$ a discrete-time random sequential update rule was used with 20 steps per unit time up to $\tau=1000$ (allowing for the fact that the time-averaged current $q(\tau)$ changes relatively fast at the beginning of the trajectory) and 2 steps per unit time thereafter. Data sampled at boundary rate increments of 0.02 and interpolated with gnuplot. Solid black lines show theoretically predicted phase boundaries.} \label{f:pdsim} \end{figure} As a further check, we then plot in figure~\ref{f:cross} the mean time-averaged current from 1000 different histories for the cross-section of the phase diagram with $\beta=0.6$. \begin{figure} \centering \psfrag{a}[Tc][Tc]{$\alpha_0$} \psfrag{j}[Cc][Cc]{${\langle \mathcal{J}\rangle}/{t}$} \psfrag{0.0}[Tc][Tc]{\scriptsize{0.0}} \psfrag{0.2}[Tc][Tc]{\scriptsize{0.2}} \psfrag{0.4}[Tc][Tc]{\scriptsize{0.4}} \psfrag{0.6}[Tc][Tc]{\scriptsize{0.6}} \psfrag{0.8}[Tc][Tc]{\scriptsize{0.8}} \psfrag{1.0}[Tc][Tc]{\scriptsize{1.0}} \psfrag{0.00}[Cr][Cr]{\scriptsize{0.00}} \psfrag{0.05}[Cr][Cr]{\scriptsize{0.05}} \psfrag{0.10}[Cr][Cr]{\scriptsize{0.10}} \psfrag{0.15}[Cr][Cr]{\scriptsize{0.15}} \psfrag{0.20}[Cr][Cr]{\scriptsize{0.20}} \psfrag{0.25}[Cr][Cr]{\scriptsize{0.25}} \psfrag{0.30}[Cr][Cr]{\scriptsize{0.30}} \includegraphics[width=0.8\textwidth]{figcross.eps} \caption{Monte Carlo simulation data for mean $\langle \mathcal{J} \rangle /t$ as a function of $\alpha_0$ with $\alpha(j)=\alpha_0+0.8j$ and $\beta=0.6$. Data from 1000 trajectories of length $t=10^6$ in a system of size $L=1000$. Other simulation details same as those used for figure~\ref{f:pdsim}. Blue dashed line is theoretical prediction of~\eref{e:fpld} for $j^*$.} \label{f:cross} \end{figure} The quantitative agreement of the mean current with the predicted fixed point $j^*$~\eref{e:fpld} is very good and similar confirmation is found for other rate parameters. 
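For readers wishing to reproduce such measurements, a minimal Monte Carlo sketch follows; it uses a simplified random-sequential scheme with a fixed step size rather than the exact update protocol described in the caption of figure~\ref{f:pdsim}, and is intended only to convey the structure of the simulation.
\begin{verbatim}
# Minimal Monte Carlo sketch of the TASEP with current-dependent input
# rate alpha(j) = alpha0 + a*j.  Simplified random-sequential updating
# with a fixed step dt; illustrative only, not the exact protocol used
# for the figures.
import numpy as np

rng = np.random.default_rng(1)
L, alpha0, a, beta = 100, 0.2, 0.8, 0.6
t, t_end, dt = 1.0, 2000.0, 0.05       # clock starts at t0 = 1 with J = 0
J = 0                                  # number of particles injected so far
occ = np.zeros(L, dtype=int)

while t < t_end:
    alpha = alpha0 + a*J/t             # input rate from time-averaged current
    for _ in range(L + 1):             # one sweep: L-1 bonds plus boundaries
        k = rng.integers(L + 1)
        if k == 0:                     # injection at site 1 (if vacant)
            if occ[0] == 0 and rng.random() < alpha*dt:
                occ[0], J = 1, J + 1
        elif k == L:                   # removal at site L
            if occ[-1] == 1 and rng.random() < beta*dt:
                occ[-1] = 0
        elif occ[k-1] == 1 and occ[k] == 0 and rng.random() < dt:
            occ[k-1], occ[k] = 0, 1    # bulk hop with rate p = 1
    t += dt

print("time-averaged current:", J/t)
\end{verbatim}
With these parameters the output should approach the LD fixed point $j^* \approx 0.238$ of~\eref{e:fpld}, up to finite-size and discretization corrections.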
However, due to the size of the system, one does need to simulate for relatively long times until the rate of change of the current is slow compared to the decay to stationarity and the quasistatic assumption is reasonable.\footnote{Formally the method requires that the long-time $t \to \infty$ limit is taken \emph{before} the thermodynamic $L \to \infty$ limit; this is also important for the study of fluctuations in the next section.} For smaller systems, the decay to stationarity is obviously faster but there are finite-size corrections for the mean currents~\cite{Derrida93b} which would require $L$-dependent expressions on the right-hand side of \eref{e:TASEPfp} and, in general, numerical solution for the LD fixed point. For different initial conditions the current may be different for short times but should eventually approach the same stable fixed point except in the special case where the system is started exactly at an unstable fixed point. This latter is relevant for simulations at $\alpha_0=0$ in the $a>1$ case where an initial condition of $j_0=0$ is observed to lead to a zero current for all times whereas $j_0>0$ gives convergence to the stable fixed point $j^*=(a-1)/a^2$. In concluding this subsection we note that modified phase diagrams have been calculated for many other variants of the TASEP including those with stochastic gating (which can be thought of as the introduction of additional ``hidden'' variables in the standard Markovian model)~\cite{Wood09} and density feedback control~\cite{Woelki13}. However, we stress here that our approach enables us not only to predict the mean current but also to gain information about the fluctuations, as we shall see in the next subsection.

\subsection{Fluctuations}

According to the analysis of the Markovian TASEP in~\cite{Derrida95}, the diffusion constant in the MC phase scales asymptotically as $L^{-1/2}$ so, in the thermodynamic limit, $D^* \to 0$ and~\eref{e:gauss} is not applicable. However, in both HD and LD phases the diffusion constant approaches a finite limit -- here we aim to understand the effect of the memory on the fluctuations in the latter case. Starting from the Markovian result in~\cite{Derrida95} we have, for the LD phase, \begin{equation} D^*=\alpha(j^*)(1-\alpha(j^*))(1-2\alpha(j^*)) \label{e:Dstar} \end{equation} where, for our model with $\alpha(j)=\alpha_0+aj$, the fixed point $j^*$ is given by~\eref{e:fpld}. Now, as argued in section~\ref{s:fixed}, we expect long-time diffusive behaviour with modified diffusion coefficient $D^*/(1-2A^*)$ when $A^*$ of~\eref{e:Astar} is less than 1/2. Significantly, however, there should be long-time superdiffusive behaviour for $1/2 < A^* < 1$, which is true for \begin{equation} \alpha_0 < \frac{{1}/{4}-(1-a)^2}{4a} =: \alpha_c. \end{equation} Note that $\alpha_c$ is positive only for $1/2<a<3/2$; in that range, we predict a subregime in the LD phase for which the fluctuations are superdiffusive and the variance of the time-averaged current $\mathcal{J}/t$ scales in the long-time limit as $t^{2A^*-2}$. This asymptotic scaling is fairly convincingly supported by a log-log plot of variance against time for selected values of $\alpha_0$ in the $a=0.8$ case (figure~\ref{f:scale}).
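Both the critical boundary $\alpha_c$ and the predicted variance exponent are trivial to evaluate; for reference, a small illustrative snippet:
\begin{verbatim}
# Critical alpha0 for superdiffusion and the predicted long-time
# scaling exponent of Var(J/t) for alpha(j) = alpha0 + a*j.
import numpy as np

def alpha_c(a):                  # positive only for 1/2 < a < 3/2
    return (0.25 - (1 - a)**2)/(4*a)

def var_exponent(alpha0, a):     # Var(J/t) ~ t**(returned value)
    Astar = 1 - np.sqrt(4*alpha0*a + (1 - a)**2)
    return -min(1.0, 2 - 2*Astar)

print(alpha_c(0.8))              # 0.065625 for a = 0.8
print(var_exponent(0.01, 0.8))   # slower than 1/t decay: superdiffusive
\end{verbatim}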
\begin{figure} \centering \psfrag{t}[Tc][Tc]{$t$} \psfrag{s}[Cc][Cc]{$\mathrm{Var}(\mathcal{J}/t)$} \psfrag{ 1000}[Tc][Tc]{\scriptsize{$10^3$}} \psfrag{ 10000}[Tc][Tc]{\scriptsize{$10^4$}} \psfrag{ 100000}[Tc][Tc]{\scriptsize{$10^5$}} \psfrag{ 1e+06}[Tc][Tc]{\scriptsize{$10^6$}} \psfrag{ 0.1}[Tc][Tc]{\scriptsize{$10^{-1}$}} \psfrag{ 0.01}[Tc][Tc]{\scriptsize{$10^{-2}$}} \psfrag{ 0.001}[Tc][Tc]{\scriptsize{$10^{-3}$}} \psfrag{ 0.0001}[Cr][Cr]{\scriptsize{$10^{-4}$}} \psfrag{ 1e-05}[Cr][Cr]{\scriptsize{$10^{-5}$}} \psfrag{ 1e-06}[Cr][Cr]{\scriptsize{$10^{-6}$}} \psfrag{ 1e-07}[Cr][Cr]{\scriptsize{$10^{-7}$}} \psfrag{ 1e-08}[Cr][Cr]{\scriptsize{$10^{-8}$}} \includegraphics[width=0.8\textwidth]{decay_asep.eps} \caption{Variance of $\mathcal{J}/t$ as a function of time for selected values of $\alpha_0$ with other parameters as in figure~\ref{f:cross} ($a=0.8$, $\beta=0.6$). Points show simulation data for (top to bottom): $\alpha_0 = 0.01, 0.05, 0.08, 0.12, 0.2$. Black solid lines are fits corresponding to power laws with negative exponent $\min(1, 2 - 2A^*)$; logarithmic corrections are expected at the critical point $\alpha_c = 0.065625$.} \label{f:scale} \end{figure} Additionally, figure~\ref{f:super} shows a naive check on the predicted coefficient $D^*/|1-2A^*|$ across a constant-$\beta$ cross-section of the phase diagram with the divergence at $\alpha_c$ clearly to be seen. \begin{figure} \centering \psfrag{a}[Tc][Tc]{$\alpha_0$} \psfrag{j}[Cc][Cc]{$\mathrm{Var}(\mathcal{J})/t$, $\mathrm{Var}(\mathcal{J})/t^{2A^*}$} \psfrag{0.0}[Tc][Tc]{\scriptsize{0.0}} \psfrag{0.2}[Tc][Tc]{\scriptsize{0.2}} \psfrag{0.4}[Tc][Tc]{\scriptsize{0.4}} \psfrag{0.6}[Tc][Tc]{\scriptsize{0.6}} \psfrag{0.8}[Tc][Tc]{\scriptsize{0.8}} \psfrag{1.0}[Tc][Tc]{\scriptsize{1.0}} \psfrag{0.00}[Cr][Cr]{\scriptsize{0.0}} \psfrag{0.20}[Cr][Cr]{\scriptsize{0.2}} \psfrag{0.40}[Cr][Cr]{\scriptsize{0.4}} \psfrag{0.60}[Cr][Cr]{\scriptsize{0.6}} \psfrag{0.80}[Cr][Cr]{\scriptsize{0.8}} \psfrag{1.00}[Cr][Cr]{\scriptsize{1.0}} \includegraphics[width=0.8\textwidth]{figcross_var.eps} \caption{Monte Carlo simulation data for variance of $\mathcal{J}$ as a function of $\alpha_0$ for parameters of figure~\ref{f:cross} ($a=0.8$, $\beta=0.6$). Red $+$ symbols show $(\langle\mathcal{J}^2\rangle-\langle\mathcal{J}\rangle^2)/t$, expected to have a finite long-time limit in the diffusive regime ($\alpha_0 > \alpha_c = 0.065625$); green $\times$ symbols show $(\langle\mathcal{J}^2\rangle-\langle\mathcal{J}\rangle^2)/t^{2A^*}$, predicted to be finite in the superdiffusive regime ($\alpha_0 < \alpha_c$). Blue dashed line is theoretical prediction for $D^*/|1-2A^*|$ from~\eref{e:Astar} and~\eref{e:Dstar}.} \label{f:super} \end{figure} The slight theoretical overestimation of the $t=10^6$ data for small $\alpha_0$ (corresponding to small mean current) may be related to the fact that, for totally asymmetric systems such as this, the current distribution must be cut off at $j=0$ and the Gaussian approximation is thus expected to be less good for small $j^*$ (and inapplicable for $j^*=0$). For the same model with $a=1.6$, preliminary simulations (not shown) support the assertion that there is no superdiffusion and, in fact, suggest that the width of the time-averaged current distribution decays somewhat faster than the diffusive prediction of~\eref{e:scale}, at least for intermediate times. Again, this may be related to the $j=0$ cut-off but further investigation for longer times would certainly be desirable.
More generally, the existence of a superdiffusive subregime in the phase diagram clearly depends on the precise form of the current dependence. For example, in another tractable case $\alpha(j)=\alpha_0 + a\sqrt{j}$ we also see the MC phase extended at the expense of the LD phase but predict that fluctuations throughout the LD phase remain diffusive for all values of $a$. In this case too, the mean current and absence of superdiffusion are confirmed by simulation but more work is still needed to definitively determine the applicability of~\eref{e:scale}.

\subsection{Exact numerical minimization}

Going beyond the mean and diffusion coefficient, the current large deviations in the Markovian TASEP have recently, and remarkably, been fully characterized for all hopping rates and system sizes~\cite{DeGier11b,Lazarescu11b,Gorissen12b}. We can now use these results to evaluate~\eref{e:mldp} directly and thus to check the consistency of the Gaussian approximation applied in the previous subsections. In the LD phase the scaled cumulant generating function (the Legendre transform of the rate function) approaches an $L$-independent limit as the system size increases. Specifically, we have \begin{equation} \fl \lim_{t \to \infty} \frac{1}{t} \log \langle e^{-\lambda \mathcal{J}} \rangle = \alpha(1-\alpha)\left( \frac{1-e^{-\lambda}}{1-\alpha+\alpha e^{-\lambda}} \right) \quad \textrm{for}~~-\log\left(\frac{1-\alpha}{\alpha}\right) < \lambda < \infty \label{e:TASEPscgf} \end{equation} which straightforwardly corresponds to \begin{equation} \fl I_\alpha(j)=\frac{(2\alpha-1)+\sqrt{1-4j}}{2} + j \log \left[ \frac{(1-\alpha)(1-2j-\sqrt{1-4j})}{2 \alpha j} \right]\quad \textrm{for}~~0 < j < 1/4. \label{e:TASEPrf} \end{equation} The form~\eref{e:TASEPscgf} was obtained by Bethe ansatz in~\cite{DeGier11b} and via a general parametric representation in~\cite{Lazarescu11b}. It can also be derived within the framework of macroscopic fluctuation theory~\cite{Bodineau06}. One can readily check that the rate function~\eref{e:TASEPrf} has a zero at the mean current $\bar{j}_\alpha=\alpha(1-\alpha)$ with (inverse) second derivative at that point corresponding to the diffusion coefficient $\alpha(1-\alpha)(1-2\alpha)$. At $j = 1/4$ there is a dynamical phase transition to a regime in which $I_\alpha(j)$ retains a dependence on $L$. Nevertheless, at least away from this transition we claim that the rate function of the non-Markovian current-dependent process should be given by minimizing the integral in~\eref{e:mldp} with an integrand $I_{\alpha(q)}(q+\tau q')$ which is simply obtained from~\eref{e:TASEPrf} via the replacement of $\alpha$ with $\alpha(q)$. In practice, this integral is much too complicated to approach analytically so we resort to exact numerical calculations using Mathematica. Some computational difficulties are encountered here, apparently related to stiffness of the differential equations (as well as perhaps the finite range of applicability for $I_\alpha(j)$ and the impossibility of negative currents). However, notwithstanding this, the approach enables us to push the bounds of investigation beyond the Gaussian regime discussed above. Returning to our favourite example with $\alpha(j)=\alpha_0 + a j$, we focus now on small values of $a$ because they lead to more stable numerics and yet clearly illustrate the effect of even weak memory dependence on the current large deviations.
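As a preliminary sanity check (sketched here in Python; any tool would do), one can confirm numerically that~\eref{e:TASEPrf} vanishes at the mean current and that its inverse curvature there reproduces the LD-phase diffusion coefficient quoted above.
\begin{verbatim}
# Sanity check of the LD-phase Markovian rate function: zero at the
# mean current, inverse curvature equal to alpha(1-alpha)(1-2alpha).
import numpy as np

def I_rate(j, alpha):
    s = np.sqrt(1 - 4*j)
    return (2*alpha - 1 + s)/2 \
        + j*np.log((1 - alpha)*(1 - 2*j - s)/(2*alpha*j))

alpha = 0.2
jbar, h = alpha*(1 - alpha), 1e-5
curv = (I_rate(jbar + h, alpha) - 2*I_rate(jbar, alpha)
        + I_rate(jbar - h, alpha)) / h**2
print(I_rate(jbar, alpha))                       # ~ 0
print(1/curv, alpha*(1 - alpha)*(1 - 2*alpha))   # both ~ 0.096
\end{verbatim}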
Figure~\ref{f:numerics} shows the finite-time quantity \begin{equation} \tilde{I}(j,t) = \min_{q(\tau)} \frac{1}{t} \int_{t_0}^t {I}_{\alpha(q)}(q+\tau q') \, d\tau \end{equation} evaluated at $t=1000$ for fixed $\alpha_0$ and both zero and non-zero values of $a$. \begin{figure} \centering \psfrag{0.00b}[Tc][Tc]{\scriptsize{0.00}} \psfrag{0.05b}[Tc][Tc]{\scriptsize{0.05}} \psfrag{0.10b}[Tc][Tc]{\scriptsize{0.10}} \psfrag{0.15b}[Tc][Tc]{\scriptsize{0.15}} \psfrag{0.20b}[Tc][Tc]{\scriptsize{0.20}} \psfrag{0.25b}[Tc][Tc]{\scriptsize{0.25}} \psfrag{0.00}[Cr][Cr]{\scriptsize{0.00}} \psfrag{0.02}[Cr][Cr]{\scriptsize{0.02}} \psfrag{0.04}[Cr][Cr]{\scriptsize{0.04}} \psfrag{0.06}[Cr][Cr]{\scriptsize{0.06}} \psfrag{0.08}[Cr][Cr]{\scriptsize{0.08}} \psfrag{0.10}[Cr][Cr]{\scriptsize{0.10}} \psfrag{0.12}[Cr][Cr]{\scriptsize{0.12}} \psfrag{0.14}[Cr][Cr]{\scriptsize{0.14}} \psfrag{0.16}[Cr][Cr]{\scriptsize{0.16}} \psfrag{0.18}[Cr][Cr]{\scriptsize{0.18}} \psfrag{0.10i}[Tc][Tc]{\scriptsize{0.10}} \psfrag{0.11i}[Tc][Tc]{\scriptsize{0.11}} \psfrag{0.12i}[Tc][Tc]{\scriptsize{0.12}} \psfrag{0.13i}[Tc][Tc]{\scriptsize{0.13}} \psfrag{0.14i}[Tc][Tc]{\scriptsize{0.14}} \psfrag{0.15i}[Tc][Tc]{\scriptsize{0.15}} \psfrag{0.16i}[Tc][Tc]{\scriptsize{0.16}} \psfrag{0.17i}[Tc][Tc]{\scriptsize{0.17}} \psfrag{0.18i}[Tc][Tc]{\scriptsize{0.18}} \psfrag{0.19i}[Tc][Tc]{\scriptsize{0.19}} \psfrag{0.20i}[Tc][Tc]{\scriptsize{0.20}} \psfrag{0.000}[Cr][Cr]{\scriptsize{0.000}} \psfrag{0.002}[Cr][Cr]{\scriptsize{0.002}} \psfrag{0.004}[Cr][Cr]{\scriptsize{0.004}} \psfrag{0.006}[Cr][Cr]{\scriptsize{0.006}} \psfrag{0.001}[Cr][Cr]{\scriptsize{}} \psfrag{0.003}[Cr][Cr]{\scriptsize{}} \psfrag{0.005}[Cr][Cr]{\scriptsize{}} \psfrag{0.007}[Cr][Cr]{\scriptsize{}} \psfrag{0.008}[Cr][Cr]{\scriptsize{0.008}} \psfrag{0.009}[Cr][Cr]{\scriptsize{}} \psfrag{j}[Tc][Tc]{$j$} \psfrag{I}[Tc][Tc]{$\tilde{I}(j,1000)$} \psfrag{ji}[Tc][Tc]{} \psfrag{Ii}[Tc][Tc]{} \defr{r} \topinset{\includegraphics[width=0.45\textwidth]{taseplin_in.eps}}{\includegraphics[width=0.8\textwidth]{taseplin.eps}}{0.05\textwidth}{0.10\textwidth} \caption{Mathematica results for LD-phase $\tilde{I}(j,1000)$ in case $\alpha(j)=\alpha_0 + a j$ with $\alpha_0=0.2$, $a=0$ (black $+$ symbols) and $a=0.1$ (red $\times$ symbols); initial condition $t_0=1$, $j_0=0$. Black dotted line is exact expression~\eref{e:TASEPrf} for Markovian rate function $I_{\alpha_0}(j)$; red dashed line is Gaussian expansion of non-Markovian $\tilde{I}(j)$ for $a=0.1$. Inset shows close-up around mean current.} \label{f:numerics} \end{figure} In the $a=0$ case we find excellent agreement with the Markovian rate function~\eref{e:TASEPrf} and we anticipate that our numerical method converges fast towards the long-time limit also in the non-Markovian case. For $a>0$ the mean current (zero of the rate function) is shifted to a higher value and the width of the distribution increased. The approximation from the fixed point analysis of the preceding subsections matches very well the behaviour for small fluctuations but, as expected, is inaccurate for larger fluctuations. In particular, by construction, the Gaussian fails to capture the asymmetry of the rate function about the mean -- we need the full minimization to see that the probability of large fluctuations below the mean is hardly affected by the memory whereas large fluctuations above the mean become much more likely than in the Markovian case (presumably because, in this model, the feedback can increase but never decrease the hopping rate). 
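The minimization behind figure~\ref{f:numerics} can be mimicked with a straightforward path discretization. The sketch below (Python, with the same $\alpha_0=0.2$, $a=0.1$, $t_0=1$, $j_0=0$) is a simplified stand-in for the Mathematica calculation; it inherits the numerical delicacy noted above, which is papered over here by crudely clipping the argument to the interval $0<j<1/4$.
\begin{verbatim}
# Simplified numerical minimization of the finite-time additivity
# integral for alpha(j) = alpha0 + a*j, discretizing q(tau); the
# clipping is a crude guard against leaving the domain 0 < j < 1/4.
import numpy as np
from scipy.optimize import minimize

def I_rate(x, alpha):
    x = np.clip(x, 1e-4, 0.25 - 1e-6)
    s = np.sqrt(1 - 4*x)
    return (2*alpha - 1 + s)/2 \
        + x*np.log((1 - alpha)*(1 - 2*x - s)/(2*alpha*x))

alpha0, a, t0, t, n = 0.2, 0.1, 1.0, 1000.0, 200
tau = np.linspace(t0, t, n)
j0, jf = 0.0, 0.18                     # endpoint current at which to evaluate

def action(q_inner):
    q = np.concatenate(([j0], q_inner, [jf]))
    x = q + tau*np.gradient(q, tau)
    return np.trapz(I_rate(x, alpha0 + a*q), tau) / t

res = minimize(action, np.linspace(j0, jf, n)[1:-1], method="L-BFGS-B")
print("approximate I(j, t=1000) at j = 0.18:", res.fun)
\end{verbatim}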
As a second example, we take a current dependence which illustrates the possibility of negative, as well as positive, feedback. Specifically we set \begin{equation} \alpha(j)=\alpha_0 e^{\kappa(j-\bar{j}_{\alpha_0,\beta})} \end{equation} where $\bar{j}_{\alpha_0,\beta}$ is the mean current of a Markovian TASEP with boundary rates $\alpha_0$ and $\beta$. Note that $\alpha(j)$ thus has a different expression in each of the three regimes of the $(\alpha_0,\beta)$ phase diagram. For any choice of these boundary rates it is easy to see that $j^*=\bar{j}_{\alpha_0,\beta}$ is a fixed point for all $\kappa$ and, in fact, using the now-established method we find that for $\kappa<8$ this fixed point is always stable. In other words, for $\kappa<8$ the mean current and phase diagram are identical to the underlying Markovian model but, of course, the fluctuations are different. In the LD phase, we have \begin{equation} A^* = \kappa \alpha_0 (1 - 2 \alpha_0) \end{equation} and so, for $\kappa>4$, there is a superdiffusive subregime centred around $\alpha_0=1/4$. On the other hand, for $\kappa<4$ we predict diffusive fluctuations throughout the LD phase with modified effective diffusion coefficient \begin{equation} \frac{D^*}{1-2A^*} = \frac{\alpha_0(1-\alpha_0)(1-2\alpha_0)}{1-2\alpha_0(1-2\alpha_0)\kappa}. \end{equation} In accordance with intuition, negative values of $\kappa$ act to suppress fluctuations and reduce the width of the distribution about the mean current while positive values promote fluctuations and increase the width of the distribution. This is confirmed by the results shown in figure~\ref{f:posneg} which again demonstrate that the Gaussian approximation agrees closely with the full numerical minimization for small fluctuations but not for large ones (especially below the mean). 
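Again the fixed-point quantities are simple to tabulate; the following illustrative snippet evaluates $A^*$ and the effective diffusion coefficient for the exponential feedback in the LD phase.
\begin{verbatim}
# LD-phase fixed-point quantities for the exponential feedback
# alpha(j) = alpha0*exp(kappa*(j - jbar)).
import numpy as np

def ld_quantities(alpha0, kappa):
    Astar = kappa*alpha0*(1 - 2*alpha0)
    D = alpha0*(1 - alpha0)*(1 - 2*alpha0)
    Deff = D/(1 - 2*Astar) if Astar < 0.5 else np.inf  # superdiffusive otherwise
    return Astar, Deff

for kappa in (-0.5, 0.0, 0.5):
    print(kappa, ld_quantities(0.15, kappa))   # kappa < 0 narrows, > 0 widens
\end{verbatim}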
\begin{figure} \centering \psfrag{0.00b}[Tc][Tc]{\scriptsize{0.00}} \psfrag{0.05b}[Tc][Tc]{\scriptsize{0.05}} \psfrag{0.10b}[Tc][Tc]{\scriptsize{0.10}} \psfrag{0.15b}[Tc][Tc]{\scriptsize{0.15}} \psfrag{0.20b}[Tc][Tc]{\scriptsize{0.20}} \psfrag{0.25b}[Tc][Tc]{\scriptsize{0.25}} \psfrag{0.00}[Cr][Cr]{\scriptsize{0.00}} \psfrag{0.02}[Cr][Cr]{\scriptsize{0.02}} \psfrag{0.04}[Cr][Cr]{\scriptsize{0.04}} \psfrag{0.06}[Cr][Cr]{\scriptsize{0.06}} \psfrag{0.08}[Cr][Cr]{\scriptsize{0.08}} \psfrag{0.10}[Cr][Cr]{\scriptsize{0.10}} \psfrag{0.12}[Cr][Cr]{\scriptsize{0.12}} \psfrag{0.14}[Cr][Cr]{\scriptsize{0.14}} \psfrag{0.10i}[Tc][Tc]{\scriptsize{0.10}} \psfrag{0.11i}[Tc][Tc]{\scriptsize{0.11}} \psfrag{0.12i}[Tc][Tc]{\scriptsize{0.12}} \psfrag{0.13i}[Tc][Tc]{\scriptsize{0.13}} \psfrag{0.14i}[Tc][Tc]{\scriptsize{0.14}} \psfrag{0.15i}[Tc][Tc]{\scriptsize{0.15}} \psfrag{0.16i}[Tc][Tc]{\scriptsize{0.16}} \psfrag{0.000}[Cr][Cr]{\scriptsize{0.000}} \psfrag{0.002}[Cr][Cr]{\scriptsize{0.002}} \psfrag{0.004}[Cr][Cr]{\scriptsize{0.004}} \psfrag{0.006}[Cr][Cr]{\scriptsize{0.006}} \psfrag{0.001}[Cr][Cr]{\scriptsize{}} \psfrag{0.003}[Cr][Cr]{\scriptsize{}} \psfrag{0.005}[Cr][Cr]{\scriptsize{}} \psfrag{0.007}[Cr][Cr]{\scriptsize{}} \psfrag{j}[Tc][Tc]{$j$} \psfrag{I}[Tc][Tc]{$\tilde{I}(j,1000)$} \psfrag{ji}[Tc][Tc]{} \psfrag{Ii}[Tc][Tc]{} \defr{r} \topinset{\includegraphics[width=0.45\textwidth]{tasepexp_in.eps}}{\includegraphics[width=0.8\textwidth]{tasepexp.eps}}{0.05\textwidth}{0.15\textwidth} \caption{Mathematica results for LD-phase $\tilde{I}(j,1000)$ in case $\alpha(j)=\alpha_0 e^{\kappa(j-\bar{j}_{\alpha_0,\beta})}$ with $\alpha_0=0.15$, $\kappa=0.5$ (red $\times$ symbols) and $\kappa=-0.5$ (green $+$ symbols); initial condition $t_0=1$, $j_0=0$. Black dotted line is exact expression~\eref{e:TASEPrf} for $I_{\alpha_0}(j)$; coloured dashed lines are Gaussian expansions of $\tilde{I}(j)$ for $\kappa=\pm 0.5$. Inset shows close-up around mean current.} \label{f:posneg} \end{figure} \section{Discussion} \label{s:dis} In this paper we have investigated some aspects of a class of interacting particle systems where the rates depend on the time-averaged current $\mathcal{J}/t$. This memory dependence is effectively a form of feedback which can act to suppress or enhance fluctuations. In particular, we here considered the application of a recently proposed ``temporal additivity principle''~\cite{Me09} for obtaining the large deviation rate function for current fluctuations in such non-Markovian models via a minimization involving the Markovian rate function. Using a heuristic analysis based on fixed points of the dynamics we detailed how a Gaussian approximation for the behaviour of small fluctuations emerges from the full minimization problem and were thus able to highlight the conditions for long-time superdiffusive behaviour. Whilst this approach fails in general for large fluctuations, it nevertheless provides a means to gain some information about the effects of memory even when the full Markovian rate function is unknown or analytical minimization impossible. This claim was corroborated by checking the predictions with simulation data for the current mean and variance in a paradigmatic exclusion process model, as well as comparing corresponding exact numerical minimization results. 
In order to explore further the underlying assumptions for the temporal additivity principle and its approximation, it would be interesting both to put the central arguments of this paper on a more rigorous mathematical footing and to develop computational methods (perhaps along the lines of the ``cloning'' algorithm~\cite{Giardina06,Lecomte07} for Markovian models) to efficiently access the full rate function in simulations. Although our chief example here was the \emph{totally} asymmetric simple exclusion process, one can also apply similar considerations to models with partially asymmetric dynamics. Indeed, both the range of applicability of the Gaussian approximation and the stability of numerics would potentially be improved without the cut-off at $j=0$. In the context of jumps in both forward and backward directions, a topical question is whether one finds a so-called fluctuation relation~\cite{Evans93,Gallavotti95,Lebowitz99} governing the probabilities of positive and negative currents. Within the Gaussian approximation developed above (for $A^*<1$), we find \begin{equation} \fl \frac{\mathrm{Prob}(\mathcal{J}_t/t=-j)}{\mathrm{Prob}(\mathcal{J}_t/t=j)} \sim \cases{ \exp\left[-\frac{2(1-2A^*)j^*}{D^*} \times jt \right] & for $A^*< \frac{1}{2}$ \\ \exp \left[-\frac{2(2A^*-1)j^*}{D^*}t_0^{2A^*-1}\times j t^{2-2A^*}\right] & for $A^*> \frac{1}{2}$. \\ } \end{equation} This suggests the standard Gallavotti-Cohen-type fluctuation symmetry for $A^*< {1}/{2}$ and a modified form for $A^*> {1}/{2}$. The latter is reminiscent of a similar finding for anomalous dynamics in a different setting~\cite{Chechkin09} but a word of caution is necessary here. Any Gaussian distribution for the current will necessarily have a ratio between positive and negative currents whose exponent is linear in $j$. This does not guarantee that the same symmetry holds in the non-Gaussian tails of the distribution which are neglected by this approximation. For time-homogeneous CTRWs or semi-Markov processes (with finite state space and finite mean waiting time), earlier work~\cite{Esposito08,Andrieux08d} asserts that a sufficient condition for the standard symmetry in the full current distribution (arising from a time-reversal relation at the level of microscopic trajectories) is the ``direction-time independence'' property~\cite{Qian06,Maes09b}. In our framework, an exactly solvable model with the analogous condition that the ratio of jumps left and right is a constant (see also~\ref{A:single}) was indeed found in~\cite{Me09} to obey the symmetry. It would be interesting to see if there are similar necessary/sufficient conditions for a modified symmetry relation in the case of superdiffusive fluctuations. However, in general it is not clear that any such relation exists, much less that it has a meaningful interpretation in terms of entropy (cf.\ the discussion in~\cite{Esposito08}). Other scenarios worthy of closer attention include fluctuations in models with multiple stable fixed points and fluctuations beyond dynamical phase transitions.
In the latter case, one anticipates the possibility of non-convex rate functions corresponding to non-differentiable points in the scaled cumulant generating function.\footnote{For a mathematical demonstration of a non-convex rate function appearing in another type of non-Markovian model, see~\cite{Duffy08}.} Usually for Markovian models, a Maxwell-type construction gives phase separation in time and a linear section in the rate function, but the introduction of long-range temporal correlations means the phase boundary may acquire a finite probabilistic cost even in the long-time limit. This is completely analogous to the manner in which long-range spatial correlations can give rise to non-concave entropies in equilibrium~\cite{Campa09}. Finally, we remark that, as already mentioned in~\cite{Me09}, the additivity formalism should also be applicable to intrinsically non-Markovian models, such as the Alzheimer random walk~\cite{DaSilva05,Cressoni07,Kenkre07}, but with a non-local minimization problem involving delay differential equations. The full analytical treatment of such problems appears an even more formidable task but a stability analysis of the dynamics could provide some hope. Understanding fluctuations in systems with memory is clearly important from both foundational and practical viewpoints but there is much work still to be done in establishing connections between different approaches (especially from complementary mathematics and physics traditions) as well as in forging new ground. In this context we expect that the workhorses of statistical mechanics such as random walk models and exclusion processes will continue to play an important role.

\ack The author is grateful to many colleagues for discussions which have contributed to the development of this material over a number of years. Particular thanks are due to Ajeet Sharma and Debashish Chowdhury for pointing out the connection to~\cite{Sharma11b}, as well as to Hugo Touchette and Massimo Cavallaro for detailed comments on various aspects. The work has also benefited from the kind hospitality of several research centres, especially the Galileo Galilei Institute for Theoretical Physics (GGI) Florence and the National Institute for Theoretical Physics (NITheP) Stellenbosch.
\section{Introduction} During the ongoing COVID-19 pandemic (caused by the SARS-CoV-2 coronavirus), recommendations and common practices regarding face mask use by the general public have varied greatly and are in rapid flux: Mask use by the public in public spaces has been controversial in the US, although as of April 3, 2020, the US Centers for Disease Control and Prevention (CDC) is recommending the public wear cloth masks. Public mask use is far more prevalent in many Asian countries, which have longer experience with novel coronavirus epidemics; public mask use may have been effective at limiting community spread during the 2003 SARS epidemic \cite{Wu2004,Lau2004}, and widespread mask use is a prominent feature of the relatively successful COVID-19 response in Taiwan \cite{Wang2020}, for example. Masks have also been suggested as a method for limiting community transmission by asymptomatic or at least clinically undetected carriers \cite{Chan2020}, who may be a major driver of transmissions of COVID-19 \cite{Li2020}. Various experimental studies suggest that masks may protect the wearer both from acquiring various infections \cite{Lai2012,Davies2013} and from transmitting infection \cite{Dharmadhikari2012}. Medical masks (i.e., surgical masks and N95 respirators) in healthcare workers appear to consistently protect against respiratory infection under meta-analysis \cite{MacIntyre2017,Offeddu2017}, although clinical trials in the community have yielded more mixed results \cite{MacIntyre2009,Cowling2009,Canini2010}. While medical-grade masks should be prioritized for healthcare providers, homemade cloth masks may still afford significant, although variable and generally lesser, protection \cite{Sande2008,Davies2013}, but clinical trials in the community remain lacking. Given the flux in recommendations, and uncertainty surrounding the possible community-wide impact of mass face masks (especially homemade cloth masks) on COVID-19 transmission, we have developed a multi-group Kermack-McKendrick-type compartmental mathematical model, extending prior work geared towards modeling the COVID-19 pandemic (e.g. \cite{Li2020,Ferguson2020,Tang2020}), as well as models previously used to examine masks in a potential influenza pandemic \cite{Brienen2010,Tracht2010}. This initial framework suggests that masks could be effective even if implemented as a singular intervention/mitigation strategy, but \textit{especially} in combination with other non-pharmaceutical interventions that decrease community transmission rates. Whether masks can be useful, even in principle, depends on the mechanisms of transmission for SARS-CoV-2, which are likely a combination of droplet, contact, and possible airborne (aerosol) modes. The traditional model for respiratory disease transmission posits infection via infectious droplets (generally 5--10 $\mu$m) that have a short lifetime in the air and infect the upper respiratory tract, or finer aerosols, which may remain in the air for many hours \cite{Leung2020}, with ongoing uncertainties in the relative importance of these modes (and in the conceptual model itself \cite{Bourouiba2020}) for SARS-CoV-2 transmission \cite{Han2020,Bourouiba2020}. The WHO \cite{WHO2020} has stated that SARS-CoV-2 transmission is primarily via coarse respiratory droplets and contact routes.
An experimental study \cite{Doremalen2020} using a nebulizer found SARS-CoV-2 to remain viable in aerosols ($<$5 $\mu$m) for three hours (the study duration), but the clinical relevance of this setup is debatable \cite{WHO2020}. One out of three symptomatic COVID-19 patients caused extensive environmental contamination in \cite{Ong2020}, including air exhaust outlets, though the air itself tested negative. Face masks can protect against both coarser droplet and finer aerosol transmission, though N95 respirators are more effective against finer aerosols, and may be superior in preventing droplet transmission as well \cite{MacIntyre2017}. Meta-analysis of studies in healthy healthcare providers (in whom most studies have been performed) indicated a strong protective value against clinical and respiratory virus infection for both surgical masks and N95 respirators \cite{Offeddu2017}. Case-control data from the 2003 SARS epidemic suggests a strong protective value to mask use by community members in public spaces, on the order of 70\% \cite{Wu2004,Lau2004}. Experimental studies in both humans and manikins indicate that a range of masks provide at least some protective value against various infectious agents \cite{Sande2008,Davies2013,Driessche2015,Stockwell2018,Leung2020}. Medical masks were potentially highly effective as both source control and primary prevention under tidal breathing and coughing conditions in manikin studies \cite{Lai2012,Patel2016}, with higher quality masks (e.g. N95 respirator vs. surgical mask) offering greater protection \cite{Patel2016}. It is largely unknown to what degree homemade masks (typically made from cotton, teacloth, or other polyester fibers) may protect against droplets/aerosols and viral transmission, but experimental results by Davies et al. \cite{Davies2013} suggest that while the homemade masks were less effective than surgical masks, they were still markedly superior to no mask. A clinical trial in healthcare workers \cite{MacIntyre2015} showed relatively poor performance for cloth masks relative to medical masks. Mathematical modeling has been influential in providing deeper understanding on the transmission mechanisms and burden of the ongoing COVID-19 pandemic, contributing to the development of public health policy and understanding. Most mathematical models of the COVID-19 pandemic can broadly be divided into either population-based, SIR (Kermack-McKendrick)-type models, driven by (potentially stochastic) differential equations \cite{Li2020,Wu2020,Tang2020,Kucharski2020,Calafiore2020,Simha2020,Dehning2020,Nesteruk2020,Zhang2020,Anastassopoulou2020,Moore2020}, or agent-based models \cite{Ferguson2020,Wilder2020,Biswas2020,Chang2020,Estrada2020}, in which individuals typically interact on a network structure and exchange infection stochastically. One difficulty of the latter approach is that the network structure is time-varying and can be difficult, if not impossible, to construct with accuracy. Population-based models, alternatively, may risk being too coarse to capture certain real-world complexities. Many of these models, of course, incorporate features from both paradigms, and the right combination of dynamical, stochastic, data-driven, and network-based methods will always depend on the question of interest. In \cite{Li2020}, Li et al. imposed a metapopulation structure onto an SEIR model to account for travel between major cities in China. Notably, they include compartments for both documented and undocumented infections.
Their model suggests that as many as 86\% of all cases went undetected in Wuhan before travel restrictions took effect on January 23, 2020. They additionally estimated that, on a per person basis, asymptomatic individuals were only 55\% as contagious, yet were responsible for 79\% of new infections, given their increased prevalence. The importance of accounting for asymptomatic individuals has been confirmed by other studies \cite{Ferguson2020,Calafiore2020}. In their model-based assessment of case-fatality ratios, Verity et al. \cite{Verity2020} estimated that 40--50\% of cases went unidentified in China, as of February 8, 2020, while in the case of the Diamond Princess cruise ship, 46.5\% of individuals who tested positive for COVID-19 were asymptomatic \cite{Moriarty2020}. Further, Calafiore et al.~\cite{Calafiore2020}, using a modified SIR-model, estimated that, on average, cases in Italy went underreported by a factor of 63, as of March 30, 2020. Several prior mathematical models, motivated by the potential for pandemic influenza, have examined the utility of mask wearing by the general public. These include a relatively simple modification of an SIR-type model by Brienen et al. \cite{Brienen2010}, while Tracht et al. \cite{Tracht2010} considered a more complex SEIR model that explicitly disaggregated those that do and do not use masks. The latter concluded that, for pandemic H1N1 influenza, modestly effective masks (20\%) could halve total infections, while if masks were just 50\% effective as source control, the epidemic could be essentially eliminated if just 25\% of the population wore masks. We adapt these previously developed SEIR model frameworks for transmission dynamics to explore the potential community-wide impact of public use of face masks, of varying efficacy and compliance, on the transmission dynamics and control of the COVID-19 pandemic. In particular, we develop a two-group model, which stratifies the total population into those who habitually do and do not wear face masks in public or other settings where transmission may occur. This model takes the form of a deterministic system of nonlinear differential equations, and explicitly includes asymptomatically-infectious humans. We examine mask effectiveness and coverage (i.e., fraction of the population that habitually wears masks) as our two primary parameters of interest. We explore possible nonlinearities in mask coverage and effectiveness and the interaction of these two parameters; we find that the product of mask effectiveness and coverage level strongly predicts the effect of mask use on epidemiologic outcomes. Thus, homemade cloth masks are best deployed \textit{en masse} to benefit the population at large. There is also a potentially strong nonlinear effect of mask use on epidemiologic outcomes of cumulative death and peak hospitalizations. We note a possible temporal effect: Delaying mass mask adoption too long may undermine its efficacy. Moreover, we perform simulated case studies using mortality data for New York and Washington state. These case studies likewise suggest a beneficial role to mass adoption of even poorly effective masks, with the relative benefit likely greater in Washington state, where baseline transmission is less intense. The absolute potential for saving lives is still, however, greater under the more intense transmission dynamics in New York state.
Thus, early adoption of masks is useful regardless of transmission intensities, and should not be delayed even if the case load/mortality seems relatively low. In summary, the benefit to routine face mask use by the general public during the COVID-19 pandemic remains uncertain, but our initial mathematical modeling work suggests a possible strong potential benefit to near universal adoption of even weakly effective homemade masks that may \textit{synergize with}, not replace, other control and mitigation measures.

\section{Methods} \subsection{Baseline mathematical models} \subsubsection{Model with no mask use}

We consider a baseline model without any mask use to form the foundation for parameter estimation and to estimate transmission rates in New York and Washington state; we also use this model to determine the equivalent transmission rate reductions resulting from public mask use in the full model. We use a deterministic susceptible, exposed, symptomatic infectious, hospitalized, asymptomatic infectious, and recovered modeling framework, with these classes respectively denoted $S(t)$, $E(t)$, $I(t)$, $H(t)$, $A(t)$, and $R(t)$; we also include $D(t)$ to track cumulative deaths. We assume that some fraction of detected infectious individuals progress to the hospitalized class, $H(t)$, where they are unable to pass the disease to the general public; we suppose that some fraction of hospitalized patients ultimately require critical care (and may die) \cite{Zhou2020}, but do not explicitly disaggregate, for example, ICU and non-ICU patients. Based on these assumptions and simplifications, the basic model for the transmission dynamics of COVID-19 is given by the following deterministic system of nonlinear differential equations: \begin{eqnarray} \frac{dS}{dt} &=& -\beta(t) (I + \eta A) \frac{S}{N}, \\ \frac{dE}{dt} &=& \beta(t) (I + \eta A) \frac{S}{N} - \sigma E, \\ \frac{dI}{dt} &=& \alpha \sigma E - \phi I - \gamma_I I, \\ \frac{dA}{dt} &=& (1 - \alpha) \sigma E - \gamma_A A, \\ \frac{dH}{dt} &=& \phi I - \delta H - \gamma_H H, \\ \frac{dR}{dt} &=& \gamma_I I + \gamma_A A + \gamma_H H, \\ \frac{dD}{dt} &=& \delta H, \end{eqnarray} where \begin{equation} N = S + E + I + A + R \end{equation} is the total population in the community, and $\beta(t)$ is the baseline infectious contact rate, which is assumed to vary with time in general, but typically taken fixed. Additionally, $\eta$ accounts for the relative infectiousness of asymptomatic carriers (in comparison to symptomatic carriers), $\sigma$ is the transition rate from the exposed to infectious class (so $1/\sigma$ is the disease incubation period), $\alpha$ is the fraction of cases that are symptomatic, $\phi$ is the rate at which symptomatic individuals are hospitalized, $\delta$ is the disease-induced death rate, and $\gamma_A$, $\gamma_I$ and $\gamma_H$ are recovery rates for the subscripted population. We suppose hospitalized persons are not exposed to the general population. Thus, they are excluded from the tabulation of $N$, and do not contribute to infection rates in the general community. This general modeling framework is similar to a variety of SEIR-style models recently employed in \cite{Li2020,Ferguson2020}, for example. For most results in this paper, we let $\beta(t) \equiv \beta_0$. However, given ongoing responses to the COVID-19 pandemic in terms of voluntary and mandated social distancing, etc., we also consider the possibility that $\beta$ varies with time and adopt the following functional form from Tang et al.
\cite{Tang2020}, with the modification that contact rates do not begin declining from the initial contact rate, $\beta_0$, until time $t_0$, \begin{equation} \beta(t) = \left\{ \begin{array}{lr} \beta_0, & t < t_0 \\ \beta_{min} + (\beta_0 - \beta_{min}) \exp(-r (t - t_0)), & t \geq t_0 \end{array} \right. \end{equation} where $\beta_{min}$ is the minimum contact rate and $r$ is the rate at which contact decreases.

\subsection{Baseline epidemiological parameters}

\begin{table} \footnotesize \centering \setlength{\extrarowheight}{4pt} \begin{tabular}{| l | c | c |} \hline \rowcolor[rgb]{.9 .7 .7} \textbf{Parameter} & \textbf{Likely range (references)} & \textbf{Default value} \\ \hline $\beta$ (infectious contact rate) & 0.5--1.5 day$^{-1}$ \cite{Shen2020,Read2020,Li2020}, this work & 0.5 day$^{-1}$ \\ \rowcolor[rgb]{.9, .9, .9} $\sigma$ (transition exposed to infectious) & 1/14--1/3 day$^{-1}$ \cite{Lauer2020,Li2020} & 1/5.1 day$^{-1}$ \\ $\eta$ (infectiousness factor for asymptomatic carriers) & 0.4--0.6 \cite{Ferguson2020,Li2020} & 0.5 \\ \rowcolor[rgb]{.9, .9, .9} $\alpha$ (fraction of infections that become symptomatic) & 0.15--0.7 \cite{Li2020,Ferguson2020,Verity2020,Moriarty2020} & 0.5 \\ $\phi$ (rate of hospitalization) & 0.02--0.1 \cite{Zhou2020,Ferguson2020} & 0.025 day$^{-1}$ \\ \rowcolor[rgb]{.9, .9, .9} $\gamma_A$ (recovery rate, asymptomatic) & 1/14--1/3 day$^{-1}$ \cite{Tang2020,Zhou2020} & 1/7 day$^{-1}$ \\ $\gamma_I$ (recovery rate, symptomatic) & 1/30--1/3 day$^{-1}$ \cite{Tang2020,Zhou2020} & 1/7 day$^{-1}$ \\ \rowcolor[rgb]{.9, .9, .9} $\gamma_H$ (recovery rate, hospitalized) & 1/30--1/3 day$^{-1}$ \cite{Tang2020,Zhou2020} & 1/14 day$^{-1}$ \\ $\delta$ (death rate, hospitalized) & 0.001--0.1 \cite{Ferguson2020} & 0.015 day$^{-1}$ \\ \hline \end{tabular} \caption{Baseline model parameters with brief description, likely ranges based on modeling and clinical studies (see text for further details), and default value chosen for this study.} \label{table:parameters} \end{table} \normalsize

The incubation period for COVID-19 is estimated to average 5.1 days \cite{Lauer2020}, similar to other model-based estimates \cite{Li2020}, giving $\sigma = 1/5.1$ day$^{-1}$. Some previous model-based estimates of infectious duration are on the order of several days \cite{Li2020,Ferguson2020,Tang2020}, with \cite{Tang2020} giving about 7 days for asymptomatic individuals to recover. However, the clinical course of the disease is typically much longer: In a study of hospitalized patients \cite{Zhou2020}, the average total duration of illness until hospital discharge or death was 21 days, and moreover, the median duration of viral shedding was 20 days in survivors. The effective transmission rate (as a constant), $\beta_0$, ranges from around 0.5 to 1.5 day$^{-1}$ in prior modeling studies \cite{Read2020,Shen2020,Li2020}, and typically trends down with time \cite{Tang2020,Li2020}. We have left this as a free parameter in our fits to Washington and New York state mortality data, and find $\beta_0 \approx 0.5$ and $\beta_0 \approx 1.4$ day$^{-1}$ for these states, respectively, values in this range. The relative infectiousness of asymptomatic carriers, $\eta$, is not known, although Ferguson et al.~\cite{Ferguson2020} estimated this parameter at about 0.5, and Li et al.~\cite{Li2020} gave values of 0.42--0.55.
The fraction of cases that are symptomatic, $\alpha$, is also uncertain, with Li et al.~\cite{Li2020} suggesting an overall case reporting rate of just 14\% early in the outbreak in China, but increasing to 65--69\% later; further, $\alpha = 2/3$ was used in \cite{Ferguson2020}. In the case of the Diamond Princess Cruise ship \cite{Moriarty2020}, 712 (19.2\%) passengers and crew tested positive for SARS-CoV-2, with 331 (46.5\%) asymptomatic at the time of testing. Therefore, we choose $\alpha = 0.5$ as our default. Given an average time from symptom onset to dyspnea of 7 days (and 9 days to sepsis) in \cite{Zhou2020}, and a range of 1--10 days from onset to hospitalization, a midpoint of 5 days seems reasonable (see also \cite{Ferguson2020}); $\phi \approx 0.025$ day$^{-1}$ is then consistent with on the order of 5--15\% of symptomatic patients being hospitalized. If about 15\% of hospitalized patients die \cite{Ferguson2020}, then $\delta \approx 0.015$ day$^{-1}$ (based on $\gamma_H = 1/14$ day$^{-1}$). \subsubsection{Model with general mask use} We assume that some fraction of the general population wears masks with uniform inward efficiency (i.e., primary protection against catching disease) of $\epsilon_i$, and outward efficiency (i.e., source control/protection against transmitting disease) of $\epsilon_o$. We disaggregate all population variables into those who typically do not wear masks and those who do, respectively subscripted with $U$ and $M$. Based on the above assumptions and simplifications, the extended multi-group model for COVID-19 (where members of the general public wear masks in public) is given by: \small \begin{eqnarray} \frac{dS_U}{dt} &=& -\beta (I_U + \eta A_U) \frac{S_U}{N} - \beta \bigl( (1-\epsilon_o) I_M + (1-\epsilon_o) \eta A_M \bigr) \frac{S_U}{N}, \\ \frac{dE_U}{dt} &=& \beta (I_U + \eta A_U) \frac{S_U}{N} + \beta ((1-\epsilon_o) I_M + (1-\epsilon_o) \eta A_M) \frac{S_U}{N} - \sigma E_U, \\ \frac{dI_U}{dt} &=& \alpha \sigma E_U - \phi I_U - \gamma_I I_U, \\ \frac{dA_U}{dt} &=& (1 - \alpha) \sigma E_U - \gamma_A A_U, \\ \frac{dH_U}{dt} &=& \phi I_U - \delta H_U - \gamma_H H_U, \\ \frac{dR_U}{dt} &=& \gamma_I I_U + \gamma_A A_U + \gamma_H H_U, \\ \frac{dD_U}{dt} &=& \delta H_U, \\ \frac{dS_M}{dt} &=& -\beta (1 - \epsilon_i) (I_U + \eta A_U) \frac{S_M}{N} - \beta (1 - \epsilon_i) ((1-\epsilon_o) I_M + (1-\epsilon_o) \eta A_M) \frac{S_M}{N}, \\ \frac{dE_M}{dt} &=& \beta (1 - \epsilon_i) (I_U + \eta A_U) \frac{S_M}{N} + \beta (1 - \epsilon_i) ((1-\epsilon_o) I_M + (1-\epsilon_o) \eta A_M) \frac{S_M}{N} - \sigma E_M, \\ \frac{dI_M}{dt} &=& \alpha \sigma E_M - \phi I_M - \gamma_I I_M, \\ \frac{dA_M}{dt} &=& (1 - \alpha) \sigma E_M - \gamma_A A_M, \\ \frac{dH_M}{dt} &=& \phi I_M - \delta H_M - \gamma_H H_M, \\ \frac{dR_M}{dt} &=& \gamma_I I_M + \gamma_A A_M + \gamma_H H_M, \\ \frac{dD_M}{dt} &=& \delta H_M, \end{eqnarray} \normalsize where \begin{equation} N = S_U + E_U + I_U + A_U + R_U + S_M + E_M + I_M + A_M + R_M. \end{equation} While this model is much more complex than the baseline model, most of the added complexity lies in what are essentially bookkeeping terms. We also consider a reduced version of the above model (equations not shown), such that only symptomatically infected persons wear a mask, in order to compare the common recommendation that only those experiencing symptoms (and their immediate caretakers) wear masks against more general population coverage.
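For concreteness, the full two-group system above is straightforward to integrate numerically once one notes that the masked and unmasked forces of infection share a common factor. The following is a minimal sketch in Python/SciPy; the computations in this paper were performed in MATLAB, so this translation, the example efficiencies $\epsilon_o = \epsilon_i = 0.5$, and the example coverage $\pi = 0.5$ are purely illustrative assumptions, not our production code:

\begin{verbatim}
import numpy as np
from scipy.integrate import solve_ivp

# Default parameters from Table 1; example mask parameters.
beta0, eta, sigma, alpha = 0.5, 0.5, 1/5.1, 0.5
phi, gI, gA, gH, delta = 0.025, 1/7, 1/7, 1/14, 0.015
eps_o, eps_i, pi_cov = 0.5, 0.5, 0.5

def rhs(t, y):
    SU, EU, IU, AU, HU, RU, DU, SM, EM, IM, AM, HM, RM, DM = y
    # Hospitalized persons are excluded from N by assumption.
    N = SU + EU + IU + AU + RU + SM + EM + IM + AM + RM
    # Common force of infection; masked infecteds transmit at (1 - eps_o),
    # masked susceptibles acquire at (1 - eps_i).
    lam = beta0 * (IU + eta*AU + (1 - eps_o)*(IM + eta*AM)) / N
    lamU, lamM = lam, (1 - eps_i)*lam
    return [-lamU*SU, lamU*SU - sigma*EU,
            alpha*sigma*EU - (phi + gI)*IU,
            (1 - alpha)*sigma*EU - gA*AU,
            phi*IU - (delta + gH)*HU,
            gI*IU + gA*AU + gH*HU, delta*HU,
            -lamM*SM, lamM*SM - sigma*EM,
            alpha*sigma*EM - (phi + gI)*IM,
            (1 - alpha)*sigma*EM - gA*AM,
            phi*IM - (delta + gH)*HM,
            gI*IM + gA*AM + gH*HM, delta*HM]

# 1 million persons, 50 symptomatic unmasked infecteds, 18 months.
N0, I0 = 1.0e6, 50.0
y0 = [(N0 - I0)*(1 - pi_cov), 0, I0, 0, 0, 0, 0,
      (N0 - I0)*pi_cov, 0, 0, 0, 0, 0, 0]
sol = solve_ivp(rhs, (0.0, 540.0), y0, max_step=1.0)
\end{verbatim}

Setting \verb|pi_cov| $= 0$ (or $\epsilon_o = \epsilon_i = 0$) recovers the baseline model of the previous section, providing a convenient consistency check.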
\subsection{Mask efficiency parameters} We assume a roughly linear relationship between the overall filtering efficiency of a mask and clinical efficiency in terms of either inward efficiency (i.e., effect on $\epsilon_i$) or outward efficiency ($\epsilon_o$), based on \cite{Brienen2010}. The fit factor for homemade masks averaged 2 in \cite{Davies2013}, while the fit factor for surgical masks averaged 5. When volunteers coughed into a mask, depending upon sampling method, the resulting number of colony-forming units varied from 17\% to 50\% for homemade masks and 0--30\% for surgical masks, relative to no mask \cite{Davies2013}. Surgical masks reduced \textit{P. aeruginosa}-infected aerosols produced by coughing by over 80\% in cystic fibrosis patients in \cite{Driessche2015}, while surgical masks reduced CFU counts by $>$90\% in a similar study \cite{Stockwell2018}. N95 masks were more effective in both studies. Homemade teacloth masks had an inward efficiency between 58 and 77\% over 3 hours of wear in \cite{Sande2008}, while inward efficiencies ranged over 72--85\% and 98--99\% for surgical and N95-equivalent masks, respectively. Outward efficiency was marginal for teacloth masks, and about 50--70\% for medical masks. Surgical masks worn by tuberculosis patients also reduced the infectiousness of hospital ward air in \cite{Dharmadhikari2012}, and Leung et al. \cite{Leung2020} very recently observed surgical masks to decrease infectious aerosol produced by individuals with seasonal coronaviruses. Manikin studies suggest masks are especially valuable under coughing conditions, for both source control \cite{Patel2016} and prevention \cite{Lai2012}. We therefore estimate that inward mask efficiency could range widely, anywhere from 20--80\% for cloth masks, with $\geq$50\% possibly more typical (higher values are possible for well-made, tightly fitting masks made of optimal materials), 70--90\% typical for surgical masks, and $>$95\% typical for properly worn N95 masks. Outward mask efficiency could range from practically zero to over 80\% for homemade masks, with 50\% perhaps typical, while surgical masks and N95 masks are likely 50--90\% and 70--100\% outwardly protective, respectively. \subsection{Data and model fitting} We use state-level time series for cumulative mortality data compiled by the Center for Systems Science and Engineering at Johns Hopkins University \cite{data}, from January 22, 2020, through April 2, 2020, to calibrate the model initial conditions and infective contact rate, $\beta_0$, as well as $\beta_{min}$ when $\beta(t)$ is taken as an explicit function of time. Other parameters are fixed at the default values in Table \ref{table:parameters}. Parameter fitting was performed using a nonlinear least squares algorithm, implemented via the \verb|lsqnonlin| function in MATLAB. We consider two US states in particular as case studies, New York and Washington, and the total population of each state was defined according to US Census data for July 1, 2019 \cite{Census}. \section{Results} \subsection{Analytic results} Closed-form expressions for the basic reproduction number, $\mathcal{R}_0$, for the baseline model without masks and the full model with masks are given, for $\beta(t) \equiv \beta_0$, in Appendices A and B, respectively.
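As an illustrative numerical check of the baseline expression (our arithmetic here, not a fitted estimate), cancelling the common factor of $\sigma$ in the first term and substituting the default values from Table \ref{table:parameters} with $\beta_0 = 0.5$ day$^{-1}$ gives
\begin{equation*}
\mathcal{R}_0 \approx \frac{0.5 \times 0.5}{0.025 + 1/7} + \frac{0.5 \times 0.5 \times 0.5}{1/7} \approx 1.49 + 0.88 \approx 2.4,
\end{equation*}
so that, at these parameter values, the symptomatic and asymptomatic classes contribute comparably to transmission.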
\subsection{Mask coverage/efficacy/time to adoption in simulated epidemics} \subsubsection{Coverage/efficacy interaction under immediate adoption} \begin{figure} \centering \includegraphics[scale=.4]{base_results.eps} \caption{Relative peak hospitalizations and cumulative mortality under simulated epidemics, using either a base $\beta_0$ = 0.5 or 1.5 day$^{-1}$ and different general mask coverage levels and efficacies (where $\epsilon_o = \epsilon_i = \epsilon$). Results are relative to a base case with no mask use. The left half of the figure gives these metrics as two-dimensional functions of coverage and efficacy. The right half gives these metrics as one-dimensional functions of coverage $\times$ efficacy.} \label{fig:base_results} \end{figure} We run simulated epidemics using either $\beta_0$ = 0.5 or 1.5 day$^{-1}$, with other parameters set to the defaults given in Table \ref{table:parameters}. These parameter sets give early epidemic doubling times (in terms of cumulative cases and deaths) of approximately seven and three days, respectively, corresponding to the case and mortality doubling times observed early on in Washington and New York state. We use as initial conditions a normalized population of 1 million persons, all of whom are initially susceptible, except 50 initially symptomatically infected individuals (i.e., an initial infection rate of 5 out of 100,000) who do not wear masks. We choose some fraction of the population to be initially in the masked class (``mask coverage''), which we also denote $\pi$, and assume $\epsilon_o = \epsilon_i = \epsilon$. The epidemic is allowed to run its course (18 simulated months) under constant conditions, and the outcomes of interest are peak hospitalization, cumulative deaths, and total recovered. These results are normalized against the counterfactual of no mask coverage, and results are presented as heat maps in Figure \ref{fig:base_results}. Note that the product $\epsilon \times \pi$ predicts the effect of mask deployment quite well: Figure \ref{fig:base_results} also shows (relative) peak hospitalizations and cumulative deaths as functions of this product. There is, however, a slight asymmetry between coverage and efficacy, such that increasing the coverage of moderately effective masks is generally more useful than increasing the effectiveness of masks from a starting point of moderate coverage. \subsubsection{Delayed adoption} We run the simulated epidemics described above, supposing the entire population is unmasked until mass mask adoption after some discrete delay. The level of adoption, once imposed, is held constant. We find that a small delay in mask adoption (without any changes in $\beta$) has little effect on peak hospitalized fraction or cumulative deaths, but the ``point of no return'' can rapidly be crossed if mask adoption is delayed until near the time at which the epidemic otherwise crests. This general pattern holds regardless of $\beta_0$, but the point of no return lies further in the future for smaller $\beta_0$. \subsection{Mask use and equivalent $\beta$ reduction} \begin{figure} \centering \includegraphics[scale=.425]{equivalent_beta.eps} \caption{Equivalent $\beta_0$, $\tilde{\beta}_0$ (infectious contact rate) under baseline model dynamics as a function of mask coverage $\times$ efficacy, with the left panel giving the absolute value, and the right giving the ratio of $\tilde{\beta}_0$ to the true $\beta_0$ in the simulation with masks.
That is, simulated epidemics are run with mask coverage and effectiveness ranging from 0 to 1, and the outcomes are tracked as synthetic data. The baseline model without mask dynamics is then fit to this synthetic data, with $\beta_0$ the trainable parameter; the resulting $\beta_0$ is the $\tilde{\beta}_0$. This is done for simulated epidemics with a true $\beta_0$ of 1.5, 1, or 0.5 day$^{-1}$.} \label{fig:equivalent_beta} \end{figure} The relationships between mask coverage, efficacy, and the metrics of epidemic severity considered above are highly nonlinear. The relationship between $\beta_0$ (the infectious contact rate) and such metrics is similarly nonlinear. However, incremental reductions in $\beta_0$, due to social distancing measures, etc., can ultimately synergize with other reductions to yield a meaningful effect on the epidemic. Therefore, we numerically determine what the \textit{equivalent} change in $\beta_0$ under the baseline model would have been under mask use at different coverage/efficacy levels, and we denote the equivalent $\beta_0$ value as $\tilde{\beta}_0$. That is, we numerically simulate an epidemic with and without masks, with a fixed $\beta_0$. Then, we fit the baseline model to this (simulated) case data, yielding a new \textit{equivalent} $\beta_0$, $\tilde{\beta}_0$. An excellent fit giving $\tilde{\beta}_0$ can almost always be obtained, though occasionally the results are extremely sensitive to $\beta_0$ for high mask coverage/efficacy, yielding somewhat poorer fits. Results are summarized in Figure \ref{fig:equivalent_beta}, where the $\tilde{\beta}_0$ values obtained and the relative changes in equivalent $\beta$ (i.e., $\tilde{\beta}_0/\beta_0$) are plotted as functions of efficacy times coverage, $\epsilon \times \pi$, under simulated epidemics with three baseline (true) $\beta_0$ values. From Figure \ref{fig:equivalent_beta}, we see that even 50\% coverage with 50\% effective masks roughly halves the effective disease transmission rate. Widespread adoption, say 80\% coverage, of masks that are only 20\% effective still reduces the effective transmission rate by about one-third. \subsection{Outward vs. inward efficiency} \begin{figure} \centering \includegraphics[scale=.425]{mask_asymmetry.eps} \caption{Epidemiologic outcomes and equivalent $\beta_0$ changes as a function of mask coverage when masks are either much better at blocking outgoing ($\epsilon_o = 0.8$, $\epsilon_i = 0.2$) or incoming ($\epsilon_o = 0.2$, $\epsilon_i = 0.8$) transmission. Results are demonstrated for both mask permutations under simulated epidemics with baseline $\beta_0$ = 0.5 or 1.5 day$^{-1}$.} \label{fig:mask_asymmetry} \end{figure} Figure \ref{fig:mask_asymmetry} demonstrates the effect of mask coverage on peak hospitalizations, cumulative deaths, and equivalent $\beta_0$ values when either $\epsilon_o = 0.2$ and $\epsilon_i = 0.8$, or vice versa (for simulated epidemics using either $\beta_0$ = 0.5 or 1.5 day$^{-1}$). These results suggest that, all else equal, the protection masks afford against acquiring infection ($\epsilon_i$) is actually slightly more important than the protection against transmitting infection ($\epsilon_o$), although there is overall little meaningful asymmetry. \subsection{Masks for the symptomatic alone vs.
the general population} \begin{figure} \centering \includegraphics[scale=.425]{symptomatic_mask_results.eps} \caption{Equivalent $\beta_0$ under the model where \textit{all} symptomatic persons wear a mask (whether they otherwise habitually wear a mask or not), under varying levels of effectiveness for the masks given to the symptomatic ($\epsilon_o^I$), in combination with different degrees of coverage and effectiveness for masks used by the rest of the general public. Results are for simulated epidemics with a baseline $\beta_0$ of 1.5 day$^{-1}$.} \label{fig:symptomatic_mask_results} \end{figure} Finally, we consider numerical experiments where masks are given to all symptomatically infected persons, whether they otherwise habitually wear masks or not (i.e., both $I_U$ and $I_M$ actually wear masks). We explore how universal mask use among symptomatically infected persons interacts with mask coverage among the general population; we let $\epsilon_o^I$ represent the effectiveness of masks worn by the symptomatic, not necessarily equal to $\epsilon_o$. We again run simulated epidemics with no masks and with universal masks among the symptomatic, and then compare different levels of mask coverage in the general (asymptomatic) population. In this section, we use the equivalent $\beta_0$ as our primary metric. Figure \ref{fig:symptomatic_mask_results} shows how this metric varies as a function of the effectiveness of masks given to symptomatic persons, along with the coverage and effectiveness of masks worn by the general public. We also explore how conclusions vary when either 25\%, 50\%, or 75\% of infectious COVID-19 patients are \textit{asymptomatic} (i.e., we vary $\alpha$). Unsurprisingly, the greater the proportion of infected people who are asymptomatic, the more benefit there is to giving the general public masks in addition to those experiencing symptoms. \subsection{Simulated case studies: New York \& Washington states} \begin{figure} \centering \includegraphics[scale=.425]{data_fits.eps} \caption{The left half of the figure gives model predictions and data for Washington state, using either a constant (top panels) or variable (bottom panels) $\beta$, as described in the text. The right half of the figure is similar, but for New York state.} \label{fig:data_fits} \end{figure} \begin{figure} \centering \includegraphics[scale=.45]{sim_washington.eps} \caption{Simulated future (cumulative) death tolls for Washington state, using either a fixed (top panels) or variable (bottom panels) transmission rate, $\beta$, and nine different permutations of general public mask coverage and effectiveness. The y-axes are scaled differently in top and bottom panels.} \label{fig:sim_washington} \end{figure} \begin{figure} \centering \includegraphics[scale=.45]{sim_washington_daily.eps} \caption{Simulated future \textit{daily} death rates for Washington state, using either a fixed (top panels) or variable (bottom panels) transmission rate, $\beta$, and nine different permutations of general public mask coverage and effectiveness.
The y-axes are scaled differently in top and bottom panels.} \label{fig:sim_washington_daily} \end{figure} \begin{figure} \centering \includegraphics[scale=.45]{sim_new_york.eps} \caption{Simulated future (cumulative) death tolls for New York state, using either a fixed (top panels) or variable (bottom panels) transmission rate, $\beta$, and nine different permutations of general public mask coverage and effectiveness.} \label{fig:sim_new_york} \end{figure} \begin{figure} \centering \includegraphics[scale=.45]{sim_new_york_daily.eps} \caption{Simulated future \textit{daily} death rates for New York state, using either a fixed (top panels) or variable (bottom panels) transmission rate, $\beta$, and nine different permutations of general public mask coverage and effectiveness.} \label{fig:sim_new_york_daily} \end{figure} Fitting to cumulative death data, we use the baseline model to determine the best-fit fixed $\beta_0$ and $I(0)$ for New York and Washington state. We use New York state data beginning on March 1, 2020, through April 2, 2020, and Washington state data from February 20, 2020, through April 2, 2020. For New York state, best-fit parameters are $I(0)$ = 208 (range 154--264) and $\beta_0$ = 1.40 (1.35--1.46) day$^{-1}$ under fixed $\beta_0$. For the time-varying $\beta(t)$, we fix $r = 0.03$ day$^{-1}$ and $t_0 = 20$, yielding a best-fit $\beta_0$ = 1.33 (1.24--1.42) day$^{-1}$, $\beta_{min}$ = 0.51 ($-0.25$ to $1.26$) day$^{-1}$, and $I(0)$ = 293 (191--394). For Washington state, parameters are $I(0)$ = 622 (571--673) and $\beta_0$ = 0.50 (0.49--0.52) day$^{-1}$ under fixed $\beta_0$. For time-varying $\beta(t)$, we fix $r = 0.04$ day$^{-1}$ and $t_0 = 0$, to yield a best-fit $\beta_0$ = 1.0 (0.87--1.23) day$^{-1}$, $\beta_{min}$ = 0.10 (0--0.19) day$^{-1}$, and $I(0)$ = 238 (177--300). We fix $r$ and $t_0$, as it is not possible to uniquely identify $r$, $t_0$, and $\beta_{min}$ from death or case data alone (see e.g., \cite{Roda2020} on identifiability problems). Figure \ref{fig:data_fits} gives cumulative death and case data versus the model predictions for the two states, and for the two choices of $\beta(t)$. Note that while modeled and actual cumulative deaths match well, model-predicted cases markedly exceed reported cases in the data, consistent with the notion of broad underreporting. We then consider either fixed $\beta_0$ or time-varying $\beta(t)$, according to the parameters above, in combination with the following \textit{purely hypothetical} scenarios in each state. \begin{enumerate} \itemsep0em \item No masks; the epidemic runs its course unaltered with either $\beta(t) \equiv \beta_0$ fixed or $\beta(t)$ variable as described above. \item The two $\beta$ scenarios are considered in combination with: (1) weak, moderate, or strong deployment of masks, such that $\pi$ = 0.2, 0.5, or 0.8; and (2) weak, moderate, or strong masks, such that $\epsilon$ = 0.2, 0.5, or 0.8. No masks are used up until April 2, 2020, and then these coverage levels are instantaneously imposed. \end{enumerate} This yields 18 scenarios in all (nine mask coverage/efficacy combinations for each of the two underlying $\beta$ scenarios). Following the modeled imposition of masks on April 2, 2020, the scenarios are run for 60 additional simulated days. Figures \ref{fig:sim_washington} and \ref{fig:sim_new_york} summarize the future modeled death toll in each state under the 18 different scenarios, along with historical mortality data.
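Operationally, the instantaneous imposition of coverage on April 2, 2020 amounts to a redistribution of the state vector at that date. One plausible implementation, sketched here under the same state layout as the integration example above (an illustrative assumption, not published code), is:

\begin{verbatim}
import numpy as np

def impose_masks(y, pi_cov):
    # Move a fraction pi_cov of every living unmasked compartment
    # (S, E, I, A, H, R; indices 0-5) into its masked counterpart,
    # which sits 7 slots later; cumulative deaths (index 6) stay put.
    y = np.asarray(y, dtype=float).copy()
    for k in range(6):
        moved = pi_cov * y[k]
        y[k] -= moved
        y[k + 7] += moved
    return y
\end{verbatim}

Each of the nine $(\pi, \epsilon)$ combinations, for each underlying $\beta$ scenario, then simply restarts the integration from the remapped state and runs for 60 additional simulated days.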
Figures \ref{fig:sim_washington_daily} and \ref{fig:sim_new_york_daily} show modeled daily death rates, with deaths peaking sometime in late April in New York state under all scenarios, while deaths could peak anywhere from mid-April to later than May in Washington state. We emphasize that these are hypothetical and exploratory results, with possible death tolls varying dramatically based upon the future course of $\beta(t)$. However, the results do suggest that even modestly effective masks, if widely used, could help ``bend the curve,'' with the relative benefit greater in combination with a lower baseline $\beta_0$ or a stronger underlying trend towards smaller $\beta(t)$ (i.e., in Washington vs. New York). \section{Discussion \& Conclusions} There is considerable ongoing debate on whether to recommend general public face mask use (likely mostly homemade cloth masks or other improvised face coverings) \cite{Chan2020}, and while the situation is in flux, more authorities are recommending public mask use, though they continue to (rightly) cite appreciable uncertainty. With this study, we hope to help inform this debate by providing realistic insight into the potential community-wide impact of widespread face mask use by members of the general population. To that end, we designed a mathematical model, parameterized using data relevant to COVID-19 transmission dynamics in two US states (New York and Washington). The model suggests a nontrivial and possibly quite strong benefit to face mask use by the general public that may vary nonlinearly with mask effectiveness, coverage, and baseline disease transmission intensity. Face masks should be advised not just for those experiencing symptoms: they likely both protect truly healthy wearers and prevent transmission by asymptomatic carriers, and the community-wide benefits are greatest when mask coverage is as near universal as possible. The population-level benefit is also greater the earlier masks are adopted, and at least some benefit is realized across a range of epidemic intensities. Moreover, even if they have, as a sole intervention, little influence on epidemic outcomes, face masks decrease the equivalent effective transmission rate ($\beta_0$ in our model), and thus can stack with other interventions, including social distancing and hygienic measures especially, to ultimately drive nonlinear decreases in epidemic mortality and healthcare system burden. It bears repeating that our model results are consistent with the idea that face masks, while no panacea, may synergize with other non-pharmaceutical control measures and should be used in combination with, and not in lieu of, these. Under simulated epidemics, the effectiveness of face masks in altering the epidemiologic outcomes of peak hospitalization and total deaths is a highly nonlinear function of both mask efficacy and coverage in the population (see Figure \ref{fig:base_results}), with the product of mask efficacy and coverage a good one-dimensional surrogate for the effect.
We have determined how mask use in the full model alters the \textit{equivalent} $\beta_0$, denoted $\tilde{\beta}_0$, under the baseline model (without masks), finding this equivalent $\tilde{\beta}_0$ to vary nearly linearly with efficacy $\times$ coverage (Figure \ref{fig:equivalent_beta}). Masks alone, unless they are highly effective and nearly universal, may have only a small effect (but still nontrivial, in terms of absolute lives saved) in more severe epidemics, such as the ongoing epidemic in New York state. However, the relative benefit to general mask use may increase with other decreases in $\beta_0$, such that masks can synergize with other public health measures. Thus, it is important that masks not be viewed as an alternative, but as a complement, to other public health control measures (including non-pharmaceutical interventions, such as social distancing, self-isolation, etc.). Delaying mask adoption is also detrimental. These factors together indicate that even in areas or states where the COVID-19 burden is low (e.g. the Dakotas), early aggressive action that includes face masks may pay dividends. These general conclusions are illustrated by our simulated case studies, in which we have tuned the infectious contact rate, $\beta$ (either as fixed $\beta_0$ or time-varying $\beta(t)$), to cumulative mortality data for Washington and New York state through April 2, 2020, and imposed hypothetical mask adoption scenarios. The estimated range for $\beta$ is much smaller in Washington state, consistent with this state's much slower epidemic growth rate and doubling time. Model fitting also suggests that total symptomatic cases may be dramatically undercounted in both areas, consistent with prior conclusions on the pandemic \cite{Li2020}. Simulated futures for both states suggest that broad adoption of even weakly effective masks could help avoid many deaths, but the greatest relative death reductions are generally seen when the underlying transmission rate also falls or is low at baseline. Considering a fixed transmission rate, $\beta_0$, 80\% adoption of 20\%, 50\%, and 80\% effective masks reduces cumulative relative (absolute) mortality by 1.8\% (4,419), 17\% (41,317), and 55\% (134,920), respectively, in New York state. In Washington state, relative (absolute) mortality reductions are dramatic, amounting to 65\% (22,262), 91\% (31,157), and 95\% (32,529). When $\beta(t)$ varies with time, New York death reductions are 9\% (21,315), 45\% (103,860), and 74\% (172,460), while the figures for Washington are 24\% (410), 41\% (684), and 48\% (799); in the latter case, the epidemic peaks soon even without masks. Thus, a range of outcomes is possible, but both the absolute and relative benefit to even weak masks can be quite large; when the relative benefit is small, the absolute benefit in terms of lives saved is still highly nontrivial. Most of our model-projected mortality numbers for New York and Washington state are quite high (except for variable $\beta(t)$ in Washington), and likely represent worst-case scenarios as they primarily reflect $\beta$ values early in time. Thus, they may be dramatic overestimates, depending upon these states' populations' ongoing responses to the COVID-19 epidemics.
Nevertheless, the estimated transmission values for the two states, under fixed and variable $\beta(t)$, represent a broad range of possible transmission dynamics and fall within the range estimated in prior studies \cite{Shen2020,Read2020,Li2020}, so we may have some confidence in our general conclusions on the possible range of benefits to masks. Note also that we have restricted our parameter estimation to initial conditions and transmission parameters only, owing to identifiability problems with more complex models and larger parameter groups (see e.g. \cite{Roda2020}). For example, the same death data may be consistent with either a large $\beta_0$ and a low $\delta$ (death rate), or vice versa. Considering the subproblem of general public mask use in addition to mask use for source control by any (known) symptomatic person, we find that general face mask use is still highly beneficial (see Figure \ref{fig:symptomatic_mask_results}). Unsurprisingly, this benefit is greater if a larger proportion of infected people are asymptomatic (i.e., $\alpha$ in the model is smaller). Moreover, it is not the case that masks are helpful exclusively when worn by asymptomatic infectious persons for source control; they provide benefit when worn by (genuinely) healthy people for prevention as well. Indeed, if there is any asymmetry in outward vs. inward mask effectiveness, inward effectiveness is actually slightly preferred, although the direction of this asymmetry matters little with respect to overall epidemiologic outcomes. This inward preference is somewhat surprising, given that $\epsilon_o$ appears more times than $\epsilon_i$ in the model terms giving the forces of infection, which would suggest outward effectiveness to be of greater import at first glance. Furthermore, at least one experimental study \cite{Patel2016} does suggest that masks may be superior at source control, especially under coughing conditions vs. normal tidal breathing, and so any realized benefit of masks in the population may still be more attributable to source control. Our conclusion runs counter to the notion that general public masks are primarily useful in preventing asymptomatic wearers from transmitting disease: masks are valuable as both source control and primary prevention. This may be important to emphasize, as some people who have self-isolated for prolonged periods may reasonably believe that the chance they are asymptomatically infected is very low, and that they therefore do not need a mask if they venture into public, whereas our results indicate they (and the public at large) still stand to benefit. Our theoretical results still must be interpreted with caution, owing to a combination of potentially high rates of noncompliance with mask use in the community, uncertainty with respect to the intrinsic effectiveness of (especially homemade) masks at blocking respiratory droplets and/or aerosols, and even surprising amounts of uncertainty regarding the basic mechanisms of respiratory infection transmission \cite{MacIntyre2017,Bourouiba2020}. Several lines of evidence support the notion that masks can interfere with respiratory virus transmission, including clinical trials in healthcare workers \cite{Offeddu2017,MacIntyre2017}, experimental studies as reviewed in \cite{Sande2008,Lai2012,Davies2013,Dharmadhikari2012,Patel2016}, and case control data from the 2003 SARS epidemic \cite{Wu2004,Lau2004}.
Given the demonstrated efficacy of medical masks in healthcare workers \cite{Offeddu2017}, and their likely superiority over cloth masks in \cite{MacIntyre2015}, it is clearly essential that healthcare workers be prioritized when it comes to the supply of the most effective medical masks. Fortunately, our theoretical results suggest significant (but potentially highly variable) value even to low quality masks when used widely in the community. With social distancing orders in place, essential service providers (such as retail workers, emergency services, and law enforcement) represent a special category of concern, as they constitute a largely unavoidable high contact node in transmission networks: individual public-facing workers may come into relatively close contact with hundreds or thousands of people in the course of a day (e.g. cashiers). Such contact likely exposes these workers to many asymptomatic carriers, and they may in turn, if asymptomatically infected, expose many susceptible members of the general public to potential transmission. Air exposed to multiple infectious persons (e.g. in grocery stores) could also carry a pseudo-steady load of infectious particles, for which masks would be the only plausible prophylactic \cite{Lai2012}. Thus, targeted, highly effective mask use by service workers may be reasonable. We are currently extending the basic model framework presented here to examine this hypothesis. In conclusion, our findings suggest that face mask use should be as nearly universal (i.e., nation-wide) as possible and implemented without delay, even if most masks are homemade and of relatively low quality. This measure could contribute greatly to controlling the COVID-19 pandemic, with the benefit greatest in conjunction with other non-pharmaceutical interventions that reduce community transmission. Despite uncertainty, the potential for benefit, the lack of obvious harm, and the precautionary principle lead us to strongly recommend as close to universal (homemade, unless medical masks can be used without diverting healthcare supply) mask use by the general public as possible. \section*{Acknowledgements} One of the authors (ABG) acknowledges the support, in part, of the Simons Foundation (Award $\#$585022) and the National Science Foundation (Award 1917512). \section*{Appendix A: Basic Reproduction Number for Baseline Model} The basic reproduction numbers for both the baseline and the full model are derived for the special case where $\beta(t) \equiv \beta_0$. The local stability of the disease-free equilibrium (DFE) is explored using the next generation operator method \cite{D-H,D-W}. Using the notation in \cite{D-W}, it follows that the matrices $\mathcal{F}$ of new infection terms and $\mathcal{V}$ of the remaining transfer terms associated with this version of the model are given, respectively, by \begin{equation*} \mathcal{F}= \left[ \begin{array}{*{20}c} 0&\beta_0&\beta_0 \eta\\ 0&0&0\\ 0&0&0\ \end{array} \right], \end{equation*} \begin{equation*} \mathcal{V}= \left[ \begin{array}{*{20}c} \sigma&0&0\\ -\alpha \sigma&(\phi+\gamma_I)&0\\ -(1-\alpha)\sigma&0&\gamma_A\ \end{array} \right]. \end{equation*} \vspace{2mm} \noindent The basic reproduction number of the model, denoted by $\mathcal{R}_0$, is given by \begin{equation} \mathcal{R}_0 = \frac{\beta_0\alpha \sigma}{\sigma(\phi+\gamma_I)}+\frac{\beta_0 \eta(1-\alpha)}{\gamma_A}. \end{equation} \section*{Appendix B: Basic Reproduction Number for Full Model} The local stability of the DFE is explored using the next generation operator method \cite{D-H,D-W}.
Using the notation in \cite{D-W}, it follows that the matrices $\mathcal{F}$ of new infection terms and $\mathcal{V}$ of the remaining transfer terms associated with this version of the model are given, respectively, by \begin{equation*} \mathcal{F}= \left[ \begin{array}{*{20}c} 0&\beta_0&\beta_0 \eta&0&\beta_0(1-\epsilon_o)&\beta_0 (1-\epsilon_o) \eta\\ 0&0&0&0&0&0\\ 0&0&0&0&0&0\\ 0&\beta_0(1-\epsilon_i)&\beta_0 (1-\epsilon_i)\eta&0&\beta_0(1-\epsilon_o)(1-\epsilon_i)&\beta_0(1-\epsilon_o)(1-\epsilon_i) \eta\\ 0&0&0&0&0&0\\ 0&0&0&0&0&0\ \end{array} \right], \end{equation*} \begin{equation*} \mathcal{V}= \left[ \begin{array}{*{20}c} \sigma&0&0&0&0&0\\ -\alpha \sigma&(\phi+\gamma_I)&0&0&0&0\\ -(1-\alpha)\sigma&0&\gamma_A&0&0&0\\ 0&0&0& \sigma&0&0\\ 0&0&0&-\alpha \sigma&(\phi+\gamma_I)&0\\ 0&0&0&-(1-\alpha)\sigma&0&\gamma_A\ \end{array} \right]. \end{equation*} \vspace{2mm} \noindent The basic reproduction number of the model, denoted by $\mathcal{R}_0$, is given by \begin{equation} \mathcal{R}_0 = \beta_0 [1+(1-\epsilon_o)(1-\epsilon_i)]\left(\frac{\alpha \sigma}{\sigma(\phi+\gamma_I)}+\frac{ \eta(1-\alpha)}{\gamma_A}\right). \end{equation}
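As an illustrative reading of this expression (again our arithmetic, not a value quoted in the main text), the mask parameters enter only through the bracketed factor: it equals $2$ in the no-mask limit ($\epsilon_o = \epsilon_i = 0$) and $1 + 0.5 \times 0.5 = 1.25$ when $\epsilon_o = \epsilon_i = 0.5$, i.e., an $\approx 37\%$ reduction in $\mathcal{R}_0$ relative to the no-mask limit of the same expression.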
\section{Introduction} \label{sec:intro} Galactic morphologies vary widely. Broadly speaking, galaxies range from elliptical, dispersion-supported systems to disk-dominated structures where the majority of stars are on well-ordered circular orbits \citep[e.g.][]{Hubble1926,Huertas-Company2011}. The former dominate at both the high-mass end \citep[e.g.][]{Bamford2009} and at the low-mass end \citep[e.g.][]{Wheeler2017}, with disky galaxies emerging primarily at intermediate stellar masses of $\sim10^{9}$--$10^{11}\textnormal{M}_\odot$ \citep[e.g.][]{Simons2015}. The preponderance of ellipticals at the high-mass end is typically associated with these galaxies growing primarily through dry mergers \citep{vanDokkum2005,vanDokkum2010,Rodriguez-Puebla2017}, which scramble stellar orbits and promote bulge formation \citep[e.g.][]{White78,Hopkins2009b,Stewart2009,Hopkins2010}. At the low-mass end, stars are born out of gas with a high degree of pressure support (rather than rotational support), and are then dynamically heated by the repeated cycles of gas blowouts that continue to $z\sim0$ in $\lesssim10^{11}\textnormal{M}_\odot$ halos \citep[][]{Kaufmann2007, Pontzen2012,Governato2012,diCintio2014a,DiCintio2014b,Onorbe2015,Chan2015, Wheeler2017,ElBadry2016,KEB2017,AnglesAcazar2017,Sparre2017,CAFG2017}. At intermediate masses, however, the exact properties of a galaxy and/or halo that drive the morphology of that system remain relatively poorly understood. \citet*[][hereafter MMW98]{MoMaoWhite} reproduced both the $z=0$ population of disk galaxies and the properties of $z\sim2.5$ damped Ly$\alpha$ systems in semi-analytic models by assuming (1) galaxy sizes are determined by their angular momentum, (2) the baryons in a galaxy acquire their angular momentum from the host dark matter (DM) halo, (3) DM halos respond adiabatically to the growth of galaxies, and (4) baryons initially have the same density profile as DM \citep[also see ][]{Fall1980, Fall1983,Romanowsky2012,Fall2013}. This model therefore predicts that the size of a galactic disk (relative to the radius of the halo) depends primarily on the spin of the host DM halo, such that elliptical galaxies reside in low angular momentum halos. Though the \citetalias{MoMaoWhite} paradigm broadly reproduces the galactic population, it has not been possible to directly test it against hydrodynamic simulations that include star formation and feedback, the latter of which appears to be particularly important for regulating the angular momentum (and therefore shapes) of galaxies. Such simulations typically fall into two categories: large-volume simulations such as Illustris \citep{Illustris1,Illustris2}, Illustris-TNG \citep{IllustrisTNG2,IllustrisTNG1}, and EAGLE \citep{EAGLE}; and ``zoom-in'' simulations \citep{Katz1993,Onorbe2014} that focus on individual systems. While the former contain huge populations of galaxies in a given mass bin ($\gg10^3$), each galaxy typically contains $\lesssim10^3$ resolution elements, with spatial resolutions $\gtrsim1~\mathrm{kpc}$, such that it is impossible to fully resolve the vertical scale lengths of MW-like disks. However, recent work with this style of simulation has managed to broadly reproduce the observed Hubble sequence of galaxy types \citep[e.g.][]{Pedrosa2014,Pedrosa2015,Genel2015,Teklu2015,Zavala2016,Genel2017}.
In particular, \citet{Rodriguez-Gomez2017} found that the morphologies of massive systems ($M^\mystar_\mathrm{galaxy}\geq10^{11}\textnormal{M}_\odot$) in the Illustris simulation are determined by their merger histories, while the morphologies of low mass galaxies ($M^\mystar_\mathrm{galaxy}\leq10^{10}\textnormal{M}_\odot$) correlate with their host halo spin. However, they found that neither spin nor merger history could individually explain morphologies at the intermediate mass scale occupied by the MW. Conversely, zoom-in simulations excel at resolving the structure of the galaxy (or galaxies) that they target, but each additional galaxy incurs a significant CPU cost, such that many suites of zoom-in simulations only include a few galaxies at a given mass simulated with a given physical model. There are thus only a few suites of hydrodynamic zoom-in runs (e.g. GIMIC, \citealp{Crain2009}; MAGICC, \citealp{Stinson2012}; NIHAO, \citealp{Wang2015}; Auriga, \citealp{Grand2017}) that have the sample size to test and explore even basic correlations between morphology and halo properties (such as the \citetalias{MoMaoWhite} model). However, some trends have emerged across a number of analyses of various zoom-in simulations, which have generally become successful in recent years at producing realistic disk galaxies \citep{Governato2007,Governato2009,Scannapieco2009,Guedes2011,Aumer2013, Marinacci2014,Fiacconi2015,Murante2015,Colin2016}. A wide variety of authors using different simulation codes agree that stellar feedback is crucial for regulating star formation in low angular momentum material, which otherwise quickly collapses to form overly-massive bulge components \citep{Okamoto2005,Scannapieco2008,Agertz2011,Roskar2014, Agertz2016,Brooks2016}. Some of these authors have examined the conditions that lead to disk formation. For example, \citet{SpringelHernquist2005} and \citet{Robertson2006} found that mergers of gas-rich galaxies can result in an extended star-forming disk, rather than a bulge-dominated system \citep[also see][]{Robertson2008}. Similarly, \citet{Governato2007} found that a substantial disk formed following a gas-rich major merger in a cosmological simulation. \citet{Governato2009} also examined the distribution of \emph{light} at $z=0$ in a galaxy that experienced a major merger at $z=0.8$, and found that this violent merger primarily grows the disk, rather than the bulge. Combined with the passive evolution of the older stars in the bulge, this fresh star formation results in a bright, blue stellar disk. Together, these results suggest that gas-rich major mergers can lead to extended stellar disks \citep{Hopkins2009a}, particularly if they occur at late times when the potential is deep enough to prevent the burst-quench cycles that occur at higher redshift \citep{Muratov2015,Sparre2017,CAFG2017}, which heat stellar orbits and generally inhibit disk formation. Other works have used suites (of varying sizes) of zoom-in simulations to attempt to uncover the underlying drivers of stellar morphology. \citet{Scannapieco2009}, for example, argued that the fraction of mass in the disk does not depend on the spin parameter of the halo, but instead that the individual formation history of each galaxy is crucial to predicting its $z=0$ morphology.
They also showed that spheroidal (bulge) components typically form earlier, while disks tend to form at later times from the inside-out \citep[also see][]{Aumer2013,Sokolowska2017}, in general agreement with observations tracing the evolution of the kinematics of gas in galaxies \citep{Simons2017}. Using a set of 100 MW-mass halos in high-resolution regions embedded within the Millennium \citep{millennium} simulation volume, \citet{Sales2012} similarly found that galaxy morphology was not correlated with the spin of the halo. They then further showed that it also does not monotonically depend on either the halo formation time \citep[which scales with the concentration of a halo; e.g.][]{Ludlow2014} or the merger history: even halos that grow significantly through major mergers can host either a disk-dominated or a bulge-dominated system at $z=0$. Instead, they argued that the star formation history is key: disks tend to form gradually and at late times, while spheroidal components assemble in episodic bursts of star formation that occur following the accretion of gas that is misaligned from the existing galaxy. More recently, \citet{Grand2017} used 30 galaxies from the Auriga Project to argue (1) that disk size \emph{does} correlate with halo spin (though the kinematic disk fraction, which we define below, does not) and (2) that well-aligned mergers of gas-rich satellites promote disk growth. Collectively, the results from large-volume and zoom-in simulations suggest that a picture where stellar morphology is regulated by angular momentum is not necessarily wrong, but that it is likely incomplete. However, the majority of these studies have focused on simulations that adopt a stiff equation of state for the interstellar medium, which could plausibly introduce artifacts into, e.g., the behavior of the gas during galactic mergers, motivating a study with a more physical description of the interstellar medium. Here, we use a sample of fifteen MW-mass galaxies, seven of which are isolated and eight of which are in Local Group-like pairs, from high resolution zoom-in simulations, run with physically-motivated and identical models and parameters for star formation and feedback, to explore correlations and drivers of (primarily) stellar morphology. We first test the \citetalias{MoMaoWhite} predictions against the sizes of our galaxies, then search for physically meaningful correlations between stellar morphology at $z=0$ and various properties of the host halo, including their evolutionary histories. We then explore the evolution of the stellar morphologies and the fraction of stars born in a disk at any given time, to better understand the impact of dynamical interactions and the instantaneous state of the star-forming gas. Finally, we examine the morphology of the gas at $z=0$ to understand the morphologies of stars being born today.
Throughout this work, we assume flat $\Lambda$CDM\ cosmologies, with $h = 0.68$~--~$0.71$, $\Omega_\mathrm{m} = 0.266$~--~$0.31$, $\Omega_\mathrm{b} = 0.0455$~--~$0.048$, and $\Omega_\Lambda = 1 - \Omega_\mathrm{m}$ \citep[e.g.][]{Larson2011,Planck15}.\footnote{The differences in average halo properties due to variances in the cosmological parameters are smaller than the typical halo-to-halo variance within a given cosmology, and, moreover, any systematic variations would be automatically included in the physical parameters we explore here.} We adopt the \citet{Bryan1998} definition of $M_\mathrm{vir}$ and $R_\mathrm{vir}$ throughout, except when computing the \citetalias{MoMaoWhite} predictions, which depend on the properties of the halo within $R_{\rm 200}$, the radius within which the mean density is $200$ times the critical density. For all stellar images and properties presented herein, we use a coordinate system where the $z$-axis is aligned with the shortest principal axis of the moment of inertia tensor of all star particles within $20~\mathrm{kpc}$. For the gas, we align our coordinate system with the shortest principal axis of the gas within $10~\mathrm{kpc}$; we select a smaller radius for the gas because the gas moment of inertia tensor at $20~\mathrm{kpc}$ is occasionally dominated by gas outside the galaxy. We sometimes refer to halo properties in the corresponding dark matter-only simulation; such properties will be indicated as ``DMO.'' We explicitly opt not to make comparisons with observations in this work because our goal is not to demonstrate the ``reasonableness'' of our galactic disks, but rather to understand why and how they came to have their $z=0$ morphologies. However, we note that the FIRE/FIRE-2 physics are broadly successful at reproducing observed galactic properties over a range of galaxy masses, including the stellar mass \emph{vs} halo mass relation \citep{FIRE,FIRE2}, the normalization and scatter of the star formation rate \emph{vs} stellar mass relationship \citep{Sparre2017}, the Kennicutt-Schmidt law \citep{Orr2017}, the mass-metallicity relationship \citep{MaMassMetallicity}, and even the vertical and radial structure (including stellar ages and metallicities) of the MW disk \citep{Ma2016b}. Sanderson et al. (in prep) also show that the masses of the stellar halos around the FIRE-2 MW-mass galaxies are in relative agreement with those measured by \citet{Merritt2016}. Moreover, proper comparisons to observations require a careful conversion from stellar mass to observed light, including the effects of dust attenuation and stellar evolution \citep[e.g. radial variations in the mass-to-light ratio;][]{Wuyts2010}. \citet{Scannapieco2010}, for example, used mock observations to show that photometric bulge/disk decompositions typically overestimate the true disk fractions by at least a factor of two. A detailed comparison of observer-space disk indicators will be the focus of subsequent work(s). This paper is organized as follows. In \S~\ref{sec:sims}, we describe the simulations and briefly review the star formation and feedback models. \S~\ref{sec:quant} presents our measures of morphology, $f^\mystar_\mathrm{disk}$ and $R^{\mystar}_{90}$, and compares them to other (primarily theoretical) quantifiers.
\S~\ref{sec:drivers} compares the actual morphologies to those predicted by the \citetalias{MoMaoWhite} model, then presents correlations between $z=0$ morphologies and various properties of the galaxy and their host halos, while \S~\ref{sec:evolution} explores the evolution of the stellar morphologies and the birth properties of stars. \S~\ref{sec:gas_morph} presents the morphologies of the gas disks in our sample. We summarize our results and conclusions in \S~\ref{sec:conclusions}. \section{Simulations} \label{sec:sims} We analyze hydrodynamic, cosmological zoom-in \citep{Katz1993,Onorbe2014} simulations from the Feedback in Realistic Environments (FIRE)\footnote{\url{http://fire.northwestern.edu}} project, specifically with the improved ``FIRE-2'' version of the code from \citet{FIRE2}. In order to maximize our sample size, we include simulations with varying resolutions, which we discuss below, but the numerical methods and primary physical models are identical across all of the simulations. All of the simulations were run using \texttt{GIZMO} \citep{GIZMO},\footnote{\url{http://www.tapir.caltech.edu/~phopkins/Site/GIZMO.html}} a multi-method gravity plus hydrodynamics code, in meshless finite-mass (``MFM'') mode. This is a mesh-free Lagrangian finite-volume Godunov method which automatically provides adaptive spatial resolution while maintaining conservation of mass, energy, and momentum (for extensive tests, see \citealt{GIZMO}). Gravity is solved with an improved version of the Tree-PM solver from GADGET-3 \citep{Springel2005}, with fully-adaptive (and fully-conservative) gravitational force softenings for gas (so hydrodynamic and force softenings are always self-consistently matched), following \citet{Price2007}. The FIRE physics and source code are {\em exactly} identical to those in previous FIRE-2 simulations; these are described in detail in the papers above but we briefly review them here. Radiative heating and cooling are treated (from $10$--$10^{10}\,$K), including free-free, photo-ionization/recombination, Compton, photoelectric \& dust collisional, cosmic ray, molecular, and metal-line \& fine-structure processes (following each of 11 tracked species independently), accounting for photo-heating by both a UV background \citep{FaucherGiguere2009} and an approximate model for local sources, as well as self-shielding. Star formation occurs only in gas that is identified as self-gravitating according to the \citet{Hopkins2013sf_criteria} criterion, is molecular and self-shielding (following \citealt{Krumholz2011}), is Jeans unstable, and exceeds a minimum density threshold $n_{\rm min}=1000\,{\rm cm^{-3}}$. Once a star particle forms, the simulations explicitly follow several different stellar feedback mechanisms, including (1) local and long-range momentum flux from radiation pressure (in the initial UV/optical single-scattering, and re-radiated light in the IR), (2) energy, momentum, mass and metal injection from SNe (Types Ia and II) and stellar mass loss (both OB and AGB), and (3) photo-ionization and photo-electric heating. Every star particle is treated as a single stellar population with known mass, age, and metallicity, and then all feedback event rates, luminosities and energies, mass-loss rates, and all other quantities are tabulated directly from stellar evolution models ({\sc starburst99}; \citealt{Leitherer1999}), assuming a \citet{Kroupa2001} IMF. We emphasize that the FIRE physics were not tuned to reproduce galaxy sizes or morphologies.
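To make the star-formation eligibility criteria concrete, the following is a schematic sketch (our illustration, not FIRE/\texttt{GIZMO} source code); the textbook thermal Jeans-mass expression and all variable names here are assumptions adopted purely to illustrate the logical structure of the cuts:

\begin{verbatim}
import numpy as np

G, M_H, K_B = 6.674e-8, 1.673e-24, 1.381e-16  # cgs
N_MIN = 1000.0   # minimum density threshold [cm^-3], as in the text

def jeans_mass(T, n_H, mu=2.0):
    # Thermal Jeans mass, M_J = (pi^(5/2)/6) c_s^3 / (G^(3/2) rho^(1/2)),
    # with an isothermal sound speed; a standard stand-in for the
    # resolution-scale Jeans criterion.
    rho = mu * M_H * n_H
    c_s = np.sqrt(K_B * T / (mu * M_H))
    return (np.pi**2.5 / 6.0) * c_s**3 / (G**1.5 * np.sqrt(rho))

def eligible(n_H, T, f_mol, alpha_vir, m_gas):
    # All cuts named in the text must hold simultaneously:
    # locally self-gravitating (virial parameter < 1), molecular and
    # self-shielding (nonzero molecular fraction), Jeans unstable at
    # the resolution scale, and denser than n_min = 1000 cm^-3.
    return ((alpha_vir < 1.0) & (f_mol > 0.0)
            & (m_gas > jeans_mass(T, n_H)) & (n_H >= N_MIN))
\end{verbatim}

In the actual simulations these checks are evaluated per gas element with the sliding-scale virial criterion of \citet{Hopkins2013sf_criteria} and the \citet{Krumholz2011} molecular fraction; the sketch above only illustrates how the cuts combine.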
One of the pairs, \run{Romulus} \run{\&} \run{Remus}, was simulated with subgrid turbulent metal diffusion \citep{Hopkinsmetaldiff,Escala2017}; however, \citet{Su2016} showed that metal diffusion has a small impact on the morphology of a MW-mass galaxy. \begin{table*} \centering \begin{tabular}{lccccccccccc} \vspace*{0.125em} Galaxy & $M_\mathrm{vir}$ & $M^\mystar_\mathrm{galaxy}$ & $M^\mathrm{gas}_\mathrm{galaxy}$ & $f^\mystar_\mathrm{disk}$ & $R^{\mystar}_{90}$ & $Z^{\mystar}_{90}$ & $R_\mathrm{gas}$ & $f^\mystar_{\geq0.7}$ & $m_{i,\,\mathrm{gas}}$ & $m_{\rm DM}$ & Reference \\ & [$10^{12} \textnormal{M}_\odot$] & [$10^{10} \textnormal{M}_\odot$] & [$10^{10} \textnormal{M}_\odot$] & & [kpc] & [kpc] & [kpc] & & [$10^3\textnormal{M}_\odot$] & [$10^4\textnormal{M}_\odot$] & \\ \hline\hline \run{Romeo} & 1.28 & 6.98 & 3.45 & 0.79 & 17.4 & 1.95 & 30.5 & 0.65 & 28 & 15 & A \\ \run{Juliet} & 1.06 & 5.26 & 3.16 & 0.76 & 13.7 & 1.67 & 20.8 & 0.59 & 28 & 15 & A \\ \run{Louise} & 1.10 & 6.39 & 3.23 & 0.69 & 12.2 & 1.5 & 24.2 & 0.56 & 32 & 16 & A \\ \run{Robin} & 1.56 & 5.99 & 2.90 & 0.66 & 9.5 & 1.65 & 20.8 & 0.51 & 57 & 31 & A \\ \run{Thelma} & 1.44 & 11.58 & 2.56 & 0.65 & 11.6 & 2.13 & 11.2 & 0.5 & 32 & 16 & A \\ \run{m12f} & 1.58 & 7.53 & 2.85 & 0.64 & 11.1 & 2.39 & 20.8 & 0.48 & 7.1 & 3.5 & B \\ \run{Romulus} & 1.95 & 13.46 & 3.55 & 0.61 & 11.6 & 2.55 & 22.4 & 0.48 & 32 & 16 & E \\ \run{m12i} & 1.14 & 6.16 & 2.23 & 0.58 & 9.9 & 2.07 & 17.8 & 0.44 & 7.1 & 3.5 & C \\ \run{m12z} & 0.86 & 3.5 & 1.82 & 0.57 & 11.4 & 3.23 & 8.3 & 0.4 & 33 & 17 & D \\ \run{m12c} & 1.27 & 8.09 & 0.92 & 0.56 & 4.3 & 1.08 & 3.6 & 0.42 & 57 & 28 & A \\ \run{Remus} & 1.23 & 10.05 & 0.90 & 0.53 & 7.7 & 1.71 & 8.3 & 0.45 & 32 & 16 & E \\ \run{m12m} & 1.47 & 10.88 & 1.41 & 0.53 & 13.3 & 2.75 & 12.1 & 0.34 & 7.1 & 3.5 & A \\ \run{m12b} & 1.36 & 9.13 & 2.32 & 0.33 & 5.2 & 1.16 & 12.1 & 0.27 & 57 & 28 & A \\ \run{m12q} & 1.61 & 11.23 & 0.56 & 0.21 & 5.4 & 1.57 & 0.9 & 0.11 & 57 & 28 & A \\ \run{Batman} & 1.89 & 10.21 & 1.96 & 0.20 & 2.4 & 0.98 & 11.2 & 0.08 & 57 & 31 & A \\ \end{tabular} \caption{ Properties of the central galaxies and their host halos, sorted by decreasing $f^\mystar_\mathrm{disk}$. In order, columns indicate the host halo virial mass, the stellar and gas mass of the central galaxy (defined in \S\ref{ssec:simprops}), the fraction of stars in the central galaxy on ``disk-like'' orbits ($\epsilon\geq0.5$), and the sizes of the stellar and gas disks (see \S\ref{ssec:defns} for details). To give an estimate of how sensitive the disk fractions/ordering are to our $\epsilon\geq0.5$ cut, the following column lists the fraction of stellar mass with $\epsilon\geq0.7$. The remaining columns list the resolution of each simulation, given by the initial gas particle mass and the mass of the DM particles in the high resolution region. The final column lists the publication each run first appeared in: A: \citet{FIRE2}, B: \citet{GKDisk}, C: \citet{Wetzel2016}, D: \citet{Hafen2016}, E: this work. Galaxies beginning with ``\run{m12}'' are isolated MW-mass analogues, while those with names of individuals are in Local Group-like pairs. \run{Romulus} \run{\&} \run{Remus} and \run{Thelma} \run{\&} \run{Louise} are hydrodynamic re-simulations of the same pairs originally presented (as DMO simulations) in \citet{ELVIS}. Figures~\ref{fig:morph_v_mass}~and~\ref{fig:gas_drivers} plot the relationships between several of these properties. } \label{tab:sims} \end{table*} We focus on the roughly MW-mass galaxies simulated with FIRE-2.
Therefore, we combine the Latte halo (here referred to as \run{m12i}) from \citet{Wetzel2016}; five additional isolated halos simulated with an identical pipeline, two at the same resolution and three with a factor of $8$ higher mass particles; one isolated halo from \citet{Hafen2016}; three pairs of halos in Local Group-like configurations \citep[first reported in][but analyzed in detail here for the first time]{FIRE2}, and one additional pair that has not yet been reported elsewhere. Hosts in Local Group-like pairs were selected with the same criteria as \citet{ELVIS}: isolated pairs with $M_\mathrm{vir}\sim10^{12}\textnormal{M}_\odot$ that are approaching one another. All other hosts were selected purely on the basis of their mass and isolation from other massive halos. The mass resolution of each galaxy is listed in Table~\ref{tab:sims}.\footnote{We list the initial mass of a gas particle in each simulation, but note that due to deposition onto gas particles from stellar mass loss, baryonic particle masses fluctuate slightly about their initial value.} Softening lengths for the gas are fully adaptive, typically down to $1~\mathrm{pc}$, with fixed stellar and DM softening lengths set according to the typical inter-particle spacing. \citet{FIRE2} list the exact values for our runs, but all are sufficient to resolve the disk heights. For each galaxy, we analyze the highest resolution simulation available that has been completed to $z=0$. We demonstrate the stability of our morphologies and sizes with numerical resolution in Appendix~\ref{sec:resolution}: the general trends are robust to resolution, but we caution that quantitative values do change slightly with resolution. Movies showing the formation and evolution of each galaxy in our sample, created using identical pipelines, may be found at \url{http://www.tapir.caltech.edu/~sheagk/firemovies.html}. \section{Quantifying morphology of the FIRE-2 galaxies} \label{sec:quant} There are a wide variety of reasonable definitions for galactic morphology that one can adopt. Broadly speaking, they range from kinematic distinctions (e.g.\ the fraction of stars on circular orbits) to visual quantifiers (e.g.\ photometric bulge-to-disk ratios, \citealp{Sersic1963} indices, and half-light radii). Though the former are straightforward to measure in simulations, they are difficult to determine with observations. The latter, however, are relatively straightforward to extract with photometry, but can only be measured for simulated galaxies if one assumes models for stellar evolution and dust attenuation. Though the relationship between observable morphological measures and kinematic quantifiers is extremely interesting, a full study requires ``mock observations'' of the simulated galaxies (including radiative transfer) and subsequent fitting of those images with the tools typically used by observers. We consider these steps to be beyond the scope of this paper, which instead focuses on the physical drivers of those morphologies, but plan to investigate this question in greater detail in future work. \begin{figure*} \centering \includegraphics[width=\textwidth]{plots/star_disk_frac_starsgas.pdf} \vspace{-3em} \caption{Mass-weighted PDFs (normalized to a maximum of one) of the circularity $\epsilon = j_z/j_\mathrm{c}(E)$ for the stars (colored histograms) and gas (dashed green curves) within the MW-mass FIRE-2 galaxies at $z=0$. 
The stellar distributions are colored by the average cosmic formation time (where $z=0$ corresponds to $t_\mathrm{form}\simeq13$--$14~\mathrm{Gyr}$) of the stars in each bin. The vast majority of the galaxies transition to younger ages at higher circularities; the exceptions are \run{m12q} and \run{Batman}, which form counter-rotating disks at late times. The gray curves show the kinematics measured when stars are born; we discuss them in detail in \S~\ref{sec:evolution}, but emphasize here that almost all stars born in $z=0$ MW-mass galaxies form on disk-like orbits (i.e.\ with very high circularities). We quantify the ``diskiness'' of a galaxy by $f^\mystar_\mathrm{disk}$, defined here as the fraction of stars with $\epsilon \geq 0.5$, indicated by the dashed vertical line. The panels are sorted by decreasing $f^\mystar_\mathrm{disk}$, with the numbers in parentheses indicating the rank they would have if the panels were instead sorted from largest to smallest $R^{\mystar}_{90}$, the 2D radial extent of the stars. We also demonstrate in Figures~\ref{fig:mymorh_vs_othermorph}~and~\ref{fig:morph_vs_sig1} that both $f^\mystar_\mathrm{disk}$ and $R^{\mystar}_{90}$ correlate strongly with other kinematic and spatial measures of morphology. All but one of the galaxies have a well-ordered, rotating gas disk at $z=0$; the exception is \run{m12q}, which is nearly gas-free and is in the process of expelling what little gas remains by $z=0$. The stars display kinematics ranging from well-ordered disks (\run{Romeo}) to dispersion-supported bulges (\run{Batman}). } \label{fig:joverjz} \end{figure*} \subsection{Definitions} \label{ssec:defns} Here, we focus primarily on morphological measures that do not rely on specific profiles or on assumptions regarding the luminosities/colors of individual star particles. We primarily adopt two independent measures of galactic morphology, $f^\mystar_\mathrm{disk}$ and $R^{\mystar}_{90}$. The latter, $R^{\mystar}_{90}$, is the radial extent of the disk. It is defined together with $Z^{\mystar}_{90}$, the height of each galaxy, such that 90\% of the stellar \emph{mass} within $30~\mathrm{kpc}$ of the galactic center is contained within a 2D radius $R^{\mystar}_{90}$ and a height above/below the disk $Z^{\mystar}_{90}$ when the stars are aligned with their principal axes. We then define $M^\mystar_\mathrm{galaxy}$ as the stellar mass within a radial distance $R^{\mystar}_{90}$ and a height above/below the disk $Z^{\mystar}_{90}$.\footnote{We note that this definition differs from the stellar masses listed in \citet{FIRE2}, who quoted total stellar masses within $3\times r^{\mystar}_{50,\,3D}$.} For the purposes of comparing with semi-analytic models (\S\ref{ssec:spin}), we identically define $R^\mystar_{50}$, the 2D radius that encloses 50\% of the stellar mass. We similarly define 3D stellar radii $r^{\mystar}_{90,\,3D}$ and $r^{\mystar}_{50,\,3D}$ as the radii that contain 90\% and 50\% of the stellar mass within $30~\mathrm{kpc}$. Though the same process typically yields accurate results for the gas, it artificially inflates the sizes of extremely gas-poor galaxies (e.g. \run{m12c} and \run{m12q}; see Figure~\ref{fig:gas_viz}). Therefore, we define the radial and vertical extents of the gas disk by first taking the peak of the face-on mass profile, $\mathrm{d}M_\mathrm{gas}(R)/\mathrm{d}\ln R$, as $R_\mathrm{gas}$, then defining $Z_\mathrm{gas}$ as the break in the vertical 1D mass profile of all the gas with a projected radius $R<R_\mathrm{gas}$.
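These geometric size definitions reduce to a few lines of array arithmetic. The following minimal sketch (Python, with hypothetical NumPy array names, and star particles assumed pre-aligned with their principal axes) illustrates the idea; note that it treats the radial and vertical cuts independently, whereas the in-text definition couples them:
\begin{verbatim}
import numpy as np

def stellar_extents(R, z, m, f=0.9, r_max=30.0):
    """R, z: 2D radii and heights of star particles [kpc], pre-aligned
    with the principal axes; m: particle masses. Returns (R_f, Z_f)
    enclosing a fraction f of the stellar mass within r_max (3D).
    The two cuts are computed independently here for simplicity."""
    keep = np.sqrt(R**2 + z**2) < r_max
    R, z, m = R[keep], np.abs(z[keep]), m[keep]

    def enclosing(x):
        order = np.argsort(x)
        cum = np.cumsum(m[order]) / m.sum()
        return x[order][np.searchsorted(cum, f)]

    return enclosing(R), enclosing(z)

def gas_radius(R, m, bins=np.logspace(-1, 2, 60)):
    """R_gas: peak of the face-on profile dM_gas(R)/dln R. The bins
    are log-spaced, so the mass per bin is proportional to dM/dln R."""
    dM, edges = np.histogram(R, bins=bins, weights=m)
    centers = np.sqrt(edges[1:] * edges[:-1])
    return centers[np.argmax(dM)]
\end{verbatim}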
With these extents in hand, $M^\mathrm{gas}_\mathrm{galaxy}$ is defined as the total gas mass within ($R_\mathrm{gas}, Z_\mathrm{gas}$). $M^\mathrm{gas}_\mathrm{galaxy}$ typically changes by only $\sim10$--$20\%$ between this method and the approach we adopt for the stars, with the technique we adopt for the gas yielding a slightly lower $M^\mathrm{gas}_\mathrm{galaxy}$ in all but two cases. All properties are based on centers calculated via a shrinking-spheres approach \citep{Power2003}. Our kinematic morphological definition, $f^\mystar_\mathrm{disk}$, measures the fraction of stars on circular orbits that are aligned with the angular momentum of the galaxy as a whole. Specifically, for each particle within $r^{\mystar}_{90,\,3D}$, we compute the circularity $\epsilon = j_z/j_\mathrm{circ}(E)$ following the method of \citet{Abadi2003}, described in detail in \citet{KEB2017}. For a given mass element, the circularity relates the component of the specific angular momentum that is aligned with the average angular momentum vector of the galaxy, $j_z$, to the specific angular momentum of a circular orbit with the same energy, $j_\mathrm{circ}(E)$. Stars (or gas) with $\epsilon = 1$ are therefore on perfectly circular orbits in the plane of the galaxy, those with $\epsilon = 0$ have orbits that are exactly perpendicular to the galaxy, and those with $\epsilon = -1$ are perfectly counter-rotating. We adopt a cut of $\epsilon \geq 0.5$ to distinguish disk stars, and define $f^\mystar_\mathrm{disk}$ as the mass fraction of stars that meet this cut within $r^{\mystar}_{90,\,3D}$. We find nearly identical disk fractions if we consider all stars within $30~\mathrm{kpc}$: the fractional difference is typically $<5\%$. \begin{figure*} \centering \includegraphics[width=0.95\textwidth]{plots/star_viz_by_star_disk_frac.jpg} \caption{Face-on (top panels) and edge-on (bottom panels) projections of the stars in the FIRE-2 galaxies, again sorted by decreasing $f^\mystar_\mathrm{disk}$, with the radius $R^{\mystar}_{90}$ and height $Z^{\mystar}_{90}$ that contain 90\% of the mass indicated by the white circles/rectangles; the green lines show the equivalent half-mass quantities. Each panel is $80~\mathrm{kpc}$ across; the edge-on projections are $20~\mathrm{kpc}$ tall. Though there is not a direct correspondence between $f^\mystar_\mathrm{disk}$ and disk size, they are clearly correlated (see Figure~\ref{fig:mymorh_vs_othermorph}). } \label{fig:stellarviz} \end{figure*} \subsection{Simulation Properties} \label{ssec:simprops} \begin{figure*} \centering \includegraphics[width=\textwidth]{plots/morphology_vs_morphology.pdf} \caption{ Comparing our adopted measures of morphology, $f^\mystar_\mathrm{disk}$ (black points; left axis) and $R^{\mystar}_{90}$ (open magenta squares; right axis), to other spatial and kinematic measures of morphology, all measured at $z=0$. The panels show $1-(Z^{\mystar}_{90}/R^{\mystar}_{90})$, a measure of the flatness of the stellar distribution; $\lambda_\mystar(R^{\mystar}_{90})$, the \citet{Bullock2001} spin parameter of the stars in the galaxy; $(c/a)_\mystar$, the ratio of the shortest to longest principal axes of the stars, measured within $20~\mathrm{kpc}$; and the disk fraction $f^\mystar_\mathrm{disk}$ \emph{vs} $R^{\mystar}_{90}$. The $r_\mathrm{sp}$ values indicate the median Spearman $r$-coefficient obtained over 100,000 bootstrapping trials; the upper and lower values give the full 95\% confidence interval.
In all but the final column (and in the remainder of this work), we compute the $r$-coefficient by combining the magenta squares and black circles; i.e. $r_\mathrm{sp}$ represents the joint correlation between the property on the $x$ axis and both $f^\mystar_\mathrm{disk}$ and $R^{\mystar}_{90}$ (see $\S\ref{ssec:morphvmorph}$ for details). } \label{fig:mymorh_vs_othermorph} \end{figure*} The distributions of $\epsilon$ for both the stars and gas in each MW-mass FIRE-2 galaxy (i.e.\ within $r^{\mystar}_{90,\,3D}$) are shown in Figure~\ref{fig:joverjz}. Each panel represents an individual galaxy; they are ordered by decreasing $f^\mystar_\mathrm{disk}$, which is indicated for each galaxy. The number in parentheses below $f^\mystar_\mathrm{disk}$ indicates the rank that each galaxy \emph{would} have if the sample were instead ordered by decreasing $R^{\mystar}_{90}$; we compare $f^\mystar_\mathrm{disk}$ and $R^{\mystar}_{90}$ explicitly in Figure~\ref{fig:mymorh_vs_othermorph}. We will retain this sorting by $f^\mystar_\mathrm{disk}$ in other figures to ease comparison. The stellar distributions, which are plotted as the colored histograms in Figure~\ref{fig:joverjz}, vary widely even in our relatively small sample. Without pre-selecting for expected morphology, the MW-mass FIRE-2 sample includes nearly bulge-less disk galaxies (e.g. \run{Romeo} and \run{Juliet}), galaxies with clear bulge and disk components (e.g. \run{Remus} and \run{m12b}), and almost entirely dispersion-supported galaxies (\run{Batman} and \run{m12q}).\footnote{We note that \run{Batman} and \run{m12q} are very compact, with $R^\mystar_{50}\simeq0.5$~and~$1~\mathrm{kpc}$ respectively, and may be outliers in observations \citep[e.g.][]{Shen2003}. As noted, though, we caution against direct comparisons with observations without mock-observing the sample.} The color of each curve at a given $\epsilon$ indicates the average formation time of stars with that $\epsilon$. Other than \run{Batman} and \run{m12q}, which have formed roughly counter-rotating disks at late times, the disk ($\epsilon \geq 0.5$) is almost always composed of younger stars on average, in agreement with previous results that disks in MW-mass galaxies begin to appear at $z\lesssim1$ \citep[e.g.][]{Ma2016b,Mametalgrad}. In some cases, such as \run{m12b} and \run{Remus}, the average ages of the bulge and disk components differ dramatically, while the transition is much smoother in other systems (e.g. \run{m12m} and \run{m12z}). In contrast with the diversity in the kinematics of the stars, the gas distributions (green dashed curves) are almost uniform across this mass-selected sample. Specifically, every galaxy except \run{m12q} (which has not experienced any significant gas accretion since $z\sim0.1$) hosts a thin, primarily rotation-supported gas disk. The gray curves in Figure~\ref{fig:joverjz}, which show the circularity distributions of the stars formed in the galaxy \emph{at birth} (i.e.\ stacking over all snapshots), are similarly uniform, with the vast majority of stars forming with $\abs{\epsilon} \geq 0.5$. We will discuss the kinematics of stars at birth along with the evolution of those kinematics in \S~\ref{sec:evolution}, and we will explore the characteristics of the gas disks in greater detail in \S~\ref{sec:gas_morph}, but we first focus on the $z=0$ stellar morphologies. Visualizations of the stars in all fifteen galaxies are shown in Figure~\ref{fig:stellarviz}, again sorted by $f^\mystar_\mathrm{disk}$.
The top panels show face-on views of each galaxy, while the lower panels visualize the galaxy edge-on. There is a clear trend for galaxies to become more elliptical, less disky, and typically more spatially compact as $f^\mystar_\mathrm{disk}$ decreases. The thick dashed and thin solid circles (rectangles) in the upper (lower) panels of Figure~\ref{fig:stellarviz} indicate ($R^{\mystar}_{90}$, $Z^{\mystar}_{90}$) and ($R^\mystar_{50}$, $Z^\mystar_{50}$), respectively. As intended, the former captures roughly the full extent of the stellar populations. We also plot circular velocity profiles for the full sample in Appendix~\ref{sec:rotcurves}: galaxies with higher disk fractions tend to have flatter, more extended circular velocity curves; conversely, the bulge-dominated systems have rotation curves that peak at small radii, though there is some scatter about that trend. We summarize several basic properties of each galaxy in Table~\ref{tab:sims}, including the host virial mass $M_\mathrm{vir}$, the galaxy stellar mass $M^\mystar_\mathrm{galaxy}$, and the mass in gas within the galaxy $M^\mathrm{gas}_\mathrm{galaxy}$, along with the fraction of stars in the galaxy on circular orbits $f^\mystar_\mathrm{disk}$ and the radial extent of the stars and gas in each galaxy, $R^{\mystar}_{90}$ and $R_\mathrm{gas}$. To give an indication of how sensitive our results are to our definition of ``disk'' stars as those having $\epsilon\geq0.5$, we also list the fraction of stellar mass with $\epsilon\geq0.7$. While the FIRE-2 physics successfully reproduces observed relationships over a wide range of masses \citep[see][and \S\ref{sec:intro}]{FIRE2}, our mass-selected sample does face some tension with observations. First, our galaxies are overly massive for their halo masses: our stellar mass definition places our sample $0.2$--$0.55$~dex above the \citet{Behroozi2013} stellar mass \emph{vs} halo mass relation. Second, at these stellar masses, a non-negligible fraction of observed galaxies are quenched, with strongly suppressed star formation rates \citep[e.g.][]{Salim2007}. However, none of the galaxies in our sample falls into this category: our lowest 100~Myr-averaged specific star formation rate at $z=0$ is $\sim10^{-11.5}$~yr$^{-1}$ (possibly because these simulations do not include AGN feedback; \citealp{Bower2006, Cattaneo2006,Croton2006,Somerville2008}). Though our sample includes only fifteen galaxies, we caution that we may overproduce (or at least over-represent) late-type galaxies, which could potentially alter the correlations we present herein. Furthermore, if quenching correlates with properties of either the galaxy or the halo \citep[e.g. the mass of the DM halo at fixed stellar mass;][]{Woo2013} in a way not captured by the FIRE-2 models, then our analysis will miss those relationships. \subsection{Comparing morphological measures} \label{ssec:morphvmorph} Before examining correlations between various halo/galaxy properties, $f^\mystar_\mathrm{disk}$, and the radial extents of our galaxies, we briefly explore the relationship between our morphological measures ($f^\mystar_\mathrm{disk}$ and $R^{\mystar}_{90}$) and other potential measures of morphology. As discussed in \S\ref{sec:intro} and \ref{ssec:defns}, we do not explicitly compare with observational measures, as that lies beyond the scope of this work.
However, we do note that the stellar radii that we adopt in this paper scale closely with the half-mass radii derived from fitting two-component S\'{e}rsic profiles to these same galaxies (Sanderson et al., in prep), though the bulge-to-disk ratios of those profiles do not correlate particularly well with the true kinematic disk fraction $f^\mystar_\mathrm{disk}$. In addition to the properties we discuss above, there are a number of viable morphological definitions we could adopt, such as the angular momentum of the stars, the thickness of the stellar disk, or the shape of the stellar mass distribution. We examine how these properties correlate with $f^\mystar_\mathrm{disk}$ and $R^{\mystar}_{90}$ in Figure~\ref{fig:mymorh_vs_othermorph}. In the first three panels, the filled black circles plot each quantity against $f^\mystar_\mathrm{disk}$ (left $y$ axis), while the open magenta squares correspond to the right $y$ axis and indicate $R^{\mystar}_{90}$. The final panel shows $f^\mystar_\mathrm{disk}$ and $R^{\mystar}_{90}$ against one another and therefore omits the black points. \begin{figure} \centering \includegraphics[width=\columnwidth]{plots/morph_vs_sigma1_2panel.pdf} \caption{The stellar surface density $\Sigma_1$ within a projected $1~\mathrm{kpc}$ of the center of the galaxy \emph{vs} our adopted morphological measures, $f^\mystar_\mathrm{disk}$ on the left and $R^{\mystar}_{90}$ on the right. Black circles show $\Sigma_1$ as measured from a perfectly face-on view, while the open symbols indicate $\Sigma_1$ measured along orthogonal edge-on projections. Regardless of viewing angle, the projected central density of the galaxy anti-correlates strongly with both $f^\mystar_\mathrm{disk}$ and $R^{\mystar}_{90}$. The anti-correlation arises because higher central densities necessarily imply more compact (and therefore less disky) galaxies at fixed mass.} \label{fig:morph_vs_sig1} \end{figure} Any of the properties shown in Figure~\ref{fig:mymorh_vs_othermorph} (along with other measures that we do not plot here, such as the stellar radius scaled by the virial radius, the specific angular momentum of the stars, the radius where the log-slope of the stellar density profile equals $-3$, or the kinematic bulge-to-disk ratio) would be a viable alternative to $f^\mystar_\mathrm{disk}$ and $R^{\mystar}_{90}$. The correlations are unsurprising: at roughly fixed mass, galaxies that are radially extended are also flatter, have larger stellar spin parameters, and have a greater fraction of rotation support. The final panel in Figure~\ref{fig:mymorh_vs_othermorph} indicates the relationship between $f^\mystar_\mathrm{disk}$ and $R^{\mystar}_{90}$. As suggested by the visualizations in Figure~\ref{fig:stellarviz}, the radial extent of the stars correlates with the degree of order in the disk, but with non-trivial scatter, motivating our analysis of both properties throughout. \begin{figure*} \centering \includegraphics[width=\textwidth]{plots/morphology_vs_masses.pdf} \caption{The $z=0$ relationship between our morphological measures and (\textit{a}) the virial mass of the halo $M_\mathrm{vir}$, (\textit{b}) the stellar mass of the galaxy $M^\mystar_\mathrm{galaxy}$, (\textit{c}) the gas mass within the galaxy $M^\mathrm{gas}_\mathrm{galaxy}$, and (\textit{d}) the total gas mass within $R_\mathrm{vir}$.
Morphology in the FIRE-2 simulations is not correlated with $M_\mathrm{vir}$, $M^\mystar_\mathrm{galaxy}$, or the total gas mass within $R_\mathrm{vir}$ across this narrow mass range, but gas-rich galaxies are more likely to be disky today; we discuss this correlation in more detail below.} \label{fig:morph_v_mass} \end{figure*} The text in each panel (and in similar figures below) indicates the Spearman $r$-coefficient, $r_\mathrm{sp}$, which quantifies the monotonicity of each relationship. We compute $r_\mathrm{sp}$ on the joint relationship with $f^\mystar_\mathrm{disk}$ and $R^{\mystar}_{90}$: we assign each galaxy a rank based on $f^\mystar_\mathrm{disk}$ and a rank based on $R^{\mystar}_{90}$, then combine those ranked datasets and compute $r_\mathrm{sp}$ against two copies of the ranked $x$ values of each plot. Our qualitative conclusions are unchanged if we compute $r_\mathrm{sp}$ against $f^\mystar_\mathrm{disk}$ and $R^{\mystar}_{90}$ independently. For each relationship, we perform 100,000 bootstrapping trials (randomly drawing $N$ points, with replacement). We report the median $r_\mathrm{sp}$ of those trials, and the values in superscripts and subscripts indicate the full 95\% confidence interval for those trials. We provide identical statistics throughout the remainder of this work. Based on the correlations reported in Figure~\ref{fig:mymorh_vs_othermorph}, which plots correlations between morphological properties that we expect to be reasonably well correlated, we adopt a rough criterion for a ``tight'' correlation: $\abs{r_\mathrm{sp}}\gtrsim 0.8$, with a lower 95\% bound on the confidence interval of $\abs{r_\mathrm{sp}}\gtrsim 0.6$. Before turning to the drivers of stellar morphology, we briefly examine one non-parametric morphological measure that is relatively easy to measure in both simulations and observations: $\Sigma_1$, the stellar surface density within the central $1~\mathrm{kpc}$ \citep[e.g.][]{Cheung2012,Bell2012,Fang2013}. For this mass-selected sample, we find a tight relationship between $\Sigma_1$ and morphology at $z=0$: Figure~\ref{fig:morph_vs_sig1} shows $\Sigma_1$ as measured edge-on in open symbols and face-on in black circles. The viewing angle has a small impact, though edge-on projections are always higher, as expected. The anti-correlation between $\Sigma_1$ and the true morphology of a galaxy is striking, though somewhat unsurprising: for a roughly fixed stellar mass, galaxies with high central densities must be more compact, and Figure~\ref{fig:mymorh_vs_othermorph} demonstrated that radial extent and degree of order in the stellar orbits are well correlated, again at fixed galaxy mass. We therefore conclude that $\Sigma_1$ is a reliable morphological measure, at least for roughly MW-mass galaxies. However, we caution that the low-lying outlier from the trend is \run{m12z}, our lowest-mass galaxy, suggesting the possible emergence of a mass trend. Moreover, while some analyses have associated high $\Sigma_1$ with galactic quenching \citep[e.g.][]{Woo2015,Woo2017}, all of our galaxies show some level of continued star formation to $z=0$ (as noted above). \section{Drivers of Stellar Morphology} \label{sec:drivers} We now turn to correlations between stellar morphology, quantified primarily by $f^\mystar_\mathrm{disk}$ and $R^{\mystar}_{90}$, and various properties of the galaxy and the host halo, both in the hydrodynamic simulation and in the analogous dark matter-only (DMO) run.
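Throughout this section, the quoted statistics follow the joint, bootstrapped $r_\mathrm{sp}$ construction defined in \S\ref{ssec:morphvmorph}; as a concrete illustration, a minimal sketch of that construction (Python, with hypothetical array names) is:
\begin{verbatim}
import numpy as np
from scipy.stats import rankdata, pearsonr

def joint_rsp(x, fdisk, r90, n_boot=100_000, seed=0):
    """Median and 95% CI of the joint Spearman coefficient between a
    property x and both morphological measures: each quantity is
    ranked, the ranked f_disk and R90 are stacked against two copies
    of the ranked x, and the rank pairs are bootstrapped with
    replacement."""
    xr = rankdata(x)
    xs = np.concatenate([xr, xr])
    ys = np.concatenate([rankdata(fdisk), rankdata(r90)])
    rng = np.random.default_rng(seed)
    n = len(xs)
    trials = np.empty(n_boot)
    for i in range(n_boot):
        idx = rng.integers(0, n, n)
        trials[i] = pearsonr(xs[idx], ys[idx])[0]
    lo, med, hi = np.percentile(trials, [2.5, 50, 97.5])
    return med, lo, hi
\end{verbatim}
For our fifteen galaxies, each trial therefore resamples 30 rank pairs with replacement.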
In short, we search for physical drivers of and explanations for the $z=0$ morphologies of each of the galaxies in our sample. \subsection{Mass (around MW masses)} \label{ssec:mass} We begin by checking whether the morphologies of the FIRE-2 MW-mass galaxies are driven by either the halo or the galaxy mass. Figure~\ref{fig:morph_v_mass} shows the virial mass $M_\mathrm{vir}$, the stellar mass $M^\mystar_\mathrm{galaxy}$, the gas mass $M^\mathrm{gas}_\mathrm{galaxy}$, and the total gas mass within $R_\mathrm{vir}$, all at $z = 0$. As in Figure~\ref{fig:mymorh_vs_othermorph}, black points correspond to the left axis and plot $f^\mystar_\mathrm{disk}$, while magenta squares indicate $R^{\mystar}_{90}$ (right axis). Of the masses shown in Figure~\ref{fig:morph_v_mass}, only $M^\mathrm{gas}_\mathrm{galaxy}$ displays evidence for a correlation with the $z=0$ stellar morphology. Though we do not plot it, we also find no correlation between the total baryonic mass within $R_\mathrm{vir}$ and kinematics/morphology ($r_\mathrm{sp} = -0.6$--$0.06$). There is evidence for a correlation with the total mass in cold gas (defined as $T<10^5$ K) within $R_\mathrm{vir}$ ($r_\mathrm{sp} = 0.52$--$0.85$), but because the cold gas is predominantly in the galaxy, this correlation is driven by $M^\mathrm{gas}_\mathrm{galaxy}$. We will return to the correlation with $M^\mathrm{gas}_\mathrm{galaxy}$ below, but here we emphasize that the morphologies of the MW-mass FIRE-2 galaxies do not correlate with the halo mass, the stellar mass of the galaxy, or the total baryonic mass within $R_\mathrm{vir}$. Note that over a large dynamic range, however, there is a strong mass dependence (e.g. \citealt{KEB2017} showed that the FIRE-2 dwarfs are spherical and dispersion-dominated). \subsection{Spin (and other DM properties)} \label{ssec:spin} As discussed in \S\ref{sec:intro}, many authors have pointed out that, if baryons acquire their angular momentum from their dark matter halos and begin with the same density profile as those halos, then the size of the stellar disk should be predicted by a combination of the \citet{PeeblesSpin} spin parameter $\lambda_{\rm Peebles}$, the size of the host halo, the fraction of angular momentum in the halo that resides in the disk $\mathcal{j}_{\rm d}$, and the fraction of halo mass that resides in the disk $\mathcal{m}_{\rm d}$. In the simpler model of \citetalias{MoMaoWhite}, wherein the galaxy is hosted by a static isothermal sphere, \begin{equation} R_\mathrm{d} = 2^{-1/2} (\mathcal{j}_{\rm d}/\mathcal{m}_{\rm d}) \lambda_{\rm Peebles} R_{\rm 200}. \label{eqn:mmwRd} \end{equation} In their more complete model, where the disk grows adiabatically within an initially NFW \citep{Navarro1996} halo, the disk radius is modified by two multiplicative functions: the first arises from the change in the total energy of the NFW profile relative to an isothermal sphere, and the second from the (assumed) adiabatic contraction of the halo in response to the growth of the disk. If such a relationship is borne out by the FIRE-2 simulations, and if $\mathcal{j}_{\rm d}/\mathcal{m}_{\rm d} = j_{\rm disk}/j_{\rm halo}$, the ratio of the specific angular momentum of the disk to that of the halo, is roughly constant (i.e.
if the baryons acquire their angular momentum from the halo, as assumed in \citetalias{MoMaoWhite}), then one can accurately populate halos in DMO simulations with galaxies of the proper size and, by virtue of the correlation between $R^{\mystar}_{90}$ and $f^\mystar_\mathrm{disk}$, roughly the proper disk fraction. Moreover, validation of the model would provide evidence for the overall theory of angular momentum-regulated disk growth. Figure~\ref{fig:mmw_comparison} tests this picture by comparing the half-mass radius predicted by the models of \citetalias{MoMaoWhite} to the half-mass radius of each simulated galaxy. Circles show the results of the isothermal model (Equation~\ref{eqn:mmwRd}), and squares plot the full model assuming an adiabatically contracted NFW halo (Equation 28 of \citetalias{MoMaoWhite}). In order to test the assumption that galaxies acquire their angular momentum from the dark matter, the left panel uses properties available from the DMO simulations and fixes $\mathcal{j}_{\rm d}=\mathcal{m}_{\rm d}$.\footnote{We adopt $\mathcal{j}_{\rm d} = \mathcal{m}_{\rm d} = 0.1$, but our overall results are insensitive to the chosen value.} Given the relatively small variations in $R_{\rm 200}$ within our sample, the left panel implicitly tests whether disk size is driven by the spin of the halo at $z=0$. Neither model is able to reproduce the actual sizes of our galaxies, in line with the general results of zoom-in simulations discussed in \S~\ref{sec:intro}. The bracketed numbers in the legends indicate the average fractional error of each set of points relative to the simulations: the isothermal \citetalias{MoMaoWhite} model dramatically over-predicts the sizes of the galaxies when assuming $\mathcal{j}_{\rm d}=\mathcal{m}_{\rm d}$. The contracted-NFW halo model produces a reasonable order-of-magnitude estimate of $R^\mystar_{50}$, but its actual predictive value is quite poor. \begin{figure} \centering \includegraphics[width=\columnwidth]{plots/MMW_comparison.pdf} \caption{The half-mass radius of the FIRE-2 galaxies \emph{vs} the disk radius predicted by the \citetalias{MoMaoWhite} model for each galaxy (see Eq.~\ref{eqn:mmwRd}). Circles show results assuming an isothermal potential, and squares indicate the full model (an adiabatically contracted NFW profile). The left panel uses the properties of the halo available in the DMO simulation (fixing $\mathcal{j}_{\rm d} = \mathcal{m}_{\rm d}$), and therefore tests the assumption that the baryons acquire their angular momentum from the DM halo (given the small variations in $R_{\rm 200}$ in our sample). The right panel relaxes this assumption by adopting $\mathcal{j}_{\rm d}$ and $\mathcal{m}_{\rm d}$ from the hydrodynamic simulations, and instead tests whether the galaxies are well-described by a rotationally-supported disk in a fixed potential. The numbers in brackets indicate the average fractional error of the model relative to the simulations.} \label{fig:mmw_comparison} \end{figure} The right panel relaxes the assumption that the angular momentum of the galaxy is correlated with the spin of the halo and instead measures the galactic angular momentum directly by adopting $\mathcal{j}_{\rm d}$ and $\mathcal{m}_{\rm d}$ (along with the remainder of the halo properties) from the hydrodynamic simulations. We calculate $\mathcal{j}_{\rm d}$ ($\mathcal{m}_{\rm d}$) from the simulations as the ratio of the stellar angular momentum (mass) within $R^{\mystar}_{90}$ to the total angular momentum (mass) within $R_{\rm 200}$.
By doing so, we measure the true angular momentum of the galaxy (i.e., independent of the spin of the halo) and therefore test the assumption that a rotationally supported disk in a fixed gravitational potential (determined by a simple NFW or isothermal model) provides a reasonable approximation. Even under this assumption, the predictions are only moderately accurate, though we do find order-of-magnitude agreement across this mass range, in line with observational results that show a correlation between virial radius (i.e. halo mass) and galaxy size \citep[e.g.][]{Kravtsov2013}. The relative success of the isothermal model (compared to the NFW model) may suggest that the density profiles are closer to isothermal spheres at their centers, but we see no strong evidence for this in the actual profiles (though see \citealp{Chan2015}, who found that the total density profiles at the centers of MW-mass FIRE-1 galaxies are well-fit by an isothermal sphere). \begin{figure*} \includegraphics[width=\textwidth]{plots/growth_histories.pdf} \vspace{-1.5em} \caption{Evolutionary histories of three representative galaxies; the entire sample is shown in Appendix~\ref{sec:allgrowth}. Each curve is normalized to its maximum value. The clearest trend, which holds generically for our sample, is that galaxies that have higher cold gas fractions and more gas available to form stars at late times (relative to their mass at early times) form the majority of their stars in disky configurations. This follows directly from the fact that star formation is chaotic and bursty at high redshift, but settles into an ordered, disk-like configuration after $z\sim1$ (for most galaxies that will be MW-mass at $z=0$). We quantify these trends for the full sample in Figure~\ref{fig:morph_vs_gashistories}.} \label{fig:growth_hist} \end{figure*} Though we adopt all of the halo parameters from the hydrodynamic simulation in the right panel ($R_{\rm 200}$, $\lambda_\mathrm{Peebles}$, $c$, $\mathcal{j}_{\rm d}$, and $\mathcal{m}_{\rm d}$), the majority of the changes are driven by allowing $\mathcal{j}_{\rm d}$ and $\mathcal{m}_{\rm d}$ to vary freely and independently: even for our sample of fifteen galaxies, $\mathcal{j}_{\rm d}$, $\mathcal{m}_{\rm d}$, and their ratio vary by nearly an order of magnitude ($0.005\lesssim\mathcal{j}_{\rm d}\lesssim0.07$, $0.04\lesssim\mathcal{m}_{\rm d}\lesssim0.09$, and $0.1\lesssim\mathcal{j}_{\rm d}/\mathcal{m}_{\rm d}\lesssim0.75$). Galaxies acquire a broad range of the specific angular momentum available in their hosts, and one must know the true $\mathcal{j}_{\rm d}$ and $\mathcal{m}_{\rm d}$ in order to even roughly predict the radial extent of a given galaxy with the \citetalias{MoMaoWhite} model. We are unable to recover a tight correlation with a single value of $\mathcal{j}_{\rm d}$ and $\mathcal{m}_{\rm d}$ for all galaxies (even when $\mathcal{j}_{\rm d}\neq\mathcal{m}_{\rm d}$). There is some evidence for a correlation between $\mathcal{j}_{\rm d}$ and the $1~\mathrm{Mpc}$ environment: the median $\mathcal{j}_{\rm d}$ of the galaxies in Local Group-like pairs is twice that of the isolated sample. Accordingly, six of the seven diskiest galaxies in our sample are in Local Group-like pairs. However, our sample size is too small to make definitive statements.
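For concreteness, the isothermal prediction of Equation~\ref{eqn:mmwRd} tested above reduces to a one-line function; a minimal sketch (Python, with illustrative values; this is not the full adiabatically contracted NFW model) is:
\begin{verbatim}
import numpy as np

def mmw_isothermal_rd(lam_peebles, R200, jd=0.1, md=0.1):
    """Isothermal Mo, Mao & White (1998) disk scale radius:
    R_d = 2**-0.5 * (jd/md) * lambda_Peebles * R200.
    Setting jd = md mimics the DMO-only test (left panel of the
    comparison figure); passing the measured jd and md mimics the
    right panel."""
    return (jd / md) * lam_peebles * R200 / np.sqrt(2)

# Illustrative numbers: lambda = 0.035, R200 = 240 kpc, and jd = md
# give R_d ~ 5.9 kpc.
\end{verbatim}
Equation 28 of \citetalias{MoMaoWhite} (the contracted-NFW case) multiplies this estimate by the two correction factors described above.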
In our parameter exploration, we have generally found that, of the properties of the $z=0$ DMO halo, $\lambda$ (or $\lambda_{\rm Peebles}$) correlates most tightly with morphology ($r_\mathrm{sp} = 0.05$--$0.7$ for the latter), though the correlation is weak, with a large degree of scatter about the average relationship: our largest, most ordered galaxy has an average spin parameter. While $\lambda(z=0)$ is relatively stable between the DMO and baryonic simulations,\footnote{The average fractional difference in our sample is $\sim1\%$.} $\lambda_{\rm DMO}$ alone is insufficient to predict morphologies without reducing the scatter by multiplying by the true values of $\mathcal{j}_{\rm d}$ and $\mathcal{m}_{\rm d}$. Given the difficulty of predicting the morphology of a galaxy with only the information available about the host halo in a DMO simulation, we therefore turn our attention to identifying physical drivers of the morphology in the hydrodynamic simulations. That is, we do not attempt to predict morphologies, but rather to explain them through galactic/halo properties at all redshifts. \subsection{Gas fraction and accretion history} \label{ssec:growthhist} Figures~\ref{fig:growth_hist}~and~\ref{fig:morph_vs_gashistories} represent the culmination of these searches. The former, Figure~\ref{fig:growth_hist}, shows the normalized mass accretion histories of three representative galaxies, \run{Batman}, \run{m12i}, and \run{Juliet} (growth histories for the full sample are plotted in Appendix~\ref{sec:allgrowth}). The black curves indicate the stellar mass within $R^{\mystar}_{90}$, the blue curves show the total cold gas within $R_\mathrm{vir}$ (where ``cold'' is again defined as $T<10^5$ K), and the magenta curves indicate the ratio of the cold gas mass to the stellar mass of the galaxy (i.e.\ the ratio of the blue and black curves without normalizing). Finally, the cyan and orange curves indicate the total gas mass within $R_\mathrm{vir}$ and the total halo mass, respectively. Each curve is normalized to its maximum value. We find qualitatively identical results measuring the total gas mass near the galactic center via the same iterative process we adopt for the stars: the vast majority of the cold gas in the halo at any given time is in the galactic disk. However, we opt to use the total cold gas mass within the virial radius, $M^\mathrm{gas}_\mathrm{cold}$, because this iterative process can falsely capture hot gas in the halo, as discussed earlier. Of course, every galaxy has a unique evolutionary history, and our results suggest that history is instrumental in shaping the $z=0$ galaxy. However, there are trends that hold generically across our sample, which are exemplified by the three panels in Figure~\ref{fig:growth_hist}. First, while the total gas in the halo closely tracks the total halo mass for all galaxies, the behavior of the cold gas in the halo, i.e. the fuel for star formation, varies strongly with $f^\mystar_\mathrm{disk}$. Galaxies similar to \run{Batman} with low $f^\mystar_\mathrm{disk}$ tend to reach their maximum cold gas mass at early times (both in absolute terms and in comparison to the growth of their dark matter halos), when star formation is chaotic and bursty, and quickly exhaust (or heat) that gas. They therefore form more stars in bulge-like configurations, and stars that are formed in a disk are subject to greater dynamical disruption from the powerful feedback events.
Galaxies that reach their maximum cold gas mass at early times but maintain a relatively large reservoir for star formation until late times, either through mergers or accretion from the circumgalactic medium (CGM), form similar fractions of stars during the bursty period and at late times ($z\lesssim1$), when the star-forming gas has settled into a rotation-supported disk, as is the case with \run{m12i}. Finally, galaxies such as \run{Juliet} that are disk-dominated tend to have relatively little gas and form relatively few stars during the bursty phase. These trends are also evident in the gas fractions: bulge-dominated systems tend to reach gas fractions $\lesssim0.25$ at or before $z\sim1$, while disk-dominated systems maintain high gas fractions until late times. \subsection{Galaxy mergers} \label{ssec:galmergers} Second, comparing the evolution of \run{Batman} and \run{Juliet} reveals the varying impacts of mergers on the $z=0$ morphologies. \run{Batman}, which experiences a double merger at $z\sim2$ ($t\sim3~\mathrm{Gyr}$; revealed by the sharp up-tick in $M_\mathrm{vir}$) when the halo mass is relatively low, has a large amount of cold gas dumped into the halo. That gas then forms nearly half of the $z=0$ stellar mass over the next $\sim$1--2~Gyr, the majority of which ends up as a compact, dispersion-supported system. Other bulge-dominated galaxies in our sample typically experience similarly large mergers at early times (when the systems have $M_\mathrm{vir}\ll10^{12}\textnormal{M}_\odot$ and therefore no extended ``hot halos'' of gas; see, e.g., \citealp{Keres2005}). Those mergers then tend to funnel their gas into the center of the galaxy relatively rapidly \citep{Barnes1991,Bournaud2011,Zolotov2015}. Mergers that occur later (when the hot halo is in place), however, tend to have their gas stripped off and incorporated into the central galaxy more gradually. \run{Juliet}, for example, has a gas-rich halo fall inside the virial radius at $z\sim1$, but the gas in that subhalo is slowly stripped off and accreted onto the central disk over the course of several pericentric passages, feeding an extended star-forming disk. Overall, visual inspection of the movies indicates that large \emph{galactic} mergers do typically lead to bulges in our sample. This is particularly true when those mergers occur on first infall (i.e.\ with low angular momentum), before the gas in the merging system can be gradually stripped and mixed with the halo, then more gently added onto the host (in agreement with the results of \citealp{Sales2012} regarding morphology as a function of the dominant gas accretion mode). The prominent bulges of \run{Batman}, \run{m12q}, \run{m12b}, and \run{Remus}, for example, were all created by such events. However, the sizes of the bulges built by these events vary: \run{Robin} experiences such a merger at $z\sim2$, but the overall masses are low at that time, leading to a small bulge relative to the disk that grows later. \begin{figure*} \centering \includegraphics[width=\textwidth]{plots/morphology_vs_gashistory.pdf} \caption{The morphologies of our galaxies as a function of several parameterizations of the gas accretion histories of the galaxies and their host halos.
In order, the columns plot the spin parameter $\lambda$ of the gas within $R_\mathrm{vir}$ at the scale factor at which half of the stars in the $z=0$ galaxy had formed, $a_{\mystar}^{50}$; the average amount of cold gas in the halo after $z=1$; the scale factor at which the cold gas in the halo first reaches 75\% of its peak; and the scale factor at which the halo reaches 50\% of its $z=0$ mass, $a_{\rm halo}^{50}$. The first three, which contain information about the accretion history and buildup either of the galaxy itself or of the material that helps to build the galaxy, all correlate reasonably tightly with $z=0$ morphology: the first panel actually shows the tightest correlation we have identified. The final column, however, indicates that the history of the DM is less meaningful: the DM accretion history and halo merger history contain little information about the $z=0$ morphology (also see \S\ref{ssec:dmopred}).} \label{fig:morph_vs_gashistories} \end{figure*} Because only a small fraction of the stellar mass in the central galaxy is formed ex situ (i.e.\ brought in by mergers; typically less than 5\%), accreted stars do not contribute significantly to $f^\mystar_\mathrm{disk}$ \citep[also see][]{AnglesAcazar2017}. The actual ex situ fraction is also not correlated with morphology ($r_\mathrm{sp}=-0.3$--$0.55$), and in the majority of our sample, the deposited ex situ stars are typically dispersion supported. However, a few galaxies in our sample have accreted stars that contribute to the disk: three galaxies have their ex situ circularity distributions peak at $\epsilon\sim0.6$, and in particular, our two diskiest galaxies have even their accreted stars on disky orbits, with $\epsilon$ peaking at $\sim0.9$ and $1$. \subsection{Secular evolution and bulges} \label{ssec:secevol} Not all bulges are built by mergers, however: none of \run{m12m}, \run{m12i}, or \run{Louise} experiences a major head-on galactic merger, but all host dispersion-supported stars today. The bulge in \run{m12m} is built by a secular bar-buckling event (Sanderson et al., in prep.). Meanwhile, \run{m12i} hosts a compact gas disk that initially loses angular momentum in a series of mergers, but then slowly builds up a larger disk at late times. Therefore, systems that have undergone direct galactic mergers are more likely to host a bulge than those evolving under purely secular evolution (internal effects), but the exact morphology depends on the interplay between the merger history, star formation history, and angular momentum of the gas that builds the disk. \subsection{Clump sinking/migration} \label{ssec:clumps} Both observations \citep[e.g.][and references therein]{ForsterSchreiber2011} and simulations \citep[e.g.][]{Mandelker2017} of star-forming, disky galaxies at $z\sim2$ have found evidence for large ($\sim10^7$--$10^9\textnormal{M}_\odot$) gas clumps that may migrate to the centers of their host disks to form secular bulges. However, the galaxies we study here are low enough mass at $z\sim2$ that we simply do not expect or see this channel of bulge formation. Moreover, we note that \citet{Oklopcic2017} showed that while giant clumps do form in massive galaxies at $z\sim2$ in the FIRE simulations, there is no evidence that these clumps have a net inward migration that builds a bulge, even at higher masses than we study here.
\subsection{Misalignments and counter-rotating disks} \label{ssec:misalignment} By examining the angular momentum of the material that ends up in the galaxy and halo at turn-around ($z\sim3.5$), \citet{Sales2012} argued that disk-dominated galaxies are typically formed out of well-aligned material, while bulge-dominated systems are more likely to experience misaligned accretion events. We see some evidence for this picture in our sample: \run{m12q}, in particular, is formed out of the merger of two counter-rotating disks at $z\sim0.8$, and \run{Batman} and \run{m12b} also experience large, misaligned galactic mergers. However, in our sample, this effect manifests primarily through mergers, and it therefore has either a dramatic impact or a nearly negligible one: the fraction of counter-rotating stellar mass ($\epsilon\leq - 0.5$) at $z=0$ is less than $4\%$ in the remaining twelve galaxies and, as we show in \S\ref{ssec:formdisky}, the fraction that \emph{forms} counter-rotating is even smaller. Moreover, \S\ref{ssec:formdisky} demonstrates that the fraction of stars that form in a bulgy configuration ($\abs{\epsilon}<0.5$) is relatively smooth across our sample, suggesting a minor (or relatively constant) contribution from misaligned gas that forms stars before integrating with the disk. However, our results do not preclude the possibility of misaligned accretion contributing to torquing the disk and shifting stars to lower circularities. Given the large scatter in the trend identified by \citet{Sales2012}, we conclude that our results are in overall agreement with theirs. \subsection{Summary: the evolution of the gas mass and spin} \label{ssec:driversum} We quantify these trends in Figure~\ref{fig:morph_vs_gashistories}. As in Figures~\ref{fig:mymorh_vs_othermorph}~and~\ref{fig:morph_v_mass}, the black circles show $f^\mystar_\mathrm{disk}$ and the magenta squares indicate $R^{\mystar}_{90}$. From left to right, the $x$-axes plot the spin parameter of the gas in the halo at the scale factor $a_\mystar^{50}$ when half of the $z=0$ stellar mass had formed, the average cold gas mass within the halo after $z=1$, the scale factor when $M^\mathrm{gas}_\mathrm{cold}$ first reaches 75\% of its peak, and the scale factor when the halo mass reaches half of its $z=0$ value, $a_{\rm halo}^{50}$. The first three are positively, and relatively strongly, correlated with morphology: the spin parameter of the gas at $a_\mystar^{50}$ is actually the tightest (non-morphological) correlation we have identified, and $\langle M^\mathrm{gas}_\mathrm{cold}(z<1)\rangle$ displays the tightest relationship outside of other related spin parameters. In fact, $\langle M^\mathrm{gas}_\mathrm{cold}(z<1)\rangle$ is even more tightly correlated with morphology than the spin parameter of the gas in the halo at $z=0$, which has $r_\mathrm{sp} = 0.31$--$0.81$. We find similar correlations for other descriptions of the gas accretion history of the halo, such as the scale factor when the total gas mass within $30~\mathrm{kpc}$ first reaches 75\% of its maximum.
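These history-based quantities reduce to simple operations on per-snapshot (or per-star) arrays; a minimal sketch (Python, with hypothetical pre-extracted arrays) is:
\begin{verbatim}
import numpy as np

def a_half_star(a_form, m_form):
    """a*_50: scale factor by which half of the z=0 stellar mass had
    formed, from each star's formation scale factor and mass."""
    order = np.argsort(a_form)
    cum = np.cumsum(m_form[order]) / m_form.sum()
    return a_form[order][np.searchsorted(cum, 0.5)]

def a_frac_of_peak(a_snap, m_cold, frac=0.75):
    """Scale factor at which the cold (T < 1e5 K) gas mass within
    R_vir first reaches frac of its peak; a_snap must be increasing."""
    return a_snap[np.argmax(m_cold >= frac * m_cold.max())]

def mean_late_cold_gas(a_snap, m_cold):
    """<M_cold(z < 1)>: average cold gas mass over snapshots after
    z = 1, i.e. a = 1/(1+z) >= 0.5."""
    return m_cold[a_snap >= 0.5].mean()
\end{verbatim}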
\begin{figure*} \centering \includegraphics[width=\textwidth]{plots/morphology_vs_spins.pdf} \caption{Galactic kinematics and morphology as a function of the spin parameter, measured at the time (scale factor) when half of the stars in the galaxy have formed, $a_\mystar^{50}$ (left two panels, as in the left panel of Figure~\ref{fig:morph_vs_gashistories}), and at the time (scale factor) when the halo reached half of its $z=0$ mass, $a_{\rm halo}^{50}$ (right two panels). The relationships with the spin parameter of the dark matter at $a = a_\mystar^{50}$, either in the baryonic or DMO run, show more scatter than with the gas, but only marginally so. However, the spin parameters at $a=a_{\rm halo}^{50}$ (whether gas or DMO) are only weakly correlated with morphology, at best, emphasizing the difficulty of predicting galactic morphology given only a DMO simulation.} \label{fig:spins} \end{figure*} Together, Figures~\ref{fig:growth_hist}~and~\ref{fig:morph_vs_gashistories} suggest that disk-dominated galaxies are formed in systems that maximize their star-forming reservoir at late times (often via fresh gas delivered by infalling subhalos), and where the gas that will turn into stars at late times has a high spin. These conditions often coincide: gas that infalls at late times, whether through mergers or smooth accretion, tends to have a higher impact parameter and therefore more angular momentum than similar interactions at early times \citep{White1984,Peirani2004,Keres2009,Stewart2011,Pichon2011,Stewart2013,Lagos2017,Stewart2017}. This picture is also largely consistent with the results of previous theoretical and observational works, which have generally found that stellar disks form inside-out and are composed of young stars, while the oldest stars reside in the galactic bulge \citep{Kepner1999,Pilkington2012,Howes2015}. It is also relatively unsurprising given the coloring in Figure~\ref{fig:joverjz}, which indicates that the youngest stars in each galaxy have disk-like kinematics. Unfortunately, this picture further reinforces the discussion above: it is difficult to accurately predict the $z=0$ morphology of a galaxy that will form in a given halo based on a DMO simulation, as the morphology is primarily driven by the gas dynamics. The ubiquity of galactic winds and gas recycling in the FIRE simulations further complicates efforts to connect DMO simulations to the galactic morphology/kinematics \citep{AnglesAlcazar2014,Muratov2015,AnglesAcazar2017}. Any individual bulge, meanwhile, can be sourced by either mergers or secular processes, and both contribute significantly. However, the former are more important at early times, while the latter typically lead to later bulge formation. In fact, when mergers happen at later times in our sample, they tend to be smaller, gas-rich galaxies merging onto the central host and depositing more gas at larger radii, enhancing the chance of disk survival \citep[consistent with][]{Hopkins2009a}.
In our limited sample, however, there is no obvious way to attribute all morphological trends to ``merger history'' or to ``bar formation.'' \subsection{Predicting morphology from DM-only properties} \label{ssec:dmopred} To emphasize the difficulty of using a DMO simulation to \emph{a priori} estimate the morphology of a galaxy, even in light of the tight correlation between morphology and $\lambda_\mathrm{gas}$ at $a_\mystar^{50}$ (which suggests that a similarly tight correlation might exist for $\lambda_\mathrm{DMO}$ at some $z>0$), Figure~\ref{fig:spins} shows galactic morphology against various spin parameters at two scale factors: $a_\mystar^{50}$ and $a_\mathrm{halo}^{50}$, the half-mass time of the total halo mass; only the latter is available in a DMO simulation. The first panel shows the spin parameter of the dark matter in the baryonic simulation at $a_\mystar^{50}$. While it is less tightly correlated with morphology than the spin parameter of the gas in the halo at the same scale factor (Figure~\ref{fig:morph_vs_gashistories}), the relationship remains relatively tight. The second panel demonstrates that the correlation between the DM spin (at $a_\mystar^{50}$) and morphology is not driven by interactions between the baryons and the DM: the spin parameter in the DMO simulation at the same scale factor also correlates with morphology, though again less strongly.\footnote{Though we have not identified any direct correlations between DMO halo properties and $a_\mystar^{50}$, the relationship in the second panel suggests that, if one \emph{could} predict the galaxy half-mass time from a DMO simulation alone, there may be a path from the properties of the halo in the DMO simulation to the galactic morphology.} However, those correlations have not yet appeared at the (earlier) $a_\mathrm{halo}^{50}$: the third panel shows that the spin parameter of the gas at $a_{\rm halo}^{50}$ is only weakly correlated with morphology, and the final panel illustrates that the spin parameter in the DMO simulation contains little information at this time (as it does at $z=0$). We also note that $\lambda_{\rm gas}$ is typically $2$--$3$ times the spin of the dark matter (both at high $z$ and at $z=0$; first pointed out by \citealp{Stewart2011}), emphasizing the disconnect between the angular momentum of the baryons (particularly those that eventually form the galaxy) and the halo. Moreover, while there is a reasonably tight correlation between $a_\mystar^{50}$ and $a_{\rm halo,\,DMO}^{50}$ ($r_\mathrm{sp} = 0.4$--$0.92$), a direct route from DMO halo properties to galaxy morphology would require a similar correlation between $\lambda_{\rm gas}(a = a_\mystar^{50})$ and $\lambda_{\rm DMO}(a = a_{\rm halo,\,DMO}^{50})$, for which we see no strong evidence ($r_\mathrm{sp} =-0.22$--$0.85$). We also explore trends with the accretion history of the main branch of the DMO halo in Figure~\ref{fig:dmo_mah}. The inset panel shows the scale factor of the last major merger in the DMO run against the galactic morphology.\footnote{Merger times and mass accretion histories are drawn from merger trees built with \texttt{consistent-trees} \citep{ctrees} using \texttt{rockstar} \citep{rockstar} halo catalogs.} The curves are colored by $f^\mystar_\mathrm{disk}$; the most bulge-dominated galaxies tend to have higher masses at early times, but the halos that host the galaxies with the highest $z=0$ disk fractions in our sample typically have even higher (normalized) masses at any $z\gtrsim1$.
This is similar to the result in Figure~\ref{fig:morph_vs_gashistories}: the evolution (and spin parameter) of the halo contains relatively little information about the galactic morphology compared to the evolution (and spin parameter) of the galaxy itself. \begin{figure} \includegraphics[width=\columnwidth]{plots/dmo_norm_mah_wlastMM.pdf} \vspace{-1.5em} \caption{ The normalized mass accretion histories of our host halos in the DMO simulation. The inset shows the scale factor of the last major merger (defined as a mass ratio of $\geq0.3$) in the DMO simulation against galactic morphology. While the galaxies with the largest bulges do tend to reside in halos that form early, the most disk-dominated systems have actually accreted a greater fraction of their mass by $z\sim1$ ($t\sim6~\mathrm{Gyr}$). Moreover, there is effectively no correlation between the timing of the last major merger and morphology. } \label{fig:dmo_mah} \end{figure} \subsection{Other} \label{ssec:other} In addition to the properties shown in Figures~\ref{fig:morph_v_mass}, \ref{fig:morph_vs_gashistories}, \ref{fig:spins}, and~\ref{fig:dmo_mah} and discussed above, we have also checked for correlations with numerous other parameters of the galaxy, halo, or DMO halo, both at $z=0$ and at higher redshifts, including their growth histories. Examples of those that correlate with the $z=0$ morphology, but less strongly than those we present above, include properties associated with the star formation history, such as the amount of time that the galaxy maintains a (200~Myr averaged) star formation rate (SFR) of at least 50\% of its peak value, and the fraction of stars formed during that time. Similarly, the actual peak SFR shows a weak anti-correlation: relatively constant, extended star formation is more likely to create a well-ordered disk \citep[as discussed in][]{Muratov2015}. However, the scale factor when the galaxy reaches its peak SFR is uncorrelated with morphology today ($r_\mathrm{sp} = -0.33$--$0.54$). The specific angular momentum of the disk relative to that of the halo, $\mathcal{j}_{\rm d}/\mathcal{m}_{\rm d}$, is also weakly correlated with $z=0$ morphology, as is the spin of the gas/halo at $z=1$.
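As an illustration, the star-formation-history statistics above can be evaluated in a few lines; a minimal sketch (Python, with hypothetical uniformly sampled arrays; the 200~Myr smoothing is assumed to have been applied already):
\begin{verbatim}
import numpy as np

def time_above_half_peak(t, sfr):
    """Duration (Gyr) over which the smoothed SFR stays at or above
    50% of its peak, and the fraction of stars formed in that window.
    t: uniformly spaced times [Gyr]; sfr: 200-Myr-averaged SFR."""
    dt = t[1] - t[0]
    above = sfr >= 0.5 * sfr.max()
    return above.sum() * dt, sfr[above].sum() / sfr.sum()
\end{verbatim}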
Finally, a non-exhaustive list of properties that show no statistically significant correlation with $f^\mystar_\mathrm{disk}$ or $R^{\mystar}_{90}$ (along with the associated bootstrapped 95\% CI on $r_\mathrm{sp}$) includes: \begin{itemize} \item $M^\mystar_\mathrm{galaxy}/M_\mathrm{vir}$ ($r_\mathrm{sp} = -0.49$--$0.11$), \item the total angular momentum of the DMO halo at $z=0$ ($r_\mathrm{sp} = 0.09$--$0.48$), \item the NFW scale radius of the DMO halo at $z=0$ ($r_\mathrm{sp} = -0.52$--$0.14$), \item the $z=0$ shape of the DMO halo at various radii ($r_\mathrm{sp} = 0.14$--$0.5$ at 10~kpc and $r_\mathrm{sp} = 0.08$--$0.64$ at 300~kpc), \item the fraction of $M_\mathrm{vir}$ in bound subhalos at $z=0$ ($r_\mathrm{sp} = -0.18$--$0.54$), \item the scale factor at which the SFR peaks ($r_\mathrm{sp} = -0.32$--$0.52$), \item the $z=0$, 100-Myr-averaged SFR ($r_\mathrm{sp} = -0.34$--$0.52$) and specific SFR ($r_\mathrm{sp} = -0.27$--$0.52$), \item the fraction of stellar mass formed after $z = 1$ ($r_\mathrm{sp} = -0.25$--$0.44$), \item the fraction of halo mass accreted after $z = 1$ ($r_\mathrm{sp} = -0.27$--$0.31$), \item the fraction of in-situ stars within 30~kpc ($r_\mathrm{sp} = -0.3$--$0.55$), \item the mass of the stellar halo, whether selected by $z=0$ distance ($r_\mathrm{sp} = -0.41$--$0.23$) or formation distance ($r_\mathrm{sp} = -0.58$--$0.16$), \item the maximum gas mass within $R_\mathrm{vir}$ over cosmic time ($r_\mathrm{sp} = -0.21$--$0.44$), \item the mean stellar age ($r_\mathrm{sp} = -0.46$--$0.23$). \end{itemize} At first glance, the final point appears to contradict the picture that we describe above: if disks form late, then one would naively expect the mean stellar age, or $a_\mystar^{50}$, to correlate with morphology. However, as close inspection of Figure~\ref{fig:joverjz} suggests (and as we will show further in Section~\ref{sec:evolution}), while the disk of a given galaxy is (almost) always younger than the bulge of that galaxy, disks emerge at different times in different galaxies. For example, the disk of \run{Romeo} is composed of stars with an average age of $\sim6~\mathrm{Gyr}$, while \run{m12f}, which hosts a disk and a bulge, formed its disk much more recently. \begin{figure*} \includegraphics[width=\textwidth]{plots/totdisk+negdisk+SFR_vs_time_by_star_disk_frac.pdf} \vspace{-3em} \caption{The instantaneous fraction of stars born in either a prograde or retrograde disk, i.e. with $\abs{\epsilon}\geq0.5$ at formation (black); only in a counter-rotating disk, i.e. with $\epsilon\leq - 0.5$ at formation (dashed cyan); and the normalized star formation rate (gray dotted curves). Most stars with high circularity are formed at late times, though in \run{Batman} and \run{m12q} most of the young star formation occurs in a counter-rotating disk. The three galaxies with the lowest $f^\mystar_\mathrm{disk}(z=0)$ all experience some level of counter-rotating star formation; the remainder experience almost none. Galaxies with high $z=0$ disk fractions have more prolonged disk-like star formation, but mergers sometimes destroy existing disks and scramble the correlation.
} \label{fig:fsdisk_evol} \end{figure*} \section{The evolution of the stellar morphology} \label{sec:evolution} \subsection{Overview} As the previous section showed, the $z=0$ morphology is driven primarily by a combination of the accretion histories, the degree of rotation support in the halo at the half-mass time of the galaxy, and the relative amounts of mass and angular momentum from the halo that end up in the disk. However, the $z=0$ morphology is also the culmination of star formation in the galaxy, of stars being deposited onto the galaxy through mergers, and of dynamical interactions altering the orbits of existing stars. In this section, we explore the birth morphologies of stars and the extent to which their orbits are shifted to lower circularities over time, and we demonstrate that while the $z=0$ morphologies do correlate with the spin of the gas that the stars are born out of, the full picture depends on the mutual evolution of the gas kinematics and the star formation rate, and on the impact of dynamical heating on the galactic disk. We do not explicitly investigate the radial or vertical structure of the disk as a function of time, but we refer the reader to \citet{Ma2016b} for a detailed discussion of the evolution of the disk of \run{m12i} simulated with FIRE-1. Figure~\ref{fig:fsdisk_evol} shows, as a function of time, the instantaneous fraction of stars forming with circularities $\abs{\epsilon}\geq0.5$ (measured at the time of formation) and with $\epsilon\leq - 0.5$, i.e. in a disk that is counter-rotating relative to the overall angular momentum axis of the existing stars in the galaxy, and the normalized star formation rate (SFR). We define the instantaneous birth disk fraction $f_{\mystar,\,{\rm birth}}^\mathrm{disk}(t)$ from the first snapshot that each particle appears in, capturing the kinematics of stars that are at most $\sim20~\mathrm{Myr}$ old. Circularities, and therefore disk fractions, are defined relative to the evolving $\hat{z}$ axis of the angular momenta of all the stars in the galaxy at a given time. The curves indicate running averages smoothed over $\sim300~\mathrm{Myr}$, but the qualitative conclusions are insensitive to the size of this window. We count stars within $R^{\mystar}_{90}(t)$, but we find similar results using all stars within a fixed cut of $30$~physical~kpc. \begin{figure} \centering \includegraphics[width=\columnwidth]{plots/absbirth+birth+negbirth+max.pdf} \vspace{-1.5em} \caption{The cumulative fractions of stars born with $\abs{\epsilon} \geq 0.5$, $\epsilon\geq0.5$, and $\epsilon\leq - 0.5$ (i.e.\ born in any disk, the prograde disk, or a counter-rotating disk), and the maximum instantaneous fraction of the stellar mass in the galaxy at any given time with $\epsilon\geq0.5$ (i.e. $\max\left[f^\mystar_\mathrm{disk}(t)\right]$), all as a function of the disk fraction today. The fraction of stars born in counter-rotating disks (open black circles) is $<4\%$ in all but the most bulge-dominated galaxies. The spread in maximum disk fraction and in the birth disk fraction is surprisingly small ($0.5$--$0.9$): though $z=0$ bulge-dominated systems do tend to form fewer stars with high $\abs{\epsilon}$ overall, and more stars in a counter-rotating disk, they are primarily differentiated by their subsequent evolution. Disk-dominated systems at $z=0$ are more likely to be at or near their maximum disk fractions.
} \label{fig:birth_v_now} \end{figure}
\subsection{Most stars form in disky configurations}
\label{ssec:formdisky}
Figure~\ref{fig:fsdisk_evol} illustrates several points about the evolution of the disk morphology. First, at late times, most stars forming in MW-mass galaxies (black curves) do so with disk-like kinematics. This does not preclude those stars from contributing to bulges, however, since they can end up in compact, rotationally supported pseudo-bulge components. Even in our most bulge-dominated system ($f^\mystar_\mathrm{disk}\sim0.2$), the ``birth disk fraction'' is high at late times -- only \run{m12q} does not have $f_{\mystar,\,{\rm birth}}^\mathrm{disk}\sim1$ at some point after $t \sim 10$~Gyr. The three galaxies with the lowest $f^\mystar_\mathrm{disk}$ at $z=0$ are also the only three to experience a significant fraction of counter-rotating star formation. In \run{m12b}, that star formation eventually builds enough of a disk to flip the overall angular momentum axis of the galaxy (which occurs at $t\sim8$~Gyr when the cyan curve goes to zero), but in \run{Batman} and \run{m12q} it only decreases the $z=0$ disk fraction by adding stars opposite to the predominant $\hat{z}$. Therefore, even though SFRs typically peak around $z\sim1$, and dynamical interactions shift stars to lower circularity as time passes, MW-mass galaxies usually increase their disk fractions at late times through fresh star formation. Though it is not shown here, the disk size $R^{\mystar}_{90}$ also tends to grow smoothly after $z\sim1$. The remainder of the sample demonstrates that the interplay between the SFR and the fraction of stars forming in the disk as a function of time is also instrumental to the $z=0$ morphology. During the very early stages of their growth ($t\lesssim2$~Gyr), the hosts are dwarf-size systems and experience chaotic, bursty star formation in clumpy, gas-rich dIrr-type progenitor galaxies. As the gray curves show, though, star formation rates are typically relatively low at these early times. The transition to ordered star formation (which is strongly correlated with the emergence of an ordered gas disk; \citealp{Ma2016b,Simons2017,Ceverino2017}) occurs at different times and at a different rate in each system, but it often coincides with a peak in the star formation. Both the timing of the transition and the behavior of the SFR following it strongly influence the $z=0$ morphology: galaxies with $f^\mystar_\mathrm{disk}(z=0)\gtrsim0.7$ shift to ordered star formation relatively early and maintain a relatively high SFR until $z=0$. Lower $f^\mystar_\mathrm{disk}$ galaxies either make this transition later (e.g. \run{m12z}) or have a relatively low SFR following the switch (e.g. \run{Remus}).
\subsection{Disruption and disordering of stellar disks}
Disks built by ordered star formation can also be heated and destroyed. Figure~\ref{fig:birth_v_now} plots the disk fraction at $z=0$ against both the cumulative fraction of stellar mass born in the galaxy with high $\epsilon$ and the maximum instantaneous fraction of stellar mass in the galaxy at any given time with high circularities. The scatter in $f_{\mystar,\,{\rm birth}}^\mathrm{disk}$ is relatively small: all of the galaxies in our sample have $0.6<f_{\mystar,\,{\rm birth}}^\mathrm{disk}<0.9$. Accounting for counter-rotating star formation, all of our systems form $\geq75\%$ of their stars in a disk. In other words, most stars form in disky configurations, as argued above.
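
For concreteness, the bookkeeping behind these disk fractions amounts to mass-weighted cuts on the circularity $\epsilon$. The following minimal Python sketch is purely illustrative (the array names are hypothetical, not from our analysis pipeline) and applies the $\pm0.5$ thresholds used throughout:
\begin{verbatim}
import numpy as np

def disk_fractions(eps, mass):
    """Mass-weighted disk fractions from stellar circularities
    eps = j_z / j_c(E); eps and mass are per-star arrays."""
    mtot = mass.sum()
    return {
        "prograde": mass[eps >= 0.5].sum() / mtot,     # f_disk
        "retrograde": mass[eps <= -0.5].sum() / mtot,  # counter-rotating
        "disky": mass[np.abs(eps) >= 0.5].sum() / mtot,
    }
\end{verbatim}
Evaluating such cuts on the stars' kinematics at birth gives $f_{\mystar,\,{\rm birth}}^\mathrm{disk}$, while evaluating them on the $z=0$ kinematics gives $f^\mystar_\mathrm{disk}$.
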
Furthermore, counter-rotating young stars (which would be formed out of retrograde star-forming gas) are typically too rare to have a significant impact on the galaxy (\S\ref{ssec:misalignment}). Though we do not show it here, we also note that the fraction of counter-rotating ($\epsilon\leq - 0.5$) stellar mass at $z=0$ is extremely small in all but the most bulge-dominated systems. Moreover, while the maximum counter-rotating fraction at any given time is $\sim15$--$25\%$ across our sample, these maxima occur at very early times ($z\gtrsim3$--$4$) when the galaxies were at the dwarf mass scale. The exception is \run{m12b}, which builds a disk that is initially counter-rotating relative to the existing bulge but becomes large enough to dominate the angular momentum of the galaxy and flip the overall $j_z$ vector; \run{m12b} therefore maximizes its counter-rotating fraction immediately before this transition at $z\sim0.5$. Because the galaxy masses and the degree of order in the galaxy build up over time, most of our galaxies have $f^\mystar_\mathrm{disk}(z=0)\simeq \max\left[f^\mystar_\mathrm{disk}\right]$, i.e., the majority of our sample is at its ``most disk-dominated'' today. \begin{figure*} \centering \includegraphics[width=0.95\textwidth]{plots/gas_viz_by_star_disk_frac.jpg} \caption{Gas column density maps of the galaxies in our sample. Galaxies are again sorted by decreasing $f^\mystar_\mathrm{disk}$, but the number in parentheses indicates the rank (from largest to smallest) that each galaxy has when sorted by $R_\mathrm{gas}$. \run{Batman} has a significant $\sim20^\circ$ warp, likely induced because the $z=0$ gas disk is built from an ongoing merger that is misaligned with the stars, and \run{m12q} exhausted most of its gas supply at $z\sim0.1$; the remainder of the MW-mass FIRE-2 galaxies have thin, rotation supported disks ($f_\mathrm{gas}^\mathrm{disk} \geq 0.9$) at $z=0$. We therefore focus on the size of the gas disk, rather than the co-rotation fraction. The dashed lines indicate the adopted radial and vertical extents of the gas disks; they are not shown for \run{m12q} where they are $<1~\mathrm{kpc}$.} \label{fig:gas_viz} \end{figure*} However, even bulge-dominated galaxies tended to have relatively strong disks at earlier times before having them destroyed by mergers and diluted by misaligned star formation. At $z\sim1$, all of \run{Batman}, \run{m12q}, and \run{m12b} had disks that comprised 50--70\% of their stellar mass at that time. That is, while bulge-dominated systems arise from a combination of both nature and nurture, those in our sample were primarily differentiated from disky systems by the latter. Even though they do have more than twice as much retrograde star formation (relative to their total stellar mass) as any of the diskier galaxies in our sample, the difference between their maximum $f^\mystar_\mathrm{disk}$ and their $z=0$ $f^\mystar_\mathrm{disk}$ is much larger, indicating that they were generally subject to more disk scrambling than the $z=0$ disk-dominated galaxies. 
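
A complementary diagnostic, used in the discussion that follows, is the angle through which the total stellar angular momentum axis tilts between two epochs. A minimal sketch, assuming the two epochs' total stellar angular momentum vectors are available as three-component arrays (hypothetical inputs):
\begin{verbatim}
import numpy as np

def axis_tilt_deg(j1, j2):
    """Angle in degrees between two angular momentum vectors,
    e.g. the total stellar j of a galaxy at two snapshots."""
    cosang = np.dot(j1, j2) / (np.linalg.norm(j1) * np.linalg.norm(j2))
    return np.degrees(np.arccos(np.clip(cosang, -1.0, 1.0)))
\end{verbatim}
A tilt approaching $180^\circ$ between the early- and late-time axes corresponds to a full flip of the kind \run{m12b} undergoes.
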
\begin{figure*}
\centering
\includegraphics[width=\textwidth]{plots/gasmorph_vs_drivers.pdf}
\caption{The size of the gas disks $R_\mathrm{gas}$ vs the gas mass within $R_\mathrm{gas}$ and $Z_\mathrm{gas}$, $M^\mathrm{gas}_\mathrm{galaxy}$; the average mass in $T<10^5$~K gas within the virial radius after $z=1$, $\left\langle M^\mathrm{gas}_\mathrm{cold}(z<1)\right\rangle$; the ratio of the shortest to longest principal axes of the stars within $20~\mathrm{kpc}$, $(c/a)_\mystar(<20~\mathrm{kpc})$; and the radial extent of the stars, $R^{\mystar}_{90}$. The dashed gray line in the final column is one-to-one, indicating that the majority of the gas disks extend well beyond the stellar disks. The points are colored by $f^\mystar_\mathrm{disk}$, which also correlates reasonably strongly with $R_\mathrm{gas}$. } \label{fig:gas_drivers} \end{figure*}
Stars may also be shifted to lower circularities, while remaining on disk-like orbits, if the overall angular momentum axis of the galaxy shifts over time. Most of our sample is only marginally impacted by this effect: the angular momentum axis of our galaxies changes by $\lesssim35^\circ$ after $z=0.5$ in all but one of our galaxies. The exception, again, is \run{m12b}, which, as described above, builds a large enough young disk around the existing compact, bulgy core at late times to flip the overall angular momentum axis of the galaxy. As discussed above, however, the $z=0$ morphology is a very weak function of mean stellar age: the most bulge-dominated systems do tend to be the oldest, but the youngest systems are not necessarily disk-dominated. Thus, the length of time over which stellar orbits can be perturbed is not a primary driver of $z=0$ morphology. In other words, while the bulge-dominated systems in our sample do have their pre-existing stellar disks destroyed by mergers or counter-rotating star formation, this can happen early or late in cosmic time.
\section{Gas morphologies in MW-mass FIRE-2 galaxies}
\label{sec:gas_morph}
As reflected in \S\ref{sec:evolution} via the stellar circularities at birth, the star-forming gas in MW-mass galaxies is typically in a well-ordered disk with the majority of star formation occurring with $\abs{\epsilon} \gtrsim 0.75$. That is, regardless of the instantaneous state of the stars in the galaxy, the short dynamical memory of the gas leads to thin disks at nearly all times in massive galaxies. For a detailed investigation of gas morphologies across a larger range of host masses in the FIRE-2 simulations, we refer the reader to \citet{KEB2017}, who examined gas angular momentum and HI morphology as a function of galaxy mass from the dwarf scale to the MW scale, and showed that in dwarfs the degree of rotation support can be much lower.
\subsection{MW-mass galaxies have rotationally supported gas disks}
As demonstrated in Figure~\ref{fig:joverjz}, almost all of the gas in the galaxies is on circular, disk-like orbits at $z=0$ (with the exception of \run{m12q}). Even without accounting for elliptical orbits, gas rotation curves are consistent with almost complete rotation support out to roughly $R_\mathrm{gas}$ in all the galaxies except \run{Batman} and \run{m12q}. In fact, with the exception of \run{m12q}, all of the galaxies have $f_\mathrm{gas}^\mathrm{disk} > 0.9$, with nine of the fifteen exceeding $0.98$. This is not particularly surprising at these masses, where pressure support for $T\lesssim10^4$~K gas is very weak.
\subsection{Visual morphologies of gas disks}
Figure~\ref{fig:gas_viz} shows face-on and edge-on projections of the gas in the FIRE-2 galaxies, again sorted by decreasing $f^\mystar_\mathrm{disk}$. As in Figure~\ref{fig:stellarviz}, the circles and rectangles indicate $R_\mathrm{gas}$ and $Z_\mathrm{gas}$. Even with (almost) all of the galaxies having $f_\mathrm{gas}^\mathrm{disk} > 0.9$, there is some diversity in the shape of the disks, and even more diversity in the radial extent. For example, \run{m12f} very recently interacted with a gas-rich subhalo, leaving a marginally disturbed gas disk at $z=0$. \run{m12q} has effectively no gas remaining at $z=0$, having consumed the last of its gas disk at $z\sim0.1$. \run{Batman} has a clear warp near the center of the disk, likely created because the $z=0$ gas disk is formed out of an ongoing accretion event. \run{Romulus} and \run{Louise} display similar warps near the edges of their disks. \run{Batman} is also the only galaxy with a gas disk misaligned from the stellar disk by more than $4^\circ$. This misalignment presumably survives because the gas is being continually replenished at $z\sim0$ \citep{vandeVoort2015}, and likely also because \run{Batman} has a relatively spherical stellar distribution: the ratio of the shortest to longest principal axes, $(c/a)$, of the stars within $10~\mathrm{kpc}$ is $0.72$.\footnote{The remainder of the sample all have $(c/a)_\mystar(<10~\mathrm{kpc})\lesssim0.5$.}
\subsection{Sizes of gas disks}
Figure~\ref{fig:gas_drivers} explores the radial extent of the gas disks. The radius of the gas disk is closely tied to the amount of gas in the galaxy, in broad agreement with the observed relationship between the size of gas disks and the amount of gas in those disks \citep[e.g.][]{Wang2016}, and potentially in agreement with arguments based on the \citet{Toomre1964} stability parameter (Schmitz et al., in prep.). $R_\mathrm{gas}$ is also correlated with the morphology of the stellar component: the points in Figure~\ref{fig:gas_drivers} are colored by $f^\mystar_\mathrm{disk}$ and are generally correlated with $R^{\mystar}_{90}$, though the gas disks are typically more extended. To the extent that this is causal, it appears to owe primarily to the fact that higher late-time gas masses are associated with both larger $R_\mathrm{gas}$ and diskier galaxies. Though our sample size is small, and we have yet to identify any underlying physical drivers, we note that galaxies in our paired sample tend to have larger $R_\mathrm{gas}$ overall. This is apparent even by eye in Figure~\ref{fig:gas_viz}: the numbers in parentheses, which indicate the rank of $R_\mathrm{gas}$ for each galaxy, show that the five largest gas disks are all in halos that reside in a Local Group-like environment. However, with such a small sample size, it is impossible to reject the null hypothesis that they are drawn from the same distribution and, without tying $R_\mathrm{gas}$ to a property of the DMO halo, we cannot directly test this hypothesis with a larger sample. Neither $R_\mathrm{gas}$ nor the residuals about a power-law fit of $R_\mathrm{gas}(M^\mathrm{gas}_\mathrm{galaxy})$ strongly correlate with any of the DMO halo properties that we have checked, including the $z=0$ spin of the DMO halo ($r_\mathrm{sp} = -0.21$--$0.79$).
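
For reference, bootstrapped 95\% confidence intervals on $r_\mathrm{sp}$ of the kind quoted throughout can be reproduced with a short resampling loop. The following Python sketch is illustrative only (the names are hypothetical, not from our pipeline); it assumes paired property arrays \texttt{x} and \texttt{y} with one entry per galaxy:
\begin{verbatim}
import numpy as np
from scipy.stats import spearmanr

def spearman_ci(x, y, n_boot=10000, ci=95, seed=0):
    """Bootstrap a confidence interval on the Spearman rank
    coefficient by resampling (x, y) pairs with replacement."""
    rng = np.random.default_rng(seed)
    n = len(x)
    r = np.empty(n_boot)
    for b in range(n_boot):
        idx = rng.integers(0, n, n)
        r[b] = spearmanr(x[idx], y[idx]).correlation
    half = (100 - ci) / 2
    lo, hi = np.percentile(r, [half, 100 - half])
    return lo, hi
\end{verbatim}
With only fifteen galaxies, such intervals are necessarily wide, which is why many of the quoted ranges straddle zero.
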
We have also tested whether the \citetalias{MoMaoWhite} model accurately predicts the sizes of \emph{gas} disks at $z=0$ based on the DMO halo, and find relatively poor correlations between the model predictions and the actual radial extents ($r_\mathrm{sp} = -0.05$--$0.79$ for the isothermal potential and $r_\mathrm{sp} = -0.26$--$0.69$ for the NFW model).
\section{Conclusions}
\label{sec:conclusions}
In this work, we examined the kinematics and morphologies of MW-mass galaxies ($10^{10.5}\lesssim M^\mystar_\mathrm{galaxy}/\textnormal{M}_\odot\lesssim10^{11.5}$) simulated with the FIRE-2 physics. Our sample includes fifteen galaxies with effective stellar radii ranging from $\sim1$ to $17~\mathrm{kpc}$, and kinematic disk fractions varying from $\sim0.2$ to $0.8$. We first demonstrated that these morphological measures broadly correlate with each other (though there is appreciable scatter), and that both also correlate with a variety of other morphological measures. In particular, $\Sigma_1$ is a reasonably accurate descriptor of the overall morphology over this narrow mass range at $z=0$. We then showed that the \citet{MoMaoWhite} model, wherein the baryons that form the disk are assumed to have the same specific angular momentum as their host DM halos, produces an estimate for galaxy sizes (and how they correlate with mass over a large dynamic range) that is accurate at the order-of-magnitude level, but fails to recover the actual half-mass radii of our galaxies. This is due primarily to the scatter in the amount of specific angular momentum that each galaxy acquires: $j_{\rm d}/m_{\rm d}$ has nearly an order of magnitude spread overall. Moreover, there are no obvious trends between the morphology of a galaxy and either the mass accretion history or the merger history of the host halo in the DMO simulation: both our most bulge-dominated and our most disk-dominated galaxies experience their last major merger at $z\sim2$--$3$. It therefore remains difficult to predict the morphology of a galaxy that would form in a given halo based purely on the information available from a DMO simulation. Instead, accurate predictors of morphology within this narrow mass range are related to the gas accretion and galaxy merger histories. Systems that maintain high gas fractions to late times tend to be disk-dominated at $z=0$ (generally growing inside-out, with $R^{\mystar}_{90}$ increasing over time), while those that maximize their star-forming reservoir early tend to be bulge-dominated at $z = 0$. Based primarily on visual inspection of the movies, the amount of gas in the galaxy over time appears to be driven by a combination of the impact parameter and timing of galactic mergers, along with the amount of gas in the halo that cools and accretes onto the galaxy at late times. We reiterate that our results apply only at this specific mass scale, however: lower mass galaxies that cannot maintain an ordered gas disk at late times \citep[e.g.][]{KEB2017} would not necessarily follow the trends identified here. We find good correlations between morphology and the spin of the gas in the halo when the galaxy had formed half of its $z=0$ stellar mass, along with the average amount of gas available to form stars at late times. These quantities also correlate (though less strongly) with one another: gas that infalls at later times tends to have more angular momentum.
However, we find no clear route from the host properties available only from a DMO simulation to galactic morphology: neither the DMO mass accretion history, nor the half-mass formation time of the halo, nor the spin of the dark matter at the halo half-mass formation time (in either the hydrodynamic or the DMO simulation) correlates significantly with morphology, emphasizing the difficulty of using DMO simulations to predict morphology. Nonetheless, our analysis does not preclude a multivariate relationship between morphology and DMO properties. In fact, given the correlation between $\lambda_{\rm DMO}$ at $a_\mystar^{50}$ and the morphology at $z=0$, there are hints that such a relationship may exist, but we lack the sample size to test for them. The prediction that the spin of the gas in the halo at the stellar half-mass formation time (i.e. the angular momentum support of the gas that contributes to building the galaxy) drives the late-time morphologies of MW-mass galaxies may eventually be observationally falsifiable. Wide-field observations with integral field spectroscopy (e.g. with instruments similar to the Keck Cosmic Web Imager; \citealp{KCWI}) could potentially map out the angular momentum in the cold CGM gas and ultimately measure the distribution of that angular momentum across halos. If the picture laid out here is correct, then the shapes of that distribution and the distribution of the morphologies of $z=0$ MW-mass galaxies should broadly agree. The $z=0$ morphologies can also be viewed as the sum of a Hubble time of star formation and the subsequent heating of those stars (either from mergers, e.g. \citealp{Toomre1972,Hernquist1992,Quinn1993,Sanders1996}, or from internal interactions, e.g. \citealp{Minchev2006,Saha2010}). We showed that most stars in MW-mass galaxies formed from gas that was disky at the time of star formation (i.e.\ with circularity $\abs{\epsilon} \geq 0.5$). The most bulge-dominated galaxies at $z=0$ tend to have the lowest fraction of stars born in a prograde disk (and the highest fraction born in a retrograde one), but they also show the largest differences between their birth and $z=0$ kinematics. Therefore, while dispersion-supported galaxies arise from a combination of birth stellar kinematics and subsequent stellar heating that destroys ordered rotation, our results suggest the latter effect is far more important. At late times ($z\lesssim1$--$2$), nearly all of the stars born in MW-mass galaxies have disk-like kinematics, such that $f^\mystar_\mathrm{disk}$ typically grows after $z\sim1$. We do see two exceptions, which actually lower their disk fractions (slightly) at $z\lesssim0.5$ by forming stars in a counter-rotating disk, but this is only possible because those galaxies are already dispersion-supported when cold gas is added to the central galaxy at low redshift. Moreover, we emphasize that the counter-rotating disks do not \emph{determine} the bulginess of the galaxy. We do not expect ``clump sinking'' to play a significant role in bulge formation for systems that are MW-mass at $z=0$ (whose progenitors were dwarfs at high redshift). The gas in the MW-mass FIRE-2 galaxies, meanwhile, always settles into a largely rotation-supported disk at late times. All but one of our galaxies maintain that disk to $z=0$, either through fresh accretion from merging satellites or condensation out of the CGM. The size of the gas disk is primarily driven by its mass.
Our results generally agree with previous work on the formation of galactic disks in hydrodynamic simulations of MW-mass galaxies, which has found that star formation is chaotic and bursty at high redshift, with well-ordered gas disks only appearing after $z\sim1$ for galaxies with MW masses at $z=0$ (for more/less massive galaxies, the transition occurs earlier/later; see \citealp{Muratov2015,Feldmann2015a,Simons2017,Hayward2017,Sparre2017,CAFG2017}). The supply of the gas available to form that disk, therefore, determines the fraction of stars that form on tangential relative to radial orbits. In agreement with several authors \citep[e.g.][]{Scannapieco2009,Sales2012, Rodriguez-Gomez2017}, we find no strong morphological trends with the $z=0$ spin of the DM halo at the MW-mass scale. While our qualitative results are robust to the mass resolution of the simulations (Appendix~\ref{sec:resolution}), the quantitative morphology of a given galaxy does change slightly with resolution. However, these changes can typically be understood in terms of the trends that we identify here (e.g. because a slightly different merger history arises at different resolutions). We also caution that half of our sample is in Local Group-like pairs. While these are more directly comparable to the MW and Andromeda galaxies than simulations of isolated MW-mass hosts, there may be environmental effects that bias our results. However, because these effects should enter via properties of the halo or galaxy, such as the halo spin or mass accretion history, our analysis will automatically include any changes caused by the 1~Mpc environment. This work has investigated potential relationships between theoretical measures of morphology and (potentially unobservable) physical properties of the simulated galaxies in an attempt to understand the physical driver(s) of morphology in the simulations, not to compare with observations. We plan to probe the relationship between theoretical morphological measures of the FIRE-2 galaxies, like those presented here, and estimates extracted from mock observations, including the kinematic distributions \citep[e.g.][]{Zhu2018}, in future work.
\section*{Acknowledgments}
The authors thank Astrid Lamberts, Coral Wheeler, Evan Kirby, Laura Sales, and Virginia Kilborn for valuable discussions. We also thank the Santa Cruz Galaxy Formation Workshop, the Galaxy Formation and Evolution in Southern California (GalFRESCA) workshop, and the Swinburne-Caltech workshop for spawning useful discussions that significantly improved the quality of the manuscript, and we thank Alexander Knebe, Peter Behroozi, and Oliver Hahn, respectively, for making \texttt{AHF}, \texttt{rockstar} and \texttt{consistent-trees}, and \texttt{MUSIC} publicly available. Support for SGK was provided by NASA through Einstein Postdoctoral Fellowship grant number PF5-160136 awarded by the Chandra X-ray Center, which is operated by the Smithsonian Astrophysical Observatory for NASA under contract NAS8-03060. Support for PFH was provided by an Alfred P. Sloan Research Fellowship, NSF Collaborative Research Grant \#1715847 and CAREER grant \#1455342. AW was supported by a Caltech-Carnegie Fellowship, in part through the Moore Center for Theoretical Cosmology and Physics at Caltech, and by NASA through grants HST-GO-14734 and HST-AR-15057 from STScI. RES is supported by an NSF Astronomy \& Astrophysics Postdoctoral Fellowship under grant AST-1400989.
KEB was supported by a Berkeley graduate fellowship, a Hellman award for graduate study, and an NSF Graduate Research Fellowship. EQ was supported in part by NSF grant AST-1715070 and a Simons Investigator Award from the Simons Foundation. JSB was supported by NSF grant AST-1518291 and by NASA through HST theory grants (programs AR-13921, AR-13888, and AR-14282.001) awarded by STScI, which is operated by the Association of Universities for Research in Astronomy (AURA), Inc., under NASA contract NAS5-26555. ZH and CAFG were supported by NSF through grants AST-1412836, AST-1517491, AST-1715216, and CAREER award AST-1652522, and by NASA through grant NNX15AB22G, and ZH additionally acknowledges support from Northwestern University through the ``Reach for the Stars'' program. FvdV acknowledges support from the Klaus Tschira Foundation. The Flatiron Institute is supported by the Simons Foundation. DK acknowledges support from NSF grant AST-1412153, NSF grant AST-1715101 and the Cottrell Scholar Award from the Research Corporation for Science Advancement. MBK acknowledges support from NSF grant AST-1517226 and from NASA grants NNX17AG29G and HST-AR-13896, HST-AR-14282, HST-AR-14554, HST-GO-12914, and HST-GO-14191 from STScI. Numerical calculations were run on the Caltech compute cluster ``Wheeler,'' allocations from XSEDE TG-AST130039 and PRAC NSF.1713353 supported by the NSF, NASA HEC SMD-16-7223 and SMD-16-7592, and High Performance Computing at Los Alamos National Labs. This work also made use of \texttt{Astropy}, a community-developed core Python package for Astronomy \citep{Astropy}, \texttt{matplotlib} \citep{Matplotlib}, \texttt{numpy} \citep{numpy}, \texttt{scipy} \citep{scipy}, \texttt{ipython} \citep{ipython}, and NASA's Astrophysics Data System.
\bibliographystyle{mnras}
\section{Introduction}
Understanding the topology of moduli spaces is key to grasping how the parametrized objects behave in families. Now, moduli spaces often come in sequences, such as the moduli spaces $\mathcal{M}_{g}$ of Riemann surfaces of genus~$g$. In some cases, such sequences satisfy \emph{homological stability}. Indeed, by \cite{MR786348}, the homology groups $H_{*}(\mathcal{M}_{g}; \Q)$ are independent of $g$ in a range of dimensions growing with $g$. The stable rational cohomology of $\mathcal{M}_{g}$ is the subject of \emph{Mumford's conjecture}, proved in~\cite{MR2335797}. In recent years, the study of (co-)homological stability phenomena has been of great interest in algebraic and geometric topology as well as algebraic geometry. Classical results include stabilization for the group homology of the sequences of symmetric groups $\mathfrak{S}_{n}$ (\cite{MR0112134}), general linear groups $\GL_{n}$ (\cite{Maazen}, \cite{MR586429}), and Artin braid groups $\Br_{n}$ (\cite{MR0274462}). The theorem for braid groups builds a bridge to moduli spaces: $\Br_{n}$ is classified by the unordered configuration space $\Conf_{n}$, which parametrizes subsets of size $n$ of a disk $D$. Hence, the sequence $\{\Conf_{n}\}$ is homologically stable. In fact, several homological stability theorems are concerned with sequences of (classifying spaces of) groups. There is by now a standard approach (cf.~\cite{MR2736166}) to the proof of such results which requires a highly connected simplicial complex with a nicely behaved group action in order to study the associated spectral sequence. Hurwitz spaces as moduli spaces of branched covers of $\C$ appeared in the second half of the 19th century in the work of Hurwitz (\cite{MR1510692}). Their properties helped prove the connectivity of $\mathcal{M}_{g}$ in \cite{MR0245574}, and they play an important role in arithmetic applications such as the Regular Inverse Galois Problem (cf.~\cite{MR1119950}). In this paper, we study the topology of Hurwitz spaces with respect to homological stabilization. It is worth mentioning the proximity of these spaces to both moduli spaces of Riemann surfaces and configuration spaces: The total space of a branched covering of $\C$ is a Riemann surface, whereas the branch locus defines an element of a configuration space. Having the homological stability theorems for both $\mathcal{M}_{g}$ and $\Conf_{n}$ in mind, it seems worthwhile to study Hurwitz spaces in this direction.
\subsection*{Braids and configurations}
Let $n\in\N$. By $\Br_{n}$, we denote the classical \emph{(Artin) braid group}, generated by $\sigma_{1},\ldots, \sigma_{n-1}$, subject to the relations
\begin{equation*}\label{braidrel}
\begin{alignedat}{2} \sigma_i \sigma_{i+1} \sigma_i &= \sigma_{i+1}\sigma_i\sigma_{i+1},\:\:\:\: &&1\leq i \leq n-2, \\ \sigma_i\sigma_j &= \sigma_j\sigma_i, &&|i-j|\geq 2, \end{alignedat}
\end{equation*}
cf.~\cite{MR3069440}. The \emph{pure braid group} $\PBr_{n} \subset \Br_{n}$ is the kernel of the surjection $\Br_{n} \to \mathfrak{S}_{n}$ which maps $\sigma_{i}$ to the transposition $(i, i+1)$. If $\underline\zeta = (\zeta_{1}, \ldots, \zeta_{t})$ is a partition of~$n$, the \emph{colored braid group with coloring $\underline\zeta$} is defined as the kernel of the map $\Br_{n} \to \mathfrak{S}_{n}/ \mathfrak{S}_{\underline\zeta}$, with $\mathfrak{S}_{\underline\zeta} \cong \mathfrak{S}_{\zeta_{1}} \times \ldots \times \mathfrak{S}_{\zeta_{t}}$.
For a presentation of these groups, cf.~\cite{MR1465028} and \cite{MR2607077}. By \cite{MR0141126}, the \emph{(unordered) configuration space} $\Conf_{n}$ of $n$ points in (the interior of) a two-dimensional closed disk $D$ is of type $K(\Br_{n}, 1)$. Associated to the inclusion $\PBr_{n} \subset \Br_{\underline\zeta} \subset \Br_{n}$ of subgroups, there is a sequence of covering space maps $$ \PConf_{n} \to \Conf_{\underline\zeta} \to \Conf_{n}, $$ between aspherical spaces, where $\PConf_{n}$ is the \emph{ordered configuration space} of $n$ points in $D$. The space $\Conf_{\underline\zeta} = \PConf_{n}/\mathfrak{S}_{\underline\zeta}$ is called the \emph{colored configuration space} of $n$ points in $D$ with coloring $\underline\zeta$. By \cite{MR0274462}, for any $p\geq 0$, we have $$H_{p}(\Conf_{n}; \Z) \cong H_{p}(\Conf_{n+1};\Z)$$ for $n\geq 2p-2$. If $\XI \in\N^{t}$ and $n\cdot\XI = (n\xi_{1}, \ldots, n\xi_{t})$, it follows from \cite{1312.6327} that for any $p\geq 0$, \begin{equation}\label{tran} H_{p}(\Conf_{n\cdot\XI};\Z) \cong H_{p}(\Conf_{(n+1)\cdot\XI}; \Z) \end{equation} for $n\geq \frac{2p}{\min\XI}$, where $\min\XI$ denotes the smallest entry of $\XI$. Notable homological stability results for configuration spaces of surfaces include \cite{MR0358766}, \cite{MR533892}, \cite{MR2909770}, and \cite{MR3032101}, among others.
\subsection*{Homological stability for Hurwitz spaces}
Let $n\in\N$. Furthermore, let $G$ be a finite group, $c = (c_{1}, \ldots, c_{t})$ a tuple of $t$ distinct non-trivial conjugacy classes in $G$, and $\XI = (\xi_{1}, \ldots, \xi_{t})\in\N^{t}$ a partition of $\xi\in\N$. We replace $\C$ by a closed two-dimensional disk~$D$ and consider \emph{marked $n\cdot\XI$-branched $G$-covers of $D$}: We prescribe the covers' \emph{shape vectors} $n\cdot\XI$. By this, we mean that for $i=1,\ldots,t$, exactly $n\xi_{i}$ local monodromies around the branch points must lie in $c_{i}$. We refer to~Section~\ref{hurwitz-spaces} for a more thorough introduction to this kind of branched cover. We denote the space of such covers by $\Hur_{G,n\cdot\XI}^{c}$. This Hurwitz space is a covering space of $\BBr_{n\cdot\XI} \cong \Conf_{n\cdot\XI}$ with fiber $\cc^{n} = (c_{1}^{\xi_{1}} \times \ldots \times c_{t}^{\xi_{t}})^{n}$, thus $$ \Hur_{G, n\cdot\XI}^{c} = \EBr_{n\cdot\XI} \times_{\Br_{n\cdot\XI}} \cc^{n}, $$ up to homotopy, where the $\Br_{n\cdot\XI}$-action on $\cc^{n}$ is given by the restriction of the full \emph{Hurwitz action} of $\Br_{n\xi}$ on $G^{n\xi}$ described in~(\ref{hurwitz-action}). The prior homological stability result for Hurwitz spaces deals with the case where $c \subset G$ is a single conjugacy class and $\XI = 1 \in\N$. A conjugacy class $c \subset G$ is called \emph{non-splitting} if $c$ generates $G$ and for all subgroups $H\subset G$, $c\cap H$ is either empty or a conjugacy class in $H$. \begin{theorem*}[\textsc{Ellenberg--Venkatesh--Westerland}, \cite{0912.0325}] Let $c \subset G$ be a non-splitting conjugacy class. Let $A$ be a field of characteristic zero or prime to the order of $G$. Then there are positive constants $a$, $b$, $d$ such that for all $p\geq 0$, $$ H_{p}(\Hur_{G,n}^{c}; A) \cong H_{p}(\Hur_{G,n+d}^{c};A) $$ for $n > ap+b$. \end{theorem*} In Section~\ref{homstabhurwitz}, we follow the ideas of Sections~4 through~6 of \cite{0912.0325}.
The main technical complication in comparison to the prior result is the fact that the colored braid group action on the set of $q$-simplices of the \emph{colored plant complexes} we introduce in Section~\ref{plants} is in general not transitive. In Section~\ref{hurwitz-spaces}, we explain why the $A$-module $$ R = \bigoplus_{n\geq 0} H_{0}(\Hur_{G,n\cdot\XI}^{c};A) $$ has the structure of a graded ring, where the grading is in the $n$-variable. For a central homogeneous element $U \in R$, we define $D_{R}(U) = \max\{\deg R/UR, \deg R[U]\}$, where $R[U]$ is the $U$-torsion in $R$. Our main theorem is proved in Section~\ref{homstabhurwitz}: \begin{theorem-main}\label{the-theorem} Suppose there is a central homogeneous element $U\in R$ of positive degree such that $D_{R}(U)$ is positive and finite. Then, for any $p\geq 0$, multiplication by~$U$ induces an isomorphism $$ H_{p}(\Hur_{G,n\cdot \XI}^{c}; A) \overset{\sim}{\to} H_{p}(\Hur_{G,(n+\deg U)\cdot \XI}^{c}; A) $$ whenever $n > (8 D_{R}(U) + \deg U)p + 7 D_{R}(U) + \deg U$. \end{theorem-main} Our theorem generalizes the prior theorem to the case of multiple conjugacy classes. Indeed, for $c$ a single non-splitting conjugacy class, \cite[Lemma~3.5]{0912.0325} shows that the condition of Theorem~\ref{the-theorem} is satisfied. We say that $G$ is \emph{invariably generated} by $c$ if for all choices of elements $g_{i} \in c_{i}$, $i = 1, \ldots, t$, the group generated by $g_{1}, \ldots, g_{t}$ is equal to $G$. We denote by $\partial U$ the product of the entries of a vector $U \in \cc^{d}$. Note that such a vector may be identified with a homogeneous element of $R$, cf.~Remark~\ref{comb-descr}. Applying Theorem~\ref{the-theorem}, a result from \cite{1212.0923}, and (\ref{tran}), we are able to deduce a concrete homological stability statement in the case where $c$ invariably generates $G$. \begin{theorem-main}[Theorem~\ref{thm-connected}] Assume $c$ invariably generates $G$. Then, for any $U \in \cc^{d}$ with $\partial U = \id$ and any $p \geq 0$, there are isomorphisms \begin{align*} H_{p}(\Hur_{G, n\cdot\XI}^{c}; \Z ) &\cong H_{p}(\Hur_{G, (n+d)\cdot\XI}^{c}; \Z )\\ H_{p}(\Hur_{G, n\cdot\XI}^{c}; \Q ) &\cong H_{p}(\Hur_{G, (n+1)\cdot\XI}^{c}; \Q ) \end{align*} for $n> (8D_{R}(U) + d)p+7D_{R}(U) + d$, and a constant $b\in\N$ such that $$ H_{p}(\Hur_{G, n\cdot\XI}^{c};\Q) \cong H_{p}( \Conf_{n\cdot\XI}; \Q ) \otimes_{\Q} \Q^{b} $$ in the same range. \end{theorem-main}
\subsection*{Simplicial complexes in homological stability proofs}
In the study of homological stability, simplicial complexes are ubiquitous. Given a sequence of groups $\{G_{n}\}$ and highly connected simplicial complexes $\mathcal{O}^{n}$ for all $n$ such that $G_{n}$ acts transitively on the set of $q$-simplices of $\mathcal{O}^{n}$ for all $q$ and with stabilizers isomorphic to $G_{n-q-1}$, the spectral sequence associated to the semi-simplicial space $\EG_{n} \times_{G_{n}} \mathcal{O}^{n}$ yields a description of the homology of $\BG_{n}$ in terms of the homology of spaces $\BG_{m}$, for $m<n$. This makes inductive arguments possible. The \emph{ordered arc complex} (cf.~\cite{MR3135444}) turns out to have the right properties for mapping class groups of surfaces (leading to the homological stability theorem for $\mathcal{M}_{g}$). For the Artin braid group, the \emph{arc complex} (though not used in the original article \cite{MR0274462}) is a suitable choice.
This complex (which is contractible by \cite{Damiolini}) has also been employed in the homological stability proof in \cite{0912.0325}. We run into several problems when examining homological stability for Hurwitz spaces: First, the Hurwitz spaces we consider are usually disconnected. This can be fixed by using the fact that they are finite covers of $K(G,1)$ spaces, where $G$ is a colored braid group. Second, there is no highly connected simplicial complex at hand which admits a well-behaved colored braid group action. For this purpose, we define and investigate \emph{plant complexes} in Sections~\ref{plants} to~\ref{combinatorics}. Third, the group action on these complexes is generally not transitive. This last point makes a more extensive homological analysis in Section~\ref{homstabhurwitz} necessary. The definition of plant complexes generalizes both the arc complex and the \emph{fern complex} from~\cite{1410.0923}, hence the name. In Section~\ref{combinatorics}, we focus on a specific class of \emph{colored} plant complexes in order to obtain the following result which is essential to our homological stability proof: \begin{theorem-main}[Theorem~\ref{delta-conn}, Lemma~\ref{orbits}, Lemma~\ref{stabilizers}]\label{thm-5} For $n\in\N$ and $\XI\in\N^{t}$, there exists an $(n-1)$-dimensional and at least $\left( \lfloor\frac{n}{2}\rfloor -2\right)$-connected simplicial complex which admits a generally non-transitive action by the colored braid group $\Br_{n\cdot\XI}$. The stabilizer of a $q$-simplex under this action is isomorphic to $\Br_{(n-q-1)\cdot\XI}$. \end{theorem-main}
\subsection*{Acknowledgements}
This paper contains the central result of my 2016 Ph.D.~thesis. I would particularly like to thank my advisor Michael L\"onne for all the inspiring discussions. Furthermore, I am thankful to Craig Westerland for his supportive and helpful answers to my questions, and to Matthias Zach for numerous mathematical dialogues. I am grateful for the excellent mathematical environment and the pleasant colleagues offered to me by the Institute of Algebraic Geometry at Leibniz University Hannover over the last three years.
\section{Plants and plant complexes}\label{plants}
Let $S$ be a connected surface with non-empty boundary and $\DEL = (\delta_1, \ldots, \delta_t)\in\N^{t}$ a partition of $\delta = \sum_{i=1}^t \delta_i$. Let furthermore $\Delta$ be a set of $\delta$ points in the interior of $S$, partitioned as $\Delta = \Delta_1 \sqcup \ldots \sqcup \Delta_t$, where $|\Delta_{i}| = \delta_{i}$ for all $i=1,\ldots, t$. Finally, let $*$ be a fixed point in $\partial S$. An \emph{arc} is a smooth embedding $\gamma\colon I \to S$ with $\gamma(0) = *$ and $\gamma(1) \in \Delta$, meeting the boundary transversally, and with interior entirely in $S\setminus(\partial S\cup \Delta)$. \begin{definition}\label{def-plant} Let ${\XI} = (\xi_1, \ldots, \xi_t)\in \N^{t}$ and $\xi = \sum_{i=1}^{t}\xi_i $. \begin{enumerate}[(i)] \item A $\XI$\emph{-plant} in $(S,\Delta)$ is an unordered $\xi$-tuple of arcs in $S$ which only meet at~$*$, where for some permutation $\sigma\in \mathfrak{S}_{t}$, exactly $\xi_i$ arcs end at points of $\Delta_{\sigma(i)}$, for $i=1,\ldots,t$. The tuple $\XI$ is called the \emph{pattern}. \item A \emph{colored $\XI$-plant} in $(S,\Delta)$ is a $\XI$-plant in $(S,\Delta)$ with the requirement that for $i=1, \ldots, t$, exactly $\xi_{i}$ arcs end at points of $\Delta_{i}$.
\item Two $\XI$-plants $v, w$ in $(S,\Delta)$ are called \emph{equivalent} if there is an isotopy of $S$ fixing $\partial S \cup\Delta$ pointwise that transforms one plant into the other. \item For any plant $u$, we write $u^\circ = u\setminus (\{*\} \cup \Delta)$ for its \emph{interior}. \item We say that two plants (or arcs) $v$ and $w$ (not necessarily of the same pattern) have $s$ \emph{points of intersection} if $s$ is the minimal number such that there are plants $v'$ and $w'$ equivalent to $v$ and $w$, respectively, such that $v'^{\circ}$ and $w'^{\circ}$ share $s$ points in $S\setminus(\Delta\cup\partial S)$. We write $v.w=s$. For $v.w=0$, we call $v$ and $w$ \emph{disjoint}. \end{enumerate} \end{definition}
\begin{figure}
\centering
\captionsetup[subfigure]{labelformat=empty}
\begin{subfigure}[b]{0.31\linewidth} \centering \includegraphics[scale=0.14]{plant1.pdf} \caption{$\DEL = \XI = (1,1,1)$} \end{subfigure}
\begin{subfigure}[b]{0.31\linewidth} \centering \includegraphics[scale=0.17]{plant2.pdf} \caption{$\DEL = (3,2,1)$, $\XI = (0,2,1)$} \end{subfigure}
\begin{subfigure}[b]{0.31\linewidth} \centering \includegraphics[scale=0.17]{plant3.pdf} \caption{$\DEL = (3,2,1)$, $\XI = (0,2,1)$} \end{subfigure}
\caption{Examples of $\XI$-plants in a disk.}
\label{first-plants}
\end{figure}
First examples of plants in a two-dimensional closed disk $D$ can be seen in Figure~\ref{first-plants}. Given $\DEL$ and $\XI$ as in the captions, the left and right plants are colored. If $\XI$ is changed to $(2,0,1)$, the middle plant is colored as well. \begin{lemma}\label{intersection_lemma} Let $v = (a_1, \ldots, a_\zeta)$ and $w = (b_1, \ldots, b_\xi)$ be plants in $(S, \Delta)$ with arbitrary patterns. The product $v.w$ is finite and arcwise distributive, i.e., we have $v.w = \sum_{i=1}^\zeta\sum_{j=1}^\xi a_i.b_j.$ \end{lemma} \begin{proof} For the first part, it suffices to show that generically, two arcs $a_{1}$, $b_{1}$ meet in finitely many points. By the transversality theorem (cf.~\cite{MR0061823}), the space of smooth embeddings $b_{1}\colon I \to S$ which are transversal to $a_{1}$ is dense in the space of all smooth embeddings. Now, $a_{1}\colon I \to S$ and $b_{1}\colon I \to S$ being transversal implies that $b_{1}^{-1}(a_{1}(I)) \subset I$ is a $0$-dimensional submanifold. Such a submanifold necessarily consists of only finitely many points. The inequality ``$\geq$'' is clear by definition of the products $a_i.b_j$. For the other inequality, let $v$, $w$ be in minimal position, i.e., $v.w = |v^{\circ} \cap w^{\circ}|$ and all intersections are transversal. Assume $v.w > \sum_{i=1}^\zeta\sum_{j=1}^\xi a_i.b_j$. By assumption, there exist indices $p$, $q$ with \begin{equation}\label{gnull} | a^{\circ}_p \cap {b^{\circ}_q}| > a_p.b_q. \end{equation} Therefore, there must be segments of $a_{p}$ and $b_{q}$ whose union is a continuous loop. Choose $k$ and $l$ among all such $p$, $q$ so that the corresponding loop has no intersection with further arcs of $v$ or $w$. Then there is a closed disk $D_0 \subset S$ bounded by segments of $a_k$ and $ b_l$, containing no other arc segments of $v$ or $w$.
\begin{figure}
\centering
\captionsetup[subfigure]{labelformat=empty}
\begin{subfigure}[b]{0.28\linewidth} \centering \def\svgwidth{\columnwidth} \input{intersection1.pdf_tex} \end{subfigure}
\begin{minipage}[b][3cm][c]{0.05\linewidth} $\rightsquigarrow$ \vspace{2cm} \end{minipage}
\begin{subfigure}[b]{0.245\linewidth} \centering \def\svgwidth{\columnwidth} \input{intersection2.pdf_tex} \end{subfigure}
\caption{The isotopy $H$ from the proof of Lemma~\ref{intersection_lemma}.}
\end{figure}
After possible slight smooth deformations of $a_{k}$ or $ b_l$, we may assume that neither~$*$ nor~$\Delta$ share points with $D_0$. These deformed arcs $a'_{k}$, $b'_{l}$ can be chosen such that for the plants $v'$, $w'$ defined by replacing $a_{k}$ by $a'_{k}$ and $b_{l}$ by $b'_{l}$, respectively, we have \begin{equation}\label{cond3} |v'^{\circ} \cap w'^{\circ}| \leq |v^{\circ} \cap w^{\circ}| + 1. \end{equation} Indeed, if both $*$ and~$\Delta$ intersected $D_0$, there would be no interior point of intersection between $a_k$ and $ b_l$, contradicting~(\ref{gnull}). Denote by $D_{1}$ the resulting closed disk bounded by segments of $a'_{k}$ and $b'_{l}$. We now define an isotopy $H\colon I\times S \to S$ that satisfies \begin{align} \label{cond1} |H(\{1\} \times {a'_{k}}^{\circ}) \cap {{b_{l}'}^{\circ}}| &<| {a_{k}'}^\circ \cap {b_{l}'}^\circ| , \\ \label{cond2} |H(\{1\} \times {v'}^{\circ}) \cap {w'}^{\circ}| &\leq | {v'}^{\circ} \cap {w'}^{\circ}| - 2. \end{align} Let $\epsilon>0$ and let $D_1^\epsilon$ be an open $\epsilon$-neighborhood of $D_1$, where we choose $\epsilon$ such that there are no segments of arcs in $D_1^\epsilon$ other than $a'_{k}$ and $b'_{l}$, and such that $D_1^\epsilon$ lies entirely in the interior of $S$. Now, the arc segment $a'_{k} \cap \overline{D_1^\epsilon}$ is isotopic (fixing endpoints) to an arc segment in $\overline{D_{1}^{\epsilon}}$ which does not intersect $b'_{l} \cap \overline{D_1^\epsilon}$. By the isotopy extension theorem (cf.~\cite{MR0123338}), this isotopy may be extended to an ambient isotopy $h\colon I \times \overline{D_{1}^{\epsilon}} \to \overline{D_1^\epsilon}$ which fixes the boundary circle $\partial\overline{D_1^\epsilon}$ pointwise. We extend $h$ by the identity on $S \setminus D_1^\epsilon$ and denote the resulting isotopy of $S$ by $H$. Now, (\ref{cond1}) is satisfied since we push $D_1$ across $ b'_l$ and thus remove two intersections. As the potential slight deformation of $a'_k$ and $ b'_l$ creates at most one extra intersection, the application of $H$ removes at least one intersection point. Then, (\ref{cond2}) follows from the choice of $\epsilon$. From (\ref{cond3}) and (\ref{cond2}), we obtain $|H(\{1\} \times {v'}^{\circ}) \cap {w'}^{\circ}| < |v^{\circ} \cap w^{\circ}| = v.w$, which contradicts the definition of the intersection number. The assertion follows. \end{proof} \begin{definition} With notation as above, we define: \begin{enumerate}[(i)] \item The \emph{full $\XI$-plant complex} $\FP(S)$ is the simplicial complex with isotopy classes of $\XI$-plants in $(S,\Delta)$ as vertices. A $q$-simplex in $\FP(S)$ is a set of $q+1$ isotopy classes of $\XI$-plants on $(S,\Delta)$ which can be embedded with disjoint interiors. \item The \emph{$\XI$-plant complex} $\mathcal{P}_{\DEL}^{\XI}(S)$ is the subcomplex of $\FP(S)$ which contains the simplices $\alpha\in\FP(S)$ such that no two plants of $\alpha$ share a point in $\Delta$.
\item The \emph{full colored $\XI$-plant complex} $\FO(S) \subset \FP(S)$ and the \emph{colored $\XI$-plant complex} $\cO(S) \subset \cP(S)$ are the subcomplexes defined by the restriction that only colored plants are allowed as vertices. \end{enumerate} \end{definition} \begin{figure} \centering \captionsetup[subfigure]{labelformat=empty} \begin{subfigure}[b]{0.49\linewidth} \centering \includegraphics[scale=0.2]{plant4.pdf} \caption{$2$-simplex in $\mathrm{F}\mathcal{P}_{(3,3)}^{(1,1)} (D)$} \end{subfigure} \begin{subfigure}[b]{0.49\linewidth} \centering \includegraphics[scale=0.2]{plant5.pdf} \caption{$1$-simplex in $\mathcal{P}_{(2,2,2)}^{(1,1,0)} (D)$} \end{subfigure} \caption{Representatives of simplices in plant complexes (1) -- colors indicate the type of endpoints, line styles distinguish between different plants.} \end{figure} \begin{remark} By definition, we have the following diagram: $$ \xymatrix{ \cO(S) \ar@{}[d]|-*[@]{\subset} \ar@{}[r]|-*[@]{\subset} &\cP(S) \ar@{}[d]|-*[@]{\subset}\\ \FO(S) \ar@{}[r]|-*[@]{\subset} &\FP(S)} $$ As there are only finitely many isotopy classes of arcs in $(S,\Delta)$, all plant complexes have a finite number of simplices. \end{remark} We use two partial orderings of $\N_{0}^t$: \begin{itemize} \item $(x_1, \ldots, x_t) \preccurlyeq (y_1, \ldots, y_t)$ if there is a permutation $\sigma\in \mathfrak{S}_{t}$ such that for all $i=1,\ldots, t$, we have $x_i \leq y_{\sigma{(i)}}$, and \item $(x_1, \ldots, x_t) \leq (y_1, \ldots, y_t)$ if $x_i \leq y_i$ for all $i=1,\ldots, t$. \end{itemize} Immediately from the definitions, we obtain: \begin{lemma}\label{nonempty} Both $\FP(S)$ and $\cP(S)$ are non-empty if and only if $\XI\preccurlyeq\DEL$, and $\FO(S)$ and $\cO(S)$ are non-empty if and only if $\XI \leq \DEL$. \end{lemma} \begin{remark}\label{plant-order} There is a natural total order on the vertices of simplices in $\cP(S)$ ($\cO(S)$) which induces the structure of an ordered simplicial complex: By imposing a Riemannian structure on $S$, we may assume that all arcs are parametrized by arc length. Now for plants $v$, $w$ of the same pattern, we write $v<w$ if and only if in a non-intersecting realization of $v$ and $w$, there is an arc in $v$ whose inward pointing unit tangent vector at $*$ occurs in clockwise order before any inward pointing unit tangent vector of an arc in $w$. \end{remark} \begin{figure} \centering \captionsetup[subfigure]{labelformat=empty} \begin{subfigure}[b]{0.49\linewidth} \centering \includegraphics[scale=0.2]{plant6.pdf} \caption{$1$-simplex in $\mathcal{P}_{(3,3)}^{(2,1)} (D)$} \end{subfigure} \begin{subfigure}[b]{0.49\linewidth} \centering \includegraphics[scale=0.2]{plant7.pdf} \caption{$1$-simplex in $\mathcal{O}_{(4,4)}^{(2,1)} (D)$} \end{subfigure} \caption{Representatives of simplices in plant complexes (2).} \end{figure} \section{Connectivity analysis} In this section, we always assume that the complexes are non-empty, so we have $\XI\preccurlyeq\DEL$ for plant complexes, and $\XI\leq\DEL$ for colored plant complexes. Similar connectivity proofs can be found in~\cite[Sect.~4]{MR3135444} and~\cite[Sect.~2]{1410.0923}. We defined the simplicial complexes abstractly. In order to talk about the \emph{connectivity} of a complex $\mathcal{O}$, we need to consider its geometric realization, which we denote by $|\mathcal{O}|$. \begin{proposition}\label{contractible} Both $|\FP(S)|$ and $|\FO(S)|$ are contractible. 
\end{proposition} \begin{proof} In the proof, we construct a flow similar to the \emph{Hatcher flow} introduced in~\cite{MR1123262}. The proof is carried out for $|\FP(S)|$. It is fully analogous for $|\FO(S)|$. In the following, we switch freely between plants as subsets of $S$, plants as vertices of plant complexes, and their respective isotopy classes when no confusion can arise. We fix a plant $v$ in $\FP(S)$ with arcs $a_1, \ldots, a_\xi$ in a fixed order. Our goal is to show that $|\FP(S)|$ deformation retracts onto $|\Star(v)|$, which is contractible. We order the interior points of $v$ in the following way: \begin{itemize} \item $x \prec y$ if $x\in a_i$, $y\in a_j$ for $i<j$, \item if $x,y \in a_i$, $x\prec y$ if $x$ is closer to $*$ along $a_i$ than $y$. \end{itemize} Let $\alpha = \langle w_0, \ldots, w_p \rangle$ be a $p$-simplex of $\FP(S)$ with representative plants~$w_{i}$ chosen such that the number of intersections with $v$ is minimal. The ordering of the interior points of $v$ induces an order on the set $(w_{0}\cup\ldots\cup w_{p})\cap v^\circ$ of intersection points, which we denote by $g_1, \ldots, g_k$. At $g_i$, the plant $w_{j_i}$ intersects the arc $a_{l_i}$. In an $\epsilon$-neighborhood of $g_{1}$, we erase the segment of the arc of $w_{j_{1}}$ which contains~$g_{1}$ and join the two loose ends to $*$ by straight lines. We denote by $C(\alpha)$ the plant that is obtained from $w_{j_1}$ by replacing the arc containing $g_1$ with a smooth approximation of whichever of the two newly created paths is an arc. Because of the order we put on the intersection points, $C(\alpha)$ is a plant which is disjoint from the plants in $\alpha = \alpha^{(1)}$. In the following, we misuse notation by allowing vertices to occur more than once in a simplex. In this sense, by $\langle c_{0}, \ldots, c_{p}\rangle$ we denote the simplex with vertices $\{c_{0}, \ldots, c_{p}\}$, which might be of dimension smaller than $p$. We now define a finite sequence of simplices inductively. We start with $i=1$, the first intersection point $g_{i} = g_{1}$, and the simplex $$r_1(\alpha) = \langle w_0, \ldots, w_p, C(\alpha)\rangle = \langle \alpha^{(1)}, C(\alpha^{(1)})\rangle,$$ and execute the procedure below. In every step, we choose representative plants for the vertices of the simplices such that the number of intersections with $v$ is minimal. \begin{enumerate}[(1)] \item Increase $i$ by one. Stop if $i=k+1$, otherwise go to the next step. \item If the intersection at $g_{i}$ is not yet resolved in $\alpha^{(i)}$, replace the plant of $\alpha^{(i)}$ that contains~$g_i$ with $C(\alpha^{(i)})$, denote the resulting $p$-simplex by $\alpha^{(i+1)}$, and set $r_{i}(\alpha) = \langle \alpha^{(i)} , C(\alpha^{(i)})\rangle.$ Else, set $\alpha^{(i+1)} = \alpha^{(i)}$ and $r_{i}(\alpha) = \langle \alpha^{(i)} , w_{j_{i}}'\rangle$, where $w_{j_{i}}'$ is the $j_{i}$-th plant of $\alpha^{(i)}$. \item Go to step 1. \end{enumerate} By the above remarks about disjointness, we produce simplices $r_{i}(\alpha)$ of dimension at most $p+1$ at each step. The $p$-simplex $\alpha^{(k+1)}$ is in the star of $v$, since all of its plants are disjoint from $v$. By construction, $\alpha^{(k+1)}$ is a face of $r_{k}(\alpha)$. Now, we may use the sequence $r_1(\alpha), \ldots, r_{k}(\alpha)$ to define a deformation retraction of $|\FP(S)|$ onto $|\Star(v)|$. Using barycentric coordinates\footnote{Here, we use the same order on the vertices of $\FP(S)$ as in the definition of the $r_{i}(\alpha)$.
If a vertex~$c_{j}$ appears more than once in $r_{i}(\alpha)$, adding up the corresponding entries of a given tuple $T$ yields the barycentric coordinate of the point we refer to.}, any point on the realization of the $p$-simplex $\alpha = \langle w_{0}, \ldots, w_{p} \rangle$ can be identified with a tuple $T = (t_0, \ldots, t_p)$, where $t_j\geq 0$ and $\sum_{i=0}^{p} t_i = 1$. For $i=0,\ldots, p$, let $k_i = |w_i^\circ \cap v^\circ| = w_i.v$, where the second equality is due to the choice of the~$w_{i}$. Given a tuple $T$ and $i\in\{1, \ldots, k\}$, we assign to~$g_i$ the weight $\omega_i(T) = t_{j_i}/k_{j_i}$ if $k_{j_{i}}>0$, and $\omega_{i}(T) = 0$ otherwise, such that $\sum_{j=1}^{k} \omega_j(T) = 1$. For fixed $\alpha$ and $T$, we define $f_{\alpha}^{T}\colon I \to |\FP(S)|$ by $$ f_{\alpha}^{T}(s) = [r_i(\alpha), (x_0, \ldots, x_{p+1})] $$ for $\sum_{j=1}^{i-1} \omega_j(T)\leq s \leq \sum_{j=1}^{i} \omega_j(T)$, where $i \in\{ 1, \ldots, k\}$. Here, we set $x_l = t_l$ for all $l$, except for the pair $$ (x_{j_i}, x_{p+1}) = (t_{j_i} - k_{j_i} (s-\sum_{j=1}^{i-1}\omega_j), k_{j_i}(s-\sum_{j=1}^{i-1}\omega_j )). $$ The map $f^{T}_{\alpha}$ is well-defined: \begin{align*} f^{T}_{\alpha}\left(\sum_{j=1}^{i} \omega_j\right) &= [r_{i+1}(\alpha), (t_0, \ldots,t_p, 0)] = [r_{i}(\alpha), (t_0, \ldots, t_{j_{i} -1}, 0, t_{j_{i} + 1}, \ldots, t_p, t_{j_{i}})]. \end{align*} By construction, $f_{\alpha}^{T}(1)$ lies in $\alpha^{(k+1)}\in \Star(v)$. We may now patch the maps $f^{T}_{\alpha}$ for all simplices $\alpha$ and coordinates $T$ with only non-zero entries in order to obtain a global homotopy $f\colon I \times |\FP(S)| \to |\FP(S)|$ whose time-one map has image in $|\Star(v)|$. We still need to prove that $f$ is continuous. By \cite[Thm.~3.1.15]{MR0210112}, we only have to show continuity for the restriction of $f$ to the geometric realization of any simplex. In the interior of the realization of $\alpha = \langle w_{0}, \ldots, w_{p}\rangle$, continuity follows from the definition of the $\omega_i(T)$. It remains to show that we may go to a subsimplex of $\alpha$ continuously. That is, for $\beta=\langle w_0, \ldots, w_{p-1}\rangle$, we must show that for all $s\in I$, $ f_{\alpha}^{(t_0, \ldots, t_{p-1},0)}(s) = f_{\beta}^{(t_0, \ldots, t_{p-1})}(s) $. This follows from Lemma~\ref{intersection_lemma}: The number of intersections of $w_p$ and~$v$ does not depend on the simplex $\alpha$; in other words, we have $v.\beta = v.\alpha - v.w_p$. Thus, going to $\beta$ corresponds exactly to $t_{p}$ and any corresponding weight going to zero. We can therefore pass from $\alpha$ to any facet $\beta$ of $\alpha$ continuously. For an arbitrary subsimplex, the claim follows inductively. \end{proof} \begin{theorem}\label{delta-conn} For the connectivity of plant complexes, we have: \begin{enumerate}[(i)] \item If $\min\XI>0$, $\conn |\cO(S)| \geq \min_{i=1, \ldots, t} \left\lfloor\frac{\delta_{i}}{2 \xi_{i}} \right\rfloor - 2$. \label{colored-conn} \item If $\min\XI>0$, $\conn |\cP(S)|\geq \left\lfloor \frac{\min\DEL}{2 \max \XI} \right\rfloor -2$. \label{plant-conn} \item Let $r>0$, $t\geq m> 0$. If $\DEL = (r, \ldots, r)\in\N^{t}$ and $\XI = (\overbrace{r, \ldots, r}^{m \text{ times}}, 0, \ldots, 0)\in\N_{0}^{t}$, $ \conn|\cP(S)| \geq \left\lfloor \frac{t}{2m-1}\right\rfloor -2$.\label{multi-conn} \end{enumerate} \end{theorem} \begin{remark} For $m=1$, the complex from Theorem~\ref{delta-conn}(\ref{multi-conn}) is the \emph{fern complex} from~\cite{1410.0923}. By the same article, the fern complex is at least $(t-2)$-connected.
Our connectivity result generalizes this bound to the case $m>1$, which we call the \emph{multifern} case. \end{remark} If $\gamma$ is a collection of arcs in $(S,\Delta)$ which only meet at $*$, we write $S_\gamma$ for the connected space $(S\setminus\gamma)\cup\{*\}$. We can define (colored) plant complexes on $S_{\gamma}$ accordingly. In particular, the arguments from Proposition~\ref{contractible} carry over to $S=S_\gamma$, so spaces of the form $|\FP(S_\gamma)|$ ($|\FO(S_{\gamma})|$) are contractible. \begin{proof}[Proof of Theorem~\ref{delta-conn}] We prove the theorem for a surface with boundary $S$ or a space of the form $S_\gamma$. The proof is performed in detail for part~(\ref{colored-conn}), which is the result needed in subsequent sections. Some remarks on the proofs of the other two parts are included below. \textbf{Claim (\ref{colored-conn})} is proved by induction on $\min\DEL$, with $\XI$ fixed. The claim is vacuous if there is an $i\in\{1, \ldots, t\}$ such that $\delta_{i} < 2\xi_{i}$, and we assume for the proof that $\delta_{i} \geq \xi_{i}$ for all $i$. Let now $k\leq \min_{i=1, \ldots, t} \left\lfloor\frac{\delta_{i}}{2 \xi_{i}} \right\rfloor - 2$, and consider a map $ f\colon S^k \to |\cO(S)|. $ We have to show that $f$ factors through a $(k+1)$-disk. By the contractibility of $|\FO(S)|$, we have a commutative diagram: $$ \xymatrix{ S^k \ar@{^{(}->}[d] \ar[r]^f &|\cO(S)| \ar@{^{(}->}[d]\\ D^{k+1} \ar[r]^{\hat f} &|\FO(S)|} $$ By simplicial approximation, we may assume that all maps are simplicial. That is, they are the geometric realizations of simplicial maps $\mathcal{F} \colon \mathcal{S}^{k}\to \cO(S)$ and $\hat{\mathcal{F}} \colon \mathcal{D}^{k+1}\to \FO(S)$ for some finite PL triangulations $\mathcal{S}^{k}$ and $\mathcal{D}^{k+1}$ of the $k$-sphere and the $(k+1)$-disk, respectively. It suffices to show that $\mathcal{F}$ factors through $\mathcal{D}^{k+1}$. Our goal is to deform $\hat{\mathcal{F}}$ such that its image lies entirely in $\cP(S)$. We call a simplex~$\alpha$ of~$\mathcal{D}^{k+1}$ \emph{bad} if in each plant in $\hat{\mathcal{F}}(\alpha)$, there is at least one arc that shares an endpoint with an arc from another plant in $\hat{\mathcal{F}} (\alpha)$ (note that vertices are good). In particular, a simplex of $\mathcal{D}^{k+1}$ with image in $\cP(S)$ cannot contain any bad subsimplices. Let $\alpha$ be a bad simplex of $\mathcal{D}^{k+1}$ of maximal dimension $p \leq k+1$ among all bad simplices. Now, $\hat{\mathcal{F}}$ restricts to a map $$ \hat{\mathcal{F}}|_{\Link(\alpha)}\colon \Link(\alpha) \to J_\alpha \coloneqq \mathcal{O}_{\DEL'}^{\XI}(S_{\hat{\mathcal{F}}(\alpha)}), $$ where $\DEL'$ is obtained from $\DEL$ by removing the endpoints of the arcs in $\hat{\mathcal{F}}(\alpha)$ from $\Delta$. We still need to argue why the image of $\Link(\alpha)$ lies in $\mathcal{O}_{\DEL'}^{\XI}(S_{\hat{\mathcal{F}}(\alpha)})$: If it did not, there would be a bad simplex $\beta\in\Link(\alpha)$, hence $\alpha*\beta$ would be bad, contradicting the maximality of the dimension of $\alpha$ (note that $\alpha$ and $\beta$ are joinable as $\beta$ is in $\Link(\alpha)$). For all $i = 1, \ldots, t$, any $p$-simplex uses at most $(p+1)\cdot \xi_{i}$ endpoints of $\Delta_i$, so \begin{equation}\label{ineq} \delta_{i}'\geq \delta_{i} - (p+1)\cdot\xi_{i}.
\end{equation} Furthermore, we have $p\leq k+1 \leq \min_{j=1, \ldots, t} \lfloor\frac{\delta_{j}}{2 \xi_{j}} \rfloor - 1$, so we obtain from~(\ref{ineq}) and the assumption $\delta_{i} \geq \xi_{i}$: \begin{align*} \delta'_{i} &\geq \delta_{i} - (p+1)\cdot \xi_{i} \\ &\geq \delta_{i} - \xi_{i} \cdot \min_{j=1, \ldots, t} \left\lfloor\frac{\delta_{j}}{2 \xi_{j}} \right \rfloor\\ & \geq \delta_{i} - \xi_{i} \left\lfloor \frac{\delta_{i}}{2\xi_{i}} \right\rfloor \\ & \geq \xi_{i} \end{align*} Here, the last inequality follows from the fact that for $a\geq b > 0$, the inequality $ a - b\cdot \left\lfloor \frac{a}{2b} \right\rfloor \geq b $ holds. From the assumption $\min\XI>0$, we get $\min\DEL'<\min\DEL$. Therefore, the induction hypothesis is applicable to $J_\alpha = \mathcal{O}_{\DEL'}^{\XI}(S_{\hat{\mathcal{F}}(\alpha)})$: \begin{align*} \conn J_{\alpha} &\geq \min_{i=1, \ldots, t} \left\lfloor\frac{\delta'_{i}}{2\xi_{i}} \right\rfloor - 2 \\ &\geq \min_{i=1, \ldots, t} \left\lfloor\frac{\delta_{i} - (p+1)\cdot \xi_{i}}{2\xi_{i}}-2 \right\rfloor \\ &= \left\lfloor \min_{i=1, \ldots, t}\left( \frac{\delta_{i}}{2 \xi_{i}} \right) -2- \frac{p+1}{2}\right\rfloor\\ &\geq k-p,\end{align*} since $p\geq 1$. The rest is \emph{standard machinery}, cf. also the end of the proof of~\cite[Thm.~4.3]{MR3135444}: By the above connectivity bound for $J_\alpha$, as the link of $\alpha$ is a $(k+1)-p-1 = (k-p)$-sphere, there is a commutative diagram $$ \xymatrix{ \Link(\alpha) \ar[r]^{\hat {\mathcal{F}}|_{\Link(\alpha)}} \ar@{^{(}->}[d] &J_\alpha \ar[r] &\cO(S)\\ K \ar[ru]_{\hat{\mathcal{F}}'}&& } $$ with $K$ a $(k-p+1)$-disk with boundary $\partial K = \Link(\alpha)$. The right map identifies plants on $S_{\hat{\mathcal{F}}(\alpha)}$ with plants on $S$. Now, in the triangulation $\mathcal{D}^{k+1}$, replace the $(k+1)$-disk $\Star(\alpha) = \alpha*\Link(\alpha)$ with the $(k+1)$-disk $\partial\alpha*K$. This works because both $\Star(\alpha)$ and $\partial\alpha*K$ have the same boundary $\partial\alpha*\Link(\alpha)$. On $\partial\alpha*K$, modify $\hat{\mathcal{F}}$ by $$ \hat {\mathcal{F}} * \hat {\mathcal{F}}'\colon \partial\alpha*K \to \FO(S). $$ This is possible since $\hat {\mathcal{F}}'$ agrees with $\hat {\mathcal{F}}$ on $\Link(\alpha) = \partial K$. New simplices in $\partial \alpha*K$ are of the form $\tau = \beta_{1}*\beta_{2}$, where $\beta_{1}$ is a proper face of $\alpha$ and $\beta_{2}$ is mapped to $J_\alpha$. Therefore, if $\tau$ is a bad simplex in $\partial\alpha*K$, then $\tau=\beta_{1}$ since plants of $\hat {\mathcal{F}}'(\beta_{2})$ do not share any endpoints with other plants of $\hat {\mathcal{F}}'(\beta_{2})$ or $\hat {\mathcal{F}}(\beta_{1})$, so they cannot contribute to a bad simplex. But $\beta_{1}$ is a proper face of $\alpha$, so we have decreased the number of top-dimensional bad simplices. By induction on the number of top-dimensional bad simplices, the result follows. The proof of \textbf{claim~(\ref{plant-conn})} is largely analogous to the proof of claim~(\ref{colored-conn}), replacing $\delta_{i}$ by $\min\DEL$ and $\xi_{i}$ by $\max\XI$ where necessary. \textbf{Claim~(\ref{multi-conn})} is proved by induction on $t$, for fixed $m$ and $r$. The multifern complex $\cP(S)$ is always non-empty, so the base case $t=m$ is trivial, as are all cases with $t< 4m-2$. We assume for the induction that $t\geq 2m$ holds. We argue as in part~(\ref{colored-conn}): Let $\alpha$ be a bad simplex of $\mathcal{D}^{k+1}$ of maximal dimension $p$.
An arbitrary $p$-simplex in $\FP(S)$ uses at most $(p+1) m$ different $\Delta_i$. Since $\alpha$ is bad, each of its $p+1$ plants contains an arc sharing an endpoint with an arc of another plant; this yields at least $\lceil \frac{p+1}{2}\rceil$ coincidences among the used $\Delta_i$, so $\alpha$ uses at most $(p+1)m - \lceil \frac{p+1}{2}\rceil = (p+1)(m-1) + \lfloor \frac{p+1}{2}\rfloor$ different $\Delta_i$. Let $\DEL'$ be the remainder of $\DEL$ after removing the endpoints of the arcs of $\alpha$, and $t'$ be the number of positive entries of~$\DEL'$. Recall that, by the definition of multiferns, a multifern $\beta$ having endpoints in $\Delta_i$ uses all of $\Delta_{i}$, so $\Delta_i$ disappears in $S_\beta$. Then, we have \begin{align} t' &\geq t - \left((p+1)(m-1) + \left\lfloor \frac{p+1}{2}\right\rfloor \right) \label{n-eqn}\\ &= t - \left\lfloor (p+1)\left(m-\frac{1}{2}\right) \right\rfloor\nonumber \\ &\geq t - \left\lfloor \left\lfloor\frac{t}{2m-1}\right\rfloor \left(m-\frac{1}{2}\right)\right\rfloor \label{eqn_x} \\ &\geq t - \left\lfloor\frac{t}{2}\right\rfloor\nonumber \\ &\geq m. \label{eqn_y} \end{align} Here, (\ref{eqn_x}) is due to the fact that $p+1 \leq k+2 \leq \left\lfloor \frac{t}{2m-1}\right\rfloor$, and~(\ref{eqn_y}) is true since we demanded that $t\geq 2m$. Consequently, we can apply the induction hypothesis to $J_\alpha$. Using~(\ref{n-eqn}), we obtain \begin{align*} \conn(J_\alpha) &\geq \left\lfloor \frac{t'}{2m-1}\right\rfloor -2 \\ &\geq \left\lfloor \frac{t-(p+1)(m-1) - \left\lfloor \frac{p+1}{2}\right\rfloor }{2m-1} - 2\right\rfloor \\ &\geq \left\lfloor k - \frac{\left\lfloor (p+1)\left(m-\frac{1}{2}\right) \right\rfloor}{2m-1}\right\rfloor \\ &\geq \left\lfloor k - \frac{p+1}{2} \right\rfloor \\ &\geq k-p, \end{align*} as $p\geq 1$ (there are no bad vertices). The rest of the proof can be copied from above (\emph{standard machinery}). \end{proof} \section{Combinatorics of colored plant complexes}\label{combinatorics} In this section, we focus on a specific class of colored plant complexes on a closed disk~$D$. Let $\XI$ be a $t$-tuple of positive integers and $n\in\N$. We consider colored plant complexes of the form $ \plant \coloneqq \mathcal{O}_{n\cdot\XI}^{\XI}(D) $ and write $\plantq$ for the set of $q$-simplices of $\plant$. Because of the specific choice $\DEL = n\cdot\XI$, the dimension of $\plant$ equals $n-1$. After applying a suitable homeomorphism, we may assume that $D$ lies in the complex plane as a disk of radius $1$ centered at $0\in\C$, that we have $* = -\mathrm{i}$, and that the $n\xi$ points of $\Delta$ are all real and arranged from left to right in $n$~\emph{clusters} of $\xi$ points each, where $\xi_{i}$ points in each cluster lie in $\Delta_{i}$, for $i=1, \ldots, t$. Within each of these clusters, we suppose that the points in $\Delta_{i}$ are placed to the left of the points in $\Delta_{j}$, for $1\leq i<j\leq t$. \subsection*{The braid action} We can identify the full braid group $\Br_{n\xi}$ with the mapping class group $\Map(D\setminus \Delta)$, where the standard generator~$\sigma_{i}$ corresponds to a half twist in counterclockwise direction which interchanges the $i$-th and the $(i+1)$-th marked point. The colored braid group $\Br_{n\cdot\XI} \subset \Br_{n\xi}$ may then be identified with the subgroup of mapping classes which leave the given partition of $\Delta$ invariant. This gives a well-defined left action of $\Br_{n\cdot\XI}$ on $\plantq$ by isotopy classes of orientation-preserving diffeomorphisms (or homeomorphisms, which is equivalent) for all $q = 0, \ldots, n-1$. We write $\alpha \mapsto \sigma\cdot\alpha$ for the action of $\sigma\in\Br_{n\cdot\XI}$ on a simplex $\alpha\in\plant$. Let $\alpha\in\plantq$ be a $q$-simplex.
The sorting of the inward-pointing unit tangent vectors of representative arcs of $\alpha$ at $*$ in clockwise order is well-defined, cf.~Remark~\ref{plant-order}. This ordering is invariant under the $\Br_{n\cdot\XI}$-action. We assign an \emph{index}~$i\in\{0, \ldots, q\}$ and a \emph{color}~$j\in\{1, \ldots, t\}$ to each arc in $\alpha$: \begin{itemize} \item An arc is labeled with the index $i$ if it belongs to the $(i+1)$-th plant in $\alpha$, where we use the order on the plants of $\alpha$ described in Remark~\ref{plant-order}. \item An arc is labeled with the color $j$ if its endpoint lies in $\Delta_{j}$. \end{itemize} Consequently, any $q$-simplex $\alpha$ defines a unique sequence \begin{equation}\label{ic-sequence} \omega_{\alpha} = (i_{1}, j_{1}), (i_{2}, j_{2}), \ldots, (i_{(q+1)\cdot\xi}, j_{(q+1)\cdot\xi}), \end{equation} where $i_{k}$ is the index and $j_{k}$ the color of the $k$-th arc of $\alpha$, in clockwise order at $*$. By definition of the mapping class group, this sequence is $\Br_{n\cdot\XI}$-invariant. \begin{figure} \centering \captionsetup[subfigure]{labelformat=empty} \begin{subfigure}[b]{0.49\linewidth} \centering \includegraphics[scale=0.2]{plant8.pdf} \caption{$\omega_{\alpha} = (0,1),(1,2),(0,2),(0,1),(1,1),(1,1) $} \end{subfigure} \begin{subfigure}[b]{0.49\linewidth} \centering \includegraphics[scale=0.2]{plant9.pdf} \caption{$\omega_{\beta} = (0,2),(1,2),(2,1),(1,1),(2,2),(0,1)$} \end{subfigure} \caption{IC-sequences of simplices $\alpha\in\mathcal{O}^{[2,(2,1)]}_{1}$ and $\beta\in\mathcal{O}^{[4,(1,1)]}_{2}$.} \label{fig-sequences} \end{figure} \begin{definition} We call~(\ref{ic-sequence}) the \emph{IC-sequence} (index-color-sequence) of $\alpha\in\plantq$. \end{definition} With the help of IC-sequences, we are able to count the orbits of the $\Br_{n\cdot\XI}$-action: \begin{lemma}\label{orbits} Let $q < n$. The set $\plantq$ decomposes into $$ l_{q}^{\XI} \coloneqq \#\left\{\plantq / \Br_{n\cdot \XI} \right\} = \frac{(\xi(q+1))!}{(q+1)!\cdot(\xi_{1}!\cdot\ldots\cdot\xi_{t}!)^{q+1}} $$ $\Br_{n\cdot\XI}$-orbits. The orbits are in bijective correspondence with the set of occurring IC-sequences for $q$-simplices. \end{lemma} \begin{remark} A priori, the number $l_{q}^{\XI}$ depends not only on $q$ and $\XI$ but also on $n$. As a consequence of the lemma, the actual quantity is independent of $n$ as long as $q<n$; therefore, it makes sense to omit $n$ from the notation. \end{remark} \begin{proof} We start by counting the number of possible IC-sequences. We say a sequence of type~(\ref{ic-sequence}) is \emph{$q$-feasible} if each index $i\in\{0, \ldots, q\}$ is assigned $\xi$ times; for each index, each color $j \in \{1, \ldots, t\}$ is assigned $\xi_{j}$ times; and the index $i+1$ does not appear before the index $i$, for all $i = 0, \ldots, q-1$. Since $D$ is path-connected, a $q$-feasible sequence appears as the IC-sequence of a $q$-simplex if and only if $q< n$, i.e., as long as there are enough endpoints for the arcs. We may now count the number of $q$-feasible sequences: There are $$ {\binom{\xi (q+1)}{\underbrace{\xi, \ldots, \xi}_{(q+1)\text{ times}}}}\frac{1}{(q+1)!} = \frac{(\xi(q+1))!}{(q+1)!\cdot(\xi!)^{q+1}} $$ different partitions of $\xi (q+1)$ arcs into subsets of size $\xi$. Any such partition gives a unique indexing (recall that the first arc with index $i$ appears before the first arc with index $i+1$).
Given an index $i$, the $\xi$ arcs labeled with it can be colored in $ \binom{\xi}{\xi_{1}, \ldots, \xi_{t}} $ different ways. As there are $q+1$ different indices, this yields $$ l_{q}^{\XI} = \frac{(\xi(q+1))!}{(q+1)!\cdot(\xi!)^{q+1}}\cdot \binom{\xi}{\xi_{1}, \ldots, \xi_{t}}^{q+1} = \frac{(\xi(q+1))!}{(q+1)!\cdot(\xi_{1}!\cdot\ldots\cdot\xi_{t}!)^{q+1}} $$ $q$-feasible sequences in total. We have already seen above that the IC-sequence of a $q$-simplex is invariant under the $\Br_{n\cdot\XI}$-action. In the second part of the proof, we will now show that the group $\Br_{n\cdot\XI}$ acts transitively on simplices with coinciding IC-sequences by an argument similar to \cite[Prop.~5.6]{0912.0325}. An alternative proof can be obtained by adapting the methods from the proof of \cite[Prop.~2.2(1)]{MR3135444}. Let $\alpha, \beta$ be two $q$-simplices with the same IC-sequence $\omega$. We choose representative non-intersecting collections of the plants of $\alpha$ and $\beta$ with arcs $a_{1}, \ldots, a_{(q+1)\xi}$ and $b_{1}, \ldots, b_{(q+1)\xi}$, respectively, subscripts chosen such that the arcs are arranged in clockwise order at $*$. Since $\Br_{n\cdot\XI}$ surjects onto $\mathfrak{S}_{n\cdot\XI}$, we may assume that $a_{i}$ and $b_{i}$ have the same endpoint $a_{i}(1) = b_{i}(1)$ for all $i = 1, \ldots, (q+1)\xi$. Furthermore, after a suitable isotopy, we may additionally assume that for some $\epsilon>0$, we have $a_{i}(t) = b_{i}(t)$ for all $0 \leq t \leq \epsilon$. Hence, if we choose a continuous increasing function $h\colon I \to \R$ with $h(t) = t$ for $0 \leq t \leq \epsilon/2$ and $h(1) = \epsilon$, we obtain $a_{i} \circ h = b_{i} \circ h$ for all $i$. It remains to show that there is an orientation-preserving homeomorphism $G$ of~$D$ which, for all $i = 1, \ldots, (q+1)\xi$, retracts the arc $a_{i}$ to $a_{i} \circ h$, and fixes those marked points which are not endpoints of arcs in $\alpha$. In addition, we construct a similar map $H$ which carries $b_{i}$ to $b_{i} \circ h$ for all $i$. Then, the homeomorphism $H^{-1} \circ G$ defines a mapping class which carries $\alpha$ to $\beta$, and which corresponds to an element in $\Br_{n\cdot\XI}$ because the IC-sequences of $\alpha$ and $\beta$ coincide. To construct $G$, choose disjoint closed tubular neighborhoods $U_{i}$ of $a_{i}|_{[\epsilon/3, 1]}$ for all $i$. Such neighborhoods exist since the arcs are disjoint except at $*$. Now, $U_{i}$ is homeomorphic to a closed disk, and so there exists a homeomorphism which restricts to the identity on $\partial U_{i}$ and which carries the arc segment $U_{i} \cap a_{i}$ to its retraction $U_{i} \cap (a_{i} \circ h)$. Combining these homeomorphisms and extending them by the identity on $D \setminus \bigcup_{i} U_{i}$ yields the desired homeomorphism~$G$. The construction of $H$ is analogous. \end{proof} \begin{lemma}\label{stabilizers} Let $q< n$. For any simplex $\alpha\in\plantq$, the stabilizer of $\alpha$ under the $\Br_{n\cdot\XI}$-action is isomorphic to $\Br_{(n-q-1)\cdot\XI}$. In particular, there is a bijection between the elements of the orbit $\Br_{n\cdot\XI}\cdot\alpha$ and the cosets in $\Br_{n\cdot\XI} / \Br_{(n-q-1)\cdot\XI}$. \end{lemma} \begin{proof} We show that the stabilizer of any simplex $\alpha\in\plantq$ is isomorphic to $\Br_{(n-q-1)\cdot\XI}$. Then, the second assertion follows directly from the orbit--stabilizer theorem. Let $\Sigma\subset D$ be the union of a representative set of the arcs of $\alpha$, only intersecting at~$*$.
In clockwise order around $*$, denote the arcs in $\Sigma$ by $a_1, \ldots, a_{(q+1)\xi}$. By $\Map(D\setminus \Delta, \Sigma)$, we denote the group of isotopy classes of orientation-preserving diffeomorphisms of $D\setminus \Delta$ which fix $\Sigma$ pointwise. Since $\Sigma$ is contractible, the group $\Map(D\setminus\Delta, \Sigma)$ may be identified with $\Map(D\setminus (\Delta \setminus \Sigma)) \cong \Br_{(n-q-1)\cdot\XI}$, which itself may be identified with a subgroup of $\Br_{n\cdot\XI}$. We will show that the inclusion of subgroups of $\Map(D\setminus\Delta)$ \begin{equation*}\label{injection-mcg} \Map(D\setminus (\Delta \setminus \Sigma)) \hookrightarrow \left(\Map\left(D\setminus\Delta\right)\right)_{\alpha} \end{equation*} is surjective and hence an isomorphism. For this part, we follow the similar proof in \cite[Prop.~2.2(2)]{MR3135444}. Choose an element $\phi \in\Diff^+(D\setminus\Delta)$ which stabilizes the simplex $\alpha$. We have to show that $\phi$ is isotopic to a diffeomorphism that fixes $\Sigma$ pointwise. By definition, $\phi(a_1)$ is isotopic to $a_1$. The isotopy extension theorem \cite{MR0123338} implies that we can extend a corresponding isotopy to an ambient isotopy, so we may assume that $\phi$ fixes $a_1$ pointwise. We proceed by induction on the number of fixed arcs. Let $j>1$, and assume that $\phi$ fixes $\Sigma_{j} = a_1\cup \ldots\cup a_{j-1}$ pointwise. The arc $a_j$ is isotopic to $\phi(a_j)$, and we must show that the corresponding isotopy can be chosen disjointly from~$\Sigma_{j}$. If this holds, another application of the isotopy extension theorem implies the inductive step and thus the statement. Let $H\colon I \times I \to D$ be a smooth isotopy that carries $\phi (a_{j})$ to $a_{j}$, and assume that $H$ is transverse to $\Sigma_{j}$, using the transversality theorem \cite{MR0061823}. Here, $H(0,-)$ and $H(1,-)$ correspond to the arcs $\phi(a_{j})$ and $a_{j}$, respectively. Furthermore, we have $H(-,0) = *$, and $H(-,1) \in\Delta$ is the endpoint of $a_{j}$. Now, consider the preimage $H^{-1}(\Sigma_{j})$. The line $I \times \{0\}$ is the preimage of $*$, and by transversality, all other components must be circles in the interior of $I \times I$. Since $H^{-1}(\Sigma_{j})$ has only finitely many components, there is at least one such circle which encloses no further circle of $H^{-1}(\Sigma_{j})$. Let $D_{0}$ be the closed disk it encloses. Let furthermore $\Sigma_{j}^{\delta}\subset D$ be a closed $\delta$-thickening of $\Sigma_{j}$ with $\delta>0$ chosen such that $\Sigma^{\delta}_{j}$ is still contractible. By continuity of $H$, we may now choose $\epsilon>0$ such that for a closed $\epsilon$-neighborhood $D_{0}^{\epsilon}$ of $D_{0}$, we have $H(\partial D_{0}^{\epsilon}) \subset \Sigma^{\delta}_{j}$. The restriction of $H$ to the closed disk $D_{0}^{\epsilon}$ then defines an element of the relative homotopy group $\pi_{2}(D,\Sigma_{j}^{\delta} \setminus \Sigma_{j})$. This group is trivial. We may thus replace $H$ on $D_{0}^{\epsilon}$ by a homotopic map $H'$ with $H'|_{\partial D_{0}^{\epsilon}} = H|_{\partial D_{0}^{\epsilon}}$ and image in $\Sigma_{j}^{\delta} \setminus \Sigma_{j}$, which exists since $\Sigma_{j}^{\delta} \setminus \Sigma_{j}$ is simply connected. By extending $H'$ to $I \times I$ by $H'|_{(I \times I)\setminus D_{0}^{\epsilon}} = H|_{(I \times I)\setminus D_{0}^{\epsilon}}$, we obtain a homotopy $H'$ with $\#\pi_{0} (H'^{-1}(\Sigma_{j})) < \#\pi_{0} (H^{-1}(\Sigma_{j}))$. Inductively, we construct a homotopy $H''$ which is disjoint from $\Sigma_{j}$.
Finally, by \cite[Thm.~3.1]{MR0214087}, $H''$ can be replaced by an isotopy in $(D\setminus \Sigma_{j}) \cup \{*\}$. \end{proof} \subsection*{Standard simplices} As a consequence of Lemma~\ref{orbits} and Lemma~\ref{stabilizers}, there are bijections between the set $\plantq$ of $q$-simplices and the disjoint union of $l_{q}^{\XI}$ copies of $\Br_{n\cdot\XI} / \Br_{(n-q-1)\cdot\XI}$ for all $q= 0,\ldots, n-1$. Our next goal is to make these bijections compatible with the semi-simplicial structure on $\plant$: We want to fix bijections and describe the structure of a semi-simplicial set on $$ \cplant = \bigsqcup_{q=0}^{n-1} \cplantq = \bigsqcup_{q=0}^{n-1} \bigsqcup_{l_{q}^{\XI} \text{ copies}} \Br_{n\cdot\XI} / \Br_{(n-q-1)\cdot\XI} $$ which is compatible with the face maps and thus defines a semi-simplicial isomorphism between $\cplant$ and $\plant$. Let $\omega$ be a fixed IC-sequence. In what follows, we define a standard $q$-simplex in $\plantq$ for the IC-sequence $\omega$: We re-sort the terms of $\omega$ by the consecutive sorting criteria \emph{index}~(1st), \emph{color}~(2nd) and \emph{position in the IC-sequence}~(3rd), and draw the arcs of a $q$-simplex in this new order, respecting the order at~$*$ prescribed by the IC-sequence. We draw the arcs such that the endpoint of each arc is chosen as the leftmost free marked point, where we always \emph{undercross} marked points if possible. This is compatible with the coloring because of the arrangement of marked points. This process generates a set of $(q+1)\cdot\xi$ arcs which is unique up to isotopy since we work on a disk $D$. Distinguishing by the indices, these arcs can be divided into $q+1$ colored $\XI$-plants, which in turn define a $q$-simplex $\alpha_{\omega} \in\plantq$. Using these simplices, every $q$-simplex can be written as $\sigma\cdot\alpha_{\omega}$ for some IC-sequence $\omega$ and some $\sigma\in\Br_{n\cdot\XI}$ as a consequence of the transitivity of the $\Br_{n\cdot\XI}$-action on simplices with the same IC-sequence. \begin{figure} \centering \captionsetup[subfigure]{labelformat=empty} \begin{subfigure}[b]{0.48\linewidth} \centering \includegraphics[scale=0.2]{plant10.pdf} \caption{$\alpha_{\omega_{\alpha}}\in\mathcal{O}^{[2,(2,1)]}_{1}$} \end{subfigure} \begin{subfigure}[b]{0.48\linewidth} \centering \includegraphics[scale=0.2]{plant11.pdf} \caption{$\alpha_{\omega_{\beta}}\in\mathcal{O}^{[4,(1,1)]}_{2}$} \end{subfigure} \caption{Standard simplices for the IC-sequences of $\alpha$ and $\beta$ from Figure~\ref{fig-sequences}.} \label{fig-standard} \end{figure} \begin{definition} The simplex $\alpha_{\omega} \in\plantq$ is called the \emph{standard simplex for the IC-sequence $\omega$}. \end{definition} \begin{definition} Let $\alpha\in\plantq$ be a $q$-simplex, and $\omega$ its IC-sequence. The sequence $ \widetilde\omega = (p_{1}, p_{2}, \ldots, p_{(q+1)\cdot\xi}) $ induced by the reordering of the IC-sequence of $\alpha$ described above, where $p_{i}$ is the position in the IC-sequence of the corresponding arc, is called the \emph{P-sequence} (position sequence) of $\alpha$. \end{definition} \begin{example} The P-sequences of the simplices in Figures~\ref{fig-sequences} and~\ref{fig-standard} are given by $ \tilde\omega_{\alpha} = (1,4,3,5,6,2) $ and $ \tilde\omega_{\beta} = (6,1,4,2,3,5). $ \end{example} We identify $\Br_{n\xi}$ with the mapping class group of the $n\xi$-punctured disk in such a way that $\Br_{n\cdot\XI}$ is the stabilizer of the colored configuration of $n\xi$ points in $D$.
The element $\sigma_{i\xi + j}$, for $i=0, \ldots, q$ and $j=1, \ldots, \xi-1$, describes the isotopy class of a half twist that interchanges the $j$-th and the $(j+1)$-th point of the $(i+1)$-th cluster. On the other hand, the elements of the form $\sigma_{i\xi}$, $i=1, \ldots, q$, describe a half twist that interchanges the $\xi$-th point of the $i$-th cluster with the first point of the $(i+1)$-th cluster. We know from Lemma~\ref{stabilizers} that the stabilizer of a $q$-simplex is isomorphic to the group $\Br_{(n-q-1)\cdot\XI}$. For any standard $q$-simplex $\alpha_{\omega}$, we may thus write \begin{align*} (\Br_{n\cdot\XI})_{\alpha_{\omega}}&= \left\langle \sigma_{k} \mid (q+1)\cdot\xi+1 \leq k \leq n \xi-1 \right\rangle \cap \Br_{n\cdot\XI}. \end{align*} As this expression is independent of $\omega$, we may denote the stabilizer of \emph{any} standard $q$-simplex by $$ L_{q} = (\Br_{n\cdot\XI})_{\alpha_{\omega}} \cong \Br_{(n-q-1)\cdot\XI}. $$ Now, once and for all, we fix the bijection \begin{align*} \Gamma_{\omega}\colon \Br_{n\cdot\XI}/L_{q} &\to \Br_{n\cdot\XI}\cdot\alpha_{\omega} \\ \sigma L_{q} &\mapsto \sigma\cdot\alpha_{\omega} \end{align*} for each $\Br_{n\cdot\XI}$-orbit in $\plant$. Collecting these maps for all IC-sequences, we obtain a global bijection \begin{align*} \Gamma\colon\cplant&\to\plant\\ (\omega_{p}, \sigma L_{p}) &\mapsto \sigma\cdot\alpha_{\omega_{p}} \:\:\:\text{for all }p\geq 0, \end{align*} where $\omega_{p}$ is an IC-sequence of a $p$-simplex. \subsection*{Face maps} Recall that the colored plants in a $q$-simplex $\alpha = \langle v_{0}, \ldots, v_{q} \rangle \in\plantq$ are ordered by the tangential direction at $*$ of their respective leftmost arcs. For $i=0,\ldots,q$, the $i$-th \emph{face map} is given by leaving out the vertex $v_{i}$: \begin{align*} \partial_{i}\colon \plantq &\to \plantqi \\ \langle v_{0}, \ldots, v_{q} \rangle &\mapsto \langle v_{0}, \ldots, \hat{v_{i}}, \ldots, v_{q}\rangle. \end{align*} We now determine \emph{face maps} in $\cplantq$ for all $q\geq 0$ which are compatible with the face maps in $\plant$ insofar as they give $\cplant$ the structure of a semi-simplicial set isomorphic to $\plant$. These maps are evidently given by $\Gamma^{-1}\circ \partial_{i} \circ \Gamma$. For later use, we need to describe them explicitly. \begin{figure} \centering \captionsetup[subfigure]{labelformat=empty} \begin{subfigure}[b]{0.32\linewidth} \centering \includegraphics[scale=0.2]{proc1.pdf} \caption{$\alpha_{\omega}$} \end{subfigure} \begin{subfigure}[b]{0.32\linewidth} \centering \includegraphics[scale=0.2]{proc2.pdf} \caption{$\partial_{1}\alpha_{\omega}$} \end{subfigure} \begin{subfigure}[b]{0.32\linewidth} \centering \includegraphics[scale=0.2]{proc3.pdf} \caption{$\alpha_{d_{1}\omega}$} \end{subfigure} \caption{The simplices $\alpha_{\omega}$, $\partial_{1}\alpha_{\omega}$, $\alpha_{d_{1}\omega}\in\mathcal{O}^{[3,(1,1)]}$ for $\omega$ from Example~\ref{ex-faces}.} \label{fig-faces} \end{figure} Given a $q$-simplex with IC-sequence $\omega$, its $i$-th face is obtained by removing the arcs with index $i$. Hence, we may define the $i$-th \emph{face} of $\omega$ as the IC-sequence $d_{i}\omega$ obtained by first removing all the pairs with index $i$ from $\omega$, and secondly subtracting $1$ from the indices of the remaining elements with indices greater than~$i$. The IC-sequence $d_{i} \omega$ defines a P-sequence which we denote by $d_{i} \tilde\omega$.
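Since IC-sequences, P-sequences and the face operation $d_{i}$ are purely combinatorial, they lend themselves to a mechanical check. The following Python sketch (illustrative only; the function names are ours, and the brute-force enumeration is feasible for small parameters only) implements the orbit count of Lemma~\ref{orbits}, the passage from IC- to P-sequences, and the faces $d_{i}$; the assertions reproduce the data of Example~\ref{ex-faces} below.

\begin{verbatim}
from itertools import permutations
from math import factorial, prod

def l_formula(q, xi):
    # closed formula of Lemma "orbits"
    x = sum(xi)
    return factorial(x * (q + 1)) // (
        factorial(q + 1) * prod(factorial(k) for k in xi) ** (q + 1))

def feasible_sequences(q, xi):
    # all q-feasible sequences of (index, color) pairs, by brute force
    pairs = [(i, j + 1) for i in range(q + 1)
             for j, m in enumerate(xi) for _ in range(m)]
    found = set()
    for seq in permutations(pairs):
        first = {}
        for pos, (i, _) in enumerate(seq):
            first.setdefault(i, pos)     # first occurrence of each index
        if all(first[i] < first[i + 1] for i in range(q)):
            found.add(seq)
    return found

def p_sequence(omega):
    # re-sort by index (1st), color (2nd), position (3rd); read off positions
    order = sorted(range(len(omega)),
                   key=lambda k: (omega[k][0], omega[k][1], k))
    return tuple(k + 1 for k in order)

def face(omega, i):
    # d_i: delete the pairs with index i, decrement all larger indices
    return tuple((a - 1 if a > i else a, c) for (a, c) in omega if a != i)

omega = ((0,2), (1,1), (2,1), (0,1), (2,2), (1,2))
assert p_sequence(omega) == (4, 1, 2, 6, 3, 5)
assert face(omega, 1) == ((0,2), (1,1), (0,1), (1,2))
assert p_sequence(face(omega, 1)) == (3, 1, 2, 4)
assert len(feasible_sequences(1, (2, 1))) == l_formula(1, (2, 1)) == 90
\end{verbatim}

In particular, for $\XI = (2,1)$ and $q = 1$, the $90$ orbits predicted by Lemma~\ref{orbits} are recovered by direct enumeration.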
\begin{example}\label{ex-faces} Consider the IC-sequence $\omega = (0,2),(1,1),(2,1),(0,1),(2,2),(1,2)$ for a simplex in $\mathcal{O}^{[3,(1,1)]}$. The corresponding P-sequence is $\tilde\omega = (4,1,2,6,3,5)$, and the first faces of the sequences are given by $d_{1}\omega = (0,2),(1,1),(0,1),(1,2)$ and $d_{1}\tilde\omega = (3,1,2,4)$. The corresponding standard simplices are depicted in Figure~\ref{fig-faces}. \end{example} Our next goal is to find elements $\tau_{i,q}^{\omega}\in\Br_{n\cdot\XI}$ for all $i = 0, \ldots, q$, such that \begin{enumerate}[(i)] \item\label{cond11} $ \partial_{i}\alpha_{\omega} = \tau_{i,q}^{\omega}\cdot\alpha_{d_{i}\omega}, $ and \item\label{cond21} $\tau_{i,q}^{\omega}$ commutes with $L_{q}$. \end{enumerate} If we identify such elements, we are eventually able to define maps \begin{align*} \bd_{i}\colon \cplantq &\to \cplantqi \\ (\omega, \sigma L_{q}) &\mapsto (d_{i} \omega, \sigma\tau_{i,q}^{\omega}L_{q-1}), \end{align*} which are independent of the choice of a representative for the coset $\sigma L_{q}$ because of condition~(\ref{cond21}) and the fact $L_{q} \subset L_{q-1}$. Furthermore, by condition~(\ref{cond11}), such elements satisfy \begin{align} \bd_{i}(\omega, \sigma L_{q}) &= (d_{i} \omega, \sigma\tau_{i,q}^{\omega}L_{q-1})\nonumber \\ &= \Gamma^{-1}(\sigma\tau_{i,q}^{\omega}\cdot\alpha_{d_{i} \omega})\nonumber \\ &= \Gamma^{-1}(\sigma\cdot\partial_{i}\alpha_{\omega})\nonumber \\ &= (\Gamma^{-1}\circ\partial_{i})(\sigma\cdot \alpha_{\omega}) \label{geomfact}\\ &= (\Gamma^{-1}\circ\partial_{i}\circ\Gamma)(\omega, \sigma L_{q}) \nonumber, \end{align} as desired. Here, in~(\ref{geomfact}), we used the geometric fact that for any $\sigma\in\Br_{n\cdot\XI}$, we have $\sigma\cdot\partial_{i}\alpha_{\omega} = \partial_{i}(\sigma\cdot\alpha_{\omega})$: The plant deletion operator $\partial_{i}$ commutes with the action of the mapping class defined by $\sigma$. By construction, the face $d_{i}\omega$ of the IC-sequence of a simplex is the IC-sequence of the corresponding face of that simplex. By Lemma~\ref{orbits}, the colored braid group $\Br_{n\cdot\XI}$ acts transitively on the set of simplices with the same IC-sequence. From these two facts, it is immediate that elements $\tau^{\omega}_{i,q}$ satisfying condition (\ref{cond11}) exist and that the coset $\tau^{\omega}_{i,q}L_{q-1}$ is unique. It remains to determine them explicitly and to check that they satisfy condition~(\ref{cond21}). By the construction of standard simplices, the first $i$ vertices of $\partial_{i}\alpha_{\omega}$ and $\alpha_{d_{i}\omega}$ are identical. Now, mapping $\alpha_{d_{i}\omega}$ to $\partial_{i}\alpha_{\omega}$ requires transferring the points of the $(q+1)$-th cluster to the $(i+1)$-th cluster in a suitable way: This transfer is performed one point at a time, starting with the leftmost point. Let $\widetilde\omega = (p_{1}, \ldots, p_{(q+1)\cdot\xi})$ be the P-sequence of $\alpha_\omega$. If $p_{i\xi + m} < p_{r \xi + s}$ for some $m, s\in \{1, \ldots, \xi\}$ and $q\geq r > i$, the $m$-th point of the $(q+1)$-th cluster has to be half-twisted around the endpoint of the arc which (in $\alpha_{d_{i}\omega}$) ends at the $s$-th point of the $r$-th cluster in a positive direction, and in a negative direction otherwise.
A careful analysis of this procedure (where we take into account that the braid group acts from the left) yields the following formula: \begin{equation}\label{tau} \tau_{i,q}^{\omega} = \prod_{j=1}^{\xi}\left( \prod_{k=(i+1)\cdot\xi}^{(q+1)\cdot \xi - 1} \left(\sigma_{k-j+1}\right)^{\sgn(p_{k+1} - p_{(i+1)\cdot\xi - j + 1})}\right) \end{equation} Here, $\sgn\colon \Z \to \{-1, 1\}$ is the signum function; note that the entries of a P-sequence are pairwise distinct, so the argument never vanishes. By inspection, the largest index of a braid generator involved is $(q+1)\cdot \xi - 1$, so $\tau_{i,q}^{\omega}$ commutes with the elements of $L_{q}$, for which the smallest index involved is $(q+1)\cdot\xi + 1$. Thus, condition~(\ref{cond21}) is satisfied. An example of the stepwise construction of $\tau_{1,2}^{\omega}$ can be found in Figure~\ref{fig-tau}. \begin{figure} \centering \captionsetup[subfigure]{labelformat=empty} \begin{subfigure}[b]{0.28\linewidth} \centering \includegraphics[scale=0.15]{proc3.pdf} \caption{$\alpha_{d_{1}\omega}$} \end{subfigure} \begin{minipage}[b][3cm][c]{0.04\linewidth} $\overset{\sigma_{4}}{\longmapsto}$ \vspace{0.5cm} \end{minipage} \begin{subfigure}[b]{0.28\linewidth} \centering \includegraphics[scale=0.15]{proc4.pdf} \caption{$\;$} \end{subfigure} \begin{minipage}[b][3cm][c]{0.04\linewidth} $\overset{\sigma_{3}}{\longmapsto}$ \vspace{0.5cm} \end{minipage} \begin{subfigure}[b]{0.28\linewidth} \centering \includegraphics[scale=0.15]{proc5.pdf} \caption{$\;$} \end{subfigure} \vspace{4ex} \begin{minipage}[b][3cm][c]{0.04\linewidth} $\overset{\sigma_{5}^{-1}}{\longmapsto}$ \vspace{0.5cm} \end{minipage} \begin{subfigure}[b]{0.28\linewidth} \centering \includegraphics[scale=0.15]{proc6.pdf} \caption{$\;$} \end{subfigure} \begin{minipage}[b][3cm][c]{0.04\linewidth} $\overset{\sigma_{4}^{-1}}{\longmapsto}$ \vspace{0.5cm} \end{minipage} \begin{subfigure}[b]{0.28\linewidth} \centering \includegraphics[scale=0.15]{proc2.pdf} \caption{$\tau_{1,2}^{\omega}\alpha_{d_{1}\omega} = \partial_{1}\alpha_{\omega}$} \end{subfigure} \caption{Passing from $\alpha_{d_{1} \omega}$ to $\partial_{1}\alpha_{\omega}$ ($\omega$ from Example~\ref{ex-faces}).} \label{fig-tau} \end{figure} \begin{remark}\label{pic-tau} \begin{enumerate}[(i)] \item A priori, $\tau^{\omega}_{i,q}\in\Br_{n\cdot\XI}$ for some fixed $n>q$. We note that the definition in~(\ref{tau}) does not depend on the particular choice of $n$, so we can regard $\tau^{\omega}_{i,q}$ as a common element of all $\Br_{n\cdot\XI}$ for $n>q$, using the inclusions $\Br_{n\cdot\XI} \hookrightarrow \Br_{(n+1)\cdot\XI}$ given by attaching $\xi$ trivial strands with coloring $\XI$ to the right of a braid in $\Br_{n\cdot\XI}$. \item If $\tilde\omega$ is the P-sequence corresponding to the IC-sequence $\omega$, we sometimes also write $\tau_{i,q}^{\tilde\omega}$ to denote the element $\tau_{i,q}^{\omega}$. \end{enumerate} \end{remark} We have just finished proving the following result: \begin{proposition}\label{semisimpl-iso} For $i=0,\ldots,q$, the maps \begin{align*} \bd_{i}\colon \cplantq &\to \cplantqi \\ (\omega, \beta L_{q}) &\mapsto (d_{i} \omega, \beta\tau_{i,q}^{\omega}L_{q-1}), \end{align*} give $\cplant$ the structure of a semi-simplicial set such that $ \Gamma\colon \cplant \to \plant $ is an isomorphism of semi-simplicial sets. \end{proposition} \begin{remark}\label{comb-action} As there is a $\Br_{n\cdot\XI}$-action on $\plant$, there is also a $\Br_{n\cdot\XI}$-action on $\cplant$: A braid $\tau\in\Br_{n\cdot\XI}$ acts via $ \tau\cdot(\omega_{p}, \sigma L_{p}) = (\omega_{p}, \tau\sigma L_{p}) $.
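\end{remark}

Formula~(\ref{tau}) is likewise amenable to a mechanical check. The following Python fragment (a sketch under our own naming conventions, not part of the mathematical development) writes $\tau_{i,q}^{\omega}$ as a word in the generators $\sigma_{k}$; read from left to right, the word is the product in~(\ref{tau}), so the rightmost letter acts first. For the data of Example~\ref{ex-faces} it reproduces the word $\sigma_{4}^{-1}\sigma_{5}^{-1}\sigma_{3}\sigma_{4}$ from Figure~\ref{fig-tau}.

\begin{verbatim}
def tau_word(p_seq, i, q, xi):
    # tau_{i,q} as in (tau): a list of pairs (generator index, exponent);
    # p_seq is the P-sequence, 1-based, i.e. p_seq[m-1] = p_m
    word = []
    for j in range(1, xi + 1):
        for k in range((i + 1) * xi, (q + 1) * xi):
            sgn = 1 if p_seq[k] > p_seq[(i + 1) * xi - j] else -1
            word.append((k - j + 1, sgn))
    return word

# q = 2, xi = 2, i = 1, P-sequence (4,1,2,6,3,5), cf. Figure "fig-tau"
assert tau_word((4, 1, 2, 6, 3, 5), 1, 2, 2) == \
    [(4, -1), (5, -1), (3, 1), (4, 1)]
\end{verbatim}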
\section{Hurwitz spaces}\label{hurwitz-spaces} Let $G$ be a finite group. Following the path pursued in \cite{0912.0325}, we consider Hurwitz spaces of branched $G$-covers of a closed disk $D$. These spaces are relevant to arithmetic applications, cf.~\cite{MR1119950} and \cite{MR2316356}. Let $n\in\N$, and $*$ a marked point in the boundary of $D$. We consider (not necessarily connected) branched covers of $D$ described by the following data: \begin{itemize} \item[--] a branch locus $B \in\Conf_{n}$, \item[--] an unbranched covering space map $p\colon Y \to D \setminus B$, \item[--] a marked point $\bullet$ in the fiber of $p$ above $*$, and \item[--] a group homomorphism $\alpha\colon G \to \Aut(p)$ which induces a free and transitive action of $G$ on any fiber of $p$. \end{itemize} An isomorphism of two such covers is a homeomorphism of the total spaces of the coverings which is compatible with the remaining data. By virtue of the Riemann existence theorem, the isomorphism classes can be parametrized by the following data: \begin{definition} A \emph{marked $n$-branched $G$-cover of $D$} is defined as a pair $(B, \mu)$, where $B\in\Conf_{n}$ is a configuration of $n$ branch points, and $\mu\colon\pi_{1}(D\setminus B, *) \to G$ is a homomorphism. \end{definition} \begin{remark} We call such covers \emph{marked} since we do not consider the monodromy homomorphisms $\mu\colon\pi_{1}(D\setminus B, *) \to G$ up to conjugacy in the target. This amounts to marking the point $\bullet$ in the fiber of a branched cover above $*$. \end{remark} The space of marked $n$-branched $G$-covers is a covering space of $\Conf_{n}$ with fiber $\Hom(\pi_{1}(D\setminus B, *), G) \cong G^{n}$. The elements of $G^{n}$ are called \emph{Hurwitz vectors}. Such vectors are unique up to the choice of a basis for $\pi_{1}(D\setminus B, *) \cong F_{n}$ which consists of loops around the individual points of $B$, i.e., up to the action of $\Map(D\setminus B) \cong \Br_{n}$ on $G^{n}$, given by \begin{equation}\label{hurwitz-action} \sigma_{i} \cdot \underline g = (g_{1}, \ldots, g_{i-1}, g_{i}g_{i+1}g_{i}^{-1}, g_{i}, g_{i+2}, \ldots, g_{n}) \end{equation} for $i = 1, \ldots, n-1$, cf.~\cite{MR1509816}. The left monodromy action of $\pi_{1}(\Conf_{n}) \cong \Br_{n}$ on $G^{n}$ can be identified with the \emph{Hurwitz action} (\ref{hurwitz-action}) as well. Replacing $\Conf_{n}$ with the classifying space $\BBr_{n}$ for convenience, we obtain: \begin{definition}\label{def-hurwitz} The \emph{Hurwitz space for marked $n$-branched $G$-covers of the disk} is defined as the Borel construction $$ \Hur_{G,n} = \EBr_n \times_{\Br_{n}} G^n, $$ where $\Br_n$ acts on $G^{n}$ via the Hurwitz action~(\ref{hurwitz-action}). \end{definition} \subsection*{Combinatorial invariants} The connected components of $\Hur_{G,n}$ are indexed by the set of $\Br_{n}$-orbits in $G^{n}$. Below, we list some $\Br_{n}$-invariant functions on Hurwitz vectors in $G^{n}$. Such invariants must be constant on connected components of $\Hur_{G,n}$. Let $(B, \mu)$ be a fixed marked $n$-branched $G$-cover of $D$ with Hurwitz vector $\underline g = (g_{1}, \ldots, g_{n})\in G^{n}$. \begin{itemize} \item[--] The \emph{global monodromy} of $(B, \mu)$ is the subgroup of $G$ generated by $g_{1}, \ldots, g_{n}$. It is equal to $G$ if and only if the corresponding branched cover is connected. In this case, we say that the branched cover has \emph{full monodromy}.
\item[--] The \emph{boundary monodromy} of $(B, \mu)$ is defined as the product $\partial\underline g = \prod_{i=1}^{n} g_{i}$. Its inverse describes the branching behavior \emph{at infinity}. \item[--] Given distinct conjugacy classes $c_{1}, \ldots, c_{t}$ such that all entries of $\underline g$ lie in some $c_{i}$, the \emph{shape (vector)} of $(B, \mu)$ is the $t$-tuple $(n_{c_{1}}(B,\mu), \ldots, n_{c_{t}}(B, \mu))$, where $n_{c_{i}}(B, \mu)$ is the number of elements of $\underline g$ that lie in~$c_{i}$. \end{itemize} Considering possible shapes for a nontrivial group $G$, we obtain a lower bound $b_{0}(\Hur_{G,n}) \geq n+1$ for the zeroth Betti number. It thus makes sense to consider sequences of subspaces of Hurwitz spaces which do not \emph{a priori} rule out homological stability. From now on, let $c = (c_{1}, \ldots, c_{t})$ be a tuple of $t$ distinct nontrivial conjugacy classes in~$G$. For a shape vector $\XI = (\xi_{1}, \ldots, \xi_{t})\in\N^{t}$ of length $\xi = \sum_{i=1}^{t} \xi_{i}$, we consider the subspace $\Hur^{c}_{G,\XI}$ of $\Hur_{G,\xi}$ which parametrizes covers with shape $\XI$; it is a union of connected components of $\Hur_{G, \xi}$. The space $\Hur^{c}_{G,\XI}$ is a cover of $\BBr_{\xi} \cong \Conf_{\xi}$ whose fiber is the set of tuples in $G^\xi$ for which exactly $\xi_{i}$ entries lie in $c_{i}$. This cover factors through $\BBr_{\XI} \cong \Conf_{\XI}$, which is the classifying space of the colored braid group $\Br_{\XI}$. The fiber of the unbranched cover $\Hur^{c}_{G,\XI} \to \BBr_{\XI}$ can be identified with $$ \cc = c_{1}^{\xi_{1}} \times \ldots \times c_{t}^{\xi_{t}}, $$ on which $\Br_{\XI}$ acts via the Hurwitz action. Generalizing to covers with shape vector $n\cdot\XI= (n\xi_{1}, \ldots, n\xi_{t})$, we may write $$\Hur_{G,n\cdot\XI}^{c} = \EBr_{n\cdot\XI} \times_{\Br_{n\cdot\XI}} \cc^{n},$$ where we identify $\Br_{n\cdot\XI}$ with the set stabilizer of $\cc^{n}$ under the $\Br_{n\xi}$-action on $G^{n\xi}$. \subsection*{Structure on Hurwitz spaces} We will now identify more structure on Hurwitz spaces in order to support the homological investigations to follow. Before starting, it is good to know that we may work in the category of CW complexes: Hurwitz spaces and thus all of their components are homotopy equivalent to finite CW complexes by~\cite[Prop.~2.5]{0912.0325}. For $m, n \in \N_{0}$, we obtain continuous maps $$ \Hur_{G,m\cdot\XI}^{c} \times \Hur_{G,n\cdot\XI}^{c} \to \Hur_{G, (m+n)\cdot\XI}^{c} $$ from the inclusions $\Br_{m\cdot\XI} \times \Br_{n\cdot\XI} \to \Br_{(m+n)\cdot\XI}$ and $\cc^{m} \times \cc^{n} \to \cc^{m+n}$ which are defined and associative up to homotopy. These maps give $\bigsqcup_{n\geq 0} \Hur_{G,n\cdot\XI}^{c}$ the structure of a disconnected $H$-space with homotopy identity $\Hur_{G,0\cdot\XI}^{c}$. Let $A$ be a commutative ring. The $H$-space structure on the union of Hurwitz spaces induces a graded (grading in the $n$-variable) ring structure on the direct sum of the zeroth homologies: \begin{definition}\label{def-ring} The $A$-module $$ R^{A,c}_{G,\XI} = \bigoplus_{n\geq 0} H_{0}(\Hur_{G,n\cdot \XI}^{c}; A) $$ is called the \emph{ring of connected components (with coefficient ring $A$)} for the sequence $\{\Hur_{G,n\cdot \XI}^{c} \mid n\geq 0\}$ of Hurwitz spaces. If $G$, $\XI$, $A$, and $c$ are clear from the context, we simply denote the ring by $R$.
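\end{definition}

Both the Hurwitz action~(\ref{hurwitz-action}) and the resulting orbit decomposition are easy to experiment with. The Python sketch below (illustrative only; the helper names are ours, and the enumeration is feasible only for small $G$ and $n$) computes the braid group orbits of a finite set of Hurwitz vectors by a graph search along the generators $\sigma_{i}^{\pm 1}$. The number of orbits equals the number of connected components of the corresponding Hurwitz space, i.e., the $A$-rank of the associated graded piece of the ring of connected components; the example treats $G = \mathfrak{S}_{3}$, $t = 1$, $c$ the class of transpositions, $\XI = (1)$, and $n = 3$.

\begin{verbatim}
from itertools import product

def mul(a, b):            # permutations as tuples: (a*b)(x) = a(b(x))
    return tuple(a[b[x]] for x in range(len(a)))

def inv(a):
    r = [0] * len(a)
    for x, y in enumerate(a):
        r[y] = x
    return tuple(r)

def act(g, i, e):
    # sigma_i^e (1-indexed, e = +1 or -1) on a Hurwitz vector g
    g = list(g)
    a, b = g[i - 1], g[i]
    if e == 1:
        g[i - 1], g[i] = mul(mul(a, b), inv(a)), a
    else:
        g[i - 1], g[i] = b, mul(mul(inv(b), a), b)
    return tuple(g)

def orbits(vectors):
    # partition a finite set of Hurwitz vectors into braid group orbits
    todo, out = set(vectors), []
    while todo:
        seed = todo.pop()
        comp, stack = {seed}, [seed]
        while stack:
            v = stack.pop()
            for i in range(1, len(v)):
                for e in (1, -1):
                    w = act(v, i, e)
                    if w not in comp:
                        comp.add(w)
                        stack.append(w)
                        todo.discard(w)
        out.append(comp)
    return out

c = [(1, 0, 2), (2, 1, 0), (0, 2, 1)]   # the transpositions in S_3
print(len(orbits(product(c, repeat=3))))
\end{verbatim}

One checks that the boundary monodromy $\partial\underline g$ and the global monodromy are constant on each computed orbit, in accordance with the invariants listed above.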
\begin{remark}\label{comb-descr}\label{deg-one} There is a nice combinatorial description of the ring $R$, cf.~\cite{MR1119950}: Let $\mathfrak{s} = \bigsqcup_{n\geq 0} \cc^{n}/\Br_{n\cdot\XI}$. Concatenation of Hurwitz vectors gives $\mathfrak{s}$ the structure of a monoid with the empty tuple as the identity. Then, $R$ is the monoid algebra $A[\mathfrak{s}]$. In particular, $R$ is finitely generated as an $A$-algebra: Its degree one part is generated as an $A$-module by elements $r(g)$ with $g\in\cc/\Br_{\XI}$. Any element of $\cc^{n}$ is the concatenation of $n$ elements in $\cc$, and this concatenation descends to a map $(\cc / \Br_{\XI})^{n} \to \cc^{n} / \Br_{n\cdot\XI}$ by virtue of the natural inclusion $(\Br_{\XI})^{n} \to \Br_{n\cdot\XI}$. Therefore, the degree one elements $r(g)$ generate $R$ as an $A$-algebra. \end{remark} The direct sum of the $p$-th homology modules of Hurwitz spaces inherits the structure of a graded $R$-module from the $H$-space structure in connection with the K\"unneth formula: The graded $A$-module (grading in the $n$-variable) $$ M_{G, \XI,p}^{A,c} = \bigoplus_{n \geq 0} H_{p}(\Hur_{G,n\cdot\XI}^{c};A) $$ has the structure of a graded $R^{A,c}_{G,\XI}$-module. If no confusion can arise, we denote it by $M_{p}$. Clearly, we have $M_{0} = R$. \section{Homological stability for Hurwitz spaces}\label{homstabhurwitz} Let $G$ be a finite group, $c = (c_{1}, \ldots, c_{t})$ a tuple of distinct conjugacy classes in $G$, $\XI\in\N^{t}$, and $\cc = c_{1}^{\xi_{1}}\times\ldots\times c_{t}^{\xi_{t}}$. Let moreover $A$ be a commutative ring. In what follows, we use the notion of the \emph{ring of connected components} introduced in Definition~\ref{def-ring}. We usually write $R$ instead of $R^{A,c}_{G,\XI}$. For a central element $U\in R$, we write $R[U]$ for the $U$-torsion in $R$ and use the notation $D_{R}(U) = \max\{\deg (R/UR), \deg (R[U])\}$. We work with colored plant complexes of the form $ \plant = \mathcal{O}_{n\cdot\XI}^{\XI}(D) \cong \cplant. $ We study the homology of the spaces $ \Hur_{G, n\cdot \XI}^{c} = \EBr_{n\cdot{\XI}} \times_{\Br_{n\cdot\XI}} \cc^{n}$, for $n\geq 0$. \subsection*{The purely abelian case}\label{abelian} First, we consider the case of a central homogeneous element $U\in R$ such that $D_{R}(U) = 0$. In this case, $U$ is necessarily of degree one and induces an isomorphism $R_{i} \cong R_{i+1}$ in any degree $i\geq 0$, so $R \cong A[x]$ must hold. Hence, there is only one $\Br_{\XI}$-orbit in $\cc$. This necessarily implies that any conjugacy class in $c$ is a singleton, since otherwise there would be multiple boundary monodromies and thus multiple $\Br_{\XI}$-orbits in $\cc$. In other words, the image of the monodromy $\mu\colon \pi_{1}(D\setminus B) \to G$ of any cover in $\Hur_{G, n\cdot\XI}^{c}$ lies in the center of $G$. We call such covers \emph{purely abelian}. Conversely, if all covers in $\Hur_{G, n\cdot\XI}^{c}$ are purely abelian, any conjugacy class in $c$ is a singleton and thus the single element of $\cc$ defines an element $U\in R$ such that $D_{R}(U) = 0$. We see that we have \begin{equation}\label{hurwitz-conf} \Hur_{G, n\cdot\XI}^{c} = \EBr_{n\cdot\XI} \times_{\Br_{n\cdot\XI}} \cc^{n} = \BBr_{n\cdot\XI} \cong \Conf_{n\cdot\XI}. \end{equation} By (\ref{tran}), for all $p\geq 0$, we have $ H_{p}(\Conf_{n\cdot\XI};\Z) \cong H_{p}(\Conf_{(n+1)\cdot\XI};\Z) $ with stable range $n\geq \frac{2p}{\min\XI}$.
As a result, we obtain: \begin{corollary}\label{abelian-case} If there exists a central homogeneous element $U\in R_{>0}$ such that $D_{R}(U) = 0$ (equivalently, if $\Hur_{G,n\cdot\XI}^{c}$ parametrizes purely abelian covers), then there is an isomorphism $ H_{p}(\Hur^{c}_{G,n\cdot\XI}; \Z) \cong H_{p}(\Hur^{c}_{G,(n+1)\cdot\XI}; \Z) $ for $n\geq \frac{2p}{\min\XI}$. \end{corollary} \subsection*{The spectral sequence}\label{spectral-sequence} The space $\EBr_{n\cdot\XI} \times_{\Br_{n\cdot\XI}}(\plant \times \cc^{n})$ inherits a semi-simplicial structure from the face maps $\partial_{i}$ on $\plant$, where the left action of $\Br_{n\cdot\XI}$ on the product $\plant \times \cc^{n}$ is the diagonal action. The spectral sequence associated to the semi-simplicial space, \begin{align}\label{the-sequence} \begin{split} E^{1}_{qp} &= H_{p}(\EBr_{n\cdot\XI} \times_{\Br_{n\cdot\XI}}(\plantq \times \cc^{n});A) \\ &\Longrightarrow H_{p+q}(\EBr_{n\cdot\XI} \times_{\Br_{n\cdot\XI}}(\rplant \times \cc^{n});A), \end{split} \end{align} converges to the homology of the realization of the total complex. By Theorem~\ref{delta-conn}(\ref{colored-conn}), the space $\rplant$ is $( \left\lfloor\frac{n}{2} \right\rfloor - 2)$-connected. Thus, the target of the spectral sequence (\ref{the-sequence}) is isomorphic to $H_{p+q}(\EBr_{n\cdot\XI} \times_{\Br_{n\cdot\XI}} \cc^{n};A) \cong H_{p+q}(\Hur_{G,n\cdot\XI}^{c}; A)$ in degrees $p+q \leq \left\lfloor\frac{n}{2} \right\rfloor - 2$. Now, for $q < n$, we identify each of the $ l_{q}^{\XI} $ $\Br_{n\cdot\XI}$-orbits in $\plantq$ with a copy of the quotient $\Br_{n\cdot\XI}/L_{q}$, cf.~Lemmas~\ref{orbits} and~\ref{stabilizers}. The subgroup $L_{q}\cong \Br_{(n-q-1)\cdot\XI}$ acts on the last $(n-q-1)\cdot\xi$ entries of $\cc^{n}$. Consequently, we obtain \begin{align} \label{spseq-iso} \begin{split} E^{1}_{qp} &\cong H_{p}(\{1, \ldots, l_{q}^{\XI}\} \times \cc^{q+1} \times (\EBr_{n\cdot\XI} \times_{L_{q}} \cc^{n-q-1} );A ) \\ &\cong A^{l_{q}^{\XI}} \otimes_{A} A\langle \cc^{q+1}\rangle \otimes_{A} H_{p}(\Hur_{G,(n-q-1)\cdot\XI}^{c};A). \end{split} \end{align} The differentials $\dd$ on the $E^{1}$-page are induced by the alternating sum of the face maps on the semi-simplicial space. In the following, we aim to find an explicit identification of the differentials under the isomorphism (\ref{spseq-iso}). Let $\underline g \in \cc^{n}\subset G^{n\xi}$ for some $n\in\N$. We write $(\underline g)^{\leq j}\in\cc^{j}$ for the tuple consisting of the first $j\xi$ entries of $\underline g$, and $(\underline g)^{> j}\in\cc^{n-j}$ for the complementary $(n-j)\xi$-tuple. By $(\underline g)_{j}\in\cc$, we denote the $j$-th $\xi$-tuple in $\underline g$. \begin{lemma}\label{spseq-translation} Let $q<n$.
Under the isomorphism~(\ref{spseq-iso}), $\dd \colon E^{1}_{qp} \to E^{1}_{q-1,p} $ is represented by the linear map \begin{align*} \dd = \sum_{i=0}^{q} (-1)^{i}(\partial_{i})_{*} \colon &A^{l_{q}^{\XI}} \otimes_{A} A\langle \cc^{q+1}\rangle \otimes_{A} H_{p}(\Hur_{G,(n-q-1)\cdot\XI}^{c};A) \\ &\to A^{l_{q-1}^{\XI}} \otimes_{A} A\langle \cc^{q}\rangle \otimes_{A} H_{p}(\Hur_{G,(n-q)\cdot\XI}^{c};A), \end{align*} the $(\partial_{i})_{*}$, for $i=0,\ldots, q$, being given by linear extension of $$ (\partial_{i})_{*}(\omega\otimes \underline h\otimes x) = d_{i}\omega \otimes ((\tau_{i,q}^{\omega})^{-1}\cdot\underline h)^{\leq q}\otimes r(((\tau_{i,q}^{\omega})^{-1}\cdot\underline h)_{q+1})\cdot x, $$ where $\omega$ is the IC-sequence of a $q$-simplex, $\underline h\in\cc^{q+1}$, and $x\in H_{p}(\Hur_{G,(n-q-1)\cdot\XI}^{c};A)$. \end{lemma} \begin{proof} In combinatorial terms, we may write the face maps $\partial_{i}$ on the semi-simplicial space $\EBr_{n\cdot\XI} \times_{\Br_{n\cdot\XI}}(\plant \times \cc^{n})$ as $$ [(e, (\omega, \sigma L_{q}), \underline g)]_{\Br_{n\cdot\XI}} \mapsto[(e, (d_{i}\omega, \sigma\tau^{\omega}_{i,q} L_{q-1}), \underline g)]_{\Br_{n\cdot\XI}}, $$ where the $\tau_{i,q}^{\omega}$ are defined as in~(\ref{tau}). This may be rewritten as \begin{equation}\label{claim-1} [(e, \omega, \underline g)]_{L_{q}} \mapsto [(e \cdot\tau_{i,q}^{\omega}, d_{i}\omega, (\tau_{i,q}^{\omega})^{-1}\cdot\underline g)]_{L_{q-1}}. \end{equation} \textbf{Claim:} The map (\ref{claim-1}) is $L_{q}$-equivariantly homotopic to \begin{equation}\label{claim-2} [(e, \omega, \underline g)]_{L_{q}} \mapsto [(e, d_{i}\omega, (\tau_{i,q}^{\omega})^{-1}\cdot\underline g)]_{L_{q-1}}. \end{equation} \textbf{Proof of the claim:} Let $\iota$ be the identity on $\EBr_{n\cdot\XI}$ and $\tau$ right multiplication by $\tau_{i,q}^{\omega}$ on $\EBr_{n\cdot\XI}$. Now, $\tau$ descends to a map $\BBr_{n\cdot\XI} \to \BBr_{n\cdot\XI}$ which is induced by conjugation with $\tau_{i,q}^{\omega}$ in $\Br_{n\cdot\XI}$. Since $\tau_{i,q}^{\omega}$ commutes with the elements of $L_{q}$, this conjugation restricts to the identity on $L_{q}$. Therefore, both $\iota$ and $\tau$ descend to self-maps of $\BL_{q}$ homotopic to the identity (note that we may use $\EBr_{n\cdot\XI} / L_{q}$ as a model for $\BL_{q}$). Hence, $\tau$ is $L_{q}$-equivariantly freely homotopic to $\iota$. From this fact, the claim follows directly. \qed Now, the map \begin{align*} \EBr_{n\cdot\XI} \times_{L_{q}} \cc^{n} &\to \EBr_{n\cdot\XI} \times_{L_{q-1}} \cc^{n} \\ [(e, \underline g )]_{L_{q}} &\mapsto[(e, \underline g )]_{L_{q-1}} \end{align*} is identified with $$ \cc^{q+1} \times \Hur^{c}_{G,(n-q-1)\cdot\XI} \to \cc^{q} \times \Hur^{c}_{G,(n-q)\cdot\XI}, $$ given by left concatenation of a Hurwitz vector with the last $\xi$-tuple $(\underline g)_{q+1}$ of $\underline g \in \cc^{n}$, where we also identify $$\Hur^{c}_{G,(n-q-i)\cdot\XI} \cong\EBr_{n\cdot\XI} \times_{L_{q-i}} \cc^{n-q-i}$$ for $i = 0,1$. In homology, this corresponds to multiplication by $r((\underline g)_{q+1}) \in R$. Finally, note that $\tau_{i,q}^{\omega}$ only acts on $(\underline g)^{\leq q+1}$, while $L_{q}$ acts on the $\xi$-tuples $(\underline g)^{>q+1}$. Thus, the induced map in homology of (\ref{claim-2}) yields the desired form of $(\partial_{i})_{*}$. The lemma follows from the fact that the differential on the $E^{1}$-page of (\ref{the-sequence}) is given by the induced map in homology of the alternating sum of the face maps.
\end{proof} For a graded $R$-module $M$ and $q\geq 0$, we write $M(q)$ for its \emph{shift} by $q$. \begin{definition}\label{k-complexes} Let $M$ be a graded left $R$-module. The \emph{$\KK$-complex associated to $M$} is defined as the complex $\KK(M)$ with terms \begin{align*} \KK(M)_{0} &= M, \\ \KK(M)_{q+1} &= A^{l_{q}^{\XI}} \otimes_{A} A\langle\cc^{q+1} \rangle \otimes_{A} M(q+1) \end{align*} for $q \geq 0$, where $l_{q}^{\XI}$ is given as in Lemma~\ref{orbits}. The differentials on $\KK(M)$ are the linear maps defined by \begin{align*} \dd_{q+1}\colon \KK(M)_{q+1} &\to \KK(M)_{q} \\ \omega \otimes \underline g \otimes x &\mapsto \sum_{i=0}^{q} (-1)^{i} [d_{i}\omega \otimes ((\tau_{i,q}^{\omega})^{-1}\cdot\underline g)^{\leq q}\otimes r(((\tau_{i,q}^{\omega})^{-1}\cdot\underline g)_{q+1})\cdot x], \end{align*} where $\omega$ is the IC-sequence of a $q$-simplex, $\underline g \in \cc^{q+1}$, and $x \in M(q+1)$. \end{definition} In a less general form, $\KK$-complexes were introduced in~\cite[Sect.~4]{0912.0325}. $\KK(M)$ is in fact a complex of graded left $R$-modules, where the grading on $\KK(M)_{q}$ is induced by the grading on $M(q)$: For $M = M_p$, this is immediate, as the $n$-th graded part of the $\KK$-complex is equal to a row in the spectral sequence (\ref{the-sequence}) by construction. The complex property is only needed in this case. The more general case involves computations which utilize the semi-simplicial identities on the face maps of $\plant$. These are performed in the author's Ph.D.~thesis (currently under review). Note that the differential $\dd_{q}$ preserves the grading: The grading on $\KK(M)_{q}$ is induced by the grading on $M(q)$, on which $\dd_{q}$ acts by the alternating sum of multiplication with degree one elements. This cancels out with the shifted grading on $\KK(M)_{q-1}$. To summarize: \begin{corollary}\label{specseq-corollary} There is a homological spectral sequence with $$ E^{1}_{qp} = n\text{-th graded part of } \KK(M_{p})_{q+1}, $$ whose differentials on the $E^{1}$-page are given by the differentials on $\KK(M_{p})$, and which converges to $H_{p+q}(\Hur_{G,n\cdot\XI}^{c};A)$ for $p+q \leq \left\lfloor \frac{n}{2}\right\rfloor - 2$. \end{corollary} We now consider the homology of $\KK$-complexes for $M = M_{0} = R$. In this case, multiplication in $R$ gives $\KK(R)_{q}$ (and hence also the homology modules of $\KK(R)$) the structure of a two-sided graded $R$-module. A simpler version of the following lemma is proved in \cite[Lemma~4.11]{0912.0325}. \begin{lemma}\label{killing} For all $q\geq 0$, $H_{q}(\KK(R))$ is killed by the right action of $R_{>0}$. \end{lemma} \begin{proof} For simplicity of notation, in this proof we work with P- instead of IC-sequences. Recall that for a Hurwitz vector $\underline g\in\cc^{q+1}$, we write $\partial\underline g$ for its boundary, which is invariant under the $\Br_{(q+1)\cdot\XI}$-action. $R$ is generated as an $A$-module by orbits $[\underline s] \in \cc^{n}/\Br_{n\cdot\XI}$, for $n\geq 0$. Let $h\in\cc$; recall that the elements of the form $r(h)$ generate $R_{>0}$ as an $A$-algebra. We define a map $S_{h}\colon \KK(R)_{q+1} \to \KK(R)_{q+2}$ by linear extension of $$ S_{h}(\tilde\omega \otimes \underline g \otimes [\underline s]) = (\xi +\tilde\omega ) \otimes (h^{(\partial \underline g \partial \underline s)^{-1}}, \underline g) \otimes [\underline s], $$ where $\tilde\omega$ is the P-sequence of a $q$-simplex, and $\underline g$, $h$, $[\underline s]$ are as above (note that $\partial\underline s$ is well-defined on the orbit $[\underline s]$).
Here, $(\xi +\tilde\omega)$ denotes the P-sequence of a $(q+1)$-simplex obtained by increasing every entry of $\tilde\omega$ by $\xi$ and then attaching $(1, \ldots, \xi)$ from the left. In particular, the equations \begin{align}\label{diff-new} \begin{split} d_{0}(\xi +\tilde\omega) &= \tilde\omega \\ d_{i}(\xi +\tilde\omega) &= (\xi + d_{i-1}\tilde\omega) \end{split} \end{align} hold for $i = 1, \ldots, q+1$. Furthermore, since the first $\xi$ positions of the P-sequence $\xi + \tilde\omega$ are given by $1, \ldots, \xi$, the equations \begin{align}\label{diff-new-2} \begin{split} (\tau_{0,q+1}^{(\xi +\tilde\omega)} )^{-1}\cdot (h, \underline g ) &= ( \underline g, h^{\partial\underline g}) \\ (\tau_{i+1,q+1}^{(\xi +\tilde\omega)} )^{-1}\cdot (h, \underline g ) &= (h, (\tau_{i,q}^{\tilde\omega})^{-1}\cdot \underline g ) \end{split} \end{align} hold for any $h\in\cc$ and $i=0, \ldots, q$, cf.~the definition of $\tau_{i,q}^{\tilde\omega}$ in~(\ref{tau}). We claim that $S_{h}$ is a chain homotopy from right multiplication with $r(h)$ to the zero map. Indeed, we have \begingroup \allowdisplaybreaks \begin{align*} &(\dd_{q+2} S_{h} + S_{h} \dd_{q+1})(\tilde\omega \otimes \underline g \otimes [\underline s]) \\ &\underset{\phantom{(\ref{diff-new-2})}}{\overset{\phantom{(\ref{diff-new})}}{=}} \dd_{q+2}((\xi +\tilde\omega) \otimes (h^{(\partial \underline g \partial \underline s)^{-1}}, \underline g) \otimes [\underline s] ) \\ &\phantom{\underset{(\ref{diff-new-2})}{\overset{(\ref{diff-new})}{=}}}+ \sum_{i=0}^{q} (-1)^{i} S_{h}( [d_{i}\tilde\omega \otimes ((\tau_{i,q}^{\tilde\omega})^{-1}\cdot\underline g)^{\leq q}\otimes [(((\tau_{i,q}^{\tilde\omega})^{-1}\cdot\underline g)_{q+1}, \underline s )]] ) \\ &\underset{(\ref{diff-new-2})}{\overset{(\ref{diff-new})}{=}} \tilde\omega \otimes \underline g \otimes [(h^{(\partial \underline s)^{-1}}, \underline s )] \\ &+ \sum_{i=0}^{q} (-1)^{i+1}[(\xi + d_{i}\tilde\omega) \otimes (h^{(\partial \underline g\partial\underline s)^{-1}}, ((\tau_{i,q}^{\tilde\omega})^{-1}\cdot \underline g )^{\leq q} ) \otimes [(((\tau_{i,q}^{\tilde\omega})^{-1}\cdot \underline g )_{q+1}, \underline s)]] \\ &+ \sum_{i=0}^{q} (-1)^{i}[(\xi + d_{i}\tilde\omega) \otimes (h^{(\partial( (\tau_{i,q}^{\tilde\omega})^{-1}\cdot\underline g)\partial\underline s)^{-1}}, ((\tau_{i,q}^{\tilde\omega})^{-1}\cdot \underline g )^{\leq q} ) \otimes [(((\tau_{i,q}^{\tilde\omega})^{-1}\cdot \underline g )_{q+1}, \underline s)]] \\ &\underset{\phantom{(\ref{diff-new-2})}}{\overset{\phantom{(\ref{diff-new})}}{=}} \tilde\omega \otimes \underline g \otimes [\underline s \cdot r(h)], \end{align*} \endgroup as $(h^{(\partial \underline s)^{-1}}, \underline s )$ is equivalent to $(\underline s,h)$ under the $\Br_{(n+1)\cdot\XI}$-action. Hence, $S_{h}$ is the desired chain homotopy. \end{proof} \subsection*{Modules over stabilized rings}\label{stabilized} In the following, we study graded modules over graded rings satisfying a specific stability condition which will eventually form the essential criterion for homological stabilization of Hurwitz spaces. This generalizes the modules $M_{p}$ over the ring $R$. \begin{definition}\label{def_astable} Let $R = \bigoplus_{i\in\N_{0}} R_{i}$ be some graded ring, $A = R/R_{>0} \cong R_{0}$ the ring of degree zero elements, and $U\in R$ a central homogeneous element of positive degree.
The ring $R$ is called \emph{$A$-stabilized by $U$} if the following three conditions are satisfied: \begin{enumerate}[(i)] \item Both kernel and cokernel of the multiplication $R \overset{U\cdot}{\to} R$ have finite degree as graded $R$-modules; in other words, $D_R(U) = \max\left\{\deg (R/UR), \deg (R[U])\right\}$ is finite, \label{isocoker} \item $A$ is commutative, and \label{comnoet} \item $R$ is generated in degree one (i.e., by $R_{1}$) as an algebra over $A$. \label{degone} \end{enumerate} We call $U$ the \emph{stabilizing element} for $R$. \end{definition} In what follows, let $M$ be a graded left $R$-module, where $R$ is $A$-stabilized by $U\in R$. We use the following notation: \begin{align*} D_{M}(U) &= \max\{\deg(M[U]), \deg(M/UM)\} \\ \delta_{M}(U) &= \max\{\deg (\Tor_{0}^{R}(R/UR, M)), \deg (\Tor_{1}^{R}(R/UR, M))\} \end{align*} Though both quantities depend heavily on the stabilizing element $U\in R$, we usually use the symbols $D_{M}$ and $\delta_{M}$. Furthermore, we write $H_{i}(M)$ for the graded left $R$-module $\Tor_{i}^R(A, M)$. We will from now on assume that $D_{R}(U)$ is positive. The case $D_{R}(U) = 0$ was fully treated earlier in this section in the part about purely abelian covers. Section 4 of \cite{0912.0325} is about modules over graded rings $R$ which are $A$-stabilized by $U$. In that article's setting, it makes sense to focus on the case where $A$ is a field, though the proofs of Lemma~4.4 through Lemma~4.10 carry over directly to the case where $A$ is an arbitrary commutative ring. We proved an analogue to \cite[Lemma~4.11]{0912.0325} in Lemma~\ref{killing}. Therefore, the proofs and results of Proposition~4.12 and~4.13 of \cite{0912.0325} are also applicable to our situation. More specifically, we will need the following results: \begin{align} D_{M} &\leq \max\{\deg H_{0}(M), \deg H_{1}(M)\} + 5D_{R}, \label{homlem4.6} \\ \deg H_{q}(\KK(R)) &\leq D_{R} + \deg U + q, \label{4-12} \\ \deg H_{q}(\KK(M)) &\leq \max\{\deg H_{0}(M), \deg H_{1}(M)\} + (q+5)\cdot D_{R}+ \deg U \label{4-13}, \end{align} cf.~\cite[Lemma~4.6 and~4.9]{0912.0325}, \cite[Prop.~4.12]{0912.0325}, and \cite[Prop.~4.13]{0912.0325}, respectively. \begin{proposition}\label{homprop-4.2} Let $R$ be $A$-stabilized by $U$. Moreover, let $M$ be a graded left $R$-module and $h_{i} = \deg (H_{i}(\KK(M)))$. Then, we have $ h_{q} \leq \max\{h_{0}, h_{1}\} + D_{R}q + (5 D_{R} + \deg U), $ and multiplication by $U$, $M\overset{U\cdot}{\to}M$, is an isomorphism in source degree greater than or equal to $\max\{h_{0}, h_{1}\} + 5D_{R} + 1$. \end{proposition} \begin{proof} The present proof follows \cite[Thm.~4.2]{0912.0325}. We show that for $i = 0,1$, \begin{equation}\label{claim-prop} \deg (H_{i}(M)) \leq h_{i}. \end{equation} Using this result, we obtain \begin{align*} h_{q} &\overset{{(}\ref{4-13}{)}}{\leq} \max\{\deg (H_{0}(M)), \deg (H_{1}(M))\} + (q+5)\cdot D_{R} + \deg U \\ &\overset{(\ref{claim-prop})}{\leq} \max\{h_{0}, h_{1}\} + D_{R}q +(5 D_{R} + \deg U), \end{align*} which is the first part of the statement. Furthermore, by (\ref{homlem4.6}), multiplication by $U$ is an isomorphism in source degree greater than or equal to $\max\{\deg H_{0}(M), \deg H_{1}(M)\} + 5D_{R} + 1$. Together with~(\ref{claim-prop}), this gives the second claim of the proposition. It remains to show~(\ref{claim-prop}). For $i=0$, we have $H_{0}(M) = A \otimes_{R} M = M/R_{>0}M$ and $H_{0}(\KK(M)) = M/\im \dd_{1}$.
Now, $\dd_{1}(\omega, g, x) = r(g)\cdot x$ for all elementary tensors in $\KK(M)_{1}$, so $\im \dd_{1} = R_{>0}M$; hence $H_{0}(\KK(M)) = H_{0}(M)$ and the claim holds with equality. For $i=1$, we factor the map $\dd_{1}\colon \KK(M)_{1} \to M$ as $\dd_{1} = \beta \circ \alpha$, $$ \KK(M)_{1} = A^{l_{0}^{\XI}} \otimes_{A} A\langle\cc\rangle \otimes_{A} M(1) \overset{\alpha}{\to} R_{>0}\otimes_{R} M\overset{\beta}{\to} M, $$ with $\alpha(\omega \otimes g \otimes x) = r(g) \otimes x$ and $\beta (r \otimes x) = r\cdot x$. As $R_{>0}$ is generated by elements of the form $r(g)$, we can factor any $r \in R_{>0}$ as $r = r(g)\cdot r'$ for some $r' \in R$; therefore, $\alpha$ is surjective. It is also degree-preserving -- note that $R_{>0} \otimes_{R} M$ is graded via $\deg(r\otimes x) = \deg r + \deg x$. Now, we have a sequence $$ \KK(M)_{2} \overset{\dd_{2}}{\to} \ker\dd_{1} \to H_{1}(\KK(M))\to 0 $$ which is by definition exact in the middle and on the right, so $\ker\dd_{1}$ is generated as an $A$-module in degree at most $\max\{\deg(\im\dd_{2}), \deg (H_{1}(\KK(M)))\}$. Now, the composition $$ \KK(M)_{2} \overset{\dd_{2}}{\to} \ker\dd_{1} \overset{\alpha}{\to} R_{>0}\otimes_{R} M $$ is zero, since it maps $\omega \otimes \underline g \otimes x$ to the element \begin{align*} &r((( \tau^{\omega}_{0,1} )^{-1}\underline g)_{1} ) \cdot r((( \tau^{\omega}_{0,1} )^{-1}\underline g)_{2} ) \otimes x - r((( \tau^{\omega}_{1,1} )^{-1}\underline g)_{1} ) \cdot r((( \tau^{\omega}_{1,1} )^{-1}\underline g)_{2} ) \otimes x \end{align*} which equals zero because $( \tau^{\omega}_{0,1} )^{-1}\underline g$ and $( \tau^{\omega}_{1,1} )^{-1}\underline g$ are equivalent under the Hurwitz action. In other words, the elements of $\im\dd_{2} \subset \ker\dd_{1} \subset \KK(M)_{1}$ are killed by $\alpha$. Therefore, $\alpha(\ker \dd_{1})$ is generated as an $A$-module in degree $\leq \deg (H_{1}(\KK(M)))$. Now, as $\alpha$ is surjective, this implies $\deg(\ker \beta) \leq \deg (H_{1}(\KK(M)))$ (recall that $A$ is graded trivially). But the exact sequence $ 0 \to R_{>0} \to R \to A \to 0 $ is a projective resolution of $A = R/R_{>0}$; tensoring with $M$, we get an identification $ H_{1}(M) = \Tor_{1}^{R}(A,M) = \ker\beta. $ Thus, we have proved (\ref{claim-prop}). \end{proof} \subsection*{The proof} \begin{proof}[Proof of Theorem~\ref{the-theorem}]\label{hurwitz-proof} We follow the proof strategy of~\cite[Thm.~6.1]{0912.0325}, with an extra focus on the determination of the explicit stable range. By assumption, $D_{R} = D_{R}(U)$ is finite and positive. We know from Remark~\ref{deg-one} that $R$ is generated in degree one as an algebra over the commutative ring $A$. From these facts, we conclude that $R$ is $A$-stabilized by $U$. As before, let $ M_{p} = \bigoplus_{n\geq 0} H_{p}(\Hur_{G,n\cdot\XI}^{c}; A). $ In order to prove the theorem, we need to show that multiplication by $U$, $M_{p} \overset{U\cdot}{\to} M_{p}$, is an isomorphism in source degree $n > (8 D_{R} + \deg U)p + 7D_{R} + \deg U$. To see this, we show that \begin{equation}\label{thm-statement} \deg (H_{q}(\KK(M_{p}))) \leq D_{R} + \deg U + (8 D_{R} + \deg U)p + D_{R}q \end{equation} holds for all $q\geq 0$. Then, the theorem follows from the second statement of Proposition~\ref{homprop-4.2}, considering the cases $q = 0,1$. We prove~(\ref{thm-statement}) by induction on~$p$. For $p = 0$, we have $M_{0} = R$. Now, by~(\ref{4-12}), $$ \deg (H_{q}(\KK(R))) \overset{(\ref{4-12})}{\leq} D_{R} + \deg U + q \leq D_{R} + \deg U + D_{R} q, $$ which implies the assertion.
For the inductive step, suppose that~(\ref{thm-statement}) holds for $0\leq p<P$. The final terms of $\KK(M_{P})$ are given by $$ \KK(M_{P})_{2} \overset{\dd_{2}}{\to} \KK(M_{P})_{1} \overset{\dd_{1}}{\to} M_{P}. $$ The $n$-th graded part of $\dd_{2}$ is a differential $\dd\colon E^{1}_{1P} \to E^{1}_{0P}$ in the spectral sequence from Corollary~\ref{specseq-corollary}. In the range $p\leq\left\lfloor\frac{n}{2} \right\rfloor - 2$, we can identify $\dd_{1}$ with an edge map in the same spectral sequence. Now, $E^{2}_{qp}$ is given by the $n$-th graded part of $H_{q+1}(\KK(M_{p}))$. From the inductive hypothesis, for $j>1$, we obtain $E^{2}_{j, P+1-j} = 0$ for $$n > 10 D_{R} + 2\deg U - (7 D_{R}+\deg U)j + (8D_{R}+\deg U)P.$$ Hence, we have $E^{2}_{0P} = E^{\infty}_{0P}$ for $n>-4 D_{R} + (8D_{R}+\deg U)P$, as there are no nonzero differentials going into or out of $E^{2}_{0P}$. Similarly, for $j>0$, we have $E^{2}_{j,P-j} = 0$ for $$n > 2 D_{R} + \deg U - (7 D_{R}+\deg U)j + (8D_{R}+\deg U)P.$$ Thus, for $n > -5 D_{R } + (8D_{R}+\deg U)P$, the only graded piece of $H_{P}(\Hur_{G,n\cdot\XI}^{c};A)$ which does not vanish is $E^{\infty}_{0P}$. Combining these results, we see that $E^{2}_{0P} \cong E^{\infty}_{0P} \cong H_{P}(\Hur_{G,n\cdot\XI}^{c};A)$ as long as $n> -4 D_{R} + (8D_{R}+\deg U)P$. In particular, the edge map $\coker (d_{2}) \to M_{P}$ is an isomorphism in degrees above $-4 D_{R} + (8D_{R}+\deg U)P$, and so \begin{equation}\label{pf-bound} \max\{\deg (H_{0}(\KK(M_{P}))), \deg (H_{1}(\KK(M_{P})))\} \leq -4 D_{R} + (8D_{R}+\deg U)P. \end{equation} Now, we make use of the first statement of Proposition~\ref{homprop-4.2} and obtain \begin{align*} \deg (H_{q}(\KK(M_{P}))) &\overset{(\ref{pf-bound})}{\leq} -4 D_{R} + (8D_{R}+\deg U)P + D_{R}q + 5D_{R} + \deg U\\ &\overset{\phantom{(\ref{pf-bound})}}{=} D_{R} + \deg U + (8D_{R}+\deg U)P + D_{R}q, \end{align*} which is what we wanted to show. \end{proof} \subsection*{Unmarked covers} The $G$-action on Hurwitz vectors by simultaneous conjugation induces an action of $G$ on the Hurwitz spaces $\Hur_{G,n\cdot\XI}^{c}$. The corresponding \emph{Hurwitz space for unmarked covers} is defined as the quotient $\mathcal{H}_{G,n\cdot\XI}^{c} = \Hur_{G,n\cdot\XI}^{c}/G$. Now, in general, $G$ does not act freely on Hurwitz vectors. Anyway, for suitable stabilizing elements $U$, homological stability for Hurwitz spaces descends to spaces of unmarked covers, as we will see in this section's theorem. Note that since the Hurwitz action commutes with conjugation, the $G$-action on Hurwitz vectors gives the ring $R$ and the modules $M_{p}$ the structure of modules over the group ring $A[G]$. \begin{corollary}\label{thm-unmarked} Let $A$ be a field whose characteristic is either zero or prime to the order of $G$. Assume that there is a $G$-invariant element $U\in R$ which for any $p\geq 0$ induces isomorphisms $H_{p}(\Hur_{G, n\cdot\XI}^{c} ; A) \overset{\sim}{\to} H_{p}(\Hur_{G, (n+\deg U)\cdot\XI}^{c} ; A)$ for $n\geq r(p)$. Then, $H_{p}(\mathcal{H}_{G, n\cdot\XI}^{c} ; A) \cong H_{p}(\mathcal{H}_{G, (n+\deg U)\cdot\XI}^{c} ; A)$ holds in the same range. \end{corollary} \begin{proof} By a transfer argument, we have $H_{p}(\mathcal{H}^{c}_{G,n\cdot\XI};A) \cong H_{p}(\Hur_{G,n\cdot\XI}^{c};A)_{G}$ for all $n,p\geq 0$. 
The assumption that $U$ is fixed under the action of $G$, together with the $G$-equivariance of the maps $G^{n\xi}\times G^{\deg U\cdot\xi} \to G^{(n+\deg U)\xi}$, implies that in the stable range, the isomorphism $H_{p}(\Hur_{G, n\cdot\XI}^{c} ; A) \cong H_{p}(\Hur_{G, (n+\deg U)\cdot\XI}^{c} ; A)$ is an isomorphism of $A[G]$-modules. Taking $G$-coinvariants yields the result. \end{proof} \section{Application}\label{application} We work in the same setting as in the previous section. \subsection*{Connected covers and stable homology} In the purely abelian case, we have seen in Section~\ref{homstabhurwitz} that Hurwitz spaces \emph{are} homotopy equivalent to colored configuration spaces. In the preprint~\cite{1212.0923}, this is generalized by showing that under certain conditions, the stable homology of the components of Hurwitz spaces is isomorphic to the stable homology of the corresponding colored configuration space. Due to an error in the application of results from the earlier article~\cite{0912.0325}, that preprint was withdrawn from the arXiv in~2013. Fortunately, the statement of Theorem~\ref{evwii} remains unaffected. We refer to Ellenberg's blog post \cite{Quomod} for an explanation of the mistakes and a clarification of which results are still correct. For $\underline a = (a_{1}, \ldots, a_{t})\in \N^{t}$, we define the Hurwitz vector \begin{equation}\label{v-vector} V = V(\underline a) = \prod_{i=1}^{t}\prod_{g \in c_{i}} \left( g^{(a_{i} \ord(g))} \right), \end{equation} where the product operation means concatenation of tuples. Now, if there is an $n \in \N$ such that we have $\sum_{g\in c_{i}} a_{i}\ord(g) = n\xi_{i}$ for all $i = 1, \ldots, t$, we have $V \in \cc^{n}$ up to the action of $\Br_{n\cdot\XI}$. It is not hard to see that for any $\XI\in\N^{t}$, such an $\underline a$ exists. Hence, in this case, $V$ may be interpreted as an element of $R = R^{A,c}_{G,\XI}$, and we write $V\in R$. By construction, we have $\partial V = 1$, so $V$ is central in $R$. By $\CHur_{G,n\cdot\underline\xi}^{c} \subset \Hur_{G,n\cdot\underline\xi}^{c}$, we denote the union of connected components of $\Hur_{G,n\cdot\underline\xi}^{c}$ parametrizing covers with full monodromy. \begin{theorem}[\textsc{Ellenberg--Venkatesh--Westerland}, {\cite[Cor.~5.8.2]{1212.0923}}]\label{evwii} Suppose that for any $p\geq 0$, the element $V$ from (\ref{v-vector}) induces an isomorphism $$ H_{p}(\CHur_{G,n\cdot\underline\xi}^{c};\Q)\overset{\sim}{\to} H_{p}(\CHur_{G,(n+\deg V)\cdot\underline\xi}^{c};\Q) $$ for $n\geq r(p)$. Then for any connected component $X$ of $\CHur_{G,n\cdot\underline\xi}^{c}$, the branch point map $X \to \Conf_{n\cdot \underline\xi}$ induces an isomorphism $$ H_{p}(X;\Q) \overset{\sim}{\to} H_{p}(\Conf_{n\cdot\underline\xi};\Q) $$ whenever $n\geq r(p)$. \end{theorem} In Theorem~\ref{the-theorem}, we give a condition for homological stabilization of the sequence $\{\Hur_{G,n\cdot\XI}^{c} \}$, while Theorem~\ref{evwii} is about the subspaces $\CHur_{G,n\cdot\XI}^{c}$ of \emph{connected} covers. In order to make the two theorems compatible, the condition \begin{equation}\label{all-connected} \Hur_{G, n\cdot\XI}^{c} = \CHur_{G, n\cdot\XI}^{c} \text{ for all } n\geq 1 \end{equation} must be satisfied.
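Since the proof of Proposition~\ref{conn-standard} below manipulates tuples through braid moves, a concrete rendering of the Hurwitz action may be helpful. The following Python sketch is our own illustration (all names are hypothetical, and it is not drawn from the paper or any accompanying code); it implements the standard braid moves $\sigma_{i}\colon (\ldots, g_{i}, g_{i+1}, \ldots) \mapsto (\ldots, g_{i}g_{i+1}g_{i}^{-1}, g_{i}, \ldots)$ and counts orbits by brute force.
\begin{verbatim}
from itertools import product

def compose(p, q):
    # (p*q)(x) = p(q(x)); permutations stored as tuples of images
    return tuple(p[q[x]] for x in range(len(q)))

def inverse(p):
    r = [0] * len(p)
    for i, v in enumerate(p):
        r[v] = i
    return tuple(r)

def sigma(tup, i):
    # sigma_i: (..., g_i, g_{i+1}, ...) -> (..., g_i g_{i+1} g_i^{-1}, g_i, ...)
    g, h = tup[i], tup[i + 1]
    return tup[:i] + (compose(compose(g, h), inverse(g)), g) + tup[i + 2:]

def sigma_inv(tup, i):
    # inverse move: (g_i, g_{i+1}) -> (g_{i+1}, g_{i+1}^{-1} g_i g_{i+1})
    g, h = tup[i], tup[i + 1]
    return tup[:i] + (h, compose(compose(inverse(h), g), h)) + tup[i + 2:]

def count_orbits(tuples):
    remaining, count = set(tuples), 0
    while remaining:
        frontier = [remaining.pop()]
        while frontier:
            t = frontier.pop()
            for i in range(len(t) - 1):
                for s in (sigma(t, i), sigma_inv(t, i)):
                    if s in remaining:
                        remaining.remove(s)
                        frontier.append(s)
        count += 1
    return count

# the three transpositions in S_3 (a single conjugacy class c), and the
# number of braid orbits on c^4:
S3 = [p for p in product(range(3), repeat=3) if sorted(p) == [0, 1, 2]]
transpositions = [p for p in S3 if sum(p[i] == i for i in range(3)) == 1]
print(count_orbits(product(transpositions, repeat=4)))
\end{verbatim}
In the simplest case $t = 1$ and $\xi = 1$, the orbit count computed this way is exactly the number $b_{0}$ of connected components of the corresponding Hurwitz space.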
Now, (\ref{all-connected}) holds if and only if $G$ is \emph{invariably generated} by $c$: \begin{definition} We say that $G$ is \emph{invariably generated} by a tuple $c = (c_{1}, \ldots, c_{t})$ of distinct conjugacy classes in $G$ if for all choices of elements $g_{i} \in c_{i}$, $i = 1, \ldots, t$, $G$ is generated by $g_{1}, \ldots, g_{t}$. In this case, we call $c$ an \emph{invariable generation system} for $G$. \end{definition} By Jordan's theorem (\cite{Jordan}), the list of all nontrivial conjugacy classes of $G$ invariably generates $G$. The following proposition is a slight adaptation of a standard result about Hurwitz action orbits. \begin{proposition}\label{conn-standard} If $c$ invariably generates $G$, there is an $N\in\N$ such that for all $n \geq N$, concatenation with any $g \in \cc$ yields a bijection $$ \cc^{n}/\Br_{n\cdot\XI} \overset{1:1}{\longleftrightarrow} \cc^{n+1}/\Br_{(n+1)\cdot\XI}. $$ Thus, any Hurwitz vector $U\in R = R^{\Z,c}_{G,\XI}$ induces an isomorphism $$ H_{0}(\Hur_{G, n\cdot\XI}^{c} ; \Z ) \cong H_{0}(\Hur_{G, (n+\deg U)\cdot\XI}^{c} ; \Z) $$ for all $n \geq N$. In particular, $D_{R}(U)$ is finite for any vector $U \in R$ with $\partial U = 1$. \end{proposition} \begin{proof} We follow the proof in \cite[Prop.~3.4]{0912.0325}, where the first statement is proved for $t = 1$. The last two statements are direct consequences of the first one. Let $\underline h \in \cc^{n+1}$. We need to show that for $n$ sufficiently large, there is a tuple $\underline h' \in \cc^{n}$ such that $\underline h$ is equivalent under the $\Br_{(n+1)\cdot\XI}$-action to $(g, \underline h')$. This shows that the maps $\cc^{n}/\Br_{n\cdot\XI} \to \cc^{n+1}/\Br_{(n+1)\cdot\XI}$ given by concatenation with $g = (g_{1}, \ldots, g_{\xi})$ are surjective for $n \gg 0$; since the involved sets are finite, it follows that these maps are eventually bijective. In the following, we work with the full $\Br_{(n+1)\xi}$-action. If we construct a tuple $(g, \underline h'')$ which is equivalent under the $\Br_{(n+1)\xi}$-action to $\underline h$, there is another braid which transforms~$\underline h''$ to an element of $\cc^{n}$, since the Hurwitz action permutes conjugacy types. Thus, it suffices to show that we can realize any $g_{0}\in G$ as the first entry of a tuple which is $\Br_{(n+1)\xi}$-equivalent to $\underline h$; the claim follows by successive application of this property. Assume $g_{0} \in c_{1}$. For $n\gg 0$, there exists an element $g'_{0} \in c_{1}$ that appears at least $d+1 = \ord(g'_{0}) + 1$ times in $\underline h$. We may use the Hurwitz action to pull $d$ of these elements to the front of $\underline h$, resulting in a new tuple $({g'_{0}}^{(d)}, \tilde h_{1}, \ldots, \tilde h_{(n+1)\xi - d})$. By the invariable generation property, the elements $\tilde h_{1}, \ldots, \tilde h_{(n+1)\xi - d}$ generate $G$ (note that $g'_{0}$ appears at least once in these last $(n+1)\xi - d$ entries). Now for all $i = 1, \ldots, (n+1)\xi -d$, there is a braid $\sigma_{i}\in\Br_{(n+1)\xi}$ which satisfies $$ \sigma_{i}\cdot ({g'_{0}}^{(d)}, \tilde h_{1}, \ldots, \tilde h_{(n+1)\xi - d}) = ((\tilde h_{i}g'_{0}\tilde h_{i}^{-1})^{(d)}, \tilde h_{1}, \ldots, \tilde h_{(n+1)\xi - d}). $$ It is given by $$ \sigma_{i} = \alpha_{i}^{-1} (\sigma_{d+i-1} \cdots \sigma_{i+1} \sigma_{i}^{2} \sigma_{i+1} \cdots \sigma_{d+i-1}) \alpha_{i}, $$ where $\alpha_{i}$ pulls the $d$-tuple $({g'_{0}}^{(d)})$ in front of $\tilde h_{i}$, which works since the boundary of $({g'_{0}}^{(d)})$ is trivial.
Thus, the Hurwitz action may conjugate the elements $g'_{0}$ at the beginning of the tuple by any element in the group generated by $\tilde h_{1}, \ldots, \tilde h_{(n+1)\xi - d}$, which is equal to $G$. Hence, we may realize $g_{0}$ as the first entry. \end{proof} We are now ready to conclude: \begin{theorem}\label{thm-connected} Let $G$ be a finite group, $c = (c_{1}, \ldots, c_{t})$ an invariable generation system for $G$, and $\XI = (\xi_{1}, \ldots, \xi_{t})\in\N^{t}$. Let $U \in R = R^{\Z,c}_{G,\XI}$ be a Hurwitz vector with $\partial U = 1$. Then for any $p \geq 0$, there are isomorphisms \begin{align*} H_{p}(\Hur_{G, n\cdot\XI}^{c}; \Z ) &\cong H_{p}(\Hur_{G, (n+\deg U)\cdot\XI}^{c}; \Z )\\ H_{p}(\Hur_{G, n\cdot\XI}^{c}; \Q ) &\cong H_{p}(\Hur_{G, (n+1)\cdot\XI}^{c}; \Q ) \end{align*} for $n> (8D_{R}(U) + \deg U)p + 7D_{R}(U) + \deg U$. For $b = b_{0}(\Hur_{G, (D_{R}(U) + 1)\cdot\XI}^{c})$, $$ H_{p}(\Hur_{n\cdot\XI}^{c};\Q) \cong H_{p}( \Conf_{n\cdot\XI}; \Q ) \otimes_{\Q} \Q^{b} $$ in the same range. \end{theorem} \begin{proof} For $G$ abelian, an even stronger statement follows from Corollary~\ref{abelian-case}. We may thus assume that $D_{R}(U)>0$. The last statement of Proposition~\ref{conn-standard} tells us that the assumptions of Theorem~\ref{the-theorem} are satisfied for $U$. Indeed, for $p\geq 0$, \begin{equation*}\label{periodical-stab} H_{p}(\Hur_{G, n\cdot\XI}^{c}; \Z ) \cong H_{p}(\Hur_{G, (n+\deg U)\cdot\XI}^{c}; \Z ) \end{equation*} as long as $n> (8D_{R}(U) + \deg U)p + 7D_{R}(U) + \deg U$. By definition of $D_{R}(U)$, the number $b = b_{0}(\Hur_{G, n\cdot\XI}^{c})$ of connected components is stable for $n> D_{R}(U)$. We note that $V = V(\underline a) \in R$ is also a $\Z$-stabilizing element for $R$ by Proposition~\ref{conn-standard} and the fact that $\partial V = 1$. Fix $p\geq 0$ and $n> (8D_{R}(U) + \deg U)p + 7D_{R}(U) + \deg U$. Now, $n$ is \emph{always} in the stable range for $\{\Conf_{n\cdot\XI} \mid n\geq 0\}$, given by $n \geq \frac{2p}{\min\XI}$. Indeed, we have $D_{R}(U)>0$ because $G$ is non-abelian. We choose $k \geq 0$ such that $n+k \deg U$ is in the stable range for the stabilizing element~$V$. We obtain \begingroup \allowdisplaybreaks \begin{align*} H_{p}(\Hur_{n\cdot\XI}^{c};\Q) \overset{\phantom{((}\ref{the-theorem}\phantom{))}}{\cong} &H_{p}(\Hur_{(n+k\deg U)\cdot\XI}^{c};\Q) \\ \overset{\phantom{(}\ref{evwii}\phantom{)}}{\cong} &H_{p} (\Conf_{(n+k\deg U)\cdot \XI};\Q) \otimes_{\Q} \Q^{b} \\ \overset{\text{(\ref{tran})}}{\cong} &H_{p} (\Conf_{n\cdot \XI};\Q) \otimes_{\Q} \Q^{b} \\ \overset{\text{(\ref{tran})}}{\cong} &H_{p} (\Conf_{(n+k\deg U+1)\cdot \XI};\Q) \otimes_{\Q} \Q^{b} \\ \overset{\phantom{(}\ref{evwii}\phantom{)}}{\cong} &H_{p}(\Hur_{(n+k\deg U+1)\cdot\XI}^{c};\Q) \\ \overset{\phantom{((}\ref{the-theorem}\phantom{))}}{\cong} &H_{p}(\Hur_{(n+1)\cdot\XI}^{c};\Q), \end{align*} \endgroup which yields the remaining assertions. \end{proof} \subsection*{Outlook} The present paper may be understood as a sequel to the topological part of \cite{0912.0325}. There are still open questions regarding homological stabilization for Hurwitz spaces: \begin{itemize} \item[-] For $t=1$, the condition from Theorem~\ref{the-theorem} is equivalent to the non-splitting property. Is there an analogous translation for the general case? \item[-] Until now, we only considered the \emph{diagonal} stabilization direction, i.e., we considered the sequence of shapes $\{n\cdot\XI\}$.
Is there also a general theorem for the stabilization in the direction of a fixed conjugacy class, i.e., sequences $\{\XI + ne_{i}\}$, where $e_{i}$ is a unit vector? This is motivated by the corresponding result for colored configuration spaces in \cite{1312.6327}. \item[-] Does the homological stabilization carry over to base spaces of higher genus? \item[-] With Harer's theorem for $\mathcal{M}_{g}$ in mind, it is a natural question whether homological stabilization happens not only in the direction of the number of branch points, but also in the direction of the genus of the covered surface (\emph{genus stabilization}). For the zeroth Betti number, this has been tackled in the articles \cite{MR3428412} and \cite{1301.4409} in the slightly different setting of the substrata $\mathcal{M}_{g}(G)$ of $\mathcal{M}_{g}$ which contain the algebraic curves admitting a faithful action by a fixed finite group $G$. \end{itemize}
\section{Introduction and motivation} In addition to its relevance to nuclear structure, the strange quark condensate in the nucleon, $\LL N | \bar s s| N \RR$, is important in understanding experimental searches for dark matter, since some of the leading candidates for dark matter couple most strongly to the nucleon through interactions with strange quark loops. For example, the importance of this quantity is emphasized in Refs.~\cite{BALTZ06,ELLIS08}. The strange and light quark condensates in the nucleon have been calculated through effective field theories of nucleons and mesons\cite{NUCEFF}, and the heavy quark content can be studied perturbatively\cite{HEAVY}. Previous lattice studies of the nucleon strange quark content have been done in the quenched approximation\cite{FUKUGITA02,DONG02}, with two flavors of Wilson or overlap quarks\cite{SESAM98,UKQCD01,JLQCD08}, or in an exploratory study with 2+1 flavor stout quarks\cite{BALI08}. A recent study uses 2+1 flavor baryon mass fits\cite{YOUNGTHOMAS09}. Some, though by no means all, of these studies have suggested that $\LL N | \bar s s | N \RR$ might be much larger than found here. In this work we use lattices with 2+1 flavors of dynamical quarks (two light, one strange) and nucleon correlators generated from them. These were generated by the MILC collaboration, except for one long ensemble from the UKQCD collaboration. These simulations use a Symanzik improved gauge action and an improved staggered quark action. Details of the action, the ensembles of gauge configurations, and the techniques for computing the nucleon correlators are in Ref.~\cite{RMP}. The simulations cover a range of light quark masses and a range of lattice spacings, which allows us to check the extrapolations to zero lattice spacing and to the physical light quark mass. The lattices have a spatial size of 2.4 fm or larger, with $m_\pi L$ ranging from 3.8 to 6, so that finite size effects will be small. Each ensemble, or set of gauge configurations with a given gauge coupling and quark masses, used here contains from 500 to 4500 equilibrated lattices, with a total of 25784 lattices used in the analysis. Differentiation of a path integral expression for the nucleon mass with respect to the strange quark mass (the Feynman-Hellmann theorem) relates the matrix element $\LL N \left| \bar s s \right| N\RR$ to $\PAR{M_N}{m_s}$. In particular, \begin{equation} \LL N \left| \int\, d^3x\, \bar s s \right| N\RR - \LL 0 \left| \int\, d^3x\, \bar s s \right| 0\RR = \PAR{M_N}{m_s}\Big|_{\alpha_s,m_l} \ \ \ \ ,\end{equation} where the left hand side makes definite what we mean by $\LL N|\bar s s | N \RR$. Note the vacuum subtraction and the integral over space. Since we expect that the nucleon is made mostly from light quarks, it may seem strange to suppose that $M_N$ depends strongly on the mass of the strange quark, $m_s$. However, to equate $\LL N | \bar s s | N \RR$ with $\PAR{M_N}{m_s}$, differentiation of the path integral must be done with all other parameters in the action held fixed. Such a change in $m_s$ would cause all dimensionful QCD quantities to change by roughly the same factor, with most of this change interpreted as a change in the physical lattice spacing. For example, if $f_\pi$ were used to determine the lattice spacing, both lattice quantities $aM_N$ and $af_\pi$ might change, with ratio $aM_N/af_\pi$ approximately constant. Thus it is not terribly surprising to find $\PAR{M_N}{m_s}$ of order one.
The MILC ensembles contain runs with different values for $m_s$, so it is in principle possible to determine $\PAR{M_N}{m_s}$ from a fit to $M_N$ on different ensembles. However, we find that correlations of $\bar s s$ with the nucleon correlator give a better signal. (In one case to be discussed later, we do have a useful check from mass fits.) In the lattice simulations, the nucleon mass $M_N$ is obtained by a fit to the nucleon correlator $P(t) = \LL {\cal O}_N (0) {\cal O^\prime}_N (t) \RR$, where ${\cal O}_N$ and ${\cal O^\prime}_N$ are lattice operators with the best practicable overlap with the nucleon. As such, it is just a complicated function of the correlator at different times: \begin{equation} \label{eq_massfunction1} M_N = f\LP P(t_1),P(t_2),P(t_3) \ldots \RP \ \ \ \ .\end{equation} Using the chain rule, we can rewrite the derivative: \begin{equation} \PAR{M_N}{m_s} = \sum_i \PAR{M_N}{P(t_i)} \PAR{P(t_i)}{m_s} \ \ \ \ .\end{equation} The partial derivatives $\PAR{P(t_i)}{m_s}$ can be evaluated by using the Feynman-Hellmann theorem in reverse to relate them to $\LL P(t_i) \int d^4x\, \bar s s \RR - \LL P(t_i) \RR \LL \int d^4x\, \bar s s \RR$. Then $\PAR{M_N}{m_s}$ is evaluated by adding to and subtracting from the nucleon correlator $P(t)$ a small multiple of $\PAR{P(t)}{m_s}$ and examining the change in the fit result. It may seem that this second use of the Feynman-Hellmann theorem has just reversed the original calculation relating $\LL N | \bar s s | N \RR$ to $\PAR{M_N}{m_s}$. If the source and sink for the lattice nucleon correlator $P(t)$ created nothing but a normalized nucleon state, this would be the case. But in practice the lattice correlator contains opposite parity particles with almost the same amplitude as the nucleon but with higher mass, and excited states of both parities. The fitting procedure implicit in Eq.~\ref{eq_massfunction1} is designed to determine a nucleon mass from this complicated correlator, by explicitly including opposite parity (alternating in $t$) contributions and by ignoring the correlator at short separations, so that the excited state contributions are suppressed. Nucleon correlators have been computed on most of the MILC ensembles. Typically these are averaged over eight Coulomb gauge wall sources in each lattice, so most of the lattice volume is involved in computing this correlator. Also, the MILC code does a stochastic estimate of $\int d^4x\, \bar s s$ using a random source (covering the entire lattice) when the lattice is generated or read in for a measurement. Thus, we have $\bar s s$ measurements from several random sources on each lattice, and can compute the correlation between the nucleon correlator and $\bar s s$. (The number of random sources ranged from two to fifteen, depending on the ensemble, with an average of ten.) We fit the nucleon correlators to a form including the nucleon and an opposite parity state, using the distance range $D_{min}$ to $D_{max}$ \begin{equation} P(t) = A e^{-M_N t} + A^\prime (-1)^t e^{-M^\prime t} \ \ \ \ .\end{equation} Since the fractional statistical errors on the nucleon correlator increase quickly with minimum distance, it is advantageous to use as small a minimum distance in the fits as possible. Since we have a quark-line disconnected correlation function, statistical errors are much larger than in simple hadron mass calculations. Thus, in fitting the perturbed nucleon correlators, we have chosen smaller minimum distances than in our fits to the nucleon masses themselves.
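As a rough illustration of this procedure (not the MILC analysis code; the array names, starting values, and step size are hypothetical), the perturb-and-refit evaluation of $\PAR{M_N}{m_s}$ might be sketched in Python as follows.
\begin{verbatim}
import numpy as np
from scipy.optimize import curve_fit

# Sketch only: extract dM_N/dm_s by perturbing the averaged nucleon
# correlator with a small multiple of its measured m_s-derivative and
# refitting.  t is assumed to be integer-valued (lattice time slices).

def model(t, A, M, Ap, Mp):
    # nucleon plus opposite-parity (sign-alternating) state
    return A * np.exp(-M * t) + Ap * (-1.0) ** t * np.exp(-Mp * t)

def fitted_mass(P_t, t, p0=(1.0, 0.5, 0.5, 0.8)):
    popt, _ = curve_fit(model, t, P_t, p0=p0, maxfev=20000)
    return popt[1]                          # M_N from the fit

def dMN_dms(P_t, dP_dms, t, eps=1e-3):
    # central difference of the fitted mass under P -> P +/- eps*dP/dm_s
    m_plus = fitted_mass(P_t + eps * dP_dms, t)
    m_minus = fitted_mass(P_t - eps * dP_dms, t)
    return (m_plus - m_minus) / (2.0 * eps)

# usage with hypothetical averaged data restricted to the fit window:
# t = np.arange(D_min, D_max + 1)
# print(dMN_dms(P_avg[D_min:D_max + 1], dP_avg[D_min:D_max + 1], t))
\end{verbatim}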
In particular, we have chosen $D_{min}=5$, $7$ and $10$ for the $a=0.12$ fm, $a=0.09$ fm and $a=0.06$ fm ensembles respectively, or a consistent physical distance of about $0.6$ fm. Since the nucleon mass, $M_N$, computed from these same correlators can be determined with a statistical error of order one percent, its dependence on minimum distance can be used to estimate the resulting systematic error. Fits to $M_N$ with these minimum distances differ by between 1\% and 5\% from the $M_N$ fit with larger minimum distances. Alternatively, from looking at the values of $\PAR{M_N}{m_s}$ for various minimum distances, it appears that there could be errors as large as 10\% from the choice of fit range. (The choice of $D_{max}$ has negligible effects.) Figure~\ref{samplefigs} shows the nucleon correlator and its derivative with respect to the strange quark mass for a sample ensemble. The second panel of the figure shows $\PAR{M_N}{m_s}$ (unrenormalized) for three of the $a \approx 0.09$ fm ensembles versus the minimum distance included in the fit, while the third panel shows the nucleon mass itself as a function of $D_{min}$. Since the quantity we are computing is a complicated, and implicitly defined, function of the averages measured on the lattice, we use a jackknife analysis to estimate statistical errors. Since consecutive lattices are correlated, we eliminated blocks of ten consecutive lattices, or 50 to 60 simulation time units, in the jackknife analysis (see the sketch below). Using larger blocks made only a small difference. \begin{figure*}[tbh] \hspace{-0.1in} \includegraphics[width=0.325\textwidth]{prop_and_deriv_0093.ps} \includegraphics[width=0.335\textwidth]{deriv_vs_dmin_a09.ps} \includegraphics[width=0.325\textwidth]{nucfits_0093.ps} \caption{The nucleon correlator and the derivative of this correlator with respect to $m_s$ for the ensemble with $am_l=0.0093$ and $am_s=0.031$ (first panel). For the derivative, the squares are points where the derivative is negative, and crosses are points where it is positive. The vertical lines show the range used in fitting the correlator. The second panel shows $\PAR{M_N}{m_s}$ for three ensembles with $a\approx 0.09$ fm as a function of the minimum distance used in the fitting, and the third panel shows the fitted nucleon mass itself versus $D_{min}$. The error bars labelled ``10\%'' in the second and third panels show the size of the ten percent systematic error estimate from excited state contamination. \label{samplefigs} } \end{figure*} Since the strange quark mass is renormalization scheme and scale dependent, so is the derivative $\PAR{M_N}{m_s}$. For a useful result, we wish to express our answer in a renormalization scheme useful for computations of cross sections. The relation between the strange quark mass in the Asqtad regularization and in the $\overline{\mathrm {MS}}$ scheme is known to two-loop order in perturbation theory \cite{QUARKMASS2}. \begin{equation} \PAR{M_N}{m_s(\overline{\mathrm{MS}}, {\rm 2\ GeV})} = \frac{u_0}{Z_m} \PAR{M_N}{m_s({\rm Asqtad}, 1/a)} \end{equation} where the factor of $u_0$ converts the lattice definition of the quark mass used here to the definition used in Ref.~\cite{QUARKMASS2}, and $Z_m$ can be found in Ref.~\cite{QUARKMASS2}. Since in the subsequent steps in this analysis we will be combining results at different lattice spacings $a$, it is most consistent to make the conversion to the $\overline{\mathrm{MS}}$(2 GeV) scheme before making chiral and continuum extrapolations.
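Returning to the jackknife procedure mentioned above, a minimal sketch of a blocked jackknife for a derived quantity (our illustration only; \texttt{fitted\_mass} is the hypothetical helper from the previous sketch) is:
\begin{verbatim}
import numpy as np

def blocked_jackknife(data, f, block=10):
    """Blocked jackknife error for a derived quantity f(data),
    eliminating `block` consecutive lattices at a time."""
    data = np.asarray(data)
    n = (len(data) // block) * block        # drop the ragged tail
    data = data[:n]
    nblocks = n // block
    estimates = []
    for b in range(nblocks):
        mask = np.ones(n, dtype=bool)
        mask[b * block:(b + 1) * block] = False
        estimates.append(f(data[mask]))
    est = np.asarray(estimates)
    err = np.sqrt((nblocks - 1) / nblocks * np.sum((est - est.mean()) ** 2))
    return f(data), err

# usage with a hypothetical (configurations x time) array of correlators:
# center, err = blocked_jackknife(P_configs,
#                                 lambda d: fitted_mass(d.mean(axis=0), t))
\end{verbatim}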
The strange quark masses, $m_s$, used in the MILC simulations were of necessity estimated before the simulations were done, and the correct strange quark masses were only known after the pseudoscalar masses were analyzed. These differ significantly from the $m_s$ used in the simulations. For lattice spacings $0.12$ fm, $0.09$ fm and $0.06$ fm the values of $am_s$ used in most of the simulations were $am_s=0.050$, $0.031$ and $0.018$, while the corrected values are $0.036$, $0.026$ and $0.019$, respectively. (A few ensembles were run with a lighter strange quark mass, $0.6$ times the above values.) To adjust to the correct $m_s$, we use the fact that the light quark $\bar\psi\psi$ operator was also evaluated on all of these lattices. In some of the largest ensembles the actual light quark mass used was still fairly large --- 0.2, 0.4 or even 0.6 times the simulation strange quark mass, outside the chiral regime and with qualitative behavior similar to heavy quarks. For example, one of the ensembles with $a \approx 0.12$ fm was generated with light and strange quark masses $am_l = 0.03$ and $am_s = 0.05$, where the correct strange quark mass determined later was about $0.036$. On these ensembles we use the difference between $\PAR{M_N}{m_s}$ and $\PAR{M_N}{m_l}$ ($m_l$ is the light quark mass used in the simulation) to calculate the derivative of $\PAR{M_N}{m_s}$ with respect to $m_s$, and use this to adjust the results to the correct strange quark mass (a sketch of this adjustment is given below). Since $\PAR{M_N}{m_s}$ and $\PAR{M_N}{m_l}$ are measured on the same lattices and with the same nucleon propagators, albeit with different random sources, they are highly correlated and the error on their difference is greatly reduced. With the additional assumption that this slope in physical units is the same for all ensembles, a correction factor can be estimated. In particular, using five long ensembles with $m_l \ge 0.2 m_s$, we find $\PAR{}{r_1 m_s} \LP \PAR{M_N}{m_s} \RP = -2.2(3)$. Here $r_1$ is a hadronic length scale determined from the heavy quark potential, and is approximately $0.31$ fm\cite{SOMMER,MILC_R1,RMP}. \begin{figure*}[tbh] \hspace{-0.1in} \includegraphics[width=0.302\textwidth]{data_a12_5_adj.ps} \includegraphics[width=0.384\textwidth]{data_a09_7_adj.ps} \includegraphics[width=0.302\textwidth]{data_a06_10_adj.ps} \caption{The derivative $\PAR{M_N}{m_s}$ on the various ensembles. As discussed above, the data have been adjusted to the correct strange quark mass, and the quark mass converted to the $\overline{\mathrm{MS}}({\rm 2\ GeV})$ regularization. On the horizontal axis, $r_1$ is a hadronic length scale, approximately $0.31$ fm. In these plots the symbol size is proportional to the number of lattices in the ensemble, with the largest symbol corresponding to about 4500 lattices. In each panel, the cross at $m_l r_1 \approx 0.05$ ($m_l \approx 0.4 m_s$) also shows the value of the nearby point before adjusting the strange quark mass. The line on each panel is the continuum and chiral fit in Eq.~\protect\ref{fitform} evaluated at the corresponding lattice spacing, and the error bar at the left is the error on the combined fit to all the data. \label{resultfigs} } \end{figure*} Figure~\ref{resultfigs} shows $\PAR{M_N}{m_s}$ on all of the ensembles used, where the results have been adjusted to the correct strange quark mass, and the quark mass converted to the $\overline{\mathrm{MS}}({\rm 2\ GeV})$ regularization. In this plot we see that the best results are in the $a=0.12$ fm ensembles, mainly because of the larger numbers of lattices.
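Because the adjustment described above is linear in $m_s$, it amounts to a few lines of arithmetic. The following sketch is only illustrative; the slope is the value quoted in the text, while the $r_1/a$ in the usage comment is a rough placeholder rather than a value taken from the paper.
\begin{verbatim}
SLOPE = -2.2   # d/d(r1*m_s) of dM_N/dm_s, from the five long ensembles

def adjust_to_correct_ms(deriv_measured, ams_sim, ams_correct, r1_over_a):
    """Shift dM_N/dm_s from the simulated to the corrected strange
    quark mass, assuming the slope is common to all ensembles."""
    d_r1ms = r1_over_a * (ams_correct - ams_sim)   # change in r1*m_s
    return deriv_measured + SLOPE * d_r1ms

# e.g. for a hypothetical a = 0.12 fm measurement (r1/a is ensemble
# dependent; 2.6 is only a placeholder):
# print(adjust_to_correct_ms(0.75, 0.050, 0.036, 2.6))
\end{verbatim}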
Finally, it is necessary to extrapolate the result to the physical light quark mass and to the continuum ($a=0$) limit. To do this, we fit the results to the form\cite{FRINKMEISSNER} \begin{equation}\label{fitform} \PAR{M_N}{m_s} = A + B m_l r_1 + C (a/r_1)^2\ \ \ .\end{equation} Since the results from the $a=0.06$ and $0.09$ fm ensembles have much larger statistical errors than the $0.12$ fm results, the term linear in $a^2$ is very poorly determined. However, we can use experience with other quantities to estimate the likely size of lattice corrections. In particular, the masses of the $\rho$, nucleon, and $\Omega^-$ at $a=0.12$ fm differ by about 4\%, 10\% and 9\% respectively from their continuum extrapolation. Therefore we constrain $C$ to be small by using a (Gaussian) Bayesian prior with a one standard deviation width corresponding to a 10\% effect at $a=0.12$ fm. This gives $\PAR{M_N}{m_s} = 0.69 \pm 0.07_{\rm statistical}$ in the continuum limit, with $\chi^2/D = 17.0/17$. There are also a number of systematic errors. As discussed above, we include a 10\% systematic error for the effects of excited states in the nucleon correlator. The extrapolation to the physical light quark mass contains higher order terms in chiral perturbation theory than the linear form used here. To estimate the likely size of these terms, we note that if the nucleon mass over this range of quark masses is fit to a constant plus a linear term, the result at the physical point is seven percent different from the result including two more orders in the pion mass. We therefore take seven percent as an estimate of the effect of higher order terms in chiral perturbation theory. In one case where we have two spatial volumes, the nucleon mass on the volume used here was different by about one percent from the mass in the larger volume. It is possible that disconnected contributions are more sensitive to the volume, so we take three percent as an estimate of this systematic error. Finally, Ref.~\cite{QUARKMASS2} estimates an error of four percent in $Z_m$. The combined systematic error estimate from excited states, finite volume, higher order $\chi PT$ and $Z_m$ is 0.09. Evaluating the fit in the continuum limit at the physical light quark mass, we find $\PAR{M_N}{m_s} = 0.69 \pm 0.07_{\rm statistical} \pm 0.09_{\rm systematic}$, where $m_s$ is in the $\overline{\mathrm{MS}}$ regularization at 2 GeV. It is also common to quote the renormalization scheme invariant quantity $m_s \PAR{M_N}{m_s}$. Using a similar chiral and continuum fit to the one used for $\PAR{M_N}{m_s}$, we find $m_s \PAR{M_N}{m_s} = 59(6)(8)$ MeV. The systematic error here does not include error in $Z_m$, which cancels, but does include a lattice systematic error of almost the same amount, coming from uncertainty in the lattice strange quark mass and an overall two percent error in scale setting. In general the MILC ensembles were run at a different lattice coupling for each quark mass, which makes it complicated to extract $\PAR{M_N}{m_q}$ from fits to the table of nucleon masses. However, in one case there is an accidental check. Through an error, two ensembles were run with the same coupling constant $10/g^2$ and tadpole factor $u_0$. These ensembles had sea quark masses $m_l/m_s = 0.0062/0.0186$ and $0.0093/0.031$ respectively.
By computing a partially quenched nucleon mass on the latter ensemble and examining its difference from the nucleon mass on the former, we can make a check on a particular combination of $\PAR{M_N}{m_s}$ and $\PAR{M_N}{m_l}$, $(0.031-0.0186)\PAR{M_N}{m_s}+2(0.0093-0.0062)\PAR{M_N}{m_l} $. Here $\PAR{M_N}{m_q}$ is evaluated at the midpoint of the sea quark masses on these two ensembles, and the factor of two comes from the two light flavors. These nucleon masses are computed in the usual way by a fit to the nucleon correlator. The resulting difference in masses was $0.016(3)_{\rm stat.}(2)_{\rm fit\ range} = 0.016(4)$. The fit to $\PAR{M_N}{m_s}$ above, converted back into lattice units, together with a similar fit to $\PAR{M_N}{m_l}$, gives $0.020(3)$, in reasonable agreement. Our result for $\LL N | \bar s s | N \RR$ is smaller than the results of the quenched calculations in Refs.~\cite{FUKUGITA02,DONG02}. However, our result is reasonably consistent with the small value of $y$ recently found in the two flavor overlap calculation in Ref.~\cite{JLQCD08}, where combining their result $y<0.05$ with the value of $\sigma_{\pi N} = 53$ MeV found in the same fit gives $m_s \LL \bar s s \RR < 36$ MeV. Similarly, our result is marginally consistent with the result from fits to baryon masses in Ref.~\cite{YOUNGTHOMAS09}, whose authors find $m_s \LL N|\bar s s|N \RR = 31(15)$ MeV, although there may be differences in how the derivative with respect to $m_s$ is taken. \vspace{-0.2in} \section*{Acknowledgements} \vspace{-0.25in} This work was supported by the U.S. Department of Energy grant number DE-FG02-04ER-41298. Additional computation for this work was done at the Texas Advanced Computing Center (TACC) and the National Energy Research Scientific Computing Center (NERSC). We thank the UKQCD collaboration for providing some of their lattices. We thank Alexei Bazavov, Claude Bernard, Carleton DeTar, Craig McNeile, James Osborn and Bira van Kolck for helpful suggestions and assistance with the UKQCD lattices.
\section{Introduction}\label{sec:RM:01} The Russell-Myhill paradox is a paradox about propositions which is structurally analogous to the Russell paradox about sets. While predicativity has been well-explored as a response to the Russell paradox about sets, it seems that there has been no attempt to set out and analyze a predicative response to the Russell-Myhill paradox. The primary aim of this paper is to do just that. The crucial idea behind the predicativity response is to restrict the comprehension schema for the ambient higher-order logic. Intuitively, the comprehension schema says that every well-formed formula determines a higher-order entity. Besides the burden of showing formal consistency, a predicative response to the Russell-Myhill paradox must provide at least the beginnings of an account of why some but not all formulas succeed in determining higher-order entities. The resulting formal system whose consistency we establish is centered around the intensional logic of Church. This intensional logic has a neutral core which we exposit in \S\ref{sec:RM:02}; it is neutral in the sense that its axioms are comparatively undemanding and consistent with contemporary theorizing based on possible-world semantics. In the subsequent \S\ref{sec:RM:03}, we set out a formalized version of the Russell-Myhill paradox, which is broadly similar to formalizations offered by Anderson and Klement, and we describe how different models offered by Kaplan and Anderson block different premises in the formalized version of the paradox. Then in \S\S\ref{sec:RM:04}-\ref{sec:RM:05}, we turn to and discuss the predicativity response to the Russell-Myhill paradox. In addition to discussing the philosophical motivations for predicative restrictions, we establish the formal consistency of the system by constructing a series of models. However, the models offered here validate an additional axiom of Church's intensional logic which, as Parsons and Klement have emphasized, is in some ways contrary to the spirit of a fine-grained theory of propositions. In \S\ref{sec:RM:06} we explain why this axiom holds on our models: it turns out that this is related to an expressive resource which allows these models to interpret a fragment of Gallin's intensional logic. Finally, in \S\ref{sec:RM:07}, we present one application of a broadly predicative perspective on the Russell-Myhill paradox about propositions, namely a response to the Wehmeier problem of many non-extensions that arises in connection with the naive conception of set found in Frege's {\it Grundgesetze}. This paper is the second in a series of three papers -- the other two being \cite{Walsh2014ac}, \cite{Walsh2014ad} -- which collectively constitute a sequel to the ``Basic Law~V'' component of our earlier paper \cite{Walsh2012aa}. In the companion paper \cite{Walsh2014ac}, we use G\"odel's constructible sets to study how much of Zermelo-Fraenkel set theory can be consistently realized in these fragments of the \emph{Grundgesetze}. In the complementary paper \cite{Walsh2014ad}, we examine the deductive strength of a related theory of abstraction principles. However, these papers do not touch the question of whether the models used to prove consistency of the {\it Grundgesetze} system are anything like intended models. In \S\ref{sec:RM:05} of this paper, we use G\"odel's constructible sets to produce models of our extension of Church's intensional logic.
Our response to the problem of many non-extensions in \S\ref{sec:RM:07} involves showing how this extension of Church's intensional logic can define a model of Frege's \emph{Grundgesetze} system. In addition to articulating a predicative response to the Russell-Myhill paradox of propositions, this paper suggests the possibility of viewing the consistent fragments of Frege's naive set theory through the lens of a consistent version of Church's intensional logic. However, despite these connections to our earlier papers, this paper has been written so that one need not have read these other papers. At one point in \S\ref{sec:RM:05} below, we reference the earlier paper \cite{Walsh2014ac} for examples of one of our defined notions in this paper (namely that of an intensional hierarchy~(\ref{eqn:RM:defn:intensional:hierarchy})). However, this is the only respect in which this paper depends on the earlier papers. \section{The Neutral Core of Church's Intensional Logic}\label{sec:RM:02} The intensional logic of Church is an attempt to axiomatize Frege's sense-reference distinction. Of course, Frege thought that words not only designate their referent, but also express their sense. Hence, on Frege's view, our words bear two semantic relations to non-linguistic entities, namely they bear the \emph{designation relation} to their referents and they bear the \emph{expression relation} to their senses. In the crudest of terms, Frege is a semantic dualist. This of course allowed him to say how ``the morning star'' differs in meaning from ``the evening star'': while these two linguistic expressions refer to the same planet, they express different senses. In our view, Church's crucial observation was that semantic dualism induces a canonical non-semantic relation. The semantic dualist does not only posit more word-world connections, but is also committed to an additional \emph{world-world} relation. In the case of Frege, the commitment is to a relationship between the abstract Fregean sense expressed by a linguistic expression and the entity (perhaps a planet) which is denoted by that linguistic expression. This relation is called the \emph{presentation} relation in the literature (\cite{Klement2010aa} p. 155), and one says that sense~$s$ \emph{presents} denotation~$d$ and one writes~$\Delta(s,d)$ precisely in the circumstance where there is a linguistic expression which expresses~$s$ and denotes~$d$. The ``triangle'' notation~$\Delta$ for the presentation relation is helpful here because it reminds us that a sense~$s$ on the bottom-left of the triangle stands in the presentation relation~$\Delta(s,d)$ to the denotation~$d$ on the bottom-right of the triangle in virtue of its semantic relations to some suppressed linguistic expression standing at the top of the triangle.\index{Symbol, Presentation~$\Delta$} Church proceeded by axiomatizing the presentation relationship. Of course, this is not the only way that one might seek to understand the presentation relationship. With respect to a given formal language, we know how to recursively define a satisfaction relation in terms of reference, and one might have thought that one ought to proceed similarly with the presentation relationship. However, this procedure would require the notion of sense to be as conceptually transparent as the notion of reference. By contrast, Church's own aim in axiomatizing the presentation relationship was to dissipate outright skepticism about Fregean sense.
Here is how Church put the point in a 1943 review of a paper of Quine: \begin{quote} There remains the important task, which has never been approached, of constructing a formalized semantical system which shall take account of both kinds of meaning, the relation between a name and its denotation, and the relation between a name and its sense. [\ldots] [\P] [\ldots] Ultimately it is only on the basis of their inclusion in an adequate system of this kind that such otherwise indefensibly vague ideas as `understanding' of an expression, `attribute,' `objectiver Inhalt des Denkens,' may be regarded as logically significant (\cite{Church1943aa} p. 47). \end{quote} Hence one of the original aims of Church's work was to produce a formal theory whose quantifiers ranged over Fregean senses and which thus serves to implicitly define the notion of Fregean sense. The first component of Church's formal theory concerned the Fregean doctrine that sense determines reference. Frege tells us that whenever two linguistic expressions have the same sense, then if one refers then the other does too and they have the same referent. When put this way, the doctrine automatically suggests the following axiom (Church's Axiom~17, cf. \cite{Church1951ab} p.~19, \cite{Klement2002aa} pp.~108-109, \cite{Anderson1980aa} p. 220; \cite{Anderson1984aa} Axiom C8 p. 377): \begin{myenumerate} \item \emph{Sense Determines Reference}:~$(\Delta(s,d_0) \; \& \; \Delta(s,d_1)) \Longrightarrow d_0=d_1$ \label{eqn:RM:sdr} \end{myenumerate} Practically, this indicates to us that the presentation relationship is functional in character. Thus instead of writing~$\Delta(s,d)$, we may write~$\Delta(s)=d$. Likewise, borrowing notation from computability theory, sometimes we write~$\Delta(s)\hspace{-1mm}\downarrow$ to indicate that there is a~$d$ such that~$\Delta(s)=d$ (cf. \cite{Soare1987} pp. 16-17). Of course, we should keep in mind that on its intended interpretation, the presentation relation is not a total function. For, any meaningful linguistic expression will always have a sense but need not have a referent. Church's other axioms pertain to compositionality. On the side of reference, Frege postulated a fundamental distinction between objects and concepts. Objects were the referents of proper names of people and places, whereas concepts were the referents of predicate-words. In sentences such as ``Venus is a planet,'' we predicate a concept (``being a planet'') of an object (``Venus''). This may be rendered as a case of functional application, namely~$\textsc{Planet}(\textsc{Venus})=1$. Following contemporary practice, we here identify ``$1$'' with the truth-value ``true'' and ``$0$'' with the truth-value ``false,'' and for the sake of simplicity we assume that these are the only truth-values. Because sense determines reference, it's natural to think that senses of sentences are also compositional. Frege called the senses of sentences \emph{thoughts}, so that sentences express thoughts and refer to truth-values. Just as the reference of a sentence is a function of the reference of its constituent parts, so Frege and Church hold that the sense of a sentence (a thought or a proposition) is a function of the senses of its parts. This thus suggested to Church the following axiom on the presentation relationship (Church's Axiom 15 \cite{Church1951ab} p.~18, \cite{Klement2002aa} pp.~108-109, \cite{Anderson1980aa} p. 219; cf. \cite{Anderson1984aa} Axiom C7 p.
377): \begin{myenumerate} \item \emph{Composition Axiom}:~$[\Delta(f^{\prime})=f \; \& \; \Delta(x^{\prime})=x] \Longrightarrow \Delta(f^{\prime}\langle x^{\prime}\rangle) = f(x)$\label{eqn:RM:cca1} \end{myenumerate} Here~$(f^{\prime}, x^{\prime})\mapsto f^{\prime}\langle x^{\prime}\rangle$ is a primitive \emph{intensional} application function on senses,\index{Symbol, Intensional Application~$(f^{\prime}, x^{\prime})\mapsto f^{\prime}\langle x^{\prime}\rangle$} just as~$(f,x)\mapsto f(x)$ is a primitive \emph{extensional} application function on referents.\index{Symbol, Extensional Application~$(f,x)\mapsto f(x)$} The axiom itself leaves open the relationship between intensional and extensional application, although, as we'll see later in this section, Church himself proposed that we identify them. Now we have at least six types of entities: objects, senses of objects, concepts, senses of concepts, truth-values, senses of truth-values (which we also call \emph{thoughts} or \emph{propositions}). However, there is a serious redundancy here. For, we may identify concepts with functions from objects to truth-values. (For details on this familiar identification, see circa equation~(\ref{eqn:RM:ide}) in the next section). If one does so, then it seems natural enough to further assume that if~$a$ is a type of entity and~$b$ is a type of entity, then there is a type~$ab$ of entities consisting of functions from entities of type~$a$ to entities of type~$b$. One makes these assumptions rigorous by defining the types recursively as follows: \begin{myenumerate} \item (Types in the Church System) (i) there is a type~$e$ of objects, (ii) there is a type~$t$ of truth-values, (iii) if~$a,b$ are types, then there is a type~$ab$ of functions from type~$a$ entities to type~$b$ entities, and (iv)~if~$a$ is a type then~$a^{\prime}$ is a type of senses which present entities of type~$a$.\label{eqn:typesystemChurch}\index{Types in Church's System~(\ref{eqn:typesystemChurch})} \end{myenumerate} \noindent In this last clause, it's important to emphasize that~$a\mapsto a^{\prime}$ is a primitive operation on types (cf. Kaplan~\cite{Kaplan1975aa} p. 721 and Klement~\cite{Klement2010aa} p. 173). Hence,~$a^{\prime}$ is the result of applying an operation to the type~$a$, and not simply another variable for types (and likewise~$a^{\prime\prime}$ is the result of applying the prime operation to type~$a^{\prime}$). Sometimes in what follows, if we write entities of type~$a$ as~$f,g,h,\ldots$ (resp.~$x,y,z \ldots$), then we will adopt the convention of writing entities of type~$a^{\prime}$ as ~$f^{\prime},g^{\prime},h^{\prime},\ldots$ (resp.~$x^{\prime},y^{\prime},z^{\prime} \ldots$). However, under this convention, the entity~$f^{\prime}$ of type~$a^{\prime}$ is \emph{not} the result of applying any operation to the entity~$f$ of type~$a$; the priming is rather just a notational device which allows us to visually keep track of which entity has which type. Having set up the type system in this way, one sees immediately that there must be not a single presentation relationship~$\Delta$, but rather a presentation relationship~$\Delta_a$\index{Symbol, Presentation Symbol, Typed~$\Delta_a$} for each type~$a$, which relates senses of type~$a^{\prime}$ to entities of type~$a$.
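To emphasize that the prime is a type-forming operation on a par with function-type formation, the type grammar in~(\ref{eqn:typesystemChurch}) can be rendered as a recursive datatype. The following Python sketch is merely our own illustration, with hypothetical names.
\begin{verbatim}
from dataclasses import dataclass

class Type: pass

@dataclass(frozen=True)
class E(Type): pass          # the type e of objects

@dataclass(frozen=True)
class T(Type): pass          # the type t of truth-values

@dataclass(frozen=True)
class Fun(Type):             # the type ab of functions from a to b
    a: Type
    b: Type

@dataclass(frozen=True)
class Sense(Type):           # the type a' of senses presenting type-a entities
    a: Type

# the type et of concepts, and the type (et)' of senses of concepts;
# identifying the latter with e't' is precisely Church's Axiom of Type
# Reduction, discussed below, and it fails here as a matter of syntax:
concept = Fun(E(), T())
assert Sense(concept) != Fun(Sense(E()), Sense(T()))
\end{verbatim}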
Having made this distinction, one thus reformulates the Axiom that Sense Determines Reference~(\ref{eqn:RM:sdr}) and the Composition Axiom~(\ref{eqn:RM:cca1}) as follows: \begin{myenumerate} \item \emph{Typed Sense Determines Reference}:~$(\Delta_a(s,d_0) \; \& \; \Delta_a(s,d_1)) \Rightarrow d_0=d_1$ \label{eqn:RM:sdr2}\index{Sense Determines Reference, Typed~(\ref{eqn:RM:sdr2})} \item \emph{Typed Composition}:~$[\Delta_{ab}(f^{\prime})=f \; \& \; \Delta_a(x^{\prime})=x] \Longrightarrow \Delta_b(f^{\prime}\langle x^{\prime}\rangle) = f(x)$\label{eqn:RM:cca2}\index{Composition, Typed~(\ref{eqn:RM:cca2})} \end{myenumerate} \noindent In the latter, the intensional application function~$(f^{\prime}, x^{\prime})\mapsto f^{\prime}\langle x^{\prime}\rangle$ takes a sense~$f^{\prime}$ of type~$(ab)^{\prime}$ and a sense~$x^{\prime}$ of type~$a^{\prime}$ and returns a sense of type~$b^{\prime}$, just as the extensional application function~$(f,x)\mapsto f(x)$ takes a referent~$f$ of type~$ab$ and a referent~$x$ of type~$a$ and returns a referent of type~$b$. Hence, just as there are as many presentation relations as there are types, so there are as many intensional and extensional application functions as there are pairs of types. Later, when we deal more formally with these systems, we will introduce symbols subscripted by types for the intensional and extensional application functions but for the time being we simply allow context to determine the types (cf. circa equation~(\ref{eqn:RM:defn:intensional}) in \S\ref{sec:RM:05}). Finally, let's record our standing assumption in this paper that entities of type~$ab$ are individuated extensionally: \begin{myenumerate} \item \emph{Extensional Identity Criterion for Functional Entities}: if~$f,g$ are entities of type~$ab$ then~$f=g$ if and only if~$f(x)=g(x)$ for all entities~$x$ of type~$a$.\label{extensionalidentityfunctions}\index{Extensional Identity for Functional Entities} \end{myenumerate} But no analogous assumptions are made on the individuation of entities of type~$(ab)^{\prime}$ in this paper. In his own writings, Church adopted the following axiom on the types themselves: \begin{myenumerate} \item \emph{Axiom of Type Reduction}:~$(ab)^{\prime} = a^{\prime}b^{\prime}$\label{eqn:RM:TRD}\index{Axiom of Type Reduction~(\ref{eqn:RM:TRD})} \end{myenumerate} In other words, this axiom says that the type~$(ab)^{\prime}$ of senses of functions from type~$a$ entities to type~$b$ entities is identical to the type~$a^{\prime}b^{\prime}$ of functions from senses of type~$a$ entities to senses of type~$b$ entities. It's called a reduction axiom because it allows one to reduce the senses of all higher-order entities to the senses of objects and truth-values (and senses of senses of objects, senses of propositions, etc.). The primary formal advantage of doing this is that it allows one to interpret intensional application as extensional application and hence disburdens one from developing an alternative conception of intensional application. Indeed, if senses of concepts are really extensional functions from senses of objects to senses of truth-values, then it's natural to think that a proposition (the sense of a sentence) is produced via the extensional application of the sense of a concept to the sense of an object. But some of the objections to Church's intensional logic have revolved around this Axiom of Type Reduction~(\ref{eqn:RM:TRD}).
Dummett was concerned that the reduction axiom would require us to deny the seemingly plausible idea that ``[\ldots] we are able to learn what thought \emph{some} sentences containing the predicate express in advance of knowing the sense of the predicate'' (\cite{Dummett1981aa} p. 294, cf. \cite{Klement2002aa} pp. 69-70 ff). Dummett's idea was that we can grasp the thoughts expressed by complete sentences like~$Fa$ and~$Fb$ without precisely knowing the sense of~$F$. But the Axiom of Type Reduction~(\ref{eqn:RM:TRD}) demands that the sense of~$F$ is a function, hence presumably the thoughts expressed by~$Fa$ and~$Fb$ will be the result of functional application of this sense-function, so that knowledge of them may well require prior knowledge of the sense of~$F$. Another objection to Church's Axiom of Type Reduction~(\ref{eqn:RM:TRD}) is due to Bealer, who notes that functionality seems to be entirely absent from qualia and other facets of conscious experience. Bealer writes: ``Joy, the shape of my hand, the aroma of coffee---these are not functions. When I feel joy, see the shape of my hand, or smell the aroma of coffee, it is not a function that I feel, see, or smell'' (\cite{Bealer1982aa} p. 90; cf. \cite{Duzi2010aa} p. 6). This concern resonates well with the observation that when we are pressed to say something about the sense of words such as ``red,'' ``cold,'' or ``bitter,'' the mathematical notion of a function is far from our first thoughts. Of course, someone who denies Church's reduction axiom for these types of reasons need not be taken to deny that senses can compose with other senses. Rather, the denial should be registered merely as a denial that senses of predicate words can be exhaustively identified with functions from senses to other senses. Once we reject Church's reduction axiom, we are left with the following core of Church's system: \begin{myenumerate} \item The \emph{core of Church's system} consists of the Typed Sense Determines Reference~(\ref{eqn:RM:sdr2}) and the Typed Composition Axiom~(\ref{eqn:RM:cca2}); this theory is a typed theory, and the types are exactly as in~(\ref{eqn:typesystemChurch}). \label{eqn:RM:coresystem}\index{Core of Church's System~(\ref{eqn:RM:coresystem})} \end{myenumerate} \noindent It's noteworthy that there is nothing in these core axioms themselves that forces or even necessarily recommends the identification of type~$a^{\prime}$ with Fregean senses as opposed to any other notion of meaning. Indeed, Kaplan pointed out long ago that the standard frameworks for possible worlds semantics yield models of these axioms (cf. \cite{Kaplan1975aa} pp. 721~ff).
In particular, Kaplan proceeded by identifying type~$a^{\prime}$ in the Church system with the type of functions from the worlds to the entities of type~$a$ and by defining the other primitives of the Church system as follows, wherein~$w_0$ is a fixed world (say the actual world) and~$w$ is an arbitrary world: \begin{myequation}\label{eqn:kapalandadsfas} \Delta_a(f^{\prime}) = f^{\prime}(w_0), \hspace{10mm} (f^{\prime}\langle x^{\prime}\rangle)(w) = (f^{\prime}(w))(x^{\prime}(w)) \end{myequation} On the basis of these definitions, it's not too difficult to check that the Typed Sense Determines Reference~(\ref{eqn:RM:sdr2}) and the Typed Composition Axiom~(\ref{eqn:RM:cca2}) are both satisfied.\footnote{The Typed Sense Determines Reference~(\ref{eqn:RM:sdr2}) is satisfied because the definition of~$\Delta_a(f^{\prime})$ in equation~(\ref{eqn:kapalandadsfas}) is clearly functional since~$f^{\prime}$ is by stipulation a function defined on worlds and the world of evaluation~$w_0$ is fixed. For the Typed Composition Axiom~(\ref{eqn:RM:cca2}), suppose that~$\Delta_{ab}(f^{\prime})=f$ and~$\Delta_a(x^{\prime})=x$. Then one can calculate that~$\Delta_b(f^{\prime}\langle x^{\prime}\rangle) = (f^{\prime}\langle x^{\prime}\rangle)(w_0) = (f^{\prime}(w_0))(x^{\prime}(w_0))=(\Delta_{ab} f^{\prime})(\Delta_a x^{\prime}) = f(x)$.} The core of Church's system is thus fairly neutral on the philosophical interpretation of the intensional notions which it axiomatizes. In the next section we turn to the formalization of the Russell-Myhill paradox within this core~system, and this treatment of the paradox will yield a common framework in which advocates of Fregean sense and advocates of possible worlds semantics can discuss the comparative advantages and disadvantages of different solutions to the paradox. Before doing so, it's perhaps worth underscoring some of the departures that we have made in this paper from traditional treatments of Church's intensional logic. Some of these differences are merely notational. One such difference is that Church wrote the type reserved for functions from entities of type~$a$ to entities of type~$b$ as~$ba$ rather than~$ab$ (cf. \cite{Church1951ab} p. 12, \cite{Anderson1984aa} p. 370, \cite{Klement2002aa} p. 106). We prefer the latter simply because it is now the norm in formal semantics (cf. \cite{Heim1998} p. 28, \cite{Gamut1991aa} pp. 84, 121). Further, Church respectively used the letters~$o_1$ and~$\iota_1$ instead of~$t$ and~$e$ for the truth-values and objects (cf. \cite{Church1951ab} p. 11, \cite{Anderson1984aa} p. 370, \cite{Klement2002aa} p. 106). Again, we use~$t$ and~$e$ simply because this is now the norm (cf. \cite{Heim1998} p. 28, \cite{Gamut1991aa} pp. 79, 128). Finally, sometimes in Church --and sometimes in intensional logics more generally-- the word ``concept,'' perhaps preceded by modifiers like ``individual'' or ``propositional,'' is reserved for certain kinds of senses or intensions (cf. \cite{Klement2002aa} p. 96, \cite{Duzi2010aa} pp. 155 ff, \cite{Gamut1991aa} p. 122). However, here in this paper we eschew this usage and use the Fregean terminology, on which ``concepts'' are the unsaturated entities which may be saturated by objects and which are thus one-half of the concept-object distinction. The chief contentful difference between Church's original formulation of his intensional logic and our treatment of it concerns intensional application.
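As an aside, the definitions in equation~(\ref{eqn:kapalandadsfas}) are concrete enough to run. The following Haskell sketch (a toy rendering under our own conventions: worlds are modelled as integers, and the names \texttt{Sense}, \texttt{delta}, and \texttt{intApp} are ours, not anything in Kaplan's text) implements Kaplan's two definitions and checks the Typed Composition Axiom~(\ref{eqn:RM:cca2}) at one sample instance:

\begin{verbatim}
-- Intensions of type a' modelled as functions from worlds to type a
type World = Int
type Sense a = World -> a

w0 :: World            -- the distinguished world of evaluation
w0 = 0

-- Presentation: Delta_a(f') = f'(w0)
delta :: Sense a -> a
delta f' = f' w0

-- Intensional application: (f'<x'>)(w) = (f'(w))(x'(w))
intApp :: Sense (a -> b) -> Sense a -> Sense b
intApp f' x' = \w -> (f' w) (x' w)

main :: IO ()
main = do
  let f' :: Sense (Int -> Int)
      f' w = \x -> x + w          -- an intension of a function
      x' :: Sense Int
      x' w = 10 * w + 3           -- an intension of an object
  -- Typed Composition at this instance:
  -- Delta(f'<x'>) = (Delta f')(Delta x')
  print (delta (intApp f' x') == delta f' (delta x'))  -- True
\end{verbatim}

A check at a single instance is of course no substitute for the verification in the footnote; the sketch merely makes the possible-worlds reading of the core axioms vivid.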
Church himself did not introduce a primitive~$(f^{\prime},x^{\prime})\mapsto f^{\prime}\langle x^{\prime}\rangle$ for intensional application; again, this was because of his adoption of the axiom on type-reduction~(\ref{eqn:RM:TRD}) which we and many others reject. Further, because of his adoption of this axiom, Church did not introduce a primitive operation~$a\mapsto a^{\prime}$ on types but could simply get by with postulating types~$o_1, o_2, \ldots, \iota_1, \iota_2, \ldots$ wherein the type $o_1$ is the type of truth-values and the type $\iota_1$ is the type of objects and wherein~$\sigma_{n+1}$ is the type reserved for senses of entities of type~$\sigma_n$ for~$\sigma\in \{o,\iota\}$ (cf. \cite{Church1951ab} pp. 7, 11, \cite{Anderson1984aa} p. 370, \cite{Klement2002aa} p. 106). If one rejects the axiom on type-reduction~(\ref{eqn:RM:TRD}), then it's natural to postulate the primitive operation~$a\mapsto a^{\prime}$ on types, and here we follow Kaplan~\cite{Kaplan1975aa} p. 721 and Klement~\cite{Klement2010aa} p. 173. Finally, much of Church's own work on his system concerned various proposals for individuating senses, and these went under the name of Alternative~(0), Alternative~(1), and Alternative~(2) (cf. Klement~\cite{Klement2002aa} pp. 101~ff for overview). How exactly senses are individuated is obviously important, but it is not needed for the formalization of the Russell-Myhill Paradox discussed in the next section or for the predicative response discussed in the subsequent sections.\footnote{Admittedly, there is something deeper going on here. The distinction between Alternative~(0) and Alternative~(1) lies in whether lambda-conversion preserves sense. However, lambda-terms are an alternative way of formalizing comprehension~(cf. (\ref{eqn:RM:compschem})) which the predicative response offered here does not have available in full generality. Thus the formal extensions of Church's core system with which we work here simply don't have lambda-terms in the object-language. Hence an immediate issue which faced Church-- namely whether to say that lambda-conversion preserves sense-- cannot even be raised in the object-language of our systems.} \section{Formalized Version of the Russell-Myhill Paradox and Extant Responses}\label{sec:RM:03} The aim of this section is to set out a formalized version of the Russell-Myhill Paradox and to survey some extant non-predicative responses. The formalization offered here is distinct from but owes much to the formalizations offered by Anderson and Klement, and we'll discuss these similarities and differences explicitly in this section. Since the Russell-Myhill paradox is a proposition-theoretic version of the Russell paradox about sets, it's useful to begin with a brief review of Russell's paradox about sets and Cantor's related theorem about cardinalities. A collection~$X$ is said to have \emph{cardinality less than or equal} to that of a collection~$Y$ just in case there is an injection~$\iota:X\rightarrow Y$, while the two collections~$X,Y$ are said to have \emph{the same cardinality} if there is a bijection between them. Cantor's theorem about cardinalities says that for any collection, there is no injection from the set of all its subcollections to itself. In symbols, Cantor's theorem says that for any~$X$, there is no injection from~$\{Y: Y\subseteq X\}$ to~$X$ itself.
But there is a natural bijective correspondence between the subcollections~$Y\subseteq X$ and the zero-one valued functions~$f:X\rightarrow \{0,1\}$, given by sending~$Y\subseteq X$ to its characteristic function~$f_Y:X\rightarrow \{0,1\}$ which is defined by \begin{myequation}\label{eqn:RM:ide} f_Y(x) = \begin{cases} 1 & \text{$x\in Y$}, \\ 0 & \text{$x\notin Y$}. \end{cases} \end{myequation} Since the type~$at$ is reserved for functions from entities of type~$a$ to the truth-values~$\{0,1\}$, there is thus a natural type-theoretic expression of Cantor's theorem: \begin{myenumerate} \item (Type-Theoretic Version of Cantor's Theorem) For any type~$a$, there is no injection from entities of type~$at$ to entities of type~$a$.\label{eqn:RM:typetheoreticRP}\index{Type-Theoretic Version of Cantor's Theorem~(\ref{eqn:RM:typetheoreticRP})} \end{myenumerate} This version is entirely type-theoretic, since the injection in question would be an element of type~$(at)a$ and since the property of being injective is expressible purely with the extensional application notions built into the type theory. Let us briefly recall the traditional proof of the type-theoretic version of Cantor's Theorem. For the sake of readability, in this proof let us write entities of type~$at$ as~$f,g,h$ \ldots and entities of type~$a$ as~$x,y,z\ldots$. A function from the entities of type~$at$ to entities of type~$a$ is a function~$\iota$ taking input~$f$ of type~$at$ and returning output~$\iota(f)=x$ of type~$a$. Now, suppose that there were such an injection~$\iota$ from entities of type~$at$ to entities of type~$a$. Then consider the \emph{diagonal} map~$d$ from elements of type~$a$ to elements of type~$t$ given by\index{Diagonal map~$d$ (\ref{eqn:RM:eqndai})} \begin{myequation}\label{eqn:RM:eqndai} d(x) = 1 \Longleftrightarrow \exists \; f \; (\iota(f) = x \; \& \; f(x)=0) \end{myequation} Then let~$y=\iota(d)$ and ask whether~$d(y)=1$ or~$d(y)=0$. If~$d(y)=1$, then by the left-to-right direction of equation~(\ref{eqn:RM:eqndai}) one obtains a witness~$f$ satisfying~$\iota(f) =y$ and~$f(y)=0$. Then since~$\iota(d)=y=\iota(f)$, we may conclude from the injectivity of~$\iota$ that~$d=f$, which contradicts that~$d(y)=1$ while~$f(y)=0$. Alternatively, if~$d(y)=0$, then~$d$ and~$y$ are witnesses to the right-hand side of equation~(\ref{eqn:RM:eqndai}), and so by the right-to-left direction of this equation we have~$d(y)=1$, a contradiction. Hence, in either case we obtain a contradiction. The connection between the type-theoretic version of Cantor's theorem and Russell's paradox about sets can be made more transparent if one defines a membership relation by \begin{myequation} y\in x \Longleftrightarrow \exists \; f \; (\iota(f)=x \; \& \; f(y)=1) \end{myequation} Further, for the moment let's call a \emph{set} an entity which is in the range of the operator~$\iota$. Then for any set~$x$, it follows from the injectivity of the~$\iota$-operator that we can also express \emph{non}-membership in~$x$ with an existential quantifier as follows: \begin{myequation} y\notin x \Longleftrightarrow \exists \; f \ (\iota(f)=x \; \& \; f(y)=0) \end{myequation} But then it is easy to see that for sets~$x$, equation~(\ref{eqn:RM:eqndai}) is equivalent to: \begin{myequation}\label{eqn:RM:eqndai2} d(x) = 1 \Longleftrightarrow x\notin x \end{myequation} Expressed in these terms, the diagonal function~$d$ from the above paragraph is the characteristic function of the collection of sets which are not members of themselves.
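The diagonal construction just rehearsed can also be made concrete in miniature. Taking~$a$ to be a two-element type, the entities of type~$at$ are the four functions from truth-values to truth-values, and no map~$\iota$ from them into the two entities of type~$a$ can be injective. The following Haskell sketch (all names are our own illustrative conventions) enumerates every candidate~$\iota$, confirms that none is injective, and computes the diagonal~$d$ of equation~(\ref{eqn:RM:eqndai}) from each candidate:

\begin{verbatim}
import Data.List (nub)

-- entities of type (at) with a = Bool, coded by value tables (f False, f True)
type Fn = (Bool, Bool)

apply :: Fn -> Bool -> Bool
apply (fF, fT) b = if b then fT else fF

allFns :: [Fn]
allFns = [ (p, q) | p <- [False, True], q <- [False, True] ]

-- every map iota from the four Fn's into the two entities of type a,
-- coded as association lists
allIotas :: [[(Fn, Bool)]]
allIotas = [ zip allFns [r, s, u, v] | r <- bs, s <- bs, u <- bs, v <- bs ]
  where bs = [False, True]

injective :: [(Fn, Bool)] -> Bool
injective tbl = length (nub (map snd tbl)) == length tbl

-- the diagonal of iota: d(x) = 1 iff some f has iota(f) = x and f(x) = 0
diag :: [(Fn, Bool)] -> Fn
diag tbl = (d False, d True)
  where d x = or [ not (apply f x) | (f, y) <- tbl, y == x ]

main :: IO ()
main = do
  -- Cantor at this finite instance: none of the sixteen candidates is
  -- injective (four functions cannot be injected into two values)
  print (any injective allIotas)                          -- False
  -- the diagonal of each iota is itself an entity of type at
  print (all (\tbl -> diag tbl `elem` allFns) allIotas)   -- True
\end{verbatim}

At these tiny sizes the failure of injectivity is just the pigeonhole principle; the point of the sketch is only to make the definition of the diagonal function concrete.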
The reformulation~(\ref{eqn:RM:eqndai2}) is one way to see the connection between the Russell paradox about sets and the type-theoretic version of Cantor's Theorem. The Russell-Myhill paradox about propositions proceeds by arguing, based on considerations related directly to propositions, that there is an injection from collections of propositions to propositions. Since type~$t^{\prime}$ is reserved for propositions in the Church system and since collections of propositions can be identified with their characteristic functions of type~$t^{\prime}t$ (\emph{\`a~la} equation~(\ref{eqn:RM:ide})), if this argument were correct then it would mean that there was an injection from entities of type~$t^{\prime} t$ to entities of type~$t^{\prime}$, which contradicts the type-theoretic version of Cantor's Theorem~(\ref{eqn:RM:typetheoreticRP}). One way to respond to the Russell-Myhill paradox about propositions is to block \emph{in a well-motivated way} the type-theoretic version of Cantor's Theorem~(\ref{eqn:RM:typetheoreticRP}). This is the kind of proposal which we shall pursue in this paper, beginning in \S\ref{sec:RM:04}. But in the remainder of this section we focus simply on formalizing the Russell-Myhill paradox and on surveying other extant responses. The informal version of the Russell-Myhill paradox was initially described in Appendix~B of Russell's 1903 \emph{The Principles of Mathematics} and then again in Myhill's~1958 paper on Church's intensional logic.\footnote{More specifically see \S{500} p.~538 of Russell \cite{Russell1903aa} and p. 82 of Myhill \cite{Myhill1958aa}. According to the history as set out in de~Rouilhan \cite{Rouilhan2005aa}, Russell never mentioned this paradox again. As for Myhill, in the same 1958 paper he reports that Carnap's ``general approach to the problem, in terms of `possible worlds' and state-descriptions, is in [his] opinion practically certain to yield a correct explication within a few years'' (\cite{Myhill1958aa} p. 81). This contrasts with Myhill's earlier 1952 paper on Church (\cite{Myhill1952ae}) in which he weighs carefully the costs and benefits of Fregean and modal approaches without indicating a decisive preference for either. It is well-known that Myhill continued to work on intuitionistic and non-classical approaches to the set-theoretic paradoxes throughout his career, but to my knowledge he never after the 1958 paper returned to this proposition-theoretic paradox.} The argument of Russell and Myhill runs as follows. Given a collection of propositions~$\mathcal{C}$, consider the proposition~$\iota(\mathcal{C})$ expressed by the sentence ``every proposition in~$\mathcal{C}$ is true'' (or ``every proposition is in~$\mathcal{C}$''). It seems that this function is an injection. For, suppose that~$\iota(\mathcal{C})=\iota(\mathcal{D})$. Since these two propositions differ only as to~$\mathcal{C}$ and~$\mathcal{D}$, presumably the senses (or intensions) of their constituents~$\mathcal{C}$ and~$\mathcal{D}$ are the same as well. And this would presumably imply that~$\mathcal{C}$ and~$\mathcal{D}$ are the same not only in sense or intension, but that they are also the same in reference or extension. Hence the map~$\iota$ is ostensibly an injection from collections of propositions to propositions. But, by applying the type-theoretic version of Cantor's Theorem~(\ref{eqn:RM:typetheoreticRP}), we obtain a contradiction. While this version of the Russell-Myhill paradox is traditional, it is not obviously a formal paradox. This is for two reasons.
First, formal paradoxes show that some formal system is inconsistent. But it is not at all obvious-- based merely on its informal description-- in what system the argumentation of the above paragraph may be formalized. To be sure, a good start could be made on this to the extent that one could formalize the notion of ``a sentence expressing a proposition.'' But to the extent that one could formalize this notion, one could presumably likewise formalize the notion of ``a sentence denoting a truth-value,'' and hence one would worry that this formalization would require prior treatment of the liar paradox.\footnote{Intensional logics like Church's intensional logic and possible worlds semantics have resources for axiomatizing the notion of a ``proposition denoting the true.'' In Church's system, this is written as~$\Delta_t(p)=1$ while in possible worlds semantics this is written~$p(w_0)=1$ where~$w_0$ is the world of evaluation. However, in neither of these intensional logics does one have the resources for going from a name of a sentence to the proposition expressed by the sentence. If one did, then since these systems of intensional logic are consistent with the addition of resources needed to effect self-reference, one could replicate the formal versions of the liar paradox.} Second, formal paradoxes are always \emph{valid} arguments, whose conclusion is that some formal set of axioms is inconsistent. But there is a real concern about the validity of the above rendition of the Russell-Myhill paradox. For, it seems at crucial points to equivocate between the collection~$\mathcal{C}$ and a sense thereof. Indeed, it seems that it is the latter which would contribute to the proposition expressed by ``every proposition in~$\mathcal{C}$ is true.'' Yet, the argument as a whole pertains to an injection which takes as inputs collections of propositions~$\mathcal{C}$. A truly formalized version of the Russell-Myhill paradox would leave no doubt as to whether the argument was, at any juncture, operating on a collection of propositions or a sense.\footnote{Klement suggests that this kind of concern is one way of understanding Frege's own reservations about the Russell-Myhill paradox (\cite{Klement2002aa} p. 183).} The formalization of the Russell-Myhill paradox which we adopt avoids these two problems, and reads as follows: \begin{myenumerate} \item (\emph{Formalized Russell-Myhill Paradox}). The following axioms are jointly inconsistent against the background of the core of Church's system~(\ref{eqn:RM:coresystem}): the Surjectivity Axiom~(\ref{eqn:RM:SM}), the Senses are Objects Axiom~(\ref{eqn:RM:SO}), the Propositions as Fine-Grained as Objects Axiom~(\ref{eqn:RM:derepropsaxiom}), and the type-theoretic version of Cantor's Theorem~(\ref{eqn:RM:typetheoreticRP}).\label{eqn:RM:formalized}\index{Formalized Russell-Myhill Paradox (\ref{eqn:RM:formalized})} \end{myenumerate} As one can see, this formalization concerns three additional axioms which we need to introduce and motivate in these next pages prior to setting out the derivation of the paradox. The first axiom in the formalized version of the paradox is called the Surjectivity Axiom.
In essence, this axiom says that every entity-- including the higher-order ones-- is presented by some sense or intension: \begin{myenumerate} \item \emph{Surjectivity Axiom}: for each type~$a$ and each element~$f$ of type~$a$, there is an element~$f^{\prime}$ of type~$a^{\prime}$ such that~$\Delta_a(f^{\prime})=f$.\label{eqn:RM:SM}\index{Surjectivity Axiom (\ref{eqn:RM:SM})} \end{myenumerate} The immediate warrant for this axiom is that there is simply no other way to formalize the Russell-Myhill paradox. For, consider again how it opens: ``for each collection of propositions~$\mathcal{C}$, consider the proposition~$\iota(\mathcal{C})$ expressed by the sentence `every proposition in~$\mathcal{C}$ is true.'~'' We accordingly need some way to move from \emph{any} collection of propositions to a proposition. It seems that any way in which we do this will take a collection of propositions, take a sense or intension which presents this collection, and build a proposition based on this sense.\footnote{The Surjectivity Axiom has a long and complicated history in Church's own writings. In 1946, Church seemed to indicate that Cantor-like paradoxes would lead one to deny this axiom (\cite{Church1946aa} p. 31). In 1974, Church indicated that this axiom followed from the premises of his system called Alternative~2 (\cite{Church1974aa} p. 145). In his last paper in 1993, Church included this axiom in his system (\cite{Church1993aa} pp. 144-145), albeit without saying anything explicit about his reasons for this inclusion. For other statements of the Surjectivity Axiom in the secondary literature, see Anderson~\cite{Anderson1980aa} principle~(C) p. 221 and Klement \cite{Klement2002aa} Theorem LSD(0) 1 p. 116 and Klement \cite{Klement2003aa} p. 305 Axiom~PC. For more on Anderson and Klement on the Surjectivity Axiom, see the discussion below. } The next axiom concerns the location of senses or intensions within Frege's concept-object distinction, or within the typed systems usually employed in formal semantics. In essence, it says that senses or intensions fall on the object side of the concept-object distinction: \begin{myenumerate} \item \emph{Senses are Objects Axiom}: for each type~$a$ and each element~$f^{\prime}$ of type~$a^{\prime}$, there is an element~$x$ of type~$e$ such that~$f^{\prime}=x$.\label{eqn:RM:SO}\index{Senses are Objects Axiom (\ref{eqn:RM:SO})} \end{myenumerate} This axiom is non-trivial because~$f^{\prime}$ and~$x$ are variables of different types. If, contrary to fact, they were variables of the same type, this would simply be a truth of the ambient predicate logic. The primary kind of consideration which points in favor of the Senses are Objects Axiom is a reflection on traditional conceptions of the nature of Fregean senses: in particular, while Russell suggested that we might view Fregean senses as definite descriptions,\footnote{It is admittedly somewhat inaccurate to speak of definite descriptions merely as an ``interpretation of Fregean sense,'' since they in fact provide a systematic way of dispensing with the Fregean notion of sense altogether and maintaining that reference is the sole semantic primitive.
But presumably part of our tradition's reason for thinking that Frege's theory of meaning is susceptible to modal counterexamples couched in terms of definite descriptions is something like the thought that we can think of Fregean senses as definite descriptions.} Dummett has suggested that we might understand them as certain kinds of procedures or algorithms, a ``route to reference.''\footnote{Cf. \cite{Dummett1981aa} pp. 96, 102, 179~ff, \cite{Horty2007aa} pp. 66~ff, \cite{Taschek2010aa} p. 323. This idea is also associated with Tich\'y. See in particular the papers ``Sense and Procedure'' and ``Intensions in Terms of Turing Machines'' in \cite{Tichy2004aa}.} If either of these two traditional proposals about the nature of Fregean sense is correct, then it seems that senses might be regarded as objects of certain kinds, as opposed to concepts: for, whatever the exact nature of definite descriptions and algorithms, presumably they are saturated and fall on the ``object'' side of Frege's concept-object distinction. For instance, if one views definite descriptions as G\"odel numbers of formulas or if one views algorithms as indexes of Turing machines, this will be the case.\footnote{Obviously, one way to respond to the version of the Russell-Myhill paradox formalized here would be to deny the Senses are Objects Axiom (\ref{eqn:RM:SO}). One way to do that might be to accept that senses are definite descriptions or procedures but to deny that these can be identified with specific objects like G\"odel numbers of formulas or indexes of Turing machines. Traditional reasons for such a denial might be that e.g. abstract procedures aren't represented by a specific index for a specific Turing machine, but rather by a large class of such indexes (cf. \cite{Blass2009ab}). I don't think that such a response would ultimately succeed. For, grant all this and then just select, for each abstract procedure, a specific index for a specific Turing machine which represents it, and call these things \emph{quasi-senses}. Then quasi-senses are objects and so one could run the entire Russell-Myhill paradox again with respect to quasi-senses. For, the other axioms occurring in the formalized version of the paradox seem just as plausible for the so-defined quasi-senses as for senses qua abstract procedures. A similar point can be made with respect to definite descriptions simply by selecting G\"odel numbers of specific formulas.} The final axiom operative in our formalized version of the Russell-Myhill paradox~(\ref{eqn:RM:formalized}) is an axiom postulating a connection between objects and propositions: \begin{myenumerate} \item \emph{Propositions as Fine-Grained as Objects Axiom}: there is an injection~$\chi$ from entities of type~$e$ to entities of type~$t^{\prime}$. \label{eqn:RM:derepropsaxiom}\index{Propositions as Fine-Grained as Objects Axiom (\ref{eqn:RM:derepropsaxiom})} \end{myenumerate} Since entities of type~$e$ are objects and entities of type~$t^{\prime}$ are propositions, this axiom is just saying that there is an injection from objects to propositions. One plausible case for this axiom might be made from the assumption that (i)~our language is ample enough to distinguish different objects from one another and (ii)~propositions are organized roughly after the manner of the sentences which express them. For, by (i), for any object, we can fasten onto a predicate or name in our language which distinguishes this object from the others in our purview.
And then by (ii) the distinctness of this item of language, be it a predicate or name, will be matched in the propositions expressed by sentences featuring it. Having set out and motivated the various axioms, let us now establish the formalized version of the Russell-Myhill paradox~(\ref{eqn:RM:formalized}). By the Surjectivity Axiom~(\ref{eqn:RM:SM}), for every collection of propositions~$\mathcal{C}$, there is a sense~$\mathcal{C}^{\prime}$ which presents it. By the Senses are Objects Axiom~(\ref{eqn:RM:SO}), each such sense~$\mathcal{C}^{\prime}$ is identical to some object. There is thus a map from collections of propositions to objects such that the object is identical to a sense which presents the collection. More formally: \begin{myenumerate} \item For every collection of propositions~$\mathcal{C}$ there is an object~$x$ and there is a sense~$\mathcal{C}^{\prime}$ such that~$\mathcal{C}^{\prime}$ is identical to object~$x$ and~$\mathcal{C}^{\prime}$ presents~$\mathcal{C}$.\label{eqn:RM:AC1} \end{myenumerate} This induces a map~$\mathcal{C}\mapsto \delta(\mathcal{C})$ from collections of propositions to objects such that \begin{myenumerate} \item For every collection of propositions~$\mathcal{C}$ there is a sense~$\mathcal{C}^{\prime}$ such that~$\mathcal{C}^{\prime}$ is identical to object~$\delta(\mathcal{C})$ and~$\mathcal{C}^{\prime}$ presents~$\mathcal{C}$.\label{eqn:RM:AC2} \end{myenumerate} Further, by the Propositions as Fine-Grained as Objects Axiom~(\ref{eqn:RM:derepropsaxiom}), there is an injection~$\chi$ from objects to propositions. Let~$\iota$ be the composition of the two maps, so that~$\iota = \chi \circ \delta$. Then~$\iota$ is a map from collections of propositions~$\mathcal{C}$ to propositions~$\iota(\mathcal{C})$. Further, the map~$\iota$ too is an injection. For, suppose that~$\iota(\mathcal{C}_1)=\iota(\mathcal{C}_2)$. Then since~$\chi$ is an injection,~$\delta(\mathcal{C}_1)=\delta(\mathcal{C}_2)$. Then by the characterization of~$\delta$ in equation~(\ref{eqn:RM:AC2}), for each~$k\in\{1,2\}$ there is a sense~$\mathcal{C}^{\prime}_k=\delta(\mathcal{C}_k)$ which presents~$\mathcal{C}_k$. But since~$\delta(\mathcal{C}_1)=\delta(\mathcal{C}_2)$, we have that~$\mathcal{C}_1^{\prime}=\mathcal{C}_2^{\prime}$. Since~$\mathcal{C}_k^{\prime}$ presents~$\mathcal{C}_k$, by the Typed Sense Determines Reference Axiom~(\ref{eqn:RM:sdr2}), it follows that~$\mathcal{C}_1=\mathcal{C}_2$. Hence the map~$\iota$ is an injection from collections of propositions to propositions, which contradicts the type-theoretic version of Cantor's Theorem~(\ref{eqn:RM:typetheoreticRP}). This formalization of the Russell-Myhill paradox is distinct from but owes much to the earlier formalizations of Anderson and Klement. On the one hand, Anderson made the Surjectivity Axiom~(\ref{eqn:RM:SM}) the focus of his treatments of the paradox (\cite{Anderson1980aa} pp. 221~ff, \cite{Anderson1987aa} pp. 107~ff). However, Anderson's formalization is given in a system which includes the axiom of type-reduction~(\ref{eqn:RM:TRD}), and so is not obviously available once we have rejected this axiom. On the other hand, in his paper \cite{Klement2003aa}, Klement gave a version of the Russell-Myhill paradox which invoked a ``principle of conceivability'' to the effect that ``for every entity, there is at least one sense presenting it as referent'' (\cite{Klement2003aa} p. 305, cf. \S{5} pp. 309~ff). Our Surjectivity Axiom~(\ref{eqn:RM:SM}) is just another expression of Klement's principle of conceivability.
However, in that paper, Klement worked in a system which collapsed the concept-object distinction, so that concepts were a particular species of object.\footnote{See axiom ``PCE'' on \cite{Klement2003aa} p. 305. Another way of formalizing the system of Klement~\cite{Klement2003aa} might be to regard it simply as an untyped system, where there is no distinction between concepts and objects.} Since we want to work within Church's intensional logic, which is a typed system, our formalization has to proceed slightly differently. That said, one can view the formalization given above as the minimal way to modify the formalization of Klement~\cite{Klement2003aa} into the framework of what we're calling the core of Church's system~(\ref{eqn:RM:coresystem}). In particular, while Klement's argument postulated that concepts were a particular species of object, the Senses are Objects Axiom~(\ref{eqn:RM:SO}) postulates that senses are a particular species of object.\footnote{It also bears mentioning that Church, Anderson, and Klement additionally considered formalizations of the Russell-Myhill paradox within an alternative framework of intensional logic that goes under the heading of ``Russellian intensional logic'' (\cite{Church1984aa}, \cite{Anderson1986aa}, \cite{Klement2002aa} pp. 175~ff). This is the general framework which Klement employed in his widely-read \cite{Klement2005ab}. Since this framework was designed to be an alternative to what we're calling ``Church's intensional logic,'' we have not made use of this in our formalization. By the same token, it is beyond the scope of this paper to say whether anything like a predicative response is available in this alternative framework. To say anything definitive would require at least another lengthy consistency proof like that we offer in \S\ref{sec:RM:05}. If it turned out that nothing like a predicative response was available in this alternative setting, this might well indicate a certain lack of robustness to the predicative response offered in this paper.} Let's now turn to describing extant responses to this formalized version of the Russell-Myhill paradox. Of course, the specific formalization given above is new to this paper; hence, it is not as if previous authors have explicitly addressed this specific rendition of the paradox. However, anyone who has constructed a model of the core of Church's system~(\ref{eqn:RM:coresystem}) has found some way to avoid this paradox, and so we can ask how these consistent formal systems evade the formalized version of the Russell-Myhill paradox. By Kaplan's construction described in the last section (circa equation~(\ref{eqn:kapalandadsfas})), we can view the standard models of possible worlds semantics as models of Church's core system, and it turns out that the Surjectivity Axiom always comes out true on these models. For, on these models, intensions are just functions from possible worlds to extensions, and so for any extension one can consider the ``constant'' intension that picks out that extension at any world. The models traditionally used in possible worlds semantics are so-called ``standard'' models in which the type~$ab$ is interpreted as the set of all functions from entities of type~$a$ to entities of type~$b$, as judged by the ambient set-theoretic metatheory; hence, the type-theoretic version of Cantor's Theorem~(\ref{eqn:RM:typetheoreticRP}) also comes out true on these models. 
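To make the constant-intension observation in the preceding paragraph concrete, here is a continuation of the earlier toy Haskell rendering of Kaplan's definitions (a minimal sketch; integer-indexed worlds and the names \texttt{constSense}, \texttt{delta}, and \texttt{w0} are again our own conventions): for any extension~$x$, the intension constant at~$x$ presents~$x$, which is why the Surjectivity Axiom~(\ref{eqn:RM:SM}) holds in these models.

\begin{verbatim}
type World = Int
type Sense a = World -> a

w0 :: World
w0 = 0

delta :: Sense a -> a
delta f' = f' w0

-- the constant intension picking out x at every world
constSense :: a -> Sense a
constSense x = \_ -> x

main :: IO ()
main = print (delta (constSense (42 :: Int)) == 42)  -- True
\end{verbatim}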
The tradition of possible worlds semantics then avoids the formalized version of the Russell-Myhill paradox by either rejecting the Propositions as Fine-Grained as Objects Axiom~(\ref{eqn:RM:derepropsaxiom}) or the Senses are Objects Axiom~(\ref{eqn:RM:SO}). For, if there are fewer sets of worlds than there are objects in the worlds, then of course there is no injection from objects to propositions, and so the Propositions as Fine-Grained as Objects Axiom~(\ref{eqn:RM:derepropsaxiom}) comes out false. But if there are more sets of worlds than there are objects in the worlds, then there will be strictly more functions from worlds to collections of propositions than there are objects, and hence in this case the Senses are Objects Axiom~(\ref{eqn:RM:SO}) comes out false.\footnote{More formally, we suppose that the types are assigned to sets as follows, wherein~$E$ and~$W$ are fixed sets, corresponding to the objects and the worlds respectively (cf.~(\ref{eqn:RM:usualseq})): \begin{equation} D_e = E, \hspace{5mm} D_t=\{0,1\}, \hspace{5mm} D_{ab}=D_b^{D_a} = \{f: D_a\rightarrow D_b\}, \hspace{5mm} D_{a^{\prime}} = D_a^W = \{f: W\rightarrow D_a\} \end{equation} Suppose that we are working in a set-theoretic metatheory where as usual $\left|X\right|$ is used to denote the cardinality of the set $X$. Then either~$\left|D_{t^{\prime}}\right|< \left|D_e\right|$ or not. If so, then there is no injection from~$D_e$ to~$D_{t^{\prime}}$ and the Propositions as Fine-Grained as Objects Axiom~(\ref{eqn:RM:derepropsaxiom}) comes out false. Suppose alternatively that~$\left|D_{t^{\prime}}\right|\geq \left|D_e\right|$. Since we're working in a set-theoretic metatheory, we can then appeal to Cantor's theorem and basic facts about cardinality to obtain that~$\left| D_{(t^{\prime}t)^{\prime}}\right| \geq \left|D_{t^{\prime}t}\right| >\left|D_{t^{\prime}}\right| \geq \left|D_e\right|$. Then it is not the case that~$D_{(t^{\prime}t)^{\prime}}$ (or anything bijective with it) is a subset of~$D_e$ and hence the Senses are Objects Axiom~(\ref{eqn:RM:SO}) comes out false for the specific type~$a=t^{\prime}t$.} The work of Anderson (cf. \cite{Anderson1980aa}, \cite{Anderson1984aa} pp. 371 ff) represents a distinct response to the Russell-Myhill paradox, on which one rethinks certain elements of Church's Core System~(\ref{eqn:RM:coresystem}). Anderson's basic idea was to modify Church's system so that there was not a single presentation relation~$\Delta$, but rather a series of presentation relations~$\Delta^{(1)}, \Delta^{(2)}, \ldots$. Let's call~$\Delta^{(n)}$ the \emph{$n$-th order presentation function}. On this view, one modifies the axioms of Church's Core System~(\ref{eqn:RM:coresystem}) so that there is one of these axioms for each of the~$n$-th order presentation functions. If one wants to present a formalization of the Russell-Myhill paradox, one needs to specialize it to some specific~$n$-th order presentation function. On this conception, it's natural to think that the analogues of the Surjectivity Axiom would be false. While it might be true in certain models that every higher-order entity was~$n$-th order presented for some~$n$, there might not be a single~$n$ which did this for each higher-order entity. Anderson's response to the Russell-Myhill paradox then parallels the ``typed'' responses to the liar paradox (cf. \cite{Anderson1984aa} p. 376).
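Returning briefly to the cardinality computations in the footnote above: for finite~$E$ and~$W$ they can be checked mechanically. The following Haskell sketch (an illustrative toy under our assumption of finite domains; the datatype \texttt{Ty} and the function \texttt{size} are our own names) computes~$\left|D_a\right|$ by the recursion displayed there:

\begin{verbatim}
-- the type grammar: e, t, functional types ab, and sense types a'
data Ty = E | T | Fn Ty Ty | Prime Ty

-- domain sizes in a standard possible-worlds model with nE objects and
-- nW worlds: |D_e| = nE, |D_t| = 2, |D_ab| = |D_b|^|D_a|, |D_a'| = |D_a|^nW
size :: Integer -> Integer -> Ty -> Integer
size nE _  E         = nE
size _  _  T         = 2
size nE nW (Fn a b)  = size nE nW b ^ size nE nW a
size nE nW (Prime a) = size nE nW a ^ nW

main :: IO ()
main = do
  let nE = 10; nW = 2
  -- |D_t'| = 2^2 = 4 < 10 = |D_e|: no injection from objects to
  -- propositions, so Propositions as Fine-Grained as Objects fails here
  print (size nE nW (Prime T))
  -- |D_(t't)'| = (2^4)^2 = 256 > 10 = |D_e|: the sense-domains quickly
  -- outgrow the objects, the situation relevant to the second horn
  print (size nE nW (Prime (Fn (Prime T) T)))
\end{verbatim}

On this toy model the first horn of the footnote's dichotomy obtains; the second print merely illustrates how rapidly the sense-domains outgrow~$D_e$ as the types climb.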
This is not the place to argue against the various responses to the formalized version of the Russell-Myhill paradox which we have distilled from the writings of Kaplan and Anderson. For one, it seems likely that such an adjudication would ultimately proceed by reference to larger considerations like the ability of each of the resulting systems to interpret categorial grammar or to provide a satisfactory semantics for belief attributions. Further, before trying to resolve the paradox in favor of one of these responses, it's important to understand whether we have actually exhausted the entirety of the solution space to the paradox. It seems that the predicativity response has been neglected in the extant literature on the Russell-Myhill paradox, and a chief aim of this paper, which we begin in earnest in the next section, is to describe the general shape of a plausible predicative response to the Russell-Myhill paradox of propositions.\footnote{Obviously other approaches have been neglected as well: for instance, I know of no extant approaches to the Russell-Myhill paradox which adopt the perspective of non-classical logic.} \section{The Predicative Response to Russell-Myhill}\label{sec:RM:04} In the last pages we have surveyed how various constructions of models of Church's core system~(\ref{eqn:RM:coresystem}) avoid the formalized version of the Russell-Myhill paradox (\ref{eqn:RM:formalized}). In the extant literature there seems to have been no attempt to respond to this paradox by rejecting the type-theoretic version of Cantor's Theorem~(\ref{eqn:RM:typetheoreticRP}). Part of the reason for this might be that Cantor's theorem is, well, a theorem. So there might be great pressure to not reject it. But this pressure will only be as great as the strength and plausibility of the axioms from which the theorem is derived. The axioms tacit in the derivation given above of Cantor's theorem are instances of the so-called \emph{comprehension schema}. However, there is a long tradition of \emph{predicative mathematics}, stemming from Poincar\'e and Weyl and represented in our day by the likes of Feferman (\cite{Feferman1964aa,Feferman2005ab}), which proceeds by systematically restricting the comprehension schema. Our aim in what follows is simply to set out and examine a predicative response to the Russell-Myhill paradox of propositions. Intuitively, the comprehension schema is a mechanism for converting formulas to higher-order entities. In the setting of type-theory, it's most expedient to adopt a version which says that any functional formula determines a higher-order function: \begin{myenumerate} \item (Typed Comprehension Schema). The \emph{typed comprehension schema} consists of all the axioms \vspace{-2mm}\[\forall \; z_1, \ldots, z_k \; [[\forall \; x \; \exists ! \; y \; \varphi(x, y, z_1, \ldots, z_k)]\rightarrow \exists \; h \; [\forall \; x \; \varphi(x, h(x), z_1, \ldots, z_k)]]\] where~$\varphi(x, y, z_1, \ldots, z_k)$ is a formula with all free variables displayed and with free variable~$x$ of type~$a$,~$y$ of type~$b$, while~$h$ is a variable of type~$ab$ that does not appear free in~$\varphi$.\label{eqn:RM:compschem}\index{Typed Comprehension Schema~(\ref{eqn:RM:compschem})} \end{myenumerate} In this schema, ``$\exists ! x \; \theta(x)$'' is simply the standard abbreviation expressive of uniqueness: ``$\exists \; x \; (\theta(x) \; \& \; \forall \; z \; (\theta(z)\rightarrow z=x))$''.
A special case of this is the following, in which there is no requirement that the formula in question be functional in nature: \begin{myenumerate} \item (Concept Comprehension Schema). The \emph{concept comprehension schema} consists of all the axioms~$\forall \; z_1, \ldots, z_k \; \exists \; h \; \forall \; x \; (h(x)=1 \leftrightarrow \psi(x, z_1, \ldots, z_k))$, where~$\psi(x, z_1, \ldots, z_k)$ is a formula with all free variables displayed and with free variable~$x$ of type~$a$, while~$h$ is a variable of type~$at$ which does not appear free in~$\psi$.\label{eqn:RM:compschem0}\index{Concept Comprehension Schema (\ref{eqn:RM:compschem0})} \end{myenumerate} To derive this schema from the Typed Comprehension Schema~(\ref{eqn:RM:compschem}), one defines the formula~$\varphi(x,y,z_1, \ldots, z_k)$ to be \begin{myequation}\label{eqn:defntrickfromgoing} [(\psi(x, z_1, \ldots, z_k) \wedge y=1) \vee (\neg \psi(x, z_1, \ldots, z_k) \wedge y=0)] \end{myequation} The reason for wanting the Typed Comprehension Schema as opposed to the mere Concept Comprehension Schema~(\ref{eqn:RM:compschem0}) is that one wants a way to e.g. go from the entity~$q$ of type~$a$ to the constant function~$f_q(x)=q$ of type~$aa$. Since the Concept Comprehension Schema~(\ref{eqn:RM:compschem0}) only delivers entities~$h$ of type~$at$, it itself cannot do this. Finally, there is a natural principle which generalizes rather than specializes the Typed Comprehension Schema. In particular, if one removes the uniqueness clause from the antecedent of this schema, then this becomes a version of the axiom of choice: \begin{myenumerate} \item (Typed Choice Schema). The \emph{typed choice schema} consists of all the axioms \[\forall \; z_1, \ldots, z_k \; [[\forall \; x \; \exists \; y \; \varphi(x, y, z_1, \ldots, z_k)]\rightarrow \exists \; h \; [\forall \; x \; \varphi(x, h(x), z_1, \ldots, z_k)]]\] where~$\varphi(x, y, z_1, \ldots, z_k)$ is a formula with all free variables displayed and with free variable~$x$ of type~$a$,~$y$ of type~$b$, while~$h$ is a variable of type~$ab$ which does not appear free in~$\varphi$.\label{eqn:RM:compschem3}\index{Typed Choice Schema~(\ref{eqn:RM:compschem3})} \end{myenumerate} The Typed Choice Schema~(\ref{eqn:RM:compschem3}) trivially implies the Typed Comprehension Schema~(\ref{eqn:RM:compschem}), which as we remarked above implies the Concept Comprehension Schema~(\ref{eqn:RM:compschem0}). The predicative response to the Russell-Myhill paradox restricts the Typed Comprehension Schema (\ref{eqn:RM:compschem}) by imposing constraints on the kinds of higher-order quantifiers and higher-order parameters that can occur in the formula. These restrictions are formulated in terms of the notion of the degree of a type: \begin{myenumerate}\index{Degree of Type (\ref{eqn:RM:degtypinitial})} \item (Degree of Type) The \emph{degree} of a type is a positive natural number which is defined recursively as follows: \label{eqn:RM:degtypinitial}\vspace{-2mm} \[\|e\|=\|t\| =1, \hspace{10mm} \|a^{\prime}\| = \|a\|, \hspace{10mm} \|ab\|=\begin{cases} \|a\|+1 & \text{if~$\|a\|\geq \|b\|$}, \\ \|b\| & \text{if~$\|a\|<\|b\|$}. \end{cases}\] \end{myenumerate} To illustrate this last clause, note that $\|e(et)\|=\|t(et)\|=2$ and $\|(et)e\|=\|(et)t\|=3$, while $\|a\|<\|ab\|$ and $\|b\|\leq \|ab\|$ for all types $a,b$. Intuitively, the idea is that degree goes up when the entities of type~$ab$ are genuinely of higher order than those entities of type~$b$. 
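Since the recursion in~(\ref{eqn:RM:degtypinitial}) is entirely mechanical, it can also be rendered executable. The following Haskell sketch (our own illustrative rendering, with a datatype \texttt{Ty} for the type grammar) recomputes the sample degree values just given:

\begin{verbatim}
-- the type grammar: e, t, functional types ab, and sense types a'
data Ty = E | T | Fn Ty Ty | Prime Ty

-- degree of a type, per the recursive definition above
deg :: Ty -> Int
deg E         = 1
deg T         = 1
deg (Prime a) = deg a
deg (Fn a b)
  | deg a >= deg b = deg a + 1
  | otherwise      = deg b

main :: IO ()
main = do
  let et = Fn E T
  print (deg (Fn E et), deg (Fn T et))   -- (2,2): ||e(et)|| = ||t(et)|| = 2
  print (deg (Fn et E), deg (Fn et T))   -- (3,3): ||(et)e|| = ||(et)t|| = 3
\end{verbatim}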
For instance (returning to the intuition behind the last clause of the degree recursion), suppose that~$a=t$, so that there are only two entities of type~$a$, namely the two truth-values~$0$ and~$1$, and suppose that~$b=et$, so that there are many entities of type~$b$, namely as many as there are concepts. Then entities of type~$ab$ are functions from~$\{0,1\}$ to concepts, and so are really just another way of talking about pairs of concepts. Hence, quantifying over entities of type~$ab$ should involve no more higher-order quantification than quantifying over concepts, and so the degree of~$ab$ should be the same as the degree of~$b$ in this case. The predicative comprehension schema may then be defined as follows: \begin{myenumerate} \item (Predicative Typed Comprehension Schema). The \emph{predicative typed comprehension schema} consists of all the axioms \vspace{-2mm} \[\forall \; z_1, \ldots, z_k \; [[\forall \; x \; \exists ! \; y \; \varphi(x, y, z_1, \ldots, z_k)]\rightarrow \exists \; h \; [\forall \; x \; \varphi(x, h(x), z_1, \ldots, z_k)]]\] where~$\varphi(x, y, z_1, \ldots, z_k)$ is a formula with all free variables displayed and with free variable~$x$ of type~$a$,~$y$ of type~$b$, while~$h$ is a variable of type~$ab$ which does not appear free in~$\varphi$, and in addition variable~$z_i$ has type~$c_i$ with~$\|c_i\|\leq \|ab\|$ and all the bound variables in~$\varphi(x,y,z_1, \ldots, z_k)$ have type~$c$ with~$\|c\|< \|ab\|$. \label{eqn:RM:predcompschem}\index{Predicative Typed Comprehension Schema (\ref{eqn:RM:predcompschem})} \end{myenumerate} Using the same trick as above in equation~(\ref{eqn:defntrickfromgoing}), it's easy to see that the Predicative Typed Comprehension Schema~(\ref{eqn:RM:predcompschem}) implies a version of the Concept Comprehension Schema in which there are the same restrictions on the parameters and bound variables appearing in the formula. For the sake of completeness, we state this version here: \begin{myenumerate} \item (Predicative Concept Comprehension Schema). The \emph{predicative concept comprehension schema} consists of all the axioms \[\forall \; z_1, \ldots, z_k \; \exists \; h \; \forall \; x \; (h(x)=1 \leftrightarrow \psi(x, z_1, \ldots, z_k))\] where~$\psi(x, z_1, \ldots, z_k)$ is a formula with all free variables displayed and with free variable~$x$ of type~$a$, while~$h$ is a variable of type~$at$ which does not appear free in~$\psi$, and in addition variable~$z_i$ has type~$c_i$ with~$\|c_i\|\leq \|a\|+1$ and all the bound variables in~$\psi(x,z_1, \ldots, z_k)$ have type~$c$ with~$\|c\|< \|a\|+1$. \label{eqn:RM:predcompschem0}\index{Predicative Concept Comprehension Schema (\ref{eqn:RM:predcompschem0})} \end{myenumerate} Finally, one has the predicative version of the Typed Choice Schema~(\ref{eqn:RM:compschem3}): \begin{myenumerate} \item (Predicative Typed Choice Schema).
The \emph{predicative typed choice schema} consists of all the axioms \vspace{-1mm} \[\forall \; z_1, \ldots, z_k \; [[\forall \; x \; \exists \; y \; \varphi(x, y, z_1, \ldots, z_k)]\rightarrow \exists \; h \; [\forall \; x \; \varphi(x, h(x), z_1, \ldots, z_k)]]\] where~$\varphi(x, y, z_1, \ldots, z_k)$ is a formula with all free variables displayed and with free variable~$x$ of type~$a$,~$y$ of type~$b$, while~$h$ is a variable of type~$ab$ which does not appear free in~$\varphi$, and in addition variable~$z_i$ has type~$c_i$ with~$\|c_i\|\leq \|ab\|$ and all the bound variables in~$\varphi(x,y,z_1, \ldots, z_k)$ have type~$c$ with~$\|c\|< \|ab\|$.\label{eqn:RM:compschem4}\index{Predicative Typed Choice Schema (\ref{eqn:RM:compschem4})} \end{myenumerate} Again, it's easy to see that the Predicative Typed Choice Schema~(\ref{eqn:RM:compschem4}) is the deductively strongest schema, so that it implies the Predicative Typed Comprehension Schema~(\ref{eqn:RM:predcompschem}), which in turn implies the Predicative Concept Comprehension Schema~(\ref{eqn:RM:predcompschem0}). Let's illustrate these predicative schemata by reference to the derivation of the type-theoretic version of Cantor's Theorem~(\ref{eqn:RM:typetheoreticRP}). In particular, recall the following equation where we defined the diagonal function~$d$ of type~$at$: \begin{myequation}\label{eqn:digaonaliam} d(x) = 1 \Longleftrightarrow \exists \; f \; (\iota(f) = x \; \& \; f(x)=0) \tag{\ref{eqn:RM:eqndai}} \end{myequation} This is a non-predicative instance of the concept comprehension schema. For, while the defining formula of~$d$ has free variable~$x$ of type~$a$ with degree~$\|a\|$, this formula at the same time contains a bound variable~$f$ of type~$at$. But by consulting the definition of the degree of a type~(\ref{eqn:RM:degtypinitial}), we see that~$\|at\|\geq \|a\|+1$, so that the defining equation~(\ref{eqn:RM:eqndai}) of the diagonal function~$d$ is not an instance of the Predicative Concept Comprehension Schema~(\ref{eqn:RM:predcompschem0}), even though it is an instance of the more general Concept Comprehension Schema~(\ref{eqn:RM:compschem0}). Hence, this illustrates how if one accepts only the Predicative Typed Comprehension Schema~(\ref{eqn:RM:predcompschem}), the traditional proof of the type-theoretic version of Cantor's Theorem~(\ref{eqn:RM:typetheoreticRP}) is blocked. A similar elementary observation can be used to illustrate the motivation for the constraints on the parameters in the predicative variants of the comprehension schemas. In particular, if we did not have these restrictions, then we could again show that the diagonal function~$d$ from~(\ref{eqn:digaonaliam}) was in fact a higher-order entity. In this paragraph let us fix a type~$a$ and let us reserve~$\alpha$ and subscripted versions thereof for entities of type~$aa$. Then consider the admittedly uninteresting-appearing formula~$\theta(x,y,z)\equiv y=z$, wherein~$x,y,z$ are of type~$a$. Let~$q$ be an entity of type~$a$. Then trivially we have~$\forall \; x \; \exists \; ! \; y \; \theta(x,y,q)$. Then by the Predicative Typed Comprehension Schema~(\ref{eqn:RM:predcompschem}), there is a function~$\alpha_q$ of type~$aa$ such that~$\alpha_q(x)=q$ for all~$x$ of type~$a$. Let~$\Theta(q,\alpha)\equiv \forall \; x \; \alpha(x)=q$, where again~$x$ has type~$a$ and~$\alpha$ has type~$aa$. Then by the arguments given so far in this paragraph, we have that~$\forall \; q \; \exists \; ! \; \alpha \; \Theta(q,\alpha)$.
Hence again by the Predicative Typed Comprehension Schema~(\ref{eqn:RM:predcompschem}), there is a function~$\mathcal{C}$ of type~$a(aa)$ (wherein~$\mathcal{C}$ reminds us of the word ``constant'') such that for all~$q$ of type~$a$, we have~$\mathcal{C}(q)=\alpha_q$. Now, again by the Predicative Concept Comprehension Schema~(\ref{eqn:RM:predcompschem0}), consider the following ``higher-order diagonal''~$\mathcal{D}$ function of type~$(aa)t$: \begin{myequation}\label{eqn:defn:dqbig} \mathcal{D}(\alpha)=1 \Longleftrightarrow [\exists \; q \; \alpha = \mathcal{C}(q) \; \& \; \exists \; f \; (\iota(f)=\alpha(q) \; \& \; f(\alpha(q))=0)] \end{myequation} Intuitively,~$\mathcal{D}$ is picking out those constant functions~$\alpha_q$ such that~$d(q)=1$ (where~$d$ is the diagonal function from equation~(\ref{eqn:RM:eqndai})). Now,~$\mathcal{D}$ has type~$(aa)t$ with degree~$\|(aa)t\|>\|a\|+1$. If, contrary to fact, there were no restrictions on parameters in the Predicative Concept Comprehension Schema~(\ref{eqn:RM:predcompschem0}), then we could use~$\mathcal{D}, \mathcal{C}$ to define a higher-order entity~$\widetilde{d}$ of type~$at$ as follows: \begin{myequation}\label{eqn:defnwided} \widetilde{d}(\widetilde{q})=1 \Longleftrightarrow \mathcal{D}(\mathcal{C}(\widetilde{q}))=1 \end{myequation} Then one can verify that \begin{myequation}\label{eqn:whatwewnattoshceck} \widetilde{d}(\widetilde{q})=1 \Longleftrightarrow \exists \; f \; (\iota(f) = \widetilde{q} \; \& \; f(\widetilde{q})=0) \end{myequation} from which it follows that the diagonal function~$d$~(\ref{eqn:RM:eqndai}) again exists as a higher-order entity. This is why it is necessary to include restrictions on parameters in the predicative versions of the comprehension schema. The predicativity response to the Russell-Myhill paradox has two great burdens. First, it is necessary to show that the Predicative Typed Comprehension Schema~(\ref{eqn:RM:predcompschem}) is consistent with the remaining axioms from the formalized version of the Russell-Myhill paradox~(\ref{eqn:RM:formalized}). In the next section, we discharge this burden by proving: \begin{myenumerate} \item (Predicative Consistency Theorem). The following formal system is consistent: the core of Church's system~(\ref{eqn:RM:coresystem}), the Surjectivity Axiom~(\ref{eqn:RM:SM}), the Senses are Objects Axiom~(\ref{eqn:RM:SO}), the Propositions as Fine-Grained as Objects Axiom~(\ref{eqn:RM:derepropsaxiom}), and the Predicative Typed Choice Schema~(\ref{eqn:RM:compschem4}).\label{eqn:RM:predconthm}\index{Predicative Consistency Theorem (\ref{eqn:RM:predconthm})} \end{myenumerate} The second burden of the predicativity response is to say something about what motivates the restriction on the Typed Comprehension Schema~(\ref{eqn:RM:compschem}). Again, intuitively this schema says that every functional formula determines a higher-order entity. If one restricts this, one must say something about \emph{when} and \emph{why} a functional formula determines a higher-order entity. In the remainder of this section, we discuss this more philosophical dimension of the predicativity response to the Russell-Myhill paradox. Poincar\'e and Weyl were the original predicativists. They drew attention to the fact that higher-order definitions are not in general preserved when one keeps the first-order domain fixed but expands the range of the higher-order quantifiers.
So Poincar\'e identifies \emph{predicativity} with definitions that are preserved under such expansions: ``a classification is called \emph{predicative} when it is not changed through the introduction of new elements'' (\cite{Heinzmann1986aa} p. 233, \cite{Poincare1910aa} p. 47, cf. \cite{Chihara1973aa} p. 141). Likewise, in his 1918 \emph{Das Kontinuum}, Weyl draws attention to the fact that if one codes real numbers and continuous functions as certain sets of natural numbers (or rationals), then the definition of the class of continuous functions will contain higher-order quantifiers and thus what counts as a continuous function will depend crucially on the extent and range of the higher-order quantifiers: \begin{quote} If we regard the principles of definition as an ``\emph{open}'' system, i.e., if we reserve the right to extend them when necessary by making additions, then in general the question of whether a given function is continuous must also remain \emph{open} [\ldots] (\cite{Weyl1918} p. 66). \end{quote} The dual to Weyl's remark is that if one wants to define notions whose extension does not vary with the extent and range of the higher-order quantifiers, then one should restrict attention to definitions which do not contain higher-order quantifiers. In my view the best motivation for the predicativity restriction is related to these original thoughts of Poincar\'e and Weyl.\footnote{A distinct set of motivations for predicativity constraints come from the apparent affinity of predicativity with types of constructivism. For more on this complicated aspect of the history of predicativity, see Parsons \cite{Parsons2002}. Another important study of the history of predicativity-like conceptions is Goldfarb's \cite{Goldfarb1988aa} study of Russell. Goldfarb suggests that Russell's reasons for endorsing predicativity-like constraints might be related to having systems in which one can quantify over intensional entities like propositions. By contrast, the motivations given here for predicativity constraints are not intended to have anything to do with constructivity and are intended to apply with equal force to the quantifiers ranging over intensional entities like propositions as to those ranging over extensional entities like concepts.} In more modern terms, these ideas might be expressed in terms of intuitions about the stability of reference. Suppose that one is using a definite description to refer to an object. If minor variations in empirical conditions cause the object to fail to satisfy this description, then the definite description will be a less than efficacious route to reference. There is a natural generalization of this line of thought to the comprehension schema, where for the sake of simplicity we focus on the Concept Comprehension Schema~(\ref{eqn:RM:compschem0}). The idea is that there is a natural way of seeing each instance of this schema as related to a definite description of a higher-order entity.
In particular, consider the following instance of the Concept Comprehension Schema~(\ref{eqn:RM:compschem0}): \begin{myequation} \exists \; h \; \forall \; x \; (\varphi(x) \leftrightarrow h(x)=1) \end{myequation} One can think about this~$h$ as ``the~$\widetilde{\varphi}$,'' where we define: \begin{myequation}\label{eqn:RM:mymy2} \widetilde{\varphi}(h)\equiv \; [\forall \; x \; (\varphi(x) \leftrightarrow h(x)=1)] \end{myequation} If the formula~$\varphi(x)$ contains higher-order quantifiers, then whether a given~$h$ satisfies the description~$\widetilde{\varphi}(h)$ may vary with expansions of the range of the higher-order quantifiers. However, when the formula does not itself contain higher-order quantifiers, then whether something satisfies this description will be stable under expansions of the range of the higher-order quantifiers. The motivation for the Predicative Typed Comprehension Schema~(\ref{eqn:RM:predcompschem}) can then be seen to derive from the intuition that where one uses a definite description to effect reference to a higher-order entity, this description should be stable under variations of the range of the higher-order quantifiers. There is nonetheless still a residual philosophical challenge for this predicative response. In particular, it must say something about what the pre-theoretic idea is behind the relevant sense of expansion of the range of the higher-order quantifiers. In my view, the best answer to this is tied to the kinds of positive reasons we can give for the Surjectivity Axiom~(\ref{eqn:RM:SM}). The best positive reason to believe this axiom flows from a conception of what we're trying to model: we're not trying to model higher-order entities as they are in some abstract inaccessible third realm, but we're trying to model higher-order entities insofar as they fall within our referential ken. And it's natural to think that our resources for referring to higher-order entities expands over time just as our resources for referring to concrete objects expands over time.\footnote{Presumably this motivation for the restriction on the quantifiers in the Predicative Typed Comprehension Schema~(\ref{eqn:RM:predcompschem}) likewise motivates the restriction on the higher-order parameters in this schema. In this connection it's helpful to recall the worked-out example above of the higher-order parameters~$\mathcal{D}$ and~$\mathcal{C}$. As one can see by inspecting equations~(\ref{eqn:defnwided}) and~(\ref{eqn:whatwewnattoshceck}) above, higher-order parameters are able to go proxy for higher-order quantifiers. Given this, if one wants to employ a description featuring a higher-order parameter to stably refer to a lower-order entity, then it's natural to require that this higher-order parameter likewise not shift in extension under variations of the range of the higher-order quantifiers.} However, it seems safe to say that the same motivations in terms of stability of reference cannot be given for the more general Predicative Typed Choice Schema~(\ref{eqn:RM:compschem4}). Like the Predicative Typed Comprehension Schema~(\ref{eqn:RM:predcompschem}), instances of this schema are conditionals which articulate a sufficient condition for the existence of a higher-order entity.
However, unlike the Predicative Typed Comprehension Schema~(\ref{eqn:RM:predcompschem}), it does not seem that the sufficient condition offered by the Predicative Typed Choice Schema~(\ref{eqn:RM:compschem4}) can be conceived of as providing any sort of intension which may serve as a mechanism by which to effect reference to the higher-order entity in question. Thus the Predicative Typed Choice Schema~(\ref{eqn:RM:compschem4}) should not be viewed as following from the predicative viewpoint articulated here, but rather should be viewed as a tool which one can consistently avail oneself of. This is important to be clear about because it's only in the presence of some choice-like principle that the Formalized Version of the Russell-Myhill Paradox~(\ref{eqn:RM:formalized}) is actually a deductively valid argument. In particular, in the derivation of the inconsistency in \S\ref{sec:RM:03}, it's easy to see that the move from equation~(\ref{eqn:RM:AC1}) to equation~(\ref{eqn:RM:AC2}) is an instance of some choice-like principle, and one can verify that this move will be covered by the Predicative Typed Choice Schema~(\ref{eqn:RM:compschem4}) (and hence the Typed Choice Schema~(\ref{eqn:RM:compschem3})). If one wanted to be very formal, the Typed Choice Schema~(\ref{eqn:RM:compschem3}) should officially be added to the list of premises of the Formalized Version of the Russell-Myhill Paradox~(\ref{eqn:RM:formalized}). On this way of putting the matter, the predicative response to the paradox is to deny the type-theoretic version of Cantor's Theorem~(\ref{eqn:RM:typetheoreticRP}) and to remain ambivalent on the Predicative Typed Choice Schema~(\ref{eqn:RM:compschem4}). Again, by the results of the next section, it's consistent for the predicative response to assume the Predicative Typed Choice Schema~(\ref{eqn:RM:compschem4}), but what we have sought to emphasize in these paragraphs is that the reasons which motivate the predicative restrictions on the comprehension schema don't obviously motivate any instances of either the impredicative or the predicative choice schema. \section{The Consistency of the Predicative Response}\label{sec:RM:05} In this section, we take up the task of demonstrating the Predicative Consistency Theorem~(\ref{eqn:RM:predconthm}). The reader who is uninterested in this result, or merely willing to accept it conditionally, might elect to pass on directly to the next section, since the details of this section will not be needed for understanding the subsequent sections of this paper. The most fundamental idea in the proof is to replace the cumulative hierarchy of sets, as deployed in the usual model theory of type systems, with the constructible hierarchy of sets. Before recalling the definitions of the cumulative hierarchy and the constructible hierarchy, let's first recall how the usual model theory for type theory proceeds. Let's restrict attention to the \emph{extensional} fragment of the types, which contains the type~$e$ for objects, the type~$t$ for truth-values, and which contains the type~$ab$ of functions from entities of type~$a$ to entities of type~$b$ whenever it contains type~$a$ and type~$b$; and moreover, let's consider the \emph{extensional} language which is bereft of the presentation symbols and the intensional application symbols and which contains only the extensional application symbols. Models for the extensional fragment of type theory usually begin with an assignment of domains~$D_a$ to each type~$a$.
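For orientation, the grammar of these extensional types can be displayed compactly as follows; this is merely a transcription of the description just given, not additional notation from elsewhere in the paper:
\begin{myequation}
a, b \; ::= \; e \; \mid \; t \; \mid \; ab
\end{myequation}
Thus, for instance,~$et$,~$(et)t$, and~$((et)t)t$ are all extensional types, and each will reappear in the sequence of domains below.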
The procedure here is that the type~$e$ of objects is assigned some arbitrary domain~$D_e=E$, the type~$t$ of truth-values is assigned the set~$D_t=\{0,1\}$ of truth-values ($0$ for ``false'' and~$1$ for ``true''), and the type~$ab$ is assigned the domain~$D_{ab}$ of all functions~$f:D_a\rightarrow D_b$, which is sometimes written in exponential notation as~$D_b^{D_a}$ (cf. \cite{Heim1998} p. 28, \cite{Gamut1991aa} pp. 84, 121): \begin{myequation}\label{eqn:RM:usualseq} D_e = E, \hspace{10mm} D_t=\{0,1\}, \hspace{10mm} D_{ab}=D_b^{D_a} = \{f: D_a\rightarrow D_b\} \end{myequation} To see the connection with the cumulative hierarchy of sets, recall that we can identify sets with their characteristic functions (cf. equation~(\ref{eqn:RM:ide})). Further, recall that \begin{myenumerate} \item \emph{The power set~$P(X)$} of a given set~$X$ is defined to be the set of all the subsets~$Y$ of~$X$, that is,~$P(X)=\{Y: Y\subseteq X\}$.\label{eqn:RM:defnpower}\index{Powerset~$P(X)$ (\ref{defn:powerset}) }\label{defn:powerset} \end{myenumerate} Hence, we can identify the sets in~$P(D_a)$ with the functions in~$D_{at}$. For the moment, let's write~$D_{at}\approx P(D_a)$ as a shorthand for this identification. Iterating this, we can build the following sequence: \begin{myequation} D_e= E, \hspace{5mm} D_{et}\approx P(E), \hspace{5mm} D_{(et)t} \approx P(P(E)), \hspace{5mm} D_{((et)t)t} \approx P( P(P(E))), \ldots \end{myequation} From this perspective, the usual model theory for the extensional theory of types is closely related to iterations of the powerset operator. This kind of sequence is of course also built into the standard conception of the set-theoretic universe, namely the cumulative hierarchy. In particular, the axioms of set theory guarantee that the universe~$V$ of sets is identical to the union of the following sequence of sets~$V_{\alpha}$, where~$\alpha$ is an ordinal (cf. \cite{Kunen1980} p.~95, \cite{Jech2003} p. 64, \cite{Hrbacek1999aa} p. 257):\index{Cumulative Hierarchy~$V_{\alpha}$ (\ref{eqn:ddfn:cummau})} \begin{myequation}\label{eqn:ddfn:cummau} V_0=\emptyset, \hspace{10mm} V_{\alpha+1} = P(V_{\alpha}), \hspace{10mm} V_{\alpha}=\bigcup_{\beta<\alpha} V_{\beta}, \; \alpha \mbox{ limit} \end{myequation} One of G\"odel's many important innovations in set theory was the definition of the constructible hierarchy of sets. The definition of this hierarchy is identical to the definition of the cumulative hierarchy except at the successor steps, where instead of looking at the full powerset of the previous step, one looks at a certain class of definable subsets of the previous step. In particular, we define, in contrast to the definition of the powerset~(\ref{eqn:RM:defnpower}) above: \begin{myenumerate} \item \emph{The collection of definable subsets~$\mathrm{Defn}(X)$} of a given set~$X$ is defined to be the set of all subsets~$Y$ of~$X$ such that there is a first-order formula in the language of set theory~$\varphi(x,z_1, \ldots, z_n)$ with all free variables displayed and parameters~$q_1, \ldots, q_n$ from~$X$ such that~$Y=\{x\in X: (X,\in)\models \varphi(x, q_1, \ldots, q_n)\}$.\label{eqn:defn:RM:defn}\index{Definable subsets~$\mathrm{Defn}(X)$ of~$X$ (\ref{eqn:defn:RM:defn})} \end{myenumerate} For instance,~$X$ is in~$\mathrm{Defn}(X)$ since~$X=\{x\in X: (X,\in)\models x=x\}$, and the empty set~$\emptyset$ is in~$\mathrm{Defn}(X)$ since~$\emptyset=\{x\in X: (X,\in)\models x\neq x\}$. G\"odel then defined the \emph{constructible hierarchy} as follows (cf.
\cite{Kunen1980} p.~166, \cite{Kunen2011aa} p. 134, \cite{Jech2003} p. 174, \cite{Devlin1984aa} p. 58):\index{Constructible Hierarchy~$L_{\alpha}$ (\ref{eqn:mymylimit22})} \begin{myequation}\label{eqn:mymylimit22} L_0=\emptyset, \hspace{10mm} L_{\alpha+1} = \mathrm{Defn}(L_{\alpha}), \hspace{10mm} L_{\alpha}=\bigcup_{\beta<\alpha} L_{\beta}, \; \alpha \mbox{ limit} \end{myequation} and he defined the \emph{constructible universe}~$L$ to be the union of the sets from the constructible hierarchy. G\"odel further showed that the constructible universe models all of the axioms of set theory, so that this is not just another collection of sets but an alternative set-theoretic universe. The key idea in our proof of the Predicative Consistency Theorem~(\ref{eqn:RM:predconthm}) is to assign the types to levels of the constructible hierarchy. To do this, we need to work with a very specific kind of level of the constructible hierarchy. This kind of level was first defined by Kripke (\cite{Kripke1964aa}) and Platek (\cite{Platek1966aa}), whose idea was that some initial segments of the constructible hierarchy can't ``tell'' that they are tall, and actually think that they can be ``shrunk''; this idea was later famously employed by Jensen (\cite{Jensen1972aa}) in his proof of the uniformization theorem. Formally, one defines this important notion as follows (cf. \cite{Jensen1972aa} pp. 256-257, \cite{Schindler2010aa} Definition 2.1 p. 619, \cite{Devlin1984aa} p. 156, \cite{Sacks1990} p. 157, Barwise \cite{Barwise1975ab} Definition V.6.1 p. 174, \cite{Kripke1964aa} p. 162, \cite{Walsh2014ac}):\index{Projectum,~$n$-th~$\rho_n(\alpha)$ (\ref{defn:theprojectummmm})} \begin{myenumerate} \item the \emph{\mbox{}$n$-th projectum}~$\rho_n(\alpha)$ is the smallest~$\rho\leq \alpha$ such that there is a~$\utilde{\Sigma}_n^{L_{\alpha}}$-definable injection~$\iota: L_{\alpha}\rightarrow \rho$. \label{defn:theprojectummmm} \end{myenumerate} In this,~$\utilde{\Sigma}_n$-definability is first-order definability in the sense at issue in the definition of~$\mathrm{Defn}(X)$ above in equation~(\ref{eqn:defn:RM:defn}), but restricted to first-order formulas which begin with a block of alternating quantifiers of length~$n$ starting with an existential quantifier (and allowing parameters). Further, when not clear from context, one writes~$\utilde{\Sigma}_n^{L_{\alpha}}$ to emphasize that the definability is with respect to the structure~$L_{\alpha}$. Now we can proceed to a description of our models. Our most important definition is the following definition of an \emph{intensional position}~(\ref{eqn:RM:defn:intensional:position}).
The motivation for this name comes in the subsequent definition of an \emph{intensional hierarchy}~(\ref{eqn:RM:defn:intensional:hierarchy}), which is given by a collection of \emph{intensional positions}; intuitively, these are ``positions'' for the higher-order variables within the hierarchy.\footnote{The language of ``positions'' is apt because, as one can see from inspection of the definitions below, one intensional position can provide the interpretation of the~$n$-th order quantifiers in one intensional hierarchy but the interpretation of the~$m$-th order quantifiers in another.} While the definition of an intensional position is admittedly complicated, the broader significance of each element of this definition will be borne out by the subsequent discussion in this section: \begin{myenumerate} \item An \emph{intensional position}~$\mathfrak{p}$ is given by a sextuple~$\mathfrak{p}=(\alpha, \ell, \iota, \mathcal{O}, \pi, \nu)$ wherein (i) the ordinal~$\alpha>\omega$ is a limit, (ii)~$\ell$ is a positive natural number such that~$L_{\alpha}$ is a model of the~$\Sigma_{\ell}$-collection schema and the~$\Sigma_{{\ell}-1}$-separation schema, (iii) the ordinal~$\alpha$ has non-trivial~$\ell$-th projectum~$\alpha_0=\rho_{\ell}(\alpha)<\alpha$ with~$\iota:L_{\alpha}\rightarrow \alpha_0$ a witnessing~$\utilde{\Sigma}_{\ell}^{L_{\alpha}}$-definable injection, (iv) the set~$\mathcal{O}$ is a~$\utilde{\Sigma}_{\ell}^{L_{\alpha}}$-definable subset of~$\alpha_0$, (v)~the map~$\pi:\mathcal{O}\dashrightarrow L_{\alpha}$ is a~$\utilde{\Sigma}_{\ell}^{L_{\alpha}}$-definable partial surjective function such that~$\pi\circ \iota$ is the identity on~$L_{\alpha}$ and such that~$\mathcal{O}\setminus \pi^{-1}(L_{\alpha})$ is~$\utilde{\Sigma}_{\ell}^{L_{\alpha}}$-definable, and (vi) the definability in each of the previous items is with respect to the parameter~$\nu$ from~$L_{\alpha}$.\label{eqn:RM:defn:intensional:position}\index{Intensional Position~(\ref{eqn:RM:defn:intensional:position})} \end{myenumerate} In part~(ii) of this definition, the \emph{$\Sigma_n$-collection schema} is the axiom schema which says that if~$\varphi(x,y)$ is a~$\Sigma_n$-formula and for all~$x\in a$ there is~$y$ such that~$\varphi(x,y)$, then there is a set~$b$ such that for all~$x\in a$ there is~$y\in b$ with~$\varphi(x,y)$. In essence,~$\Sigma_n$-collection says that when for everything in an antecedently specified set~$a$ there is a witness to a~$\Sigma_n$-condition, then at least one witness for everything in~$a$ may be bounded or collected together in another set~$b$. The \emph{$\Sigma_n$-separation schema} is simply the separation schema from the ambient set theory restricted to the case of~$\Sigma_n$-formulas: it says that if~$\varphi(x)$ is a~$\Sigma_n$-formula and~$a$ is a set, then there is another set~$b$ such that~$z\in b$ iff~$z\in a\wedge \varphi(z)$. In essence,~$\Sigma_n$-separation just says that all the~$\Sigma_n$-subsets of an antecedently specified set~$a$ exist. Further, it's worth mentioning that the concept of an ordinal~$\alpha$ being~\emph{$\ell$-admissible} from \cite{Walsh2014ac} is equivalent to conditions~(i)-(ii) of the definition of an intensional position, so that intensional hierarchies are just certain collections of~$\ell$-admissibles for increasing values of~$\ell$.
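For reference, the two schemas glossed just above can be written out symbolically as follows, where~$\varphi$ ranges over~$\Sigma_n$-formulas (parameters being permitted); this display is merely a transcription of the glosses above:
\begin{myequation}
(\Sigma_n\mbox{-collection}) \hspace{10mm} [\forall \; x\in a \; \exists \; y \; \varphi(x,y)] \Longrightarrow [\exists \; b \; \forall \; x\in a \; \exists \; y\in b \; \varphi(x,y)]
\end{myequation}
\begin{myequation}
(\Sigma_n\mbox{-separation}) \hspace{10mm} \exists \; b \; \forall \; z \; [z\in b \leftrightarrow (z\in a \; \wedge \; \varphi(z))]
\end{myequation}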
The notion of~$\ell$-admissibility generalizes Kripke-Platek set theory, since in the case~$\ell=1$ a structure~$L_{\alpha}$ is~$\ell$-admissible just in case it is a model of this set theory (\cite{Kripke1964aa}, \cite{Platek1966aa}, Devlin \cite{Devlin1984aa} p. 48, p. 36). Having all this in place, we may now define the notion of an \emph{intensional hierarchy}: \begin{myenumerate} \item An \emph{intensional hierarchy}~$D=(\mathfrak{p}_1, \mathfrak{p}_2, \ldots)$ is given by a countable sequence~$\mathfrak{p}_n=(\alpha_n, \ell_n, \iota_n, \mathcal{O}_n, \pi_n, \nu_n)$ of intensional positions such that~(i) for all~$n,m\geq 1$ it is the case that~$\rho_{\ell_n}(\alpha_n)= \rho_{\ell_m}(\alpha_m)=\alpha_0$, and (ii)~the associated sequence of ordinals is strictly increasing:~$\alpha_0 <\alpha_1 <\alpha_2<\cdots <\alpha_n<\alpha_{n+1}<\cdots$.\label{eqn:RM:defn:intensional:hierarchy}\index{Intensional Hierarchy (\ref{eqn:RM:defn:intensional:hierarchy})} \end{myenumerate} One example of an intensional hierarchy is related to definite descriptions. Let~$\lambda$ be a cardinal in G\"odel's constructible universe~$L$, and let~$\kappa=\lambda^{+}$ be the next biggest cardinal after~$\lambda$, as judged by~$L$. Further, let~$M_n=\mathrm{dcl}^{L_{\kappa}}_{\Sigma_n}(\lambda \cup \{\lambda\})$ be the sets in~$L_{\kappa}$ that have~$\Sigma_n$-definite descriptions over~$L_{\kappa}$ with parameters from~$\lambda \cup \{\lambda\}$. Then it can be shown that~$M_n=L_{\alpha_n}$ for some ordinals~$\alpha_n$ with~$\lambda<\alpha_n<\alpha_{n+1}<\kappa$ and that~$\rho_n(\alpha_n)=\lambda$. For more details on the construction described in this paragraph, see \cite{Walsh2014ac}, and in particular the existence theorem.\footnote{It's worth spelling out exactly how one defines~$\mathcal{O}_n$ and~$\pi_n$, by more specific reference to the details of the Existence Theorem of \cite{Walsh2014ac} and in particular to the function $\theta_n$ defined therein. The simplest way is to take~$\mathcal{O}_n=\theta_n^{-1}(L_{\alpha_n})$ and to define~$\pi_n=\theta_n\upharpoonright \mathcal{O}_n$. Since~$\theta_n:\mathcal{F}_n\dashrightarrow L_{\alpha_n}$ is~$\utilde{\Sigma}_n^{L_{\alpha_n}}$-definable and~$\mathcal{F}_n$ is~$\utilde{\Sigma}_1^{L_{\alpha_n}}$-definable,~$\mathcal{O}_n$ will be~$\utilde{\Sigma}_n^{L_{\alpha_n}}$-definable, and the \emph{total} surjective map~$\pi_n: \mathcal{O}_n\rightarrow L_{\alpha_n}$ will be similarly definable. Because this map is total, the set~$\mathcal{O}_n\setminus \pi_n^{-1}(L_{\alpha_n})$ is empty and hence trivially~$\utilde{\Sigma}_n^{L_{\alpha_n}}$-definable. Since~$\pi_n$ is designed to provide the interpretation of~$\Delta_{a}$ for each type~$a$ of degree~$n$ (cf. subsequent discussion circa equation~(\ref{eqn:RM:defnpres})), this interpretation clashes with the intended interpretation of the presentation functions, on which they would be partial. To reinstitute partiality, choose any subset~$\mathcal{P}_n\subseteq \mathcal{F}_n\setminus \theta^{-1}_n(L_{\alpha_n})$ which is~$\utilde{\Sigma}_n^{L_{\alpha_n}}$-definable and then define~$\mathcal{O}^{\prime}_n=\theta_n^{-1}(L_{\alpha_n}) \cup \mathcal{P}_n$ and~$\pi_n^{\prime}=\theta_n\upharpoonright \mathcal{O}^{\prime}_n$, making sure to build the parameters defining~$\mathcal{P}_n$ into~$\nu_n$. For instance, one could take~$\mathcal{P}_n$ to be any finite subset of~$\mathcal{F}_n\setminus \theta^{-1}_n(L_{\alpha_n})$.} Each intensional hierarchy~$D$ naturally gives rise to a model of Church's core system~(\ref{eqn:RM:coresystem}).
In particular, we assign types to domains as follows: \begin{myequation}\label{eqn:defn:RM:typestodomains}\index{Type assignment~$a\mapsto D_a$ (\ref{eqn:defn:RM:typestodomains})} D_e = \alpha_0,\hspace{5mm} D_t=\{0,1\},\hspace{5mm} D_{ab}={D_b}^{D_a} \cap L_{\alpha_{\|ab\|}},\hspace{5mm} D_{a^{\prime}} = \mathcal{O}_{\|a\|} \end{myequation} In this, recall that~$a\mapsto \|a\|$ is the degree of the type~$a$, as defined in~(\ref{eqn:RM:degtypinitial}). So the parallel to the usual semantics for extensional type theory becomes vivid. In particular, whereas these usual semantics employ the cumulative hierarchy to assign domains to types, here our semantics for our intensional type theory uses the constructible hierarchy to assign domains to types. For instance, instead of assigning~$ab$ the set~$D_b^{D_a}=\{f:D_a\rightarrow D_b\}$, we only assign it those elements of this set which are in the constructible hierarchy at an appropriate level. So we only put a higher-order entity of type~$ab$ in the range of the higher-order quantifiers when it has entered a level of the constructible hierarchy which is coordinated with the degree of the type~$ab$. Having defined intensional hierarchies, our next goal is to say how to interpret the extensional application symbols~$(f,x)\mapsto f(x)$, the presentation symbols~$\Delta_a$, and the intensional application symbols~$(f^{\prime}, x^{\prime})\mapsto f^{\prime}\langle x^{\prime}\rangle$ on intensional hierarchies. In providing these interpretations, we shall be associating each intensional hierarchy~$D$ with an \emph{intensional structure}~$\mathbb{D}$\index{Intensional Structure~$\mathbb{D}$} augmented by these interpretations. Further, as we go along, we shall also show that various axioms are true on these intensional structures. However, prior to doing this, we need to state the following elementary result about how the domains~$D_a$ of an intensional hierarchy relate to the sets~$L_{\alpha_n}$: \begin{myenumerate} \item\label{prop:location:domain} (Proposition on the Location of Domains) For all~$n\geq 1$, both of the following hold: \begin{itemize} \item[] (I) for all types~$a$ with~$\|a\|<n$, there is a~$\Sigma_1$-formula in parameter~$\mu_n$ such that~$D_a$ is the unique element of~$L_{\alpha_n}$ which satisfies this formula, wherein~$\mu_{n}$ is defined by~$\mu_{n}= \langle \nu_1, \ldots, \nu_{n}, \alpha_0, \alpha_1, \ldots, \alpha_{n-1}\rangle$. \item[] (II) for all types~$a$ with~$\|a\|= n$, the set~$D_a$ is a~$\utilde{\Sigma}_{\ell_{n}}$-definable subset of~$L_{\alpha_{n}}$ in parameter~$\mu_{n}$. \end{itemize} \end{myenumerate} \noindent For a proof, see Appendix~1 \S\ref{app1}. This result is important because it tells us that the domain~$D_a$ is a subset of~$L_{\alpha_{\|a\|}}$, so that we can locate the domain~$D_a$ amongst the levels of the constructible hierarchy by calculating the degree of the type~$a$. Further, from this we can deduce the following: \begin{myenumerate} \item (Proposition on Domain and Codomain of Projectum Witnesses) For all types~$a$, one has that the restriction~$\iota_{\|a\|}\upharpoonright D_a$ has domain~$D_a$ and codomain~$D_{a^{\prime}}$, i.e.~$\iota_{\|a\|}\upharpoonright D_a: D_{a}\rightarrow D_{a^{\prime}}$.\label{eqn:RM:domcodomproj}\index{Proposition on Domain and Codomain of Projectum Witnesses (\ref{eqn:RM:domcodomproj})} \end{myenumerate} To see this, let~$n=\|a\|$.
By the Proposition on the Location of Domains,~$D_a$ is a subset of the domain~$L_{\alpha_n}$ of the injection~$\iota_n: L_{\alpha_n}\rightarrow \alpha_0$. Hence the restriction notation~$\iota_{\|a\|}\upharpoonright D_a$ makes good sense. Suppose now that~$x$ is a member of~$D_a$ and set~$y=\iota_{\|a\|}(x)$. Since~$\pi_n \circ \iota_n$ is the identity function on~$L_{\alpha_n}$, we have that~$y$ is in the domain of~$\pi_{\|a\|}$, which by definition is a subset of~$\mathcal{O}_{\|a\|}=D_{a^{\prime}}$. This, in any case, is the elementary argument which establishes the Proposition on Domain and Codomain of Projectum Witnesses~(\ref{eqn:RM:domcodomproj}). Given an intensional hierarchy~$D$, there is a natural interpretation of the presentation symbols~$\Delta_a$ such that the Typed Sense Determines Reference Axiom~(\ref{eqn:RM:sdr2}) is true on the induced intensional structure~$\mathbb{D}$. In particular, the presentation functional~$\Delta_a$ is interpreted on an intensional hierarchy~$D$ as the binary relation on~$D_{a^{\prime}}\times D_a$ defined by \begin{myequation}\label{eqn:RM:defnpres} \Delta_a (f^{\prime}, f) \Longleftrightarrow \pi_{\|a\|}(f^{\prime})=f \end{myequation} That is,~$\Delta_a$ is interpreted as the graph of~$\pi_{\|a\|}$ restricted to~$D_{a^{\prime}}\times D_a$. This definition makes good sense. For, suppose that~$f^{\prime}$ is from~$D_{a^{\prime}} = \mathcal{O}_{\|a\|}$ and~$f$ is from~$D_a$. Then by the Proposition on Location of Domains~(\ref{prop:location:domain}), we have that~$f\in D_a\subseteq L_{\alpha_{\|a\|}}$ and by definition~$\pi_{\|a\|}:\mathcal{O}_{\|a\|}\dashrightarrow L_{\alpha_{\|a\|}}$. We just showed how to expand an intensional hierarchy~$D$ to an intensional structure~$\mathbb{D}$ which has an interpretation of the presentation symbols. Now let us verify that the Typed Sense Determines Reference Axiom~(\ref{eqn:RM:sdr2}) is true on the intensional structure~$\mathbb{D}$. For ease of readability, we reproduce this axiom here: \begin{itemize} \item[(\ref{eqn:RM:sdr2})] \emph{Typed Sense Determines Reference}:~$(\Delta_a(f^{\prime},f) \; \& \; \Delta_a(f^{\prime},g)) \Longrightarrow f=g$ \end{itemize} Suppose that~$\Delta_a(f^{\prime},f)$ and~$\Delta_a(f^{\prime},g)$. Then by the definition in equation~(\ref{eqn:RM:defnpres}), we have that~$\pi_{\|a\|}(f^{\prime})=f$ and~$\pi_{\|a\|}(f^{\prime})=g$. Since~$\pi_{\|a\|}:\mathcal{O}_{\|a\|}\dashrightarrow L_{\alpha_{\|a\|}}$ is a partial function, it then follows that~$f=g$. Since the Typed Sense Determines Reference Axiom~(\ref{eqn:RM:sdr2}) comes out true on this interpretation of the presentation symbols, we have that the presentation symbol is functional. Just as when working in the object language of Church's core system~(\ref{eqn:RM:coresystem}), instead of writing~$\Delta_a(f^{\prime},f)$, we shall write~$\Delta_a(f^{\prime})=f$. Likewise, we shall write~$\Delta_a(f^{\prime})\hspace{-1mm}\downarrow$ to indicate that there is~$f$ such that~$\Delta_a(f^{\prime})=f$ (cf. discussion subsequent to~(\ref{eqn:RM:sdr}) in \S\ref{sec:RM:02}). It remains to indicate the interpretation of the extensional application symbols~$(f,x)\mapsto f(x)$ and the intensional application symbols~$(f^{\prime}, x^{\prime})\mapsto f^{\prime}\langle x^{\prime}\rangle$. The extensional application symbols are comparatively straightforward: these are interpreted as the function from~$D_{ab}\times D_a$ to~$D_b$ given by the notion of extensional application from the metatheory.
This makes sense because, per the definition of~$D_{ab}$ in equation~(\ref{eqn:defn:RM:typestodomains}), every element of~$D_{ab}$ is a function~$f:D_{a}\rightarrow D_b$. It's perhaps worth underscoring that for each pair of types~$a,b$, there is a separate extensional application symbol in the signature of Church's core system~(\ref{eqn:RM:coresystem}). We can usually ignore this since their interpretation is uniform. But, in what follows, if we need to explicitly display the types of an extensional application symbol, we shall write~$\mathrm{e\mbox{-}app}_{ab}(f,x)$ instead of~$f(x)$ for the extensional application symbols.\index{Symbol, Extensional Application, Typed~$\mathrm{e\mbox{-}app}_{ab}(f,x)$} Likewise, we shall sometimes write~$\mathrm{i\mbox{-}app}_{ab}(f^{\prime},x^{\prime})$ instead of~$f^{\prime}\langle x^{\prime}\rangle$ for the intensional application symbols, again to highlight the fact that there is one of these symbols for each pair of types~$a,b$.\index{Symbol, Intensional Application, Typed~$\mathrm{i\mbox{-}app}_{ab}(f^{\prime},x^{\prime})$} We interpret these symbols on an intensional hierarchy as follows: \begin{myequation}\label{eqn:RM:defn:intensional} \mathrm{i\mbox{-}app}_{ab}(f^{\prime},x^{\prime}) = f^{\prime}\langle x^{\prime}\rangle=\iota_{\|b\|} ((\Delta_{ab} f^{\prime}) (\Delta_a x^{\prime})) \end{myequation} From what we know about the interpretation of the presentation functions and the result on the Domain and Codomain of the Projectum Witnesses~(\ref{eqn:RM:domcodomproj}), we see that the intensional application function is a partial function~$\mathrm{i\mbox{-}app}_{ab}:D_{(ab)^{\prime}}\times D_{a^{\prime}}\dashrightarrow D_{b^{\prime}}$. As with the discussion of the partial presentation functions, technically in the formal system we shall identify the intensional application function with its graph, which is a ternary relation on~$D_{(ab)^{\prime}}\times D_{a^{\prime}}\times D_{b^{\prime}}$. As with presentation symbols, when we write~$\mathrm{i\mbox{-}app}_{ab}(f^{\prime},x^{\prime})$ all by itself, it is assumed that this is defined. Now, let's show that the Typed Composition Axiom~(\ref{eqn:RM:cca2}) comes out true on this interpretation. For the ease of readability, we reproduce this axiom here: \begin{itemize} \item[(\ref{eqn:RM:cca2})] \emph{Typed Composition}:~$[\Delta_{ab}(f^{\prime})=f \; \& \; \Delta_a(x^{\prime})=x] \Longrightarrow \Delta_b(f^{\prime}\langle x^{\prime}\rangle) = f(x)$ \end{itemize} Suppose that~$\Delta_{ab}(f^{\prime})=f$ and~$\Delta_a(x^{\prime})=x$. Then by its definition in equation~(\ref{eqn:RM:defn:intensional}), we see that~$f^{\prime}\langle x^{\prime}\rangle$ is defined. Then we may evaluate the term~$\Delta_b(f^{\prime}\langle x^{\prime}\rangle)$ as follows: \begin{myequation} \Delta_b (\iota_{\|b\|} ((\Delta_{ab} f^{\prime}) (\Delta_a x^{\prime})))= \Delta_b (\iota_{\|b\|} (f(x))) =(\pi_{\|b\|} \circ \iota_{\|b\|}) (f(x)) = f(x) \end{myequation} where the last equality follows from the fact that~$\pi_{\|b\|}\circ \iota_{\|b\|}$ is the identity function on the set~$L_{\alpha_{\|b\|}}$ (cf. clause~(v) in the definition of an intensional position~(\ref{eqn:RM:defn:intensional:position})). This is why the Typed Composition Axiom~(\ref{eqn:RM:cca2}) comes out true on intensional structures. Finally, let's note why the Surjectivity Axiom~(\ref{eqn:RM:SM}) and the Senses are Objects Axiom~(\ref{eqn:RM:SO}) are rather trivially true on intensional structures. 
As for surjectivity, suppose that~$f$ is an element of domain~$D_a$. Again, by the Proposition on Location of Domains~(\ref{prop:location:domain}), we have that~$D_a$ is a subset of~$L_{\alpha_{\|a\|}}$. Since~$\pi_{\|a\|}:\mathcal{O}_{\|a\|}\dashrightarrow L_{\alpha_{\|a\|}}$ is partial surjective, choose~$f^{\prime}$ from~$\mathcal{O}_{\|a\|}$ such that~$\pi_{\|a\|}(f^{\prime})=f$. Then since we have the identity~$D_{a^{\prime}}=\mathcal{O}_{\|a\|}$ (cf. equation~(\ref{eqn:defn:RM:typestodomains})) and since~$\Delta_a$ is interpreted as the graph of~$\pi_{\|a\|}$ restricted to~$D_{a^{\prime}}\times D_a$ (cf. equation~(\ref{eqn:RM:defnpres})), we have that~$\Delta_a(f^{\prime})=f$. This is why the Surjectivity Axiom~(\ref{eqn:RM:SM}) comes out true on intensional structures. As for the Senses are Objects Axiom~(\ref{eqn:RM:SO}), suppose that~$a$ is a type. By definition, we have the identities~$D_e=\alpha_0$ and~$D_{a^{\prime}}=\mathcal{O}_{\|a\|}$ (cf. equation~(\ref{eqn:defn:RM:typestodomains})), and by the definition of an intensional position we have that~$\mathcal{O}_{\|a\|}\subseteq \alpha_0$ (cf. part~(iv) of (\ref{eqn:RM:defn:intensional:position})). Hence, on intensional structures, it is indeed the case that every sense or intension is identical to an object. The Propositions as Fine-Grained as Objects Axiom~(\ref{eqn:RM:derepropsaxiom}) requires the following definition: \begin{myenumerate} \item An intensional position~$\mathfrak{p}=(\alpha, \ell, \iota, \mathcal{O}, \pi, \nu)$ is \emph{expressive} if there is an injection~$\chi:\alpha_0\rightarrow \pi^{-1}(\{0,1\})$ whose graph is an element of~$L_{\alpha}$. An intensional hierarchy is \emph{expressive} if each position in it is expressive.\index{Intensional Position, Expressive (\ref{intensionalpsotiionex}) }\label{intensionalpsotiionex} \end{myenumerate} There are expressive intensional hierarchies (cf. the existence theorem in \cite{Walsh2014ac}), and any expressive intensional hierarchy models the Propositions as Fine-Grained as Objects Axiom~(\ref{eqn:RM:derepropsaxiom}). In particular, take the injection~$\chi_1: \alpha_0\rightarrow \pi^{-1}_1(\{0,1\})$. Since~$D_e=\alpha_0$ and~$D_t=\{0,1\}$ and~$D_{t^{\prime}}=\mathcal{O}_{1}$ (cf. equation~(\ref{eqn:defn:RM:typestodomains})), it follows that~$\pi^{-1}_1(\{0,1\})\subseteq \mathcal{O}_{1}=D_{t^{\prime}}$, so that~$\chi_1: D_e\rightarrow D_{t^{\prime}}$ is an injection. Moreover, this injection also maps objects to propositions which present a truth-value. For, note that~$\Delta_{t}(\chi_1(x))$ is defined for each~$x$ from~$D_e$ since~$\chi_1(x)\in \pi^{-1}_1(\{0,1\})$. Hence expressive intensional structures model the Propositions as Fine-Grained as Objects Axiom~(\ref{eqn:RM:derepropsaxiom}) in an interesting way since we may inject objects into propositions that actually succeed in presenting truth-values. To finish the proof of the Predicative Consistency Theorem~(\ref{eqn:RM:predconthm}), it remains to establish that intensional structures are indeed models of the predicative versions of comprehension: \begin{myenumerate} \item (Theorem on Consistency of Predicative Comprehension) For every intensional hierarchy~$D$~(\ref{eqn:RM:defn:intensional:hierarchy}), the associated intensional structure~$\mathbb{D}$ models each instance of the Predicative Typed Choice Schema~(\ref{eqn:RM:compschem4}) and hence each instance of the Predicative Typed Comprehension Schema~(\ref{eqn:RM:predcompschem}). 
\label{thM:RM:big}\index{Theorem on Consistency of Predicative Comprehension (\ref{thM:RM:big})} \end{myenumerate} The proof of this result, being comparatively technical in nature, is given in Appendix 2 \S\ref{app2}. With it in hand, the proof of the Predicative Consistency Theorem~(\ref{eqn:RM:predconthm}) is finished. In this section we have described models of certain extensions of Church's Core System~(\ref{eqn:RM:coresystem}), and before closing this section it's worth dwelling on one feature of these models related to the Senses are Objects Axiom~(\ref{eqn:RM:SO}). This axiom requires that there be non-trivial identities between different types. While perhaps obvious, it's worth underscoring how this is effected. Formally, one simply makes identity an untyped binary relation in the definition of well-formed formulas, as is not uncommon in many-sorted logics (cf. \cite{Manzano1996} p. 229, \cite{Feferman1968} p. 16~footnote~10). On this approach, and hence in the models described in this section, identity is simply interpreted as the usual identity relation from the ambient metatheory. One immediate consequence of this approach is that it is only items of syntax such as variables and terms which have a unique type, whereas elements of a domain of a model can have more than one type. This happens more often than one might initially suspect. For instance, the standard semantics for second-order logic is routinely formalized in a many-sorted setting wherein models are given by a pair $(M,P(M))$ wherein the non-empty set $M$ serves as the interpretation of the first-order variables and its powerset $P(M)$ serves as the interpretation of the second-order variables. But if $M$ is a transitive set such as an ordinal, then $M$ is a subset of $P(M)$ and so the two are not at all disjoint. The fact that any finite ordinal is both a first-order object and a second-order object in the standard model of second-order arithmetic $(\omega, P(\omega))$ has never engendered any confusion. Similarly, while the Senses are Objects Axiom~(\ref{eqn:RM:SO}) may be objectionable on purely philosophical grounds, the non-trivial identities between types inherent in it pose no problems for the model theory of such typed systems. \section{Church's Other Axiom and Gallin's Intensional Logic}\label{sec:RM:06} Church included another axiom in his own formulation which we have omitted in our original description of his intensional logic in~\S\ref{sec:RM:02}. The strongest version of this axiom is the following, where recall that the ``downarrow'' notation~$\downarrow$ indicates that the partial function is defined on that value (cf. discussion immediately after~(\ref{eqn:RM:sdr}) in \S\ref{sec:RM:02}): \begin{myenumerate} \item \emph{Iterative Axiom}:~$\forall \; f^{\prime}, g^{\prime}\; [\Delta_{ab}(f^{\prime})\hspace{-1mm}\downarrow\neq \Delta_{ab}(g^{\prime})\hspace{-1mm}\downarrow] \rightarrow$ \item[] \hspace{30mm}~$[\exists \; x^{\prime}, x \; (\Delta_a(x^{\prime})=x \; \& \; \Delta_b(f^{\prime}\langle x^{\prime}\rangle) \neq \Delta_b(g^{\prime}\langle x^{\prime}\rangle))]$ \label{eqn:LSD16preface}\index{Iterative Axiom (\ref{eqn:LSD16preface})} \end{myenumerate} The motivation for this axiom is less obvious, and Church was less explicit on this subject. In my view, the best way to conceive of the motivation is as being expressive of a priority of lower-order senses over higher-order senses.
The idea is that a canonical way to discern a difference between the \emph{presentations} of higher-order senses is via a difference at the level of the \emph{presentations} of propositions. So one knows that the sense of ``wise'' presents a different concept than the sense of ``courageous'' in part because one knows that, say, the sense of ``Zeno is wise'' presents the true while the sense of ``Zeno has courage'' presents the false. In Church's papers, this axiom was rather expressed contrapositively as follows (cf. Church's Axiom 16 \cite{Church1951ab} p.~19, \cite{Klement2002aa} pp.~108-109, \cite{Anderson1980aa} pp. 219, 224~ff): \begin{myequation} [(\forall \; x,x^{\prime} (\Delta_a(x^{\prime})=x \Rightarrow \Delta_{b}(f^{\prime}\langle x^{\prime}\rangle) = f(x))) \; \& \; \Delta_{ab}(f^{\prime})\hspace{-1mm}\downarrow] \Longrightarrow \Delta_{ab}(f^{\prime})=f\label{eqn:LSD16preface2} \end{myequation} In the presence of the Surjectivity Axiom~(\ref{eqn:RM:SM}) and the other axioms of Church's core system~(\ref{eqn:RM:coresystem}), this version follows deductively from the Iterative Axiom~(\ref{eqn:LSD16preface}). To show this, suppose that the antecedent of~(\ref{eqn:LSD16preface2}) holds but the consequent fails, so that~$\Delta_{ab}(f^{\prime})\hspace{-1mm}\downarrow\neq\hspace{-1mm}f$. By the Surjectivity Axiom~(\ref{eqn:RM:SM}), choose~$g^{\prime}$ of type~$(ab)^{\prime}$ such that~$\Delta_{ab}(g^{\prime})=f$. Then~$\Delta_{ab}(f^{\prime})\hspace{-1mm}\downarrow\neq \Delta_{ab}(g^{\prime})\hspace{-1mm}\downarrow$. Then by the Iterative Axiom~(\ref{eqn:LSD16preface}), we have that there are~$x^{\prime}, x$ such that~$\Delta_a(x^{\prime})=x$ and~$\Delta_b(f^{\prime}\langle x^{\prime}\rangle)\neq \Delta_b(g^{\prime}\langle x^{\prime}\rangle)$. By the Typed Composition Axiom~(\ref{eqn:RM:cca2}), we then have that \begin{myequation} \Delta_b(f^{\prime}\langle x^{\prime}\rangle)\neq \Delta_b(g^{\prime}\langle x^{\prime}\rangle) = (\Delta_{ab}(g^{\prime}))(\Delta_a(x^{\prime})) = f(x) \end{myequation} which contradicts the hypothesis that the antecedent of~(\ref{eqn:LSD16preface2}) is satisfied. This is the sense in which the Iterative Axiom~(\ref{eqn:LSD16preface}) generalizes Church's own axiom in~(\ref{eqn:LSD16preface2}). It turns out that the Iterative Axiom~(\ref{eqn:LSD16preface}) (and thus also~(\ref{eqn:LSD16preface2})) is true on the models that we have constructed in the previous section. This is not an accident but rather follows from some additional resources that one has available in these models, resources which allow one to interpret a fragment of Gallin's intensional logic. In particular, let us expand Church's core system~(\ref{eqn:RM:coresystem}) with a new function symbol~$\nabla_a$\index{Symbol, Representation Function~$\nabla_a$} for each type~$a$, called the \emph{representation function}, which takes an entity of type~$a$ and returns an entity of type~$a^{\prime}$. Intuitively, the idea is that the representation function~$\nabla_a$ takes an extension~$f$ of type~$a$ and returns an intension~$f^{\prime}$ of type~$a^{\prime}$ which presents~$f$.
More formally we have the following axiom: \begin{myenumerate} \item \emph{Representation Axiom}: For each entity~$f$ of type~$a$, one has that~$\nabla_a(f)$ is an entity of type~$a^{\prime}$ such that~$\Delta_a(\nabla_a(f))=f$.\label{eqn:RM:PresentationRepresentation}\index{Representation Axiom (\ref{eqn:RM:PresentationRepresentation})} \end{myenumerate} Note that it follows from this that the representation function~$\nabla_a$ is an injection from entities of type~$a$ to entities of type~$a^{\prime}$. For, suppose that~$\nabla_a(f)=\nabla_a(g)$. Then by applying the presentation function to each side and by applying the Representation Axiom~(\ref{eqn:RM:PresentationRepresentation}) one has that \begin{myequation} f = \Delta_a(\nabla_a(f)) = \Delta_a(\nabla_a(g))=g \end{myequation} Finally, it's perhaps also worth explicitly mentioning that the Representation Axiom~(\ref{eqn:RM:PresentationRepresentation}) formally implies the Surjectivity Axiom~(\ref{eqn:RM:SM}). For the Surjectivity Axiom~(\ref{eqn:RM:SM}) says that each entity~$f$ of type~$a$ is presented by some intension~$f^{\prime}$ of type~$a^{\prime}$, and the Representation Axiom~(\ref{eqn:RM:PresentationRepresentation}) actually says that one can select the intension~$f^{\prime}$ to be equal to the representation~$\nabla_a(f)$. The models which we have constructed in the previous section admit a natural interpretation of the representation function on which the Representation Axiom~(\ref{eqn:RM:PresentationRepresentation}) comes out true. In particular, given an intensional hierarchy~$D$~(\ref{eqn:RM:defn:intensional:hierarchy}), we may interpret the representation function~$\nabla_a$ as the injection~$\iota_{\|a\|}$ which comes built into the intensional hierarchy (where again~$\|\cdot\|$ denotes the degree function~(\ref{eqn:RM:degtypinitial}) on types). The Representation Axiom~(\ref{eqn:RM:PresentationRepresentation}) comes out true on intensional structures simply because an intensional hierarchy was built around the idea that the interpretation of the representation function is a (right) inverse to the interpretation of the presentation functions (cf. clause~(v) in the definition of an intensional position~(\ref{eqn:RM:defn:intensional:position}) as well as the Proposition on Domain and Codomain of Projectum Witnesses~(\ref{eqn:RM:domcodomproj})). Further, as we verify in Appendix~2 \S\ref{app2}, the resulting intensional structure continues to model the predicative comprehension schemata~(\ref{eqn:RM:predcompschem})-(\ref{eqn:RM:compschem4}). The representation functions are relevant to Church's Iterative Axiom due to another axiom which holds true on the models from the last section, namely: \begin{myenumerate} \item \emph{Characterization of Intensional Application}:~$f^{\prime}\langle x^{\prime}\rangle = \nabla_b((\Delta_{ab}(f^{\prime}))(\Delta_a(x^{\prime})))$ \label{eqn:RM:charintfuncapp}\index{Characterization of Intensional Application (\ref{eqn:RM:charintfuncapp})} \end{myenumerate} \vspace{-3mm} One can easily check that this axiom comes out true on intensional structures by glancing at how intensional application was defined in equation~(\ref{eqn:RM:defn:intensional}). This axiom just brings into the object language what was implicit in our constructions in the previous section.
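Spelled out, the check is a one-line comparison of the two defining equations; given that the representation function~$\nabla_b$ is interpreted as the injection~$\iota_{\|b\|}$, we have
\begin{myequation}
\mathrm{i\mbox{-}app}_{ab}(f^{\prime},x^{\prime}) = \iota_{\|b\|} ((\Delta_{ab} f^{\prime}) (\Delta_a x^{\prime})) = \nabla_b((\Delta_{ab}(f^{\prime}))(\Delta_a(x^{\prime})))
\end{myequation}
where the first identity is the definition of intensional application in equation~(\ref{eqn:RM:defn:intensional}) and the second simply rewrites~$\iota_{\|b\|}$ as the interpretation of~$\nabla_b$.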
Intuitively, what this axiom is saying is that intensional application of a functional intension to an intension simply consists in figuring out what extension is presented by each, performing extensional application on these referents, and then going back to an intension via the representation function. Given this axiom, intensional application can be defined in terms of extensional application and the representation and presentation functions. Before turning to the connection between the representation function and Church's Iterative Axiom~(\ref{eqn:LSD16preface}), let's note that the axioms governing the representation function allow us to capture a fragment of Gallin's intensional logic~$IL$ (cf. \cite{Gallin1975aa} Chapter 1). Gallin's work can be seen as an attempt to axiomatize Montague grammar (\cite{Dowty1981aa} Chapters 6-8, \cite{Gamut1991aa} Chapter 6). Montague grammar in turn can be viewed as an attempt to develop a logic motivated by possible worlds semantics in which one can distinguish between the intension and the extension of a given expression, while at the same time not having to actually quantify over possible worlds in the object-language (cf. \cite{Gallin1975aa} p. 58, \cite{Dowty1981aa} p. 161). To this end, Montague articulated a type system-- now familiar to us-- in which for any type~$a$ there is a type~$sa$ which in the standard model theory is interpreted as functions from worlds to entities of type~$a$. Montague then postulated that for every well-formed expression~$f$ of type~$a$ there is an intension~\small~$\widehat{f}$ \normalsize of type~$sa$ and for every well-formed expression~$f^{\prime}$ of type~$sa$ there is an extension~\footnotesize$\widecheck{f^{\prime}}$\normalsize \;of type~$a$. Gallin's later axiomatization can be seen as an attempt to determine what is true on all the models described by Montague. Some of the crucial axioms that Gallin set out in this intensional logic were the following (cf. \cite{Gallin1975aa} p. 19): \begin{myenumerate} \item \emph{Axiom A2}:~$\forall \; x, y \; [\widehat{x} = \widehat{y} \Longrightarrow \widehat{f(x)}=\widehat{f(y)}]$ \item \emph{Axiom A3}:~$[\forall \; x \; (\widehat{f(x)}=\widehat{g(x)})]\Longrightarrow \widehat{f}=\widehat{g}$ \item \emph{Axiom AS6}:~$\widecheck{\;\widehat{f}\;\;}=f$ \end{myenumerate} If we interpret the type~$sa$ as the type~$a^{\prime}$, interpret~$\widehat{f}$ as~$\nabla_a(f)$ when~$f$ is of type~$a$, and interpret~$\widecheck{f^{\prime}}$ as~$\Delta_a(f^{\prime})$ when~$f^{\prime}$ is of type~$a^{\prime}$, then we can easily deduce these three axioms. Hence, the system of Church's intensional logic expanded with the resources of the representation function interprets a fragment of Gallin's intensional logic. One can see this observation as a partial converse to Kaplan's aforementioned possible-worlds model of Church's core system (cf. circa~(\ref{eqn:kapalandadsfas})). The other axioms of Gallin's intensional logic concern modal notions and lambda-terms. The system developed in this paper has little to say about them because on the one hand it is not a modal system, and on the other hand lambda-terms, as is well known, have the force of effecting the satisfaction of the full Typed Comprehension Schema~(\ref{eqn:RM:compschem}), which we do not have in our predicative setting.
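Returning to the three axioms displayed above, here for the record is a quick sketch of their deductions under the given interpretation; each step uses only the Representation Axiom~(\ref{eqn:RM:PresentationRepresentation}) and the injectivity of the representation functions noted above. For \emph{Axiom AS6}, the translation of~$\widecheck{\;\widehat{f}\;\;}=f$ is~$\Delta_a(\nabla_a(f))=f$, which is the Representation Axiom itself. For \emph{Axiom A2}, if~$\nabla_a(x)=\nabla_a(y)$, then~$x=y$ by the injectivity of~$\nabla_a$, and hence
\begin{myequation}
\widehat{f(x)} = \nabla_b(f(x)) = \nabla_b(f(y)) = \widehat{f(y)}
\end{myequation}
For \emph{Axiom A3}, if~$\nabla_b(f(x))=\nabla_b(g(x))$ for all~$x$, then~$f(x)=g(x)$ for all~$x$ by the injectivity of~$\nabla_b$, so that~$f=g$ by the extensionality of the functional types (cf.~(\ref{extensionalidentityfunctions})) and thus~$\nabla_{ab}(f)=\nabla_{ab}(g)$, i.e.~$\widehat{f}=\widehat{g}$.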
In addition to providing for this interpretation of a fragment of Gallin's intensional logic, the axioms pertaining to the representation function are of interest because they deductively entail Church's Iterative Axiom.\footnote{To see this, suppose that the antecedent of the Iterative Axiom~(\ref{eqn:LSD16preface}) held, so that~$\Delta_{ab}(f^{\prime})$ was defined and not equal to~$\Delta_{ab}(g^{\prime})$. Then let~$f=\Delta_{ab}(f^{\prime})$ and let~$g=\Delta_{ab}(g^{\prime})$. Since~$f,g$ are functional entities of type~$ab$, it must be the case that they differ on some value (cf. (\ref{extensionalidentityfunctions})), so that there is an object~$x$ of type~$a$ such that~$f(x)\neq g(x)$. Now consider the representation~$x^{\prime}=\nabla_a(x)$ of this entity~$x$. Then by the Representation Axiom~(\ref{eqn:RM:PresentationRepresentation}), we have that~$x^{\prime}$ presents~$x$, or that~$\Delta_a(x^{\prime})=x$. Now, using the Characterization of Intensional Application~(\ref{eqn:RM:charintfuncapp}), let us quickly compute~$f^{\prime}\langle x^{\prime}\rangle$ and~$g^{\prime}\langle x^{\prime}\rangle$: \begin{myeqnarray} f^{\prime}\langle x^{\prime}\rangle=\nabla_b((\Delta_{ab}(f^{\prime}))(\Delta_a(x^{\prime}))) = \nabla_b (f(x)) \\ g^{\prime}\langle x^{\prime}\rangle=\nabla_b((\Delta_{ab}(g^{\prime}))(\Delta_a(x^{\prime}))) = \nabla_b (g(x)) \end{myeqnarray} Now, to finish the verification of the Iterative Axiom~(\ref{eqn:LSD16preface}), suppose for the sake of contradiction that~$\Delta_b(f^{\prime}\langle x^{\prime}\rangle)=\Delta_b(g^{\prime}\langle x^{\prime}\rangle)$. Then by the previous calculations, we see that~$\Delta_b(\nabla_b(f(x)))=\Delta_b(\nabla_b(g(x)))$. By the Representation Axiom~(\ref{eqn:RM:PresentationRepresentation}), it then follows that~$f(x)=g(x)$, contrary to hypothesis. This is why the axioms pertaining to the representation function deductively imply the Iterative Axiom~(\ref{eqn:LSD16preface}).} But in the literature on Church's intensional logic, the ideas behind the Iterative Axiom~(\ref{eqn:LSD16preface}) and the associated principle~(\ref{eqn:LSD16preface2}) have been criticized by Parsons and Klement (\cite{Parsons2001aa} p. 517, \cite{Klement2010aa} pp. 165-166). As Anderson later put it, the general concern is that the Iterative Axiom~(\ref{eqn:LSD16preface}) is ``really quite at odds with the heuristic ideas'' of Church's intensional logic, namely the formalization of fine-grained meanings (\cite{Anderson1998aa} p. 161). One way to see the nature of this concern is to adopt the richer perspective where we have access to the representation function. For, the axioms governing the representation function have the following consequence: \begin{myenumerate} \item \emph{Characterization of Intensional Injectivity}: A function~$f$ of type~$ab$ is injective if and only if for any~$f^{\prime}$ of type~$(ab)^{\prime}$ such that~$\Delta_{ab}(f^{\prime})=f$, one has that \label{eqn:RM:CharacterizationofIntensionalInjectivity}~$f^{\prime}\langle x^{\prime}\rangle=f^{\prime}\langle y^{\prime}\rangle$ implies~$\Delta_a(x^{\prime})=\Delta_a(y^{\prime})$.
\index{Characterization of Intensional Injectivity (\ref{eqn:RM:CharacterizationofIntensionalInjectivity})} \end{myenumerate} The proof of this characterization is comparatively straightforward and so we relegate it to a footnote.\footnote{First suppose that~$f$ is injective and that~$f^{\prime}$ presents~$f$ and that we have the identity~$f^{\prime}\langle x^{\prime}\rangle=f^{\prime}\langle y^{\prime}\rangle$. By the Characterization of Intensional Application~(\ref{eqn:RM:charintfuncapp}), we then have the identity \begin{myequation} \nabla_b((\Delta_{ab}(f^{\prime}))(\Delta_a(x^{\prime}))) = \nabla_b((\Delta_{ab}(f^{\prime}))(\Delta_a(y^{\prime}))) \end{myequation} But since the representation function~$\nabla_b$ is an injection and since~$f^{\prime}$ presents~$f$, this reduces to the identity~$f(\Delta_a(x^{\prime})) = f(\Delta_a(y^{\prime}))$, and since~$f$ is an injection, we have~$\Delta_a(x^{\prime})=\Delta_a(y^{\prime})$, which is what we wanted to show. This completes the verification of the left-to-right direction of~(\ref{eqn:RM:CharacterizationofIntensionalInjectivity}). For the right-to-left direction of~(\ref{eqn:RM:CharacterizationofIntensionalInjectivity}), suppose that~$f$ satisfies the right-hand side of (\ref{eqn:RM:CharacterizationofIntensionalInjectivity}). Suppose for the sake of contradiction that~$f$ is not an injection, so that~$f(x)=f(y)$ but~$x\neq y$. Then let~$x^{\prime}=\nabla_a(x)$ and~$y^{\prime}=\nabla_a(y)$ and~$f^{\prime}=\nabla_{ab}(f)$, so that the Representation Axiom~(\ref{eqn:RM:PresentationRepresentation}) implies that~$x^{\prime}$ presents~$x$ and~$y^{\prime}$ presents~$y$ and~$f^{\prime}$ presents~$f$. Then we can expand the identity~$f(x)=f(y)$ to \begin{myequation} (\Delta_{ab} f^{\prime})(\Delta_a(x^{\prime})) = f(x) = f(y) = (\Delta_{ab} f^{\prime})(\Delta_a(y^{\prime})) \end{myequation} Then by applying the representation function~$\nabla_b$ to each side and appealing to the Characterization of Intensional Application~(\ref{eqn:RM:charintfuncapp}), we have that~$f^{\prime}\langle x^{\prime}\rangle = f^{\prime}\langle y^{\prime}\rangle$. Then by the hypothesis that~$f$ satisfies the right-hand side of (\ref{eqn:RM:CharacterizationofIntensionalInjectivity}), applied to this~$f^{\prime}$, we have that~$\Delta_a(x^{\prime})=\Delta_a(y^{\prime})$, and since~$x^{\prime}$ presents~$x$ and~$y^{\prime}$ presents~$y$, we have that~$x=y$, which contradicts the reductio assumption that~$x\neq y$. This completes the argument that the Characterization of Intensional Injectivity~(\ref{eqn:RM:CharacterizationofIntensionalInjectivity}) follows from our axioms governing the representation function.} The Parsons-Klement concern can be expressed as follows: this characterization grates against some natural intuitions that one might have about fine-grained meanings.\footnote{See in particular Klement~\cite{Klement2010aa} pp. 165-166. But this is only one aspect of the concern of Parsons and Klement. First, Parsons was most interested in the interaction of Church's other axiom~(\ref{eqn:LSD16preface2}) with senses which do not present any referent (\cite{Parsons2001aa} p. 517). Second, Klement was also concerned with unintuitive consequences of~(\ref{eqn:LSD16preface2}) related to the \emph{intent}ionality of senses (cf. \cite{Klement2010aa} p. 164). A deeper question raised by the work of Parsons and Klement is what analogues there are of the Typed Comprehension Schema~(\ref{eqn:RM:compschem}) for type~$(ab)^{\prime}$.
This is relevant because Parsons and Klement's counterexamples to Church's other axiom~(\ref{eqn:LSD16preface2}) are engendered by combining senses or intensions of type~$(ab)^{\prime}$ together in various ways, a procedure which would be most naturally warranted by a version of the comprehension schema for senses or intensions of type~$(ab)^{\prime}$.} Indeed, take any non-injective function that might occur naturally in language, like ``the father of.'' If we take~$a=e$ and~$b=e$ and think of all the objects as consisting of persons, then this is a non-injective function~$f$ of type~$ab$. Let's further assume for the sake of concreteness that intensions of persons are definite descriptions of some kind: \textsc{the~$\Phi$}, \textsc{the~$\Psi$}, etc. Then the Characterization of Intensional Injectivity~(\ref{eqn:RM:CharacterizationofIntensionalInjectivity}) implies that there is some sense~$\textsc{the father of}$ which presents the father-of function~$f$ and which is such that the intension \textsc{the father of }$\langle$\textsc{the~$\Phi$}$\rangle$ is the same qua intension as \textsc{the father of }$\langle$\textsc{the~$\Psi$}$\rangle$, despite the fact that the person who is the~$\Phi$ is not the same as the person who is the~$\Psi$. But consequences like this seem highly counterintuitive: one might rather have thought that if \textsc{the father of~$\langle$the best xylophone player$\rangle$} is the same qua intension as \textsc{the father of~$\langle$the best yazheng player$\rangle$}, then the best xylophone player is the best yazheng player. (The \emph{x}ylophone and the \emph{y}azheng are two musical instruments whose names start with the same letters as the variables used in (\ref{eqn:RM:CharacterizationofIntensionalInjectivity}).) However, what we now see is that if one accepts the Characterization of Intensional Application~(\ref{eqn:RM:charintfuncapp}), then one must accept consequences like this. For, on this characterization of intensional application, the intension associated to \textsc{the father of~$\langle$the best xylophone player$\rangle$} is \emph{not} the definite description which we normally associate to the linguistic expression ``the father of the best xylophone player.'' Rather, on this characterization of intensional application, the intension \textsc{the father of~$\langle$the best xylophone player$\rangle$} is the result of {\it intensionally} applying the intensional functional~\textsc{the father of} to the input of the intension \textsc{the best xylophone player}. Indeed, on the conception following from the Characterization of Intensional Application~(\ref{eqn:RM:charintfuncapp}), this is done first by figuring out who the father of the best xylophone player actually is-- perhaps it's Ted-- and then by figuring out what the representation of Ted is-- perhaps it is \textsc{the mayor of Montreal}. Now Ted might have two children, Alice and Bob, and it might turn out that Alice is the best xylophone player while Bob is the best yazheng player. On this conception, the intension associated to \textsc{the father of~$\langle$the best xylophone player$\rangle$} is identical to the intension associated to \textsc{the father of~$\langle$the best yazheng player$\rangle$} since both are identical to the intension \textsc{the mayor of Montreal}.
But in spite of this identity, the person presented by the intension \textsc{the best xylophone player} is Alice, who is distinct from her sibling Bob, who is presented by the intension \textsc{the best yazheng player}. Thus the Characterization of Intensional Application~(\ref{eqn:RM:charintfuncapp}) requires us to depart from some of the original ambitions of a fine-grained theory of intensions, on which intensional injectivity would presumably be the rule rather than the exception.\footnote{For instance, some of the systems of Church and Anderson explicitly included axioms for the injectivity of senses of functional expressions. See the axiom designated ``64'' in Church \cite{Church1974aa} p. 151 and Anderson \cite{Anderson1980aa} p. 222.} It is not presently obvious to us whether there is a proof of the Predicative Consistency Theorem~(\ref{eqn:RM:predconthm}) that would produce models which do not validate either the Characterization of Intensional Injectivity~(\ref{eqn:RM:CharacterizationofIntensionalInjectivity}) or the Iterative Axiom~(\ref{eqn:LSD16preface}). If one rejects Church's Axiom of Type Reduction~(\ref{eqn:RM:TRD}), then the most difficult part of any construction of a model of these systems is to provide an interpretation of the intensional application function~$(f^{\prime}, x^{\prime})\mapsto f^{\prime}\langle x^{\prime}\rangle$. All the constructions which we have come up with so far have involved the aforementioned characterization of intensional application~(\ref{eqn:RM:charintfuncapp}) and hence the Iterative Axiom~(\ref{eqn:LSD16preface}). \section{Wehmeier and the Problem of Many Non-Extensions}\label{sec:RM:07} In this paper, we've been primarily concerned with describing the predicative response to the Russell-Myhill paradox. However, as we've seen, predicativity constraints block the normal proof of the type-theoretic version of Cantor's Theorem~(\ref{eqn:RM:typetheoreticRP}). Since this theorem is closely related to Russell's paradox, it's natural to think that there is a connection between the predicative response to the Russell-Myhill paradox of propositions and consistent fragments of the so-called naive conception of set. To see this connection, let's note the precise way in which we can use the axioms introduced thus far to produce violations of the type-theoretic version of Cantor's Theorem~(\ref{eqn:RM:typetheoreticRP}). In particular, let's note why these axioms give us reason to endorse the following principle: \begin{myenumerate} \item There is an injection~$\partial$ from entities of type~$et$ to entities of type~$e$ such that for all~$f$ of type~$et$ there is~$f^{\prime}$ of type~$(et)^{\prime}$ satisfying~$\Delta_{et}(f^{\prime})=f$ and~$f^{\prime}=\partial(f)$. \label{eqn:32143214123} \end{myenumerate} One proof of this proceeds via the representation function~$\nabla_{et}$ introduced in the previous section (cf. circa~equation~(\ref{eqn:RM:PresentationRepresentation})).
For, one can use the Predicative Typed Comprehension Schema~(\ref{eqn:RM:predcompschem}) and the Senses are Objects Axiom~(\ref{eqn:RM:SO}) to define the map~$\partial$ as follows, where~$f$ is a variable of type~$et$,~$x$ is a variable of type~$e$, and~$f^{\prime}$ is a variable of type~$(et)^{\prime}$: \begin{myequation}\label{eqn:almostdone} \partial(f) = x \Longleftrightarrow \exists \; f^{\prime} \; (f^{\prime}=\nabla_{et}(f) \; \& \; f^{\prime}=x) \end{myequation} While the representation function~$\nabla_{et}$ is a function from entities of type~$et$ to entities of type~$(et)^{\prime}$, the~$\partial$ function is a function from entities of type~$et$ to entities of type~$e$. Hence~$\partial$ is an injection since, as we noted in the last section, the representation function~$\nabla_{et}$ is an injection. A second proof of~(\ref{eqn:32143214123}) proceeds by recourse to the Predicative Typed Choice Schema~(\ref{eqn:RM:compschem4}). For, by the Surjectivity Axiom~(\ref{eqn:RM:SM}) and the Senses are Objects Axiom~(\ref{eqn:RM:SO}), one has the following, where again~$f$ is a variable of type~$et$,~$x$ is a variable of type~$e$, and~$f^{\prime}$ is a variable of type~$(et)^{\prime}$: \begin{myequation} \forall \; f \; \exists \; x \; [\exists \; f^{\prime} \; (\Delta_{et}(f^{\prime})=f \; \& \; f^{\prime}=x)] \end{myequation} Then by the Predicative Typed Choice Schema~(\ref{eqn:RM:compschem4}), it follows that there is a function~$\partial$ of type~$(et)e$ such that \begin{myequation} \forall \; f \; [\exists \; f^{\prime} \; (\Delta_{et}(f^{\prime})=f \; \& \; f^{\prime}=\partial(f))] \end{myequation} Then we may argue that~$\partial$ is an injection. For suppose that~$\partial(f)=\partial(g)$. Then by the previous equation, there are~$f^{\prime}, g^{\prime}$ such that~$f^{\prime}=\partial(f)=\partial(g)=g^{\prime}$ and~$\Delta_{et}(f^{\prime})=f$ and~$\Delta_{et}(g^{\prime})=g$. Then since~$f^{\prime}=g^{\prime}$, we have that~$f=\Delta_{et}(f^{\prime})=\Delta_{et}(g^{\prime})=g$, so that the injectivity of~$\partial$ is thereby established. One of the most traditional versions of the naive conception of set is that found in Frege's \emph{Grundgesetze} (\cite{Frege1893}, \cite{Frege2013aa}). One of the crucial axioms of this system is Basic Law~V, which postulates the existence of an injection from concepts to objects, which we may call the \emph{extension operator}. Now, concepts can be identified with functions of type~$et$ (as we have had numerous occasions to observe in this paper, e.g. circa equation~(\ref{eqn:RM:ide})). What the previous paragraphs then show is that the expansions of Church's intensional logic which we have studied in this paper afford the resources to satisfy one key postulate of Frege's \emph{Grundgesetze}-- namely Basic Law~V-- along with fragments of the comprehension schema, such as the Predicative Typed Comprehension Schema~(\ref{eqn:RM:predcompschem}). This consistency result in and of itself is not new: versions of it were established by Parsons~\cite{Parsons1987a}, Heck~\cite{Heck1996}, and Ferreira-Wehmeier~\cite{Ferreira2002aa}, and it was the focus of parts of our earlier papers \cite{Walsh2012aa}, \cite{Walsh2014ac}.
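Before turning to what is new here, it may help to display Basic Law~V itself. Transcribed into the present typed notation, with concepts rendered as entities of type~$et$ and with~$\partial$ written for the extension operator, it is the following biconditional, whose left-to-right direction is precisely the injectivity just discussed:
\begin{myequation}
\forall \; f, g \; [\partial(f)=\partial(g) \leftrightarrow \forall \; x \; (f(x)=g(x))]
\end{myequation}
where~$f,g$ are variables of type~$et$ and~$x$ is a variable of type~$e$. (This transcription is offered only for orientation; the works just cited of course state the law in Frege's own notation.)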
However, the argument of the previous paragraphs is new in that it establishes the existence of a particular {\it species} of extension operator, which we might dub a {\it sense-selecting extension operator} and define formally as follows: \begin{myenumerate} \item A \emph{sense-selecting extension operator} is an injection~$\partial$ from entities of type~$et$ to entities of type~$e$ such that for all~$f$ of type~$et$ there is~$f^{\prime}$ of type~$(et)^{\prime}$ satisfying~$\Delta_{et}(f^{\prime})=f$ and~$f^{\prime}=\partial(f)$.\label{eqn:defn:sensebearing}\index{Sense-Selecting Extension Operator (\ref{eqn:defn:sensebearing})}\index{Symbol, Sense-Selecting Extension Operator~$\partial$} \end{myenumerate} Part of what is added by looking at Frege's naive conception of set as~embedded within a certain expansion of Church's intensional logic is that we have access to a particular kind of extension operator, one on which the extension of a concept is a sense of that concept.\footnote{However, it should be emphasized that the predicative response, as we have described it above in \S\ref{sec:RM:04}, is not necessarily committed to the existence of a sense-selecting extension operator. For, the two proofs from the above paragraphs used the representation operator~$\nabla_a$ from \S\ref{sec:RM:06} and the Predicative Typed Choice Schema~(\ref{eqn:RM:compschem4}). As stressed in \S\ref{sec:RM:04}, the philosophical motivations for the Predicative Typed Comprehension Schema~(\ref{eqn:RM:predcompschem0}) don't necessarily extend to the Predicative Typed Choice Schema~(\ref{eqn:RM:compschem4}); and it goes without saying that while the representation operator helps bring more of the model construction into the object-language, it too is not necessarily built into the predicative response to the Russell-Myhill paradox. Indeed, it is not even clear to me whether one can derive the existence of sense-selecting extension operators merely from the core of Church's system~(\ref{eqn:RM:coresystem}), the Surjectivity Axiom~(\ref{eqn:RM:SM}), the Senses are Objects Axiom~(\ref{eqn:RM:SO}), the Propositions as Fine-Grained as Objects Axiom~(\ref{eqn:RM:derepropsaxiom}), and the Predicative Typed Comprehension Schema~(\ref{eqn:RM:predcompschem0}). Thus the results of this section are only available to certain natural expansions of the predicative perspective by choice principles or by a representation operator. } Of course this general kind of maneuver is familiar from the literature on the philosophy of set theory. For instance, the stage axioms of Shoenfield~\cite{Shoenfield1961aa}, \cite{Shoenfield1967aa}, \cite{Shoenfield1977aa} and Boolos~\cite{Boolos1971} constitute an embedding of a fragment of Zermelo-Fraenkel set theory within a theory of ``collections formed in stages.'' This gave Shoenfield and Boolos additional resources by which to respond to the Quinean charge that this set theory was just ``wisdom after paradox,'' or just one of many ad-hoc responses to the paradoxes (\cite{Quine1986aa} p.~403, cf. \cite{Quine1984aa} p.~789, \cite{Quine1960aa} pp.~353-354, \cite{Quine1969aa} p.~5, \cite{Martin1970aa} pp.~111-112). Similarly, viewing Frege's set theory in the light of Church's intensional logic allows us to respond to a serious objection, due to Wehmeier, with these consistent fragments of the \emph{Grundgesetze}. 
One way to see one's way towards this objection is to observe that working within the framework of Church's intensional logic, we can show that sense-selecting extension operators have ranges which are indefinitely extensible in the sense of Russell and Dummett. To this end, let us first define some preliminary subset notation: \begin{myenumerate} \item \emph{Subset Notation}: If~$\Phi(x)$ is a formula in one free variable~$x$ of type~$a$ and~$h$ is an entity of type~$at$, let's say that~$h\subseteq \Phi$ if~$\forall \; x \; (h(x)=1 \rightarrow \Phi(x))$. Likewise, if~$g$ is also of type~$at$, let's say that~$h\subseteq g$ iff~$\forall \;x \; (h(x)=1 \rightarrow g(x)=1)$, and let us define~$h\subsetneq g$ as~$h\subseteq g \wedge \neg (g\subseteq h)$.\index{Symbol, Subset~$h\subseteq \Phi(x)$ (\ref{eqn:subsetneo})}\label{eqn:subsetneo} \end{myenumerate} Then we may define a formal version of indefinite extensibility as follows: \begin{myenumerate} \item A formula~$\Phi(x)$ in one free variable~$x$ of type~$a$ is \emph{formally indefinitely extensible} if for each~$h$ of type~$at$ with~$h\subseteq \Phi$ there is~$\widetilde{h}$ of type~$at$ such that~$h\subsetneq \widetilde{h}\subseteq \Phi$.\label{eqn:FormIndefExt}\index{Formally Indefinitely Extensible (\ref{eqn:FormIndefExt})} \end{myenumerate} Dummett, following Russell, expressed the idea of indefinite extensibility as follows: ``[a]n indefinitely extensible concept is one such that, if we can form a definite conception of a totality all of whose members fall under that concept, we can, by reference to that totality, characterize a larger totality all of whose members fall under it'' (\cite{Dummett1994aa} p. 22, \cite{Dummett1963aa} pp. 149-150, \cite{Dummett1978aa} pp. 195-196, \cite{Dummett1981aa} p. 533, \cite{Dummett1991aa} p. 316, cf. \cite{Russell1907aa} p. 36, \cite{Russell1973aa} p. 144). If one reads Dummett's use of ``concept'' as any formula~$\Phi(x)$ with a free object variable~$x$ and if one reads his ``definite concept'' as an entity of type~$et$, then there is a comparatively tight match between the formalization in~(\ref{eqn:FormIndefExt}) and Dummett's own formulation of indefinite extensibility.\footnote{That said, there are some differences. First, this formalization provides no insight into how, if at all,~$\widetilde{h}$ is provided ``by reference'' to~$h$. Second, on our explication of ``definite'', it will follow that the definite concepts are closed under boolean operations such as intersection, union, and complement. If one has the intuition that ``definite concepts'' should be small in some sense, one will resist the claim that definite concepts are closed under complementation. Finally, it should be noted that this general variety of formalization of indefinite extensibility is of course not new: see for instance Shapiro-Wright \cite{Shapiro2006ac} p. 266 and Priest \cite{Priest2013aa} pp. 1264-1265.} Let's now show that if~$\partial$ is a sense-selecting extension operator~(\ref{eqn:defn:sensebearing}) then the range~$\mathrm{rng}(\partial)$ of this extension operator is formally indefinitely extensible~(\ref{eqn:FormIndefExt}), where of course the range~$\mathrm{rng}(\partial)$ is the following formula with~$x$ a variable of type~$e$ and~$f$ a variable of type~$et$: \begin{myequation} (\mathrm{rng}(\partial))(x)\equiv \exists \; f \; \partial(f)=x \end{myequation} Fix~$h$ of type~$et$ such that~$h\subseteq \mathrm{rng}(\partial)$.
Then one may show the following, which intuitively says that~$\partial$ admits a partial inverse: \begin{myenumerate} \item There is a~$\gamma_h$ of type~$e(et)$ such that for every~$g$ of type~$et$ with~$h(\partial(g))=1$, it is the case that~$\gamma_h(\partial(g))=g$.\label{eqn:surjectiveproperty} \end{myenumerate} Since the argument for~(\ref{eqn:surjectiveproperty}) is routine, we relegate it to a footnote.\footnote{The first argument for~(\ref{eqn:surjectiveproperty}) employs the representation function. So suppose that the sense-selecting extension operator satisfies $\partial(f)=\nabla_{et}f$ as in equation~(\ref{eqn:almostdone}). Fix a parameter $q$ of type $et$. Then one has the following, wherein $x$ has type~$e$ and $f$ has type~$et$: \begin{myequation} \forall \; x \; \exists \; ! \; f \; [(h(x)=0 \; \& \; f=q) \vee (h(x)=1 \; \& \; \nabla_{et}(f)=x)] \end{myequation} Then by the Predicative Typed Comprehension Schema~(\ref{eqn:RM:predcompschem}), there is $\gamma_h$ of type $e(et)$ such that \begin{myequation} \forall \; x \; [(h(x)=0 \; \& \; \gamma_h(x)=q) \vee (h(x)=1 \; \& \; \nabla_{et}(\gamma_h(x))=x)] \end{myequation} To verify equation~(\ref{eqn:surjectiveproperty}), suppose that $h(\partial(g))=1$. Letting $x=\partial(g)$ we have that $h(x)=1$. Then $\nabla_{et}(\gamma_h(x))=x=\partial(g)=\nabla_{et}g$. Then $\gamma_h(x)=g$ and so $\gamma_h(\partial(g))=g$, which is what we wanted to show. The second argument for~(\ref{eqn:surjectiveproperty}) employs the Predicative Typed Choice Schema~(\ref{eqn:RM:compschem4}). Since~$h\subseteq \mathrm{rng}(\partial)$, we have that~$\forall \; x \; [h(x)=1\rightarrow (\exists \; f \; \partial(f)=x)]$. Then the definition of sense-selecting~(\ref{eqn:defn:sensebearing}) implies that~$\forall \; x \; [h(x)=1\rightarrow (\exists \; f \; \exists \; f^{\prime} \; \Delta_{et}(f^{\prime}) = f \wedge f^{\prime}=x)]$. Trivially we then have~$\forall \; x \; \exists \; f^{\prime} \; [h(x)=1\rightarrow f^{\prime}=x]$. Then we may apply the Predicative Typed Choice Schema~(\ref{eqn:RM:compschem4}) since the parameter~$h$ has type with degree~$2$ and the type~$e(et)^{\prime}$ has degree~$2$. Doing this we get an entity~$\beta_h$ of type~$e(et)^{\prime}$ such that~$\forall \; x \; [h(x)=1\rightarrow \beta_h(x)=x]$. Further, we claim that \begin{myequation}\label{eqn:mymy34} \forall \; x \; \exists \; f \; [h(x)=1\rightarrow \Delta_{et}(\beta_h(x))=f] \end{myequation} For, if~$h(x)=1$ then~$\partial(f)=x$ for some~$f$ of type~$et$ and hence~$\Delta_{et}(g^{\prime})=f$ and~$g^{\prime}=x$ for some~$g^{\prime}$ of type~$(et)^{\prime}$ by the definition of sense-selecting~(\ref{eqn:defn:sensebearing}). Then~$\beta_h(x)=x=g^{\prime}$ and so~$\Delta_{et}(\beta_h(x))=\Delta_{et}(g^{\prime})=f$. So indeed equation~(\ref{eqn:mymy34}) holds. Further, we may apply the Predicative Typed Choice Schema~(\ref{eqn:RM:compschem4}) to this equation since the parameters~$h,\beta_h$ have types with degree~$2$ and since~$e(et)$ likewise has degree~$2$. Then we obtain~$\gamma_h$ of type~$e(et)$ such that \begin{myequation}\label{eqn:mymy35} \forall \; x \; [h(x)=1\rightarrow \Delta_{et}(\beta_h(x))=\gamma_h(x)] \end{myequation} Let's now verify equation~(\ref{eqn:surjectiveproperty}). Suppose that~$g$ is of type~$et$ such that~$h(\partial(g))=1$. Let~$x=\partial(g)$, so that~$h(x)=1$.
By~$\partial(g)=x$, we obtain~$\Delta_{et}(g^{\prime})=g$ and~$g^{\prime}=x$ for some~$g^{\prime}$ of type~$(et)^{\prime}$ by the definition of sense-selecting~(\ref{eqn:defn:sensebearing}). Further by equation~(\ref{eqn:mymy35}), we have~$\Delta_{et}(\beta_h(x))=\gamma_h(x)$. Then~$\beta_h(x)=x=g^{\prime}$ and so~$g=\Delta_{et}(g^{\prime})=\Delta_{et}(\beta_h(x))=\gamma_h(x)=\gamma_h(\partial(g))$, which is what we wanted to show.} By the definition of degree in equation~(\ref{eqn:RM:degtypinitial}), note that~$\gamma_h$ has type with degree~$2$. Hence the following formula~$\varphi(x,h,\gamma_h)$, where~$x$ is a variable of type~$e$, contains only parameters of degree~$2$: \begin{myequation} \varphi(x,h,\gamma_h) \equiv (h(x)=1 \; \& \; (\gamma_h(x))(x)=0) \end{myequation} Then by the Predicative Concept Comprehension Schema~(\ref{eqn:RM:predcompschem0}), there is~$g_h$ of type~$et$ such that \begin{myequation} g_h(x) =1 \Longleftrightarrow (h(x)=1 \; \& \; (\gamma_h(x))(x)=0) \end{myequation} Then we claim that~$h(\partial(g_h))=0$. For, suppose not. Then let~$y=\partial(g_h)$ so that~$h(y)=1$. Then by the earlier result~(\ref{eqn:surjectiveproperty}) we have that~$\gamma_h(y)=\gamma_h(\partial(g_h))=g_h$. Then the above equation implies that \begin{myequation} g_h(y) =1 \Longleftrightarrow (\gamma_h(y))(y)=0 \Longleftrightarrow g_h(y)=0 \end{myequation} which is a contradiction. Hence indeed we have~$h(\partial(g_h))=0$. Now, let~$p=\partial(g_h)$, so that~$p$ is a parameter of type~$e$ with degree~$1$. Then consider the following formula which has only parameters of degree~$\leq{2}$: \begin{myequation} \psi(x,h,p) \equiv (h(x)=1 \vee x=p) \end{myequation} Then by the Predicative Concept Comprehension Schema~(\ref{eqn:RM:predcompschem0}), there is~$\widetilde{h}$ of type~$et$ such that \begin{myequation} \widetilde{h}(x) =1 \Longleftrightarrow (h(x)=1 \vee x=\partial(g_h)) \end{myequation} so that in terms of our subset notation (\ref{eqn:subsetneo}), we have~$h\subsetneq \widetilde{h} \subseteq \mathrm{rng}(\partial)$ which completes the verification that the range~$\mathrm{rng}(\partial)$ of a sense-selecting extension operator is formally indefinitely extensible~(\ref{eqn:FormIndefExt}). Let's call the objects falling within the range~$\mathrm{rng}(\partial)$ of an extension operator~$\partial$ the \emph{extensions}. In this terminology, Wehmeier's observation about his consistent fragments of the \emph{Grundgesetze} was that they required that there were infinitely many non-extensions (cf. \cite{Wehmeier1999} \S{4.2} pp.~326~ff, \cite{Wehmeier2004aa} \S{3} pp.~255~ff). Given the above discussion, one can see now that this follows deductively from the formal indefinite extensibility of the range~$\mathrm{rng}(\partial)$ of an extension operator~$\partial$. For, suppose that there were only finitely many objects which were non-extensions, enumerated as~$q_1, \ldots, q_k$.
Then consider the formula~$\theta(x,q_1, \ldots, q_k)$ with free variable~$x$ of type~$e$ and parameters~$q_1, \ldots, q_k$ of type~$e$ and hence degree~$\leq{1}$: \begin{myequation} \theta(x,q_1, \ldots, q_k) \equiv (x\neq q_1 \wedge \cdots \wedge x\neq q_k) \end{myequation} Then by the Predicative Concept Comprehension Schema~(\ref{eqn:RM:predcompschem0}), there is~$h$ of type~$et$ such that \begin{myequation} h(x)=1 \Longleftrightarrow (x\neq q_1 \wedge \cdots \wedge x\neq q_k) \end{myequation} But by hypothesis, we have that~$h$ is coextensive with~$\mathrm{rng}(\partial)$, in that~$h(x)=1$ iff~$x$ is in the range of the extension operator~$\partial$. But then by the formal indefinite extensibility~(\ref{eqn:FormIndefExt}) of~$\mathrm{rng}(\partial)$, there is~$\widetilde{h}$ of type~$et$ such that~$h\subsetneq \widetilde{h} \subseteq \mathrm{rng}(\partial)$, which contradicts that~$h=\mathrm{rng}(\partial)$. Thus the supposition that there were only finitely many non-extensions must have been wrong. Hence, the formal indefinite extensibility of the range of the extension operator requires that there be infinitely many non-extensions. As Wehmeier notes (cf. \cite{Wehmeier1999} \S{4.2} pp.~326~ff, \cite{Wehmeier2004aa} \S{3} pp.~255~ff), these considerations suggest an apparent tension between these subsystems of the \emph{Grundgesetze} and at least some renditions of Frege's logicism. For, sometimes logicism is described as the contention that mathematical reasoning is discoverable in every domain of inquiry (cf. \cite{Demopoulos2005ab} p.~138, \cite{Demopoulos1998aa} \S{VII} p.~496, \cite{Demopoulos1994aa} p.~229). Presumably there are domains of inquiry (such as chemistry and biology) in which there are comparatively few non-extensions (say, finitely many atoms or organisms). In such domains of inquiry there simply isn't ``space enough'' for an extension operator as axiomatized by the predicative fragments of the \emph{Grundgesetze} of the kind considered here. Besides this apparent tension, there is a more general reason to be concerned about the problem of many non-extensions. For, this problem tells us that the presence of an extension operator has consequences for the non-extensions. It's natural to seek an explanation for this; that is, one seeks an answer to the question: what is it about the extension operator that results in its having consequences for the nature of the non-extensions? But in the case where the extension operator is a sense-selecting extension operator, it seems that there is a natural response to the problem of many non-extensions. For, if the extension operator, applied to a concept, is a sense of that concept, then it is natural to expect that there will be many senses which are not extensions. For, part of the explanatory power of the Fregean doctrine of sense is that any given referent can be presented in a number of different ways. Moreover, there is no reason to expect there to be any antecedently specified finite bound on the number of different ways that a referent can be presented. Hence, because a sense-selecting extension operator selects but one sense amongst many for each concept, there will inevitably be numerous objects in these models that are not extensions. Of course, this response to the problem of many non-extensions presupposes that one is thinking about the entities of type~$a^{\prime}$ as entities similar to Fregean senses in the respect that any given referent can be presented in a number of different ways.
Even though Church himself was motivated by the project of axiomatizing Fregean sense, obviously there is nothing written into the axioms of Church's intensional logic or the extensions thereof considered here which forces one to adopt this presupposition. \section{Conclusions}\label{sec:RM:08} The last two sections have illustrated some of the costs and benefits of the predicative response to the Russell-Myhill paradox of propositions, at least when expanded by certain choice-like principles or by the representation operators. In the previous section~\S\ref{sec:RM:07}, we've developed a response to the Wehmeier problem of many non-extensions: the solution is simply that there are many non-extensions because the extension of a concept selects one sense from the many which present a concept. But in section~\S\ref{sec:RM:06}, we saw that the consistency proof for the predicative response has some features which are not in the spirit of a fine-grained theory of intensions. For, on this model, intensions of functions are only as injective as the functions they present. Perhaps there are other model constructions which would not be committed to this result, but that question is left unresolved by the work in this paper. Likewise, for reasons of space we have been unable to compare and contrast the versions of Church's intensional logic studied here to other formalizations of Fregean sense given by authors such as Chalmers, Horty, Moschovakis, and Tich\'y (\cite{Chalmers2011aa}, \cite{Horty2007aa}, \cite{Moschovakis1993aa}, \cite{Tichy1988aa}), or to other formal systems of fine-grained intensions due to Fox, Lappin, Parsons, and Thomason (\cite{Fox2005aa}, \cite{Parsons1982aa}, \cite{Thomason1980aa}). Finally, while we indicated in \S\ref{sec:RM:03} how others like Anderson and Kaplan produced models which yield responses to the formalized version of the Russell-Myhill paradox~(\ref{eqn:RM:formalized}), we do not pretend to have done any serious appraisal of the costs and benefits of these proposals as compared with the predicative response. Rather, we have limited ourselves here to merely setting out the predicative response in a clear manner. In addition to suggesting a motivation for the restriction on the comprehension schema, our efforts in this paper have been directed towards establishing the formal consistency of the predicative response to the Russell-Myhill paradox of propositions. \section{Appendix 1: Proof of Proposition on Location of Domains}\label{app1} In this brief appendix, we prove the Proposition on Location of Domains~(\ref{prop:location:domain}) from \S\ref{sec:RM:05}. For ease of reference, we restate it here: \begin{myenumerate} \item[(\ref{prop:location:domain})] For all~$n\geq 1$, both of the following hold: \begin{itemize} \item[] (I) for all types~$a$ with~$\|a\|<n$, there is a~$\Sigma_1$-formula in parameter~$\mu_n$ such that~$D_a$ is the unique element of~$L_{\alpha_n}$ which satisfies this formula, wherein~$\mu_{n}$ is defined by~$\mu_{n}= \langle \nu_1, \ldots, \nu_{n}, \alpha_0, \alpha_1, \ldots, \alpha_{n-1}\rangle$. \item[] (II) for all types~$a$ with~$\|a\|= n$, the set~$D_a$ is a~$\utilde{\Sigma}_{\ell_{n}}$-definable subset of~$L_{\alpha_{n}}$ in parameter~$\mu_{n}$. \end{itemize} \end{myenumerate} The proof is by simultaneous induction on~$n\geq 1$. For~$n=1$, note that (I) holds vacuously. As for (II), first note that if~$a$ is a type with~$\|a\|=1$, then~$a$ is among the types~$e,e^{\prime}, e^{\prime\prime}, \ldots, t, t^{\prime},t^{\prime\prime}, \ldots$.
Now, if~$a=e$ or~$a=t$, then part~(II) follows trivially since~$\alpha_0<\alpha_1$ and so both~$\alpha_0$ and the set~$\{0,1\}$ are members of~$L_{\alpha_1}$ and~$\mu_1$ includes the parameter~$\alpha_0$ by definition. Suppose the result holds for~$a$. Since~$\mathcal{O}_{1}$ is a~$\utilde{\Sigma}_{\ell_{1}}$-definable subset of~$L_{\alpha_{1}}$ in parameter~$\nu_{1}$, it follows trivially that~$D_{a^{\prime}}=\mathcal{O}_{1}$ is a~$\utilde{\Sigma}_{\ell_{1}}$-definable subset of~$L_{\alpha_{1}}$ in the more complex parameter~$\mu_{1}$. This completes the argument in the case~$n=1$. Now suppose that the result holds for~$n$, and we show it holds for~$n\mbox{+}1$. For~(I), suppose that~$a$ is a type with~$\|a\|<n\mbox{+}1$, say~$\|a\|=m$. Then since~$D_a$ is a definable subset of~$L_{\alpha_m}$ by the induction hypothesis on part~(II) for~$m$, we may write~$D_a = \{x\in L_{\alpha_m}: L_{\alpha_m}\models \psi(x,\mu_m)\}$ for some formula~$\psi$. Hence $D_a$ is an element of~$L_{\alpha_m+1}$ and a member of~$L_{\alpha_{n+1}}$. Then we have that~$D_a$ is the unique~$X$ in~$L_{\alpha_{n+1}}$ which satisfies the following condition: \begin{myequation} (\forall \; x\in X\cap L_{\alpha_m} \; L_{\alpha_m}\models \psi(x,\mu_m)) \; \& \; (\forall \; x\in L_{\alpha_m} \; (L_{\alpha_m}\models \psi(x,\mu_m)\rightarrow x\in X)) \end{myequation} Then since~$m<n\mbox{+}1$ and the parameter~$\mu_{n+1}$ contains the parameter~$\mu_m$ as well as the ordinal~$\alpha_m$, and since the map~$\beta\mapsto L_{\beta}$ is~$\Delta_1$ in~$L_{\alpha_{n+1}}$ (cf. \cite{Devlin1984aa} II.2.8 p. 70) and since the satisfaction relation is likewise~$\Delta_1$ (cf. \cite{Devlin1984aa} I.9.10 p. 41), this is a~$\Sigma_1$-condition in~$L_{\alpha_{n+1}}$ in parameter~$\mu_{n+1}$. Here we're also appealing tacitly to the fact that the~$\Sigma_1$-formulas are closed under bounded quantification in models~$L_{\alpha}$ which satisfy~$\Sigma_1$-collection (cf. \cite{Devlin1984aa} Lemma I.11.6 p. 53). This completes the induction step for part~(I) of the proposition. For the induction step for part~(II), note that the types with degree~$n\mbox{+}1$ are of the form~$a^{\prime}$ or~$ab$. Then we may do a subinduction on complexity of type. First suppose that~$\|a^{\prime}\|=n\mbox{+}1$ and suppose that the result holds for~$a$; we show it holds for~$a^{\prime}$. Since~$\mathcal{O}_{n+1}$ is a~$\utilde{\Sigma}_{\ell_{n+1}}$-definable subset of~$L_{\alpha_{n+1}}$ in parameter~$\nu_{n+1}$, it follows trivially that~$D_{a^{\prime}}=\mathcal{O}_{n+1}$ is a~$\utilde{\Sigma}_{\ell_{n+1}}$-definable subset of~$L_{\alpha_{n+1}}$ in the more complex parameter~$\mu_{n+1}$. Second suppose that~$\|ab\|=n\mbox{+}1$, and suppose that the result holds for~$a,b$; we show it holds for~$ab$. There are two subcases here. In the first subcase, suppose that~$\|a\|\geq \|b\|$. Then by the definition of degree in~(\ref{eqn:RM:degtypinitial}), we have that~$\|a\|,\|b\|<\|ab\|$. If we let~$\mathrm{fnct}(f)$ abbreviate the~$\Sigma_0$-formula expressing that the graph~$f$ is functional, and fix similar~$\Sigma_0$-definitions of~$\mathrm{dom}(f)=X$ and~$\mathrm{rng}(f)\subseteq Y$, then the set~$({D_b}^{D_a}) \cap L_{\alpha_{n+1}}$ is equal to \begin{myequation} \{f\in L_{\alpha_{n+1}}: \mathrm{fnct}(f) \; \& \; \exists \; X,Y \; X=D_a \; \& \; \mathrm{dom}(f)=X \; \& \; Y = D_b \; \& \; \mathrm{rng}(f)\subseteq Y \} \end{myequation} Then by part~(I), we have that this is a~$\utilde{\Sigma}_1$-definable subset of~$L_{\alpha_{n+1}}$ in parameter~$\mu_{n+1}$.
Now, as a second subcase, suppose the result holds for~$a,b$ and that~$\|a\|< \|b\|$, so that by the definition of degree in~(\ref{eqn:RM:degtypinitial}) we have~$n\mbox{+}1=\|ab\|=\|b\|$. Then~$D_a$ is a member of~$L_{\alpha_{n+1}}$ by part~(I), while by the supposition that the result holds for~$b$ we have that~$D_b$ is a~$\utilde{\Sigma}_{\ell_{n+1}}$-definable subset of~$L_{\alpha_{n+1}}$ in parameter~$\mu_{n+1}$. Then~$({D_b}^{D_a}) \cap L_{\alpha_{n+1}}$ is also a~$\utilde{\Sigma}_{\ell_{n+1}}$-definable subset of~$L_{\alpha_{n+1}}$ in the parameter~$\mu_{n+1}$. For, we have the following definition of~$({D_b}^{D_a}) \cap L_{\alpha_{n+1}}$: \begin{myequation} \{f\in L_{\alpha_{n+1}}: L_{\alpha_{n+1}}\models [\mathrm{fnct}(f) \; \& \; \exists \; X \; X=D_a \; \& \; \forall \; x\in X \; \exists \; y\in D_b \; \langle x,y\rangle\in f]\} \end{myequation} Here we are appealing to part~(I) applied to~$D_a$ since~$\|a\|<n\mbox{+}1$ in this subcase. Likewise, we are appealing to the fact that the bounded quantification in the last conjunct does not move us out of the complexity class~$\utilde{\Sigma}_{\ell_{n+1}}$ in models of~$\utilde{\Sigma}_{\ell_{n+1}}$-collection and~$\utilde{\Sigma}_{\ell_{n+1}-1}$-separation. This finishes the induction step for~(II), and with it the inductive proof of the proposition. \section{Appendix 2: Verification of the Satisfaction of Predicative Comprehension}\label{app2} Here we prove the following theorem from \S\ref{sec:RM:05}: \begin{myenumerate} \item[(\ref{thM:RM:big})] (Theorem on Consistency of Predicative Comprehension) For every intensional hierarchy~$D$~(\ref{eqn:RM:defn:intensional:hierarchy}), the associated intensional structure~$\mathbb{D}$ models each instance of the Predicative Typed Choice Schema~(\ref{eqn:RM:compschem4}) and hence each instance of the Predicative Typed Comprehension Schema~(\ref{eqn:RM:predcompschem}). \end{myenumerate} Further, we here prove this result for the language expanded by the representation functions~$\nabla_a: D_a\rightarrow D_{a^{\prime}}$ introduced in \S\ref{sec:RM:06} (cf. circa~(\ref{eqn:RM:PresentationRepresentation})). As a first step towards approaching the proof of this theorem, let's first note an elementary result on terms. The terms in the signature of an intensional structure consist simply of the closure of the constants~$0$, $1$ and the variables under the extensional application symbols and the representation operations. The presentation symbols and the intensional application symbols are not total and hence are formally treated as relation symbols as opposed to function symbols. The \emph{type} of a term is defined inductively as follows: the truth-values~$0$,~$1$ have type~$t$, the variables have the type that they are given initially, and if~$\tau$ has type~$ab$ and~$\sigma$ has type~$a$, then~$\tau(\sigma)$ or~$\mathrm{e\mbox{-}app}_{ab}(\tau, \sigma)$ has type~$b$; and if~$\tau$ has type~$a$ then~$\nabla_a(\tau)$ has type~$a^{\prime}$. Then we have the following elementary result: \begin{myenumerate} \item (Proposition that Terms do not Raise Degree). Suppose that~$\tau(x_1, \ldots, x_k)$ is a term in the signature of intensional structures with all free variables displayed such that the type of each variable~$x_i$ has degree~$\leq~n$. Then the type of the term~$\tau$ has degree~$\leq~n$. \label{eqn:RM:dontraise}\index{Proposition that Terms do not Raise Degree (\ref{eqn:RM:dontraise})} \end{myenumerate} The proof is by induction on the complexity of the term.
Clearly this is true in the case of the truth-values and the variables. Suppose it holds for~$\tau(x_1, \ldots, x_k)$ and~$\sigma(x_1, \ldots, x_k)$; we must show it is the case for~$\mathrm{e\mbox{-}app}_{ab}(\tau, \sigma)$ and~$\nabla_a(\tau)$. First consider the case of~$\mathrm{e\mbox{-}app}_{ab}(\tau, \sigma)$. Then~$\tau$ has type~$ab$ and~$\sigma$ has type~$a$, and each has type with degree~$\leq n$ by the induction hypothesis. There are two cases to consider, corresponding to the two clauses in the definition of~$\|ab\|$ in~(\ref{eqn:RM:degtypinitial}). First suppose that~$\|a\|\geq \|b\|$. Then one has that~$\|b\|\leq \|a\|\leq n$, which is what we wanted to show since the type of~$\mathrm{e\mbox{-}app}_{ab}(\tau, \sigma)$ is~$b$. Second suppose that~$\|a\|<\|b\|$. Then we have that~$\|b\|=\|ab\|\leq n$, which is again what we wanted to show. Finally, consider the case of~$\nabla_a(\tau)$. Then~$\tau$ has type~$a$, and this type has degree~$\leq{n}$ by the induction hypothesis. Then~$\nabla_a(\tau)$ has type~$a^{\prime}$ and so~$\|a^{\prime}\|=\|a\|\leq n$ by the definition of degree of~$\|a^{\prime}\|$ in~(\ref{eqn:RM:degtypinitial}). This is why terms do not raise degree, or why~(\ref{eqn:RM:dontraise}) holds. Relatedly, as a preliminary step, let's establish the following result about the complexity of the functions on intensional structures induced by terms: \begin{myenumerate} \item (Proposition on Complexity of Terms) Suppose that~$\tau(\overline{u})\equiv \tau(u_1, \ldots, u_j)$ is a term with all free variables displayed where~$u_i$ has type~$d_i$. Since terms don't raise degree~(\ref{eqn:RM:dontraise}),~$\tau$ has a type~$d$ with degree~$\|d\|\leq m=\max\{\|d_1\|, \ldots, \|d_j\|\}$. Then~$\tau$ induces a function~$\tau^{\mathbb{D}}: D_{d_1}\times \cdots \times D_{d_j}\rightarrow D_d$ whose graph is~$\utilde{\Sigma}^{L_{\alpha_m}}_{\ell_m}$-definable.\label{eqn:asdfasdfdsaafasdf3214231}\index{Proposition on Complexity of Terms (\ref{eqn:asdfasdfdsaafasdf3214231})} \end{myenumerate} Clearly this is the case if the term is a variable or one of the constants~$0$,~$1$. Now for the induction step suppose that the result holds for~$\tau$ and~$\sigma$; we must show it holds for~$\rho(\overline{u})\equiv \mathrm{e\mbox{-}app}_{e_1e_2}(\tau(\overline{u}), \sigma(\overline{u}))$. Then~$\tau(\overline{u})$ has type~$e_1e_2$ and~$\sigma(\overline{u})$ has type~$e_1$. Since terms don't raise degree~(\ref{eqn:RM:dontraise}), it follows that~$\|e_1e_2\|, \|e_1\|$ are both less than or equal to~$m=\max\{\|d_1\|, \ldots, \|d_j\|\}$, and from this we infer that~$\|e_2\|\leq \|e_1e_2\|\leq m$ as well. Then~$\rho^{\mathbb{D}}: D_{d_1}\times \cdots \times D_{d_j}\rightarrow D_{e_2}$ has the following graph: \begin{myeqnarray} \{ (\overline{u},u)\in D_{d_1}\times \cdots \times D_{d_j}\times D_{e_2} & : & \exists \; y\in D_{e_1} \; \exists \; z \in D_{e_1e_2} \; \notag \\ & & \sigma^{\mathbb{D}}(\overline{u})= y\; \& \; \tau^{\mathbb{D}}(\overline{u})=z \; \& \; \langle y,u\rangle \in z\} \end{myeqnarray} This is~$\utilde{\Sigma}^{L_{\alpha_m}}_{\ell_m}$-definable by the Location of Domains~(\ref{prop:location:domain}) since we have that~$\|e_1\|, \|e_2\|,\|e_1e_2\|\leq m$. For the final induction step, suppose that the result holds for~$\tau$; we must show it holds for~$\nabla_a(\tau)$. Then~$\tau$ has type~$a$, and since terms don't raise degree~(\ref{eqn:RM:dontraise}), it follows that~$\|a\|\leq m$. Then the graph of~$\tau^{\mathbb{D}}$ is~$\utilde{\Sigma}^{L_{\alpha_m}}_{\ell_m}$-definable by induction hypothesis.
Recall from the discussion in \S\ref{sec:RM:06} that the representation function~$\nabla_a$ is interpreted on intensional structures~$\mathbb{D}$ by the function~$\iota_{\|a\|}$ from the definition of an intensional hierarchy~(\ref{eqn:RM:defn:intensional:hierarchy}). However, this was by definition~$\utilde{\Sigma}^{L_{\alpha_{\|a\|}}}_{\ell_{\|a\|}}$-definable (cf. clause (iii) of the definition of an intensional position~(\ref{eqn:RM:defn:intensional:position})). Since~$\|a\|\leq m$, we then have that the composition~$\iota_{\|a\|}\circ \tau^{\mathbb{D}}$ is clearly also~$\utilde{\Sigma}^{L_{\alpha_m}}_{\ell_m}$-definable. This finishes the proof of the result on the complexity of terms~(\ref{eqn:asdfasdfdsaafasdf3214231}). Now let's consider what kinds of symbols can appear in a formula covered by the Predicative Typed Choice Schema~(\ref{eqn:RM:compschem4}). Suppose that~$\varphi(x, y, z_1, \ldots, z_k)$ is a formula with all free variables displayed and with free variable~$x$ of type~$a$,~$y$ of type~$b$, and in addition variable~$z_i$ has type~$c_i$ with~$\|c_i\|\leq \|ab\|$ and all the bound variables in~$\varphi(x,y,z_1, \ldots, z_k)$ have type~$c$ with~$\|c\|< \|ab\|$. Let~$\|ab\|=n+1$. There are then two cases to consider, corresponding to the split in cases in the definition of the degree~$\|ab\|$ in~(\ref{eqn:RM:degtypinitial}). If~$\|a\|\geq \|b\|$, then~$n+1=\|ab\|=\|a\|+1$ and so~$\|b\|\leq \|a\|\leq n$. Further, if we split the parameter variables~$z_1, \ldots, z_k$ into those that have type with degree~$n+1$ and those that have type with degree~$\leq~n$, then we can write the formula in question as: \begin{myenumerate} \item (First Configuration):~$\varphi(x, y, v_1, \ldots, v_m, z_1, \ldots, z_k)$ is a formula with all free variables displayed and with free variable~$x$ of type~$a$ with~$\|a\|\leq{n}$,~$y$ of type~$b$ with~$\|b\|\leq{n}$, and in addition variable~$v_i$ has type~$a_i$ with~$\|a_i\|\leq{n}$ and variable~$z_i$ has type~$c_i$ with~$\|c_i\|=n+1$ and all the bound variables in the formula have type~$c$ with~$\|c\|\leq{n}$.\label{eqn:RM:firstconfig}\index{Configuration, First (\ref{eqn:RM:firstconfig})} \end{myenumerate} Alternatively, in the other case, we have~$\|a\|<\|b\|$ and~$n+1=\|ab\|=\|b\|$. If we again split the parameter variables~$z_1, \ldots, z_k$ into those that have type with degree~$n+1$ and those that have type with degree~$\leq~n$, then we can write the formula in question as: \begin{myenumerate} \item (Second Configuration):~$\varphi(x, v_1, \ldots, v_m, y, z_1, \ldots, z_k)$ is a formula with all free variables displayed and with free variable~$x$ of type~$a$ with~$\|a\|\leq{n}$,~$y$ of type~$b$ with~$\|b\|= n+1$, and in addition variable~$v_i$ has type~$a_i$ with~$\|a_i\|\leq{n}$ and variable~$z_i$ has type~$c_i$ with~$\|c_i\|=n+1$ and all the bound variables in the formula have type~$c$ with~$\|c\|\leq{n}$.\label{eqn:RM:secondconfig}\index{Configuration, Second (\ref{eqn:RM:secondconfig})} \end{myenumerate} For ease of future reference, we call these two kinds of formulas which can feature in the Predicative Typed Choice Schema~(\ref{eqn:RM:compschem4}) the ``first configuration'' and the ``second configuration''. The plan in what follows is to show that the Predicative Typed Choice Schema~(\ref{eqn:RM:compschem4}) holds for formulas in the second configuration~(\ref{eqn:RM:secondconfig}), and then to show it for formulas in the first configuration~(\ref{eqn:RM:firstconfig}).
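To illustrate this division into configurations with a concrete case (an example of our own, using degrees computed earlier in the paper): the choice formula used in \S\ref{sec:RM:07} to obtain the map~$\partial$ of type~$(et)e$ has~$x$ of type~$a=et$ and~$y$ of type~$b=e$. Since~$\|et\|=2\geq \|e\|=1$, the definition of degree in~(\ref{eqn:RM:degtypinitial}) gives~$\|ab\|=\|a\|+1=3$, so that~$n+1=3$; and since~$\|a\|,\|b\|\leq n=2$ and the sole bound variable~$f^{\prime}$ there has type~$(et)^{\prime}$ with~$\|(et)^{\prime}\|=\|et\|=2\leq n$, that formula falls under the first configuration~(\ref{eqn:RM:firstconfig}). By contrast, a formula in which~$x$ has type~$a=e$ and~$y$ has type~$b=et$ has~$\|a\|=1<\|b\|=2$ and hence~$\|ab\|=\|b\|=2$, placing it in the second configuration~(\ref{eqn:RM:secondconfig}).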
This first step is done by proving a result connecting the satisfaction of a formula in the second configuration to a certain level of definability in the constructible hierarchy. To build up to the statement of this result, suppose that~$\varphi(x, v_1, \ldots, v_m, y, z_1, \ldots, z_k)$ is in the second configuration~(\ref{eqn:RM:secondconfig}). Then any subformula of this formula has the form \begin{myequation} \psi(x,v_1, \ldots, v_m, v_{m+1}, \ldots v_{m+m^{\prime}},y,z_1, \ldots, z_k) \end{myequation} where the variable~$v_i$ for~$i>m$ has type~$a_i$ with degree~$\leq{n}$. Let's abbreviate~$\overline{v} = \langle v_1, \ldots, v_m, v_{m+1}, \ldots v_{m+m^{\prime}}\rangle$ and let's abbreviate \begin{myequation} D_{\overline{a}}=D_{a_1}\times \cdots \times D_{a_{m+m^{\prime}}}, \hspace{10mm} D_{\overline{c}} = D_{c_1}\times \cdots \times D_{c_k} \end{myequation} Note that since~$\|a\|, \|a_i\|\leq n$, it follows from the Location of Domains~(\ref{prop:location:domain}) that~$D_{a}\times D_{\overline{a}}$ is a member of~$L_{\alpha_{n+1}}$. However, since~$\|b\|, \|c_i\|=n+1$, we have that~$D_b\times D_{\overline{c}}$ is a~$\utilde{\Sigma}^{L_{\alpha_{n+1}}}_{\ell_{n+1}}$-definable subset of~$L_{\alpha_{n+1}}$. Having put this terminology in place, let's now show that: \begin{myenumerate} \item (Proposition on Complexity of Satisfaction, Second Configuration) For every intensional hierarchy~$D$ with induced intensional structure~$\mathbb{D}$ and every subformula~$\psi(x,\overline{v},y,\overline{z})$ of a formula in the second configuration~(\ref{eqn:RM:secondconfig}), the following set is~$\utilde{\Sigma}_{\ell_{n+1}}^{L_{\alpha_{n+1}}}$-definable:\label{eqn:RM:complexsat:second}\index{Proposition on Complexity of Satisfaction, Second Configuration (\ref{eqn:RM:complexsat:second})} \[[\psi]^D= \{(x,\overline{v},y,\overline{z})\in D_a\times D_{\overline{a}}\times D_b\times D_{\overline{c}} : \mathbb{D}\models \psi(x,\overline{v},y,\overline{z})\}\] \end{myenumerate} We establish this by induction on the complexity of the subformula. By pushing all the negations to the inside, it suffices to show that the result holds for atomics and negated atomics, and that it is closed under conjunctions, disjunctions, existential quantification, and universal quantification. Let's begin with the atomic case, considering the negated atomic cases along the way. The atomic formulas in intensional structures have three possible forms, namely: \begin{myequation} \tau = \sigma, \hspace{10mm} \Delta_d(\tau)=\sigma, \hspace{10mm} \mathrm{i\mbox{-}app}_{c_0d_0}(\tau, \sigma) = \rho \end{myequation} where~$\tau, \sigma, \rho$ are terms. These are the only possible atomic subformulas because, technically, the second is shorthand for the binary atomic formula~$\Delta_d(\tau, \sigma)$ and the third is shorthand for the associated ternary atomic relation (cf. discussion circa equations~(\ref{eqn:RM:defnpres}) and (\ref{eqn:RM:defn:intensional})). Since~$\tau, \sigma,\rho$ appear in a formula in the second configuration~(\ref{eqn:RM:secondconfig}), the free variables in these terms~$\tau, \sigma, \rho$ have types with degree~$\leq{n}+1$ and since terms don't raise degree~(\ref{eqn:RM:dontraise}), it follows that the respective types~$e_1, e_2, e_3$ of~$\tau, \sigma, \rho$ are also such that~$\|e_1\|, \|e_2\|, \|e_3\| \leq{n}+1$. From this it follows in turn that~$\|d\|\leq{n}+1$ and~$\|c_0\|, \|d_0\| \leq \|c_0d_0\|=\|(c_0d_0)^{\prime}\|\leq{n}+1$.
Let's consider first the case of equality between terms, that is, atomic formulas of the form \begin{myequation} \psi(x,\overline{v}, y,\overline{z})\equiv \tau(x,\overline{v},y,\overline{z})=\sigma(x,\overline{v},y, \overline{z}) \end{myequation} Then we have that \begin{myeqnarray} (x,\overline{v},y,\overline{z})\in [\psi]^D & \Longleftrightarrow & \exists \; z_1\in D_{e_1}, \exists \; z_2\in D_{e_2} \; \tau(x,\overline{v},y,\overline{z})=z_1\notag \\ && \; \& \; \sigma(x,\overline{v},y,\overline{z})=z_2 \; \& \; z_1=z_2 \end{myeqnarray} which is~$\utilde{\Sigma}_{\ell_{n+1}}^{L_{\alpha_{n+1}}}$-definable by the result on the complexity of terms~(\ref{eqn:asdfasdfdsaafasdf3214231}). Similarly we have that \begin{myeqnarray} (x,\overline{v},y, \overline{z})\in [\neg \psi]^D & \Longleftrightarrow & \exists \; z_1\in D_{e_1}, \exists \; z_2\in D_{e_2} \; \tau(x,\overline{v},y, \overline{z})=z_1 \notag \\ & & \; \& \; \sigma(x,\overline{v},y, \overline{z})=z_2 \; \& \; z_1\neq z_2 \end{myeqnarray} which is~$\utilde{\Sigma}_{\ell_{n+1}}^{L_{\alpha_{n+1}}}$-definable for the same reasons. Now let's consider the case of the presentation symbols, that is atomic formulas of the form \begin{myequation} \psi(x,\overline{v},y, \overline{z})\equiv \Delta_d(\tau(x,\overline{v},y,\overline{z}))=\sigma(x,\overline{v},y,\overline{z}) \end{myequation} Then~$[\psi]^D$ is~$\utilde{\Sigma}_{\ell_{n+1}}^{L_{\alpha_{n+1}}}$-definable because we have the following biconditional and because~$\|d\|\leq{n}\mbox{+}1$ implies that~$\pi_{\|d\|}$ is~$\utilde{\Sigma}_{\ell_{n+1}}^{L_{\alpha_{n+1}}}$-definable: \begin{myeqnarray} (x,\overline{v},y,\overline{z})\in [\psi]^D & \Longleftrightarrow & \exists \; z_1\in D_{e_1}, \exists \; z_2\in D_{e_2} \; \tau(x,\overline{v},y,\overline{z})=z_1\notag \\ & & \; \& \; \sigma(x,\overline{v},y,\overline{z})=z_2 \; \& \; \pi_{\|d\|}(z_1)=z_2 \end{myeqnarray} Further, by using the fact that $\mathcal{O}_j\setminus \pi^{-1}_j(L_{\alpha_j})$ was~$\utilde{\Sigma}_{\ell_{j}}^{L_{\alpha_{j}}}$-definable for all~$j\geq 1$ (cf. clause~(v) of the definition of an intensional position~(\ref{eqn:RM:defn:intensional:position})) and that~$\|d\|\leq{n}+1$, we have that \begin{myeqnarray}\label{eqn:d123412341234213} (x,\overline{v},y,\overline{z})\in [\neg\psi]^D & \Longleftrightarrow& \exists \; z_1\in D_{e_1}, \exists \; z_2 \in D_{e_2}, \; \exists \; z_3\in L_{\alpha_{n+1}}\notag \\ & & \tau(x,\overline{v},y,\overline{z})=z_1 \; \& \; \sigma(x,\overline{v},y, \overline{z})=z_2 \\ & & \; \& \; (z_1\in \mathcal{O}_{\|d\|}\setminus \pi^{-1}_{\|d\|}(L_{\alpha_{\|d\|}})) \vee (\pi_{\|d\|}(z_1)=z_3 \; \& \; z_2\neq z_3)\notag \end{myeqnarray} As the final atomic case, consider the case of intensional application: \begin{myequation} \psi(x,\overline{v},y, \overline{z})\equiv \mathrm{i\mbox{-}app}_{c_0d_0}(\tau(x,\overline{v},y,\overline{z}), \sigma(x,\overline{v},y,\overline{z})) = \rho(x,\overline{v},y,\overline{z}) \end{myequation} Then the type~$e_1$ of~$\tau$ must be~$(c_0d_0)^{\prime}$ and the type~$e_2$ of~$\sigma$ must be~$c_0^{\prime}$. 
Then we have \begin{myeqnarray} (x,\overline{v},y,\overline{z})\in [\psi]^D & \Longleftrightarrow & \exists \; z_1\in D_{(c_0d_0)^{\prime}}, \exists \; z_2\in D_{c_0^{\prime}}, \exists \; z_3\in D_{d_0^{\prime}}\notag \\ & & \tau(x,\overline{v},y,\overline{z}) = z_1 \; \& \; \sigma(x,\overline{v},y,\overline{z})=z_2 \; \& \; \rho(x,\overline{v},y,\overline{z})=z_3 \notag \\ & & \; \& \; \iota_{\|d_0\|}((\pi_{\|c_0d_0\|}(z_1))(\pi_{\|c_0\|}(z_2))) = z_3 \end{myeqnarray} Since~$\|c_0\|, \|d_0\| \leq \|c_0d_0\|=\|(c_0d_0)^{\prime}\|\leq{n}+1$, we have that~$\pi_{\|c_0d_0\|}, \pi_{\|c_0\|}, \iota_{\|d_0\|}$ are all~$\utilde{\Sigma}_{\ell_{n+1}}^{L_{\alpha_{n+1}}}$-definable. Finally, for the negation, we may argue as follows, again appealing to the fact that in the definition of an intensional position we required that the set~$\mathcal{O}_j\setminus \pi^{-1}_j(L_{\alpha_j})$ was~$\utilde{\Sigma}_{\ell_{j}}^{L_{\alpha_{j}}}$-definable for all~$j\geq 1$ (cf. clause~(v) of the definition of an intensional position~(\ref{eqn:RM:defn:intensional:position})): \begin{myeqnarray}\label{eqn:complemee321423} (x,\overline{v},y,\overline{z})\in [\neg \psi]^D & \Longleftrightarrow & \exists \; z_1\in D_{(c_0d_0)^{\prime}}, \exists \; z_2 \in D_{c_0^{\prime}}, \exists \; z_3\in D_{d_0^{\prime}}, \exists \; z_4\in L_{\alpha_{n+1}} \notag \\ & & \tau(x,\overline{v},y,\overline{z}) = z_1 \; \& \; \sigma(x,\overline{v},y,\overline{z})=z_2 \; \& \; \rho(x,\overline{v},y,\overline{z})=z_3 \notag \\ & & \wedge [(z_1\in \mathcal{O}_{\|c_0d_0\|}\setminus \pi^{-1}_{\|c_0d_0\|}(L_{\alpha_{\|c_0d_0\|}}))\notag \\ &&\; \vee\; (z_2\in \mathcal{O}_{\|c_0\|}\setminus \pi^{-1}_{\|c_0\|}(L_{\alpha_{\|c_0\|}})) \\ & & \; \vee \; (\iota_{\|d_0\|}((\pi_{\|c_0d_0\|}(z_1))(\pi_{\|c_0\|}(z_2))) = z_4 \; \& \; z_4\neq z_3)] \notag \end{myeqnarray} This completes the base cases of the inductive argument for~(\ref{eqn:RM:complexsat:second}). Since~$\utilde{\Sigma}_{\ell_{n+1}}^{L_{\alpha_{n+1}}}$-definability is closed under finite intersections and unions, the inductive steps for conjunction and disjunction are trivial. Let us then consider the case of universal quantification. Suppose that the result holds for~$\psi(x,\overline{v},v_0,y,\overline{z})$ and let us show it holds for~$\theta(x,\overline{v},y,\overline{z})\equiv \forall \; v_0 \; \psi(x,\overline{v},v_0,y,\overline{z})$. Since this is a subformula of a formula in the second configuration~(\ref{eqn:RM:secondconfig}), it follows that the bound variable~$v_0$ has a type~$a_0$ with degree~$\leq{n}$. Then by part~(I) of the result on Location of Domains~(\ref{prop:location:domain}), it follows that~$X=D_{a_0}$ is a~$\utilde{\Sigma}^{L_{\alpha_{n+1}}}_1$-condition. Then one has that \begin{myequation} (x,\overline{v},y,\overline{z})\in [\theta]^D \Longleftrightarrow \exists \; X \; X = D_{a_0} \; \& \; \forall \; v_0\in X \; (x,\overline{v},v_0,y,\overline{z})\in [\psi]^D \end{myequation} so that~$[\theta]^D$ is likewise~$\utilde{\Sigma}_{\ell_{n+1}}^{L_{\alpha_{n+1}}}$-definable since~$\utilde{\Sigma}_{\ell_{n+1}}^{L_{\alpha_{n+1}}}$-definability is closed under bounded quantification in models~$L_{\alpha_{n+1}}$ of~$\Sigma_{\ell_{n+1}}$-collection and~$\Sigma_{\ell_{n+1}-1}$-separation. A similar argument holds in the case of the existential quantifier, but is even easier since there we do not have to appeal to this result about closure under bounded quantification.
This finishes the result on the complexity of satisfaction in the case of a formula which is in the second configuration~(\ref{eqn:RM:complexsat:second}). Now let us finally establish that the Predicative Typed Choice Schema~(\ref{eqn:RM:compschem4}) holds on intensional structures, at first with respect to formulas in the second configuration~(\ref{eqn:RM:secondconfig}). Suppose that the antecedent holds: \begin{myequation}\label{eqn:testmealready} \mathbb{D}\models \forall \;x \; \exists \; y \; \varphi(x,p_1, \ldots, p_m, y,q_1, \ldots, q_k) \end{myequation} where~$\varphi(x,v_1, \ldots,v_m, y,z_1, \ldots z_k)$ is in the second configuration~(\ref{eqn:RM:secondconfig}). Consider the following relation: \begin{myequation} R(x,y)\equiv [x\in D_a \; \& \; y\in D_b \; \& \; \mathbb{D}\models \varphi(x,p_1, \ldots, p_m,y, q_1, \ldots, q_k)] \end{myequation} Then by the result on the complexity of satisfaction~(\ref{eqn:RM:complexsat:second}), one has that~$R$ is~$\utilde{\Sigma}_{\ell_{n+1}}^{L_{\alpha_{n+1}}}$-definable. And by equation~(\ref{eqn:testmealready}), one has that \begin{myequation}\label{eqn:RM:almostdone} L_{\alpha_{n+1}}\models \forall \;x\in D_a \; \exists \; y \; R(x,y) \end{myequation} By the uniformization theorem (cf. \cite{Jensen1972aa} Theorem 3.1 p. 256 and Lemma 2.15 p. 255; \cite{Devlin1984aa} Theorem 4.5 p. 269, and ``weak uniformization'' in \cite{Walsh2014ac}), choose a~$\utilde{\Sigma}_{\ell_{n+1}}^{L_{\alpha_{n+1}}}$-definable relation~$R^{\prime}\subseteq R$ such that \begin{myequation} L_{\alpha_{n+1}} \models [\forall \; x \; (\exists \; y \; R(x,y))\rightarrow (\exists \; ! \; y \; R^{\prime}(x,y))] \end{myequation} Then by equation~(\ref{eqn:RM:almostdone}), one has that~$R^{\prime}$ is the graph of a function~$h:D_a\rightarrow D_b$. Since this graph is~$\utilde{\Sigma}_{\ell_{n+1}}^{L_{\alpha_{n+1}}}$-definable with domain~$D_a$ an element of~$L_{\alpha_{n+1}}$, by Replacement (cf. \cite{Devlin1984aa} Lemma I.11.7 p. 53) one has that it is an element of~$L_{\alpha_{n+1}}=L_{\alpha_{\|ab\|}}$. Then~$h$ is an element of the domain~$D_b^{D_a}\cap L_{\alpha_{\|ab\|}}= D_{ab}$ (cf. the third clause of equation~(\ref{eqn:defn:RM:typestodomains})). Hence, we've shown that there is~$h$ in~$D_{ab}$ such that \begin{myequation} \mathbb{D}\models \forall \;x \; \varphi(x,p_1, \ldots, p_m, h(x),q_1, \ldots, q_k) \end{myequation} which is what we were required to show in the consequent of the Predicative Typed Choice Schema~(\ref{eqn:RM:compschem4}). \vspace{2mm} We've verified that the Predicative Typed Choice Schema~(\ref{eqn:RM:compschem4}) holds on intensional structures, at least with respect to formulas in the second configuration~(\ref{eqn:RM:secondconfig}). Let's now argue that the same holds with respect to formulas in the first configuration~(\ref{eqn:RM:firstconfig}). Suppose that \begin{myequation}\label{eqn:testmealready2} \mathbb{D}\models \forall \;x \; \exists \; y \; \varphi(x,y,p_1, \ldots, p_m, q_1, \ldots, q_k) \end{myequation} where~$\varphi(x,y,v_1, \ldots,v_m, z_1, \ldots z_k)$ is in the first configuration~(\ref{eqn:RM:firstconfig}).
Then consider the following, where~$w$ is a variable of type~$ab$ with degree~$n+1$ and~$x_1,x_2$ are variables of type~$a$: \begin{myequation} \psi(x,w,\overline{p}, \overline{q})\equiv (\forall \; x_1, x_2\; w(x_1)=w(x_2)) \; \& \; (\exists \; x_1, y \; \; (w(x_1)=y \; \& \; \varphi(x,y,\overline{p}, \overline{q}))) \end{myequation} Intuitively this formula~$\psi$ is saying that~$w$ is a constant function of type~$ab$ and its constant value is a witness to~$\varphi$. Now~$\psi$ is in the second configuration~(\ref{eqn:RM:secondconfig}), and we can verify by hand that for every element~$y$ of~$D_b$ there is a constant function of type~$ab$ whose constant value is $y$. For, if~$y\in L_{\alpha_{\|b\|}}$ then~$\{\langle x_1,y\rangle: x_1\in D_a\}$ is in~$L_{\alpha_{\|ab\|}}$. Then by the Predicative Typed Choice Schema~(\ref{eqn:RM:compschem4}) applied to~$\psi$, we have that there is an element~$h$ of type~$a(ab)$ such that~$\mathbb{D}\models \forall \; x \; \psi(x,h(x), p_1, \ldots, p_m, q_1, \ldots, q_k)$. Note that since~$\|ab\|=n+1=\|a(ab)\|$ we have that~$h$ is in~$L_{\alpha_{n+1}}$. Then the function~$g:D_a\rightarrow D_b$ such that~$g(x)=(h(x))(x)$ is in~$L_{\alpha_{n+1}}$ by~$\Sigma_0$-separation since \begin{myequation} g = \{\langle x,y\rangle\in D_a\times D_b: \langle x, \langle x,y\rangle\rangle \in h\} \end{myequation} We've now shown that~$g$ is an element of~$D_{ab}$ and by construction we have \begin{myequation}\label{eqn:testmealready3} \mathbb{D}\models \forall \;x \; \varphi(x,g(x),p_1, \ldots, p_m, q_1, \ldots, q_k) \end{myequation} so that we also have that the Predicative Typed Choice Schema~(\ref{eqn:RM:compschem4}) holds on intensional structures, regardless of which of the two configurations we are in. \subsection*{Acknowledgements} I was lucky enough to be able to present parts of this work at a number of workshops and conferences, and I would like to thank the participants and organizers of these events for these opportunities. I would like to especially thank the following people for the comments and feedback which I received on these and other occasions: Robert~Black, Roy~Cook, Matthew~Davidson, Walter~Dean, Marie~Du\v{z}\'{\i}, Kenny~Easwaran, Fernando~Ferreira, Martin~Fischer, Rohan~French, Salvatore~Florio, Kentaro~Fujimoto, Jeremy~Heis, Joel~David~Hamkins, Volker~Halbach, Ole Thomassen~Hjortland, Luca~Incurvati, Daniel~Isaacson, J\"onne~Kriener, Graham~Leach-Krouse, Hannes~Leitgeb, {\O}ystein~Linnebo, Paolo~Mancosu, Richard~Mendelsohn, Tony~Martin, Yiannis~Moschovakis, John~Mumma, Pavel~Pudl\'ak, Sam~Roberts, Marcus~Rossberg, Tony~Roy, Gil~Sagi, Florian~Steinberger, Iulian~Toader, Gabriel~Uzquiano, Albert~Visser, Kai~Wehmeier, Philip~Welch, Trevor~Wilson, and Martin~Zeman. This paper has likewise been substantially bettered by the feedback and comments of the editors and referees of this journal, to whom I express my gratitude. While composing this paper, I was supported by a Kurt G\"odel Society Research Prize Fellowship and by {\O}ystein Linnebo's European Research Council funded project ``Plurals, Predicates, and Paradox.'' \bibliographystyle{plain}
\section{Introduction} The Isaac Newton Institute programme ``Computational challenges in partial differential equations'' in 2003 stimulated intense research on surface partial differential equations (PDEs) in the mathematical community covering topics from modeling, numerical analysis, and applications. PDEs that are defined on curved surfaces are intrinsically nonlinear and require a geometric framework. An important breakthrough in the development of numerical methods for this type of PDEs is the avoidance of charts and atlases. The most commonly used methods are either based on a triangulated surface and require geometric information through knowledge of the vertices and discrete normals, or based on a level set technique in which geometric information is derived from the level set function. Most of that work is concerned with scalar-valued surface PDEs, see \cite{DE2013Finite,OlshanskiiReusken2017,BD2020Finite} for reviews on such finite element based approaches. In the scalar case the coupling between surface geometry and the PDE solution is relatively weak, and numerical approaches developed for PDEs in a flat space need only minor modifications to be applicable to surface equations, see, e.g., \cite{DE2013Finite,VV2006AMDiS}. For $n$-tensor-valued surface PDEs with $n \geq 1$, these approaches are not directly applicable. The tensor fields then need to be considered as elements of the tangent bundle and the surface derivatives require more geometric information. This leads to a stronger influence of the surface geometry on the solution of the PDE. In this paper we study in more depth this change in numerical complexity when going from PDEs in flat domains to curved surfaces. For this we consider a specific problem class, namely that of a surface heat equation for $n$-tensor fields on a smooth curved surface embedded in $\R^3$. We will focus on tensor ranks $n=0,1,2$. In the remainder of the paper we use for $n$-tensors, $n=0,1,2$, the terminology scalars, vectors and tensors, respectively. For $n=1,2$, the solution must be tangential. Concerning the numerics we restrict to finite element discretizations in space combined with low-order BDF time stepping schemes, cf. \cite{LMV2013Backward}. When going from PDEs in flat Euclidean domains to PDEs on a curved surface the following additional numerical issues arise: \begin{itemize} \item[a)] \emph{Surface representation}. In flat domains one only has to represent or approximate the boundary of the computational domain. In problems on curved surfaces the whole computational domain has to be approximated. An issue directly related to this is the quadrature used in the finite element method. \item[b)] \emph{Representation of the gradient operator and geometry information}. For the (covariant) surface gradient operator different natural representations are available, leading to different numerical approaches. In the discretization process one needs approximations of geometric quantities such as surface normals and curvature. We will see that the geometric information needed depends on the gradient operator representation used and on the tensorial degree $n$. \item[c)] \emph{Tangentiality condition}. For $n$-tensor fields with $n \geq 1$ one has to take into account the condition that the solution must be tangential. \end{itemize} In recent years several approaches for dealing with these issues have been developed, leading to different numerical discretization methods.
\emph{We present, in a unified framework, four methods known from the literature and explicitly address the different approaches these methods use regarding a)--c).} These four methods are: A surface finite element method (SFEM) \cite{NNPV2018Orientational,NN2019Finite,HL2020Analysis,HaPr2021Tangential}, which extends the SFEM for scalar-valued surface PDEs \cite{DE2013Finite,Demlow2009HigherOrder} to tensor-valued surface PDEs; an intrinsic surface finite element method (ISFEM), which so far has only been considered for scalar-valued surface PDEs \cite{BFP2021Intrinsic}; a trace finite element method (TraceFEM) \cite{JankuhnReusken2020}, extending the scalar version \cite{OlshanskiiReusken2017} to vector-valued PDEs; and a diffuse interface approach (DI) \cite{NNPV2018Orientational} which extends the approach for scalar-valued PDEs \cite{RV2006PDEs}. We note that for vector- or tensor-valued surface PDEs only very few rigorous discretization error analyses are available. Such analyses for SFEM and TraceFEM applied to a vector-Laplace problem are given in \cite{HL2020Analysis,JankuhnReusken2020,HaPr2021Tangential}. One conclusion from this comparative study is that in all four methods there is \emph{an essential increase in numerical complexity when one goes from the scalar case to the vector- or tensor-valued problem}, which goes well beyond the increase in complexity in flat Euclidean domains. Depending on the geometry, an approximation of geometric properties, which is sufficient to achieve the desired accuracy of the solution for the scalar case, might fail for the vector or tensor case, cf. \Cref{sec:Comparison} for a further discussion. We further consider the \emph{influence of the geometry on the solution of an $n$-tensor heat flow problem}. This is done on a surface with a rather simple geometry. We present results of numerical simulations with the four methods which demonstrate that curvature drastically affects the behavior of the solution. The paper is structured as follows: In \Cref{sec:2} we recall different surface representations. We also discuss different possibilities for representing tensors and gradient operators. Furthermore, we introduce the surface $n$-tensor-valued heat equation and summarize known analytical results. In \Cref{sec:FiniteElementDiscretizationSchemes} we briefly describe the four numerical methods and discuss the above mentioned numerical issues a)--c). In \Cref{sec:NumericalExperiments} the $n$-tensor-valued heat equation is numerically solved on a specific surface. Certain influences of the geometry on the behavior of the solution are addressed in Sections~\ref{sec:Phenomenon1}--\ref{sec:Phenomenon3}. Below we restrict to tensorial degree $n \leq 2$. This restriction is not essential for the results presented or for the applicability of the numerical methods, but allows a clearer presentation. We provide reference solutions, which can serve as a benchmark problem. \section{Surface tensor diffusion}\label{sec:2} \subsection{Surface representation} Let $\Surface$ be a compact, orientable, two-dimensional surface isometrically embedded into $\R^3$. We consider two representations of this surface, namely based on a local parametrization and as the zero level of a level set function. The tangent bundle of $\Surface$ is denoted by $\TangentBundle\Surface$ and for each $\bm{x}\in\Surface$ a normal vector $\bm{n}(\bm{x})\in\R^3$ is defined as the unit vector orthogonal to all tangent vectors in $\TangentBundle_{\bm{x}}\Surface$.
\subsubsection{Parameterized surface} \label{subsec:parametrizedsurface} We assume that $\Surface$ can be covered by a $C^k$-atlas $\Set{(\ChartMap_r,\hat{\Domain}_r,U_r)}_r$ of bijective mappings $\ChartMap_r\colon\hat{\Domain}_r\to U_r\cap\Surface$ that are charts of class $C^k$ with open domains $\hat{\Domain}_r\subset\R^2$. We further assume that the transition maps $\ChartMap_r^{-1}\circ\ChartMap_s$ between overlapping co-domains, $\ChartMap_r(\hat{\Domain}_r)\cap\ChartMap_s(\hat{\Domain}_s)\neq\emptyset$, are $C^k$-diffeomorphisms. For a local parametrization $\ChartMap = \ChartMap_r$ we denote by $\bm{x}=\ChartMap(\hat{\bm{x}})\in\Surface$ the surface coordinate associated to the local coordinate $\hat{\bm{x}}=(\hat{x}^1,\hat{x}^2)\in\hat{\Domain}=\hat{\Domain}_r$, by \[ [\bm{J}(\bm{x})]_{ij} \colonequals [\hat{\nabla}\ChartMap(\hat{\bm{x}})]_{ij}=\frac{\partial \mu^i}{\partial\hat{x}^j}(\hat{\bm{x}}),\quad i=1,2,3,\; j=1,2, \] the Jacobian of the parametrization, and by $\Metric=\bm{J}^T\bm{J}$ the surface metric tensor. The columns of $\bm{J}$ are tangential to $\Surface$, i.e., $\Basis_j(\bm{x})\colonequals\bm{J}(\bm{x})_{:,j}\in\TangentBundle_{\bm{x}}\Surface$ for $j=1,2$, with $\bm{x}=\ChartMap(\hat{\bm{x}})$. This gives rise to the definition of the normal direction $\bm{m}=\Basis_1\times\Basis_2$ and the corresponding unit normal field $\bm{n}=\bm{m}/\Norm{\bm{m}}$. A regular $C^2$-surface has an invertible metric. This allows us to transform derivatives from the parameter domain $\hat{\Domain}$ into surface derivatives, cf. \Cref{secintri}. The Weingarten map is given by \[ [\bm{H}(\bm{x})]_{ij} \colonequals -J_{jk}\,g^{kl}\frac{\partial n^i}{\partial\hat{x}^l}(\hat{\bm{x}})\,, \] where $\Metric^{-1}=[g^{ij}]$ is the inverse of the metric tensor. Here and in the remainder we use the Einstein summation convention. \subsubsection{Level set characterization of the surface} An implicit representation of $\Surface$ can be based on a $C^k$-mapping $\phi\colon\Omega\to\R$ with $\Surface\subset\Omega\subset\R^3$ a three-dimensional domain containing the surface. We assume that $\nabla\phi\neq 0$ on $\Surface$ and represent the surface as the zero-level set of $\phi$: \[ \Surface = \Set{\bm{x}\in\Omega\mid\phi(\bm{x})=0}\,. \] In a sufficiently small $\delta$-neighborhood $U_\delta(\Surface)\subset\R^3$ of $\Surface$ we can define the normal direction $\Ext{\bm{m}}(\bm{x})=\nabla\phi(\bm{x})$, $\bm{x} \in U_\delta(\Surface)$, and normal field $\Ext{\bm{n}}_\phi=\Ext{\bm{m}}/\Norm{\Ext{\bm{m}}}$ with $\Ext{\bm{n}}_\phi\vert_{\Surface}=\bm{n}$. Here and in the remainder we use an overline notation, e.g., $\Ext{\bm{m}}$, to denote quantities that are defined not only on $\Surface$ but in a (small) three-dimensional neighborhood of $\Surface$. A natural choice for $\phi$ would be the signed-distance function $\rho$ to $\Surface$, with $\Abs{\rho(\bm{x})}=\operatorname{dist}(\bm{x},\Surface)$ and the property $\Norm{\nabla\rho}\equiv 1$. Let $\delta>0$ be sufficiently small so that the closest-point projection $\pi\colon U_\delta(\Surface)\to\Surface$ is uniquely defined by \begin{equation}\label{eq:closest-point-projection} \pi(\bm{x}) \colonequals \bm{x} - \rho(\bm{x})\bm{n}(\pi(\bm{x})),\quad\bm{x}\in U_\delta(\Surface)\,. \end{equation} Using the closest-point projection the signed-distance function can be determined based on $\rho(\bm{x}) = (\bm{x} - \pi(\bm{x}))\cdot\bm{n}(\pi(\bm{x}))$.
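\begin{remark} The relations above can be checked directly in a simple setting. The following minimal Python sketch (an illustration under simplifying assumptions, not part of any of the methods discussed below) uses the unit sphere, for which the signed-distance function and the closest-point projection are known in closed form:
\begin{verbatim}
import numpy as np

# Unit sphere: phi(x) = |x| - 1 equals the signed distance rho(x).
x = np.array([0.3, -1.2, 0.8])       # a point in a neighborhood of the surface
rho = np.linalg.norm(x) - 1.0        # signed distance rho(x)
pi_x = x / np.linalg.norm(x)         # closest-point projection pi(x)
n = pi_x                             # unit normal n(pi(x)) of the unit sphere

# consistency with rho(x) = (x - pi(x)) . n(pi(x)):
assert np.isclose(rho, np.dot(x - pi_x, n))
\end{verbatim}
\end{remark}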
The normal field $\Ext{\bm{n}}(\bm{x})=\nabla\rho(\bm{x})$, $\bm{x}\in U_\delta(\Surface)$, is an extension of the surface normal that is constant in normal direction, i.e., $\Ext{\bm{n}}(\bm{x})=\bm{n}(\pi(\bm{x}))$. An alternative representation of the extended Weingarten map is given by $\Ext{\bm{H}}(\bm{x})=-\nabla\Ext{\bm{n}}(\bm{x})=-\nabla^2\rho(\bm{x})$, for $\bm{x} \in U_\delta(\Surface)$ with $\Ext{\bm{H}}\vert_{\Surface}=\bm{H}$. \subsection{Representation of tensor fields and gradient operators} \subsubsection{Intrinsic representation}% \label{secintri} Starting from the definition of a parameterized surface, one can represent tensor fields and define derivatives making use of local coordinates in a reference domain $\hat{\Domain}$. One possibility for the choice of the local coordinates is to consider the tangent vectors $\Basis_1,\Basis_2$, naturally associated with the parametrization $\ChartMap$, as reference frame for the tangent plane $\TangentBundle_{\bm{x}}\Surface$. We can then describe a function on $\Surface$ in the local coordinates and define the (intrinsic) surface gradients. Let $\bm{u}^{(0)}\colon\Surface\to\R$ be a scalar differentiable function on $\Surface$, $\bm{u}^{(1)}\colon\Surface \to \TangentBundle\Surface$ a tangent vector field given by $\bm{u}^{(1)}=u^1\Basis_{1} + u^2\Basis_{2}=u^i\Basis_{i}$, and $\bm{u}^{(2)}\colon\Surface\to T^2 \Surface $ a tangent tensor field given by $\bm{u}^{(2)}=u^{ij} \Basis_{i} \otimes \Basis_{j}$. At a point $\bm{x}\in\Surface$, the tensors of the contravariant components are denoted using underline notation, i.e., $\Emb{\bm{u}}^{(0)}=u\in\R$, $\Emb{\bm{u}}^{(1)}=[u^i] \in \R^2$, and $\Emb{\bm{u}}^{(2)}=[u^{ij}]\in \R^{2 \times 2}$. The intrinsic gradient of an $n$-tensor field is an $(n+1)$-tensor field and is defined by the following expressions in terms of the contravariant components: \begin{align} \left[\bm{\nabla}_{\Surface}\bm{u}^{(0)}\right]^i &= g^{il} \D{u}{\hat{x}^l}, \label{defG1} \\ % \left[\bm{\nabla}_{\Surface}\bm{u}^{(1)}\right]^{ij} &= g^{il}\bm{\nabla}_l u^{j} = g^{il}\left(\D{u^{j}}{\hat{x}^l} + \ChristSymb{lk}{j}u^{k}\right), \label{defG2} \\ % \left[\bm{\nabla}_{\Surface}\bm{u}^{(2)}\right]^{ijk} &= g^{il}\bm{\nabla}_l u^{jk} = g^{il}\left(\D{u^{jk}}{\hat{x}^l} + \ChristSymb{lh}{j}u^{hk} + \ChristSymb{lh}{k}u^{jh}\right), \label{defG3} \end{align} where $\ChristSymb{ij}{k}=g^{kl}\Basis_l \cdot \tfrac{\partial \Basis_i}{\partial \hat x^j}$ denote the Christoffel symbols, for $i,j,k=1,2$. Note that these contain curvature information. Scalar products of tensors are explicitly written in terms of the metric $\Metric$, e.g., \[ \scalarProd{\TT{u}^{(1)}}{\TT{v}^{(1)}}_{\Metric} \colonequals u^i\,v_i = g_{ij}\,u^i\,v^{j}\,, \quad \scalarProd{\TT{u}^{(2)}}{\TT{v}^{(2)}}_{\Metric} \colonequals u^{ij}\,v_{ij} = g_{il}\,g_{jm}\,u^{ij}\,v^{lm}\,. \] To simplify the computation and to increase numerical stability, we also consider an orthogonal reference frame as basis for the tangent plane $\TangentBundle_{\bm{x}}\Surface$. For this we orthogonalize the vector $\Basis_{2}$ with respect to $\Basis_{1}$. This orthogonalization yields the orthogonal frame $\BasisOrth_{1},\BasisOrth_{2}$ on $\TangentBundle_{\bm{x}}\Surface$, associated to the local coordinates $\hat{\bm{s}}=(\hat{s}^1,\hat{s}^2)$.
The corresponding metric tensor is given by \[ \MetricOrth \colonequals \left( \begin{array}{cc} \Norm{\BasisOrth_{1}}^2 & 0 \\ 0 &\Norm{\BasisOrth_{2}}^2 \\ \end{array} \right) \equalscolon \left( \begin{array}{cc} \MetricOrthCoeff{1}^2& 0 \\ 0 &\MetricOrthCoeff{2}^2 \\ \end{array} \right)\,. \] We can now write explicit expressions for the Christoffel symbols in the basis $\{\BasisOrth_{1},\BasisOrth_{2}\}$: \begin{align}\label{eq:crhistsymb-orth} \ChristSymb{ik}{k}=\ChristSymb{ki}{k}&=\frac{1}{\MetricOrthCoeff{k}}\D{\MetricOrthCoeff{k}}{\hat{s}^i}\,, & i,k&=1,2\,, \notag\\ \ChristSymb{ii}{k}&=-\frac{\MetricOrthCoeff{i}}{\MetricOrthCoeff{k}^2}\D{\MetricOrthCoeff{i}}{\hat{s}^k}\,, & i&\ne k\,,\\ \ChristSymb{ij}{k}&=0\,, & i\ne j&\ne k\,, \notag \end{align} which are simplified due to the orthogonality property. Also the scalar products simplify with the metric $\MetricOrth$, e.g., $\scalarProd{\TT{u}^{(2)}}{\TT{v}^{(2)}}_{\MetricOrth}= \tilde{g}_{il}\,\tilde{g}_{jm}\,u^{ij}\,v^{lm}= \MetricOrthCoeff{i}^2 \MetricOrthCoeff{j}^2\, u^{ij}v^{ij}$. \subsubsection{Representation based on embedding}% \label{sec:Embedding} An alternative convenient representation, to be used in SFEM, TraceFEM, and DI, follows from considering the $n$-tensor fields as general mappings into the embedding space, e.g., \begin{equation}\label{eq:embedded-tensor-fields} {\bm{u}}^{(0)} \colon \Surface \to \R,\quad {\bm{u}}^{(1)} \colon \Surface \to \R^3,\quad {\bm{u}}^{(2)} \colon \Surface \to L(\R^{3}, \R^{3})\,. \end{equation} If in $\R^3$ we use the standard basis, then the component vector $\TT{u}^{(n)}$, $n=1,2$, can be identified with the corresponding fields $\bm{u}^{(n)}$. To simplify the notation, we use this identification and omit the underline in the notation of the tensor fields if the meaning is clear from the context. This embedded representation gives rise to a natural inner product defined in the embedding, i.e., for $n$-tensors $\TT{u}^{(n)}\colonequals[u^{j_1\ldots j_n}]$ and $\TT{v}^{(n)}\colonequals[v^{j_1\ldots j_n}]$, we have \[ \scalarProd{\bm{u}^{(n)}}{\bm{v}^{(n)}} \colonequals u^{j_1\ldots j_n}\,v_{j_1\ldots j_n}\,, \] where raising and lowering of indices can be done using the Euclidean metric $\delta_{ij}$. Note that since the tensor fields are represented in the embedding space, the indices are in the range $j_k\in\{1,2,3\}$. Corresponding to the unit normal field $\bm{n}$ we introduce the tangential projection $\P=\bm{I}-\bm{n}\otimes\bm{n}$ and denote in the following a general tensor projection operator for $n$-tensors $\TT{u}^{(n)}=[u^{j_1\ldots j_n}]$ as $\PP$, defined by componentwise projection, \begin{equation}\label{eq:tensor-projection} [\PP\bm{u}^{(n)}]\indices{^{i_1\ldots i_n}} \colonequals \tensor{P}{^{i_1}_{j_1}}\cdots\tensor{P}{^{i_n}_{j_n}} u\indices{^{j_1\ldots j_n}}\,. \end{equation} The (total) covariant derivative of tangential tensor fields $\bm{u}^{(n)}$ with embedded representation $\Emb{\bm{u}}^{(n)}$ can be defined as $\bm{\nabla}_\Surface\bm{u}^{(n)} \colonequals \PP\nabla\Emb{\Ext{\bm{u}}}^{(n)}$. Recall that the overline symbol denotes a (smooth) extension of a function to a surface neighborhood, whereas the underline symbol highlights that the Euclidean gradient $\nabla$ is applied componentwise.
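\begin{remark} The componentwise projection \eqref{eq:tensor-projection} is straightforward to realize numerically. The following minimal Python sketch (for illustration only; the chosen normal is a hypothetical example value) applies it to a $2$-tensor and checks that all normal components of the result vanish:
\begin{verbatim}
import numpy as np

n = np.array([0.0, 0.0, 1.0])             # unit normal at a point
P = np.eye(3) - np.outer(n, n)            # tangential projection P = I - n (x) n

u = np.random.rand(3, 3)                  # a 2-tensor in the embedding space
Pu = np.einsum('ij,kl,jl->ik', P, P, u)   # componentwise projection

# the projected tensor has no normal components:
assert np.allclose(Pu @ n, 0.0) and np.allclose(n @ Pu, 0.0)
\end{verbatim}
\end{remark}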
Written out for $n=0,1,2$, the definition of the covariant derivative reads: \begin{align} \bm{\nabla}_\Surface \bm{u}^{(0)} &= \P\nabla \Emb{\Ext{\bm{u}}}^{(0)}, && \label{defGa}\\ \bm{\nabla}_\Surface \bm{u}^{(1)} &= \P\nabla \Emb{\Ext{\bm{u}}}^{(1)}\P, && \label{defGb} \\ \left[\bm{\nabla}_\Surface\bm{u}^{(2)}\right]\indices{^{i_1 i_2 i_{3}}} &= \tensor{P}{^{i_1}_{j_1}}\tensor{P}{^{i_2}_{j_2}} \tensor{P}{^{i_3}_{j_3}}\D{\Ext{u}^{j_1 j_2}}{x^{j_{3}}}\,. && \label{defGc} \end{align} The definitions in \eqref{defGa} and \eqref{defG1} yield the same surface gradient operator for scalar functions. The definitions in \eqref{defGb} and \eqref{defGc} are also used for vector and tensor fields that are not in the tangent bundle. For the case that these fields are tangential, these definitions yield the same gradient operators as the ones defined in \eqref{defG2} and \eqref{defG3}. The divergence of a tangential $n$-tensor, $n\geq 1$, is given by \begin{equation} \label{defdiv} \left[ \operatorname{div}_{\Surface} {\bm{u}}^{(n)}\right]^{i_1\ldots i_{n-1}} =\tensor{P}{^{i_1}_{j_1}}\cdots\tensor{P}{^{i_{n-1}}_{j_{n-1}}}\tensor{P}{_{i_{n} j_{n}}} \frac{\partial \Ext{u}^{j_1 \ldots j_n}}{\partial x^{i_{n}}}\,. \end{equation} \subsection{Problem definition}% \label{sec:ProblemDefinition} We study the following model problem: Find tangential $n$-tensor fields $\bm{u}$ that solve \begin{equation}\label{eq:strong-formulation} \partial_t\bm{u} - \ensuremath{\bm{\Delta}}_\Surface\bm{u} = 0\quad\text{ on }\Surface\,, \end{equation} subject to appropriate ``no-flux'' boundary conditions and initial conditions. Note that for $n=0$ the tangential condition is void. For $n \geq 1$, the $\ensuremath{\bm{\Delta}}_\Surface$ operator is the (negative) connection-Laplacian, the natural extension of the Laplace-Beltrami operator to $n$-tensor fields. It can be written as $\ensuremath{\bm{\Delta}}_\Surface\bm{u} = \operatorname{div}_{\Surface}\bm{\nabla}_\Surface\bm{u}$, with $\bm{\nabla}_\Surface$ the covariant gradient operator defined above and $\operatorname{div}_{\Surface}$ the tensor surface divergence as in \eqref{defdiv}. For $n=0$ the PDE \eqref{eq:strong-formulation} corresponds to the scalar heat diffusion problem on a surface, which shares several properties with the corresponding equation in flat space. For instance, it holds that $\Avg{\bm{u}}(t) = \Avg{\bm{u}^0}$ and $\bm{u}(t,\bm{x}) \to \Avg{\bm{u}^0}$ for $t \to \infty$, with the average $\Avg{\bm{u}}(t) = \frac{1}{\text{area}(\Surface)} \int_{\Surface} \bm{u}(t, \bm{x}) \, d \bm{x}$ of $\bm{u}(t,\bm{x})$. It also holds that if $\bm{u}^0 \geq 0$ on $\Surface$ and $\bm{u}^0 \not\equiv 0$, then $\bm{u}(t) > 0$ on $\Surface$ for $t > 0$, see, e.g., \cite{PhDBuergerRegensburg}. There are also results available that explain certain influences of surface curvature on the solution of the scalar heat equation problem. We outline a few such results. Let the initial condition $\bm{u}^0 = \delta_{\bm{p}}$ be the Dirac delta function centered at some $\bm{p}\in\Surface$. In \cite{Varadhan1967Behavior} it is shown that for this initial condition the corresponding solution satisfies \[ \lim_{t \to 0} \big[-4 t \log \bm{u}(t, \bm{x}) \big] = d_{\Surface}^{\,2}(\bm{x},\bm{p})\,, \] with $d_{\Surface}(\cdot, \cdot)$ the geodesic distance on $\Surface$. This property is used to approximate geodesic distances on curved surfaces in computer science \cite{CW2013Geodesics}.
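\begin{remark} In the flat case the heat kernel is known explicitly and the above limit can be verified directly. The following minimal Python sketch (flat plane, for illustration only; the squared distance value is arbitrary) shows the convergence of $-4t\log k_t$ to $d^2$ as $t \to 0$:
\begin{verbatim}
import numpy as np

# Flat-plane heat kernel: k_t(x, p) = exp(-|x-p|^2 / (4t)) / (4*pi*t).
d2 = 0.58                                   # squared distance |x - p|^2
for t in [1e-1, 1e-2, 1e-3, 1e-4]:
    log_k = -d2 / (4.0 * t) - np.log(4.0 * np.pi * t)  # log of the kernel
    print(t, -4.0 * t * log_k)              # converges to d2 = 0.58
\end{verbatim}
\end{remark}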
In \cite{Faraudo2002Diffusion} the scalar heat equation is considered and rewritten in terms of geodesic polar coordinates. This allows one to separate the diffusion from the geometric influence, with the latter completely determined by the Gaussian curvature $\GaussCurvature$. We now outline a result that will be used to explain a certain phenomenon observed in the numerical experiments in \Cref{sec:NumericalExperiments}. The solution of the scalar heat equation with initial condition $\delta_{\bm{p}}$ can be expressed as \begin{equation}\label{eq:kernel-folding} \bm{u}(t, \bm{x}) = \int_{\Surface} k_t(\bm{x},\bm{y}) \delta_{\bm{p}}(\bm{y}) \; d \bm{y} \end{equation} with heat kernel $k_t(\cdot,\cdot)$. In \cite{SOG2009Concise,MS1967Curvature} it is shown that \begin{equation}\label{eq:heat-kernel} k_t(\bm{x},\bm{x}) = \frac{1}{4 \pi t} \big(1 + \frac{1}{6} \GaussCurvature(\bm{x})\, t + \mathcal{O}(t^2) \big)\,. \end{equation} This result motivates general statements like ``heat tends to diffuse slower at points with positive curvature, and faster at points with negative curvature''. Analytical solutions of the heat equation have also been derived for several special surfaces \cite{Faraudo2002Diffusion}. For $n = 1$, i.e., the surface vector heat equation, some of these results can be generalized. In particular, it can be shown, again by considering the associated heat kernel, that for $t \to 0$ it behaves like parallel transport along geodesics, combined with a decay in magnitude that is identical to the decay of the scalar heat kernel \cite{SS2019Vector}. This property is crucial for various applications in computer graphics \cite{KCPS2013Globally,SS2019Vector} and data science \cite{SW2012Vector}, where it is also extended to tensor fields with $n > 1$. As the tangential tensor-valued heat equation can also be considered as the $L^2$-gradient flow for the tensor Dirichlet energy $\int_{\Surface} \Norm{\bm{\nabla}_\Surface \bm{u} }^2 \,\textrm{d}\bm{x}$, its solution tends to the minimizer of this energy functional (the ``smoothest possible'' tensor field) for $t\to\infty$. \section{Finite element discretization schemes}% \label{sec:FiniteElementDiscretizationSchemes} To be able to apply a finite element discretization method we consider the $n$-tensor diffusion problem in a variational setting, where we use standard notation for Bochner spaces: \begin{problem}\label{prob:Problem1} Find $\bm{u}\in C^1(0,T;\bm{H}^1(\Surface, \TensorBundle^n\Surface))$ such that % \begin{equation}\label{eq:variational-formulation} \Inner{\partial_t\bm{u}(t)}{\bm{v}}_{\Surface} + \Inner{\bm{\nabla}_{\Surface}\bm{u}(t)}{\bm{\nabla}_{\Surface}\bm{v}}_{\Surface} = 0\,,\quad\text{for all}~ \bm{v}\in\bm{H}^1(\Surface, \TensorBundle^n\Surface)\,, \end{equation} % for $t\in(0,T]$, subject to $\bm{u}(0) = \bm{u}^0$. Here $\Inner{\cdot}{\cdot}_{\Surface}$ denotes the (tensor) $L^2$-scalar product. \end{problem} The ISFEM will be directly based on the variational formulation given in \Cref{prob:Problem1}. The other three methods, SFEM, TraceFEM, and DI, use the embedding of the surface in $\R^3$ and gradient representations as presented in \Cref{sec:Embedding}. For these methods applied to the $n$-tensor problem with $n\geq 1$ it is natural to allow (small) \emph{non}tangential solution components. The variational formulation given in \Cref{prob:Problem1} is not a suitable starting point for such a finite element method, since it uses the range space $\TensorBundle^n\Surface$. We now introduce, for $n \geq 1$, an augmented variational formulation with range space $\TensorBundle^n\R^3 \simeq\R^{3^n}$.
It uses a term $\Inner{\QQ\bm{u}(t)}{\QQ\bm{v}}_{\Surface}$, with $\QQ=\textrm{Id}-\PP$ the normal projection operator, which is scaled by a penalty parameter $\omega > 0$. Note that this term vanishes for tangential functions. The augmented variational formulation reads as follows: \begin{problem}\label{prob:Problem2} Assume $n \geq 1$. Find $\bm{u}\in C^1(0, T; \bm{H}^1(\Surface, \TensorBundle^n\R^3))$ such that % \begin{equation}\label{eq:variational-formulation-embedded} \Inner{\partial_t\PP\bm{u}(t)}{\PP\bm{v}}_{\Surface} + \Inner{\bm{\nabla}_{\Surface}\PP\bm{u}(t)}{\bm{\nabla}_{\Surface}\PP\bm{v}}_{\Surface} + \omega\,\Inner{\QQ\bm{u}(t)}{\QQ\bm{v}}_{\Surface} = 0 \end{equation} % for all $\bm{v}\in\bm{H}^1(\Surface, \TensorBundle^n\R^3)$ and $t\in(0,T]$, with initial condition $\bm{u}(0)=\bm{u}^0$. \end{problem} In \eqref{eq:variational-formulation-embedded} first derivatives appear only for the tangential components $\PP\bm{u}$, but not for the normal components $\QQ\bm{u}$. It is therefore natural to replace $\bm{H}^1(\Surface, \TensorBundle^n\R^3)$ by a larger anisotropic $\bm{H}^1$ space in which weak differentiability is required only in tangential direction. To simplify the presentation we do not elaborate this. \Cref{prob:Problem2} is consistent with \Cref{prob:Problem1} in the following sense. Let $\bm{u}_1$ and $\bm{u}_2$ be two solutions of \Cref{prob:Problem2} and $\bm{w}\colonequals\bm{u}_1-\bm{u}_2$. We then have $\bm{w}(0)=0$ and from \eqref{eq:variational-formulation-embedded} we obtain $\partial_t\|\PP\bm{w}\|_{\Surface}^2 \leq 0$ for $t \in [0,T]$. Hence $\PP\bm{w}(t)=0$ for $t \in [0,T]$. Using this in \eqref{eq:variational-formulation-embedded} it follows that $\QQ\bm{w}(t) =0$ and thus $\bm{w}(t) =0$ for $t \in [0,T]$. We conclude that we have uniqueness of a solution of \Cref{prob:Problem2}. It is easy to verify that a solution of \Cref{prob:Problem1} is also a solution of \Cref{prob:Problem2}. We see that by adding the consistent penalty term, with $\omega \geq 0$ arbitrary, we do not change the continuous solution. In general this ``exact'' consistency property does not hold after discretization and one then has to choose an appropriate penalty parameter value to control the consistency error, cf. Sections \ref{sec:sfem} and \ref{sec:tracefem}. In SFEM and TraceFEM, introduced below, a discrete projection operator $\PP_h$ is used that in general is discontinuous across element boundaries. Due to this, for a vector or tensor valued finite element function $\bm{u}_h$ the projected function $\PP_h\bm{u}_h$ does not have global $H^1$-smoothness and applying a discrete gradient to it is a delicate issue. To circumvent this, we apply the product rule to the term $\bm{\nabla}_{\Surface}\PP\bm{u}$ in \eqref{eq:variational-formulation-embedded} as follows. We have $\bm{\nabla}_{\Surface}\PP\bm{u}=\bm{\nabla}_{\Surface}\bm{u}- \bm{\nabla}_{\Surface}\QQ\bm{u}$. For a vector field $\bm{u}= \bm{u}^{(1)}$ one easily checks that $\bm{\nabla}_{\Surface}\QQ\bm{u} =\bm{\nabla}_{\Surface}( (\bm{n}\otimes\bm{n}) \bm{u} )= - \scalarProd{\bm{u}}{\bm{n}}\bm{H}$ holds (a symbolic verification for the unit sphere is sketched in the remark below). Hence, we obtain $\bm{\nabla}_{\Surface}\PP\bm{u}=\bm{\nabla}_{\Surface}\bm{u} +\scalarProd{\bm{u}}{\bm{n}}\bm{H}$ and the representation on the right-hand side is suitable for a finite element approximation. We introduce the notation $G(\bm{u})\colonequals\scalarProd{\bm{u}}{\bm{n}}\bm{H}$. Note that $G$ depends on the extended Weingarten map.
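\begin{remark} The identity $\bm{\nabla}_{\Surface}\QQ\bm{u} = -\scalarProd{\bm{u}}{\bm{n}}\bm{H}$ can be checked symbolically for the unit sphere. The following minimal \texttt{sympy} sketch (illustration only; the chosen vector field is an arbitrary smooth example) uses the extended quantities $\Ext{\bm{n}}$ and $\Ext{\bm{H}}=-\nabla\Ext{\bm{n}}$ and prints the zero matrix:
\begin{verbatim}
import sympy as sp

x, y, z = sp.symbols('x y z', real=True, positive=True)
r = sp.sqrt(x**2 + y**2 + z**2)

n = sp.Matrix([x, y, z]) / r           # extended unit normal of the sphere
P = sp.eye(3) - n * n.T                # tangential projection P = I - n (x) n
H = -n.jacobian([x, y, z])             # extended Weingarten map H = -grad n

u = sp.Matrix([y*z, x*z, x*y])         # an arbitrary smooth vector field
Qu = (n * n.T) * u                     # normal component Q u = (n (x) n) u

lhs = P * Qu.jacobian([x, y, z]) * P   # surface gradient of Q u, cf. (defGb)
rhs = -u.dot(n) * H                    # claimed identity -<u, n> H
print(sp.simplify(lhs - rhs))          # zero matrix
\end{verbatim}
\end{remark}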
Similar results hold for $n \geq 2$; in tensor notation we obtain the following identities: \begin{align}\label{eq:gradQu} [\bm{\nabla}_{\Surface}\QQ {{\bm{u}}}^{(1)}]\indices{^{i_1 i_2}}= [G(\bm{u})]\indices{^{i_1 i_2}} &\colonequals -H\indices{^{i_1 i_2}}\,u^k\,n\indices{_k}, && n=1\,, \\ [\bm{\nabla}_{\Surface}\QQ {{\bm{u}}}^{(2)}]\indices{^{i_1 i_2 i_3}}=[G(\bm{u}) ]\indices{^{i_1 i_2 i_3}} &\colonequals -H\indices{^{i_1 i_3}}\,P\indices{^{i_2}_{j}}u\indices{^{k j}}\,n\indices{_k} -H\indices{^{i_2 i_3}}\,P\indices{^{i_1}_{j}}u\indices{^{j k}}\,n\indices{_k}, && n=2\,. \notag \end{align} Thus we have the following alternative representation of the second term in \eqref{eq:variational-formulation-embedded}, which will be the one used in SFEM and TraceFEM below: \begin{align}\label{formeq} \Inner{\bm{\nabla}_{\Surface}\PP\bm{u}(t)}{\bm{\nabla}_{\Surface}\PP\bm{v}}_{\Surface} &= \Inner{\bm{\nabla}_{\Surface}\bm{u}(t) + G(\bm{u}(t)) }{\bm{\nabla}_{\Surface}\bm{v} + G(\bm{v})}_{\Surface}\,. \end{align} In the following subsections we briefly treat four known finite element discretization methods and apply them to the spatial discretization of the $n$-tensor heat problem. We combine these spatial discretizations with a standard BDF-2 time discretization (a minimal sketch is given at the end of \Cref{sec:sfem}). We start with ISFEM, which is the ``most conforming'' method in the sense that it is based on the variational formulation in \Cref{prob:Problem1}. This method uses intrinsic gradient representations. The methods SFEM, TraceFEM, and DI are based on the formulation in \Cref{prob:Problem2} and make use of gradient representations in the embedding space. In the DI method an additional approximation of the inner products is considered by ``extending'' the PDE in a domain $\Omega \subset \R^3$ that contains the surface $\Surface$ and numerically restricting the integrals to $\Surface$ using a smeared-out Dirac-delta function. This method is not consistent in the sense that the solution of the extended PDE, restricted to $\Surface$, does \emph{not} coincide with the solution of Problems 1 and 2. In this sense, the DI approach is the ``least conforming'' one. In the presentation of the methods we restrict to the lowest order finite element case. In remarks we will briefly comment on extensions to higher order finite elements. In \Cref{sec:Comparison} we discuss and compare the four methods and in particular address the issues a)--c) formulated in the Introduction. \subsection{Intrinsic Surface Finite Element Method (ISFEM)}% \label{sec:isfem} The ISFEM has so far only been introduced for scalar-valued problems in \cite{BFP2021Intrinsic}. We briefly review the scalar ISFEM setting and extend it to the case of a vector-valued problem. An analogous extension to tensor fields is possible, but has not yet been addressed. The main idea of ISFEM is to consider the formulation as given in \Cref{prob:Problem1} and discretize it in local coordinates, with the intrinsic differential operators containing all the geometric information. We will consider the local coordinates $\hat{\bm{s}}=(\hat{s}^1,\hat{s}^2)$ with respect to the orthogonal tangent vectors $\BasisOrth_{1},\BasisOrth_{2}$, and the intrinsic differential operators defined employing~\cref{eq:crhistsymb-orth}. Let $\Tri{\Surface}$ be a (curved) exact surface triangulation, formed by a set of non-intersecting (curved) surface triangles with vertices on $\Surface$, such that $\Surface = \bigcup\,\Set{\Elem\in\Tri{\Surface}}$.
We will introduce \emph{conforming} subspaces $\VVh^{(n)}_{\Surface}$ such that the relation $\VVh^{(n)}_{\Surface} \subset \bm{H}^1(\Surface, \TensorBundle^n\Surface)$ holds, cf. \Cref{prob:Problem1}. These spaces are used in an approximate Galerkin discretization of \Cref{prob:Problem1}, in the sense that the surface integrals $\Inner{\cdot}{\cdot}_\Elem$ are approximated using a quadrature rule. We denote such an approximation of the surface integral $\Inner{\bm{u}}{\bm{v}}_{\Elem}$ by $\Inner{\bm{u}}{\bm{v}}_{h}\colonequals\sum_q w_q \scalarProd{\bm{u}(\bm{x}_q)}{\bm{v}(\bm{x}_q)}_{\MetricOrth(\bm{x}_q)} \sqrt{\Abs{\MetricOrth(\bm{x}_q)}}$, with $\bm{x}_q\in\Elem$ the quadrature points and $w_q\in\R$ the associated quadrature weights. Concerning practical computation, the key point is that geometric information is needed only at the quadrature points, either exactly or in approximated form. \begin{remark} In the considered benchmark problem, see \Cref{sec:NumericalExperiments}, we use a Gauss quadrature rule of order three. In this case, we make use of the knowledge of the parametrization of the surface to assign geometric information at the quadrature points. \end{remark} We first consider the scalar case $n=0$. The function space $\VVh^{(0)}_{\Surface}=\operatorname{span}(\{\psi_{l}\})$ is spanned by continuous basis functions $\psi_l\colon\Surface\to\R$ that are obtained by formally gluing together localized functions $\psi_l^{\Elem}\colon\Elem\to\R$ for $\Elem\in\Tri{\Surface}$. For each element $\Elem\in\Tri{\Surface}$, we consider the associated element $\hat{\Elem}=\ChartMap^{-1}(\Elem)$ in the reference domain and we define the classical linear Lagrange nodal basis functions $\hat{\psi}_l^{\hat{\Elem}}(\hat{\bm{x}})$ in reference local coordinates $\hat{\bm{x}}\in\hat{\Elem}$. Denoting by $\bm{x}=\ChartMap(\hat{\bm{x}})\in\Elem$ the corresponding associated surface coordinates in $\Elem$, the surface basis functions are simply lifted via this mapping, i.e., $\psi_l^{\Elem}(\bm{x}) \colonequals \hat{\psi}_l^{\hat{\Elem}}(\hat{\bm{x}})$. In order to compute gradients in the tangential basis representation $\{\BasisOrth_{1},\BasisOrth_{2}\}$ associated to coordinates $\hat{\bm{s}}$, instead of the natural tangential basis $\{\Basis_1,\Basis_2\}$ associated to $\hat{\bm{x}}$, we need to perform a coordinate transformation, i.e., \begin{equation*} \nabla_{\Surface}\psi_l^{\Elem}(\bm{x}) \colonequals \MetricOrth^{-1}\bm{W}\hat{\nabla}\hat{\psi}_l^{\hat{\Elem}}(\hat{\bm{x}})\,, \end{equation*} where $\bm{W}=\tilde{\bm{J}}^{+}\bm{J}$ is the Jacobian of the change of coordinates between $\hat{\bm{x}}$ and $\hat{\bm{s}}$, with $\tilde{\bm{J}}=[\BasisOrth_{1},\BasisOrth_{2}]$ and $\bm{J}=[\Basis_1,\Basis_2]$. \begin{remark} In the case of a surface obtained by the graph of a scalar function, e.g., $\ChartMap(\hat{\bm{x}})=(\hat{x}^1,\hat{x}^2,f(\hat{x}^1,\hat{x}^2))^T=\bm{x}\in\Surface$, the matrix $\bm{W}$ is directly obtained from the $2\times 2$ block of $\tilde{\bm{J}}^T$ corresponding to the independent variables. \end{remark} The discrete scalar functions $\bm{u}_h^{(0)}\in\VVh^{(0)}_{\Surface}$ can be expanded in terms of the basis functions as $\bm{u}_h^{(0)}(\bm{x})=\sum_l u_l \psi_{l}(\bm{x})$, with $u_l$ the scalar coefficient associated to the basis function $\psi_{l}$.
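\begin{remark} To make the role of the quadrature explicit, the following minimal Python sketch (illustration only; the metric callback returning $\sqrt{\Abs{\MetricOrth}}=1$ is a placeholder for a flat element) assembles a local mass matrix with the edge-midpoint rule on the reference triangle:
\begin{verbatim}
import numpy as np

# Edge-midpoint quadrature on the reference triangle (exact for quadratics).
xq = np.array([[0.5, 0.0], [0.5, 0.5], [0.0, 0.5]])
wq = np.full(3, 1.0 / 6.0)

def hat(p):
    # linear Lagrange basis functions on the reference triangle
    return np.array([1.0 - p[0] - p[1], p[0], p[1]])

def sqrt_det_g(p):
    # placeholder: geometric information sqrt(|g|) at a quadrature point
    return 1.0

# local mass matrix M_ij = sum_q w_q psi_i(x_q) psi_j(x_q) sqrt(|g(x_q)|)
M = np.zeros((3, 3))
for p, w in zip(xq, wq):
    psi = hat(p)
    M += w * np.outer(psi, psi) * sqrt_det_g(p)
\end{verbatim}
Only the callback for the geometric information at the quadrature points depends on the surface; the rest of the assembly is identical to the flat case.
\end{remark}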
For discrete vector-valued functions $\bm{u}_h^{(1)}$ we exploit the orthogonal covariant reference frame $\{\BasisOrth_{1},\BasisOrth_{2}\}$ and represent the solution in contravariant components, i.e., $\Emb{\bm{u}}^{(1)}=\left[u^{i}\right]$, for $i=1,2$. Each component $u^i$ can be approximated by the discrete functions $u_h^i=\sum_l u_l^i\,\psi_l$, with $\{\psi_l\}_l$ the set of scalar basis functions of $\VVh^{(0)}_{\Surface}$. Thus we get \begin{equation*} \bm{u}^{(1)}_h(t) = \sum_l \big( u_l^1(t)\,\psi_l \BasisOrth_{1} + u_l^2(t)\,\psi_l\BasisOrth_{2} \big)\,, \end{equation*} which gives rise to the definition of a discrete vector function space: \begin{equation*} \VVh^{(1)}_{\Surface} \colonequals \Set{\bm{v}_h=v_h^1\,\BasisOrth_{1} + v_h^2\,\BasisOrth_{2} \mid v_h^1,v_h^2\in\VVh^{(0)}_{\Surface}}\,. \end{equation*} The same idea can be applied to define a discrete tensor function space $\VVh^{(n)}_{\Surface}$. By applying the definitions of gradients and scalar products in~\cref{secintri}, with respect to the orthogonal reference frame $\{\BasisOrth_{1},\BasisOrth_{2}\}$, and the quadrature rule $\Inner{\cdot}{\cdot}_{h}$, we obtain the semi-discrete ISFEM discretization of \Cref{prob:Problem1}: \begin{problem} Find $\bm{u}_h(t)=\bm{u}^{(n)}_h(t)\in\VVh^{(n)}_{\Surface}$ such that % \begin{equation}\label{eq:isfem-semi-discrete} \Inner{\partial_t\bm{u}_h(t)}{\bm{v}_h}_{h} + \Inner{\bm{\nabla}_{\Surface}\bm{u}_h(t)}{\bm{\nabla}_{\Surface}\bm{v}_h}_{h} = 0\,,\quad\text{for all}~ \bm{v}_h\in\VVh^{(n)}_{\Surface} \end{equation} % for $t\in(0,T]$, with initial condition $\bm{u}_h(0)=\bm{u}^0$. \end{problem} We obtain fully discrete schemes by applying a BDF-2 discretization scheme to the semi-discrete problem~\eqref{eq:isfem-semi-discrete}. \subsection{Surface Finite Element Method (SFEM)}% \label{sec:sfem} The essence of the lowest order SFEM is the approximation of $\Surface$ by a (shape regular) triangulation, consisting of flat triangles, and the use of globally continuous piecewise linears on this triangulation for the approximation of the continuous solution. This technique avoids surface parametrizations and (for scalar problems) is very similar to a standard finite element method in a flat domain. The piecewise triangular surface approximation is denoted by $\Surface_h$. The space of globally continuous piecewise linears on $\Surface_h$ is denoted by $V_{\Surface_h}$. We introduce the natural geometry normals $\bm{n}_h\vert_{\Elem}$, with $\Elem$ a triangle from $\Surface_h$, that are formally glued together to build the discrete surface normal field $\bm{n}_h$. The discrete tangential projection is given by $\Ph=\bm{I}-\bm{n}_h\otimes\bm{n}_h$. The SFEM for the scalar case is well-known in the literature and reads as follows (cf. \Cref{prob:Problem1}), with $\bm{\nabla}_{\Surface_h}\bm{v}_h = \Ph\nabla \Ext{\bm{v}}_h$ the discrete analogue of the surface gradient as in \eqref{defGa}: \begin{problem}[Scalar problem]\label{prob:discrSFEM1} Find $\bm{u}_h=\bm{u}_h^{(0)}\in C^1(0,T; V_{\Surface_h})$ such that % \begin{equation} (\partial_t\bm{u}_h(t), \bm{v}_h )_{\Surface_h} + (\bm{\nabla}_{\Surface_h}\bm{u}_h(t), \bm{\nabla}_{\Surface_h}\bm{v}_h )_{\Surface_h} = 0\,,\quad\text{for all}~ \bm{v}_h\in V_{\Surface_h} \end{equation} % and for all $t\in(0,T]$ subject to an initial condition $\bm{u}_h(0)=\bm{I}_h\bm{u}^0$. Here $\bm{I}_h$ denotes the nodal interpolation operator in the finite element space $V_{\Surface_h}$.
\end{problem} Using the nodal finite element basis in the space $V_{\Surface_h}$ results in an ODE system for the coefficients of $\bm{u}_h$. We now consider $n \geq 1$. The discretization is based on the formulation in \Cref{prob:Problem2}, combined with a componentwise approximation using SFEM for scalar-valued problems. For a detailed description for tensor-valued problems see \cite{NN2019Finite,HL2020Analysis,HaPr2021Tangential}. The discrete tensor projection operator $\PPh$ is defined in analogy to eq. \eqref{eq:tensor-projection}. A corresponding orthogonal projection $\QQh=\mathcal{I}-\PPh$ naturally follows. We use the surface finite element space $\VVh^{(n)}_{\Surface_h}=[\Vh_{\Surface_h}]^N$ as the product space of $N=3^n$ scalar Lagrange spaces. A discrete surface gradient $\bm{\nabla}_{\Surface_h}$ is defined as in \eqref{defGb}--\eqref{defGc}, but with the continuous projections replaced by the discrete analogues. Error analysis and numerical experiments show that replacing the projection operator $\QQ$ in the penalty term of the continuous variational formulation by its discrete analogue $\QQ_h$ is \emph{not} satisfactory, since it results in suboptimal convergence in the $L^2$-norm, cf. \cite{HL2020Analysis}. Optimal convergence is obtained if one instead uses a projection operator based on a normal $\bm{n}^+_h$ that is a \emph{one order more accurate approximation} of $\bm{n}$ than the $\Surface_h$-normal $\bm{n}_h$. We denote such a modified (``higher order'') projection by $\QQ_h^+$. Thus we obtain the following SFEM discretization of eq. \eqref{eq:variational-formulation-embedded}, where we use the result \eqref{formeq}, see also \cite{HaPr2021Tangential}: \begin{problem} \label{prob:SFEMdiscr2} Take $n \geq 1$. Find $\bm{u}_h=\bm{u}_h^{(n)} \in C^1(0,T;\VVh^{(n)}_{\Surface_h})$ such that % \begin{multline}\label{eq:sfem-discretization} \big(\partial_t\PPh {\bm{u}}_h(t), \PPh {\bm{v}}_h\big)_{\Surface_h} + \big(\bm{\nabla}_{\Surface_h} \bm{u}_h(t) +G_h(\bm{u}_h (t)), \bm{\nabla}_{\Surface_h}{\bm{v}}_h + G_h(\bm{v}_h)\big)_{\Surface_h} \\ + \beta h^{-2}\,\big(\QQ_h^+ {\bm{u}}_h(t), \QQ_h^+ {\bm{v}}_h\big)_{\Surface_h} = 0 \quad \text{for all}~{\bm{v}}_h\in\VVh^{(n)}_{\Surface_h} \end{multline} % and for all $t\in(0,T]$ subject to an initial condition ${\bm{u}}_h(0)=\bm{I}_h {\bm{u}}^0$. \end{problem} Here $G_h(\cdot)$ is a discrete analogue of $G(\cdot)$ in \eqref{formeq}, e.g., for $n=1$, $G_h(\bm{v}_h)=\scalarProd{\bm{v}_h}{\bm{n}_h}\bm{H}_h$, where $\bm{H}_h$ is an approximation of the Weingarten map. The parameter $\beta > 0$ is a penalty parameter. The scaling with $h^{-2}$ in the penalty term follows from an error analysis, cf. \cite{HL2020Analysis,HaPr2021Tangential}. \begin{remark} The discrete Weingarten map $\bm{H}_h$ can be computed via the elementwise gradient of an interpolant of the discrete normal direction field. Utilizing the representation $\bm{n}_h=\bm{m}_h/\|\bm{m}_h\|$, with $\bm{m}_h$ the cross-product of the columns of $\bm{J}_h$ and thus a discrete function, we can compute $\bm{H}_h=-\PPh\nabla\bm{n}_h=-\|\bm{m}_h\|^{-1}\PPh\nabla (I_h \bm{m}_h)$, consistent with the sign convention $\Ext{\bm{H}}=-\nabla\Ext{\bm{n}}$ used above. \end{remark} \begin{remark} In case the surface $\Surface$ is described by the coordinate mapping $\ChartMap$, higher order surface approximations than piecewise linear can be obtained by (Lagrange) interpolation $\ChartMap_h=I_h\ChartMap$.
With discrete functions defined in the reference domain and lifted to the discrete surface using $\ChartMap_h$, a higher-order function space $\Vh_{\Surface}$ can be constructed, cf. \cite{Demlow2009HigherOrder}. Similar to the piecewise flat surface and linear function setting, geometric quantities are obtained from derivatives of the discrete parametrization $\ChartMap_h$. This allows one to obtain high-order convergence also for the projection based scheme, cf. \cite{HL2020Analysis,HaPr2021Tangential}. If derivatives of the continuous parametrization $\ChartMap$ are directly available and computable, an exact parametrization of the surface geometry, cf. \Cref{sec:isfem}, is also possible; this is used in the numerical example to compute a reference solution. \end{remark} The discretization in time follows standard approaches and is therefore not described in detail. We consider a classical BDF-2 scheme; a minimal sketch of this time stepping is given in the remark below.
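\begin{remark} All four methods lead, after spatial discretization, to a system of the form $M\dot{u} + Au = 0$ for the coefficient vector $u$, with mass matrix $M$ and stiffness matrix $A$ (for SFEM and TraceFEM, $A$ also contains the penalty and stabilization terms). The following minimal Python sketch (dense linear algebra for illustration only; in practice sparse matrices and solvers are used, and $M$, $A$ are assumed to be given) shows the BDF-2 stepping with an implicit Euler startup step:
\begin{verbatim}
import numpy as np

def bdf2_heat(M, A, u0, dt, nsteps):
    """BDF-2 for the ODE system M du/dt + A u = 0 (dense sketch)."""
    # startup step: implicit Euler, (M + dt*A) u^1 = M u^0
    u_prev = u0
    u = np.linalg.solve(M + dt * A, M @ u0)
    S = 1.5 * M + dt * A        # constant BDF-2 system matrix
    for _ in range(nsteps - 1):
        # M (3 u^{n+1} - 4 u^n + u^{n-1}) / (2 dt) + A u^{n+1} = 0
        rhs = M @ (2.0 * u - 0.5 * u_prev)
        u_prev, u = u, np.linalg.solve(S, rhs)
    return u
\end{verbatim}
Since the system matrix is constant in time, a single (sparse) factorization can be reused in all time steps.
\end{remark}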
\subsection{Trace Finite Element Method (TraceFEM)}% \label{sec:tracefem} The TraceFEM is based on the same variational formulation, \Cref{prob:Problem2}, as the SFEM, but uses a finite element space which is defined on a background volumetric mesh that is not fitted to the surface. The geometry approximation is based on an implicit description of the surface using a level set approach. For an overview of TraceFEM we refer to \cite{OlshanskiiReusken2017}. We assume the surface $\Surface$ to be represented as the zero level of a level set function $\phi$. We denote by $\Omega$ a sufficiently small polygonal 3d neighborhood of the surface. The surface approximation is based on a piecewise linear approximation $\phi_h$ (e.g., linear interpolation) of $\phi$ and given by $\Surface_h\colonequals\Set{\bm{x} \in \Omega \mid \phi_h(\bm{x})=0}$. Let $\Tri{\Omega}$ be a shape regular tetrahedral triangulation of $\Omega$ and $\Vh_{\Omega}$ the standard finite element space of continuous piecewise linear polynomials on $\Tri{\Omega}$. For higher order constructions of $\Surface_h$ we refer to \Cref{rem_TraceFEM_surface_approximation}. We introduce the set $\Tri{\Omega}^{\Surface_h}$, consisting of all tetrahedra $\Elem\in\Tri{\Omega}$ that have a nonzero intersection with $\Surface_h$. The domain formed by all these tetrahedra is denoted by $\Omega^{\Surface_h}=\bigcup\,\Set{\Elem\in\Tri{\Omega}^{\Surface_h}}$. On $\Omega^{\Surface_h}$, we define by simple restriction the scalar finite element space $\Vh^{\Surface_h}_{\Omega}\colonequals \Set{v\vert_{\Omega^{\Surface_h}} \mid v\in \Vh_{\Omega}}$. A corresponding $n$-tensor finite element space is given by $\VVh^{\Surface_h}_{\Omega} \colonequals \big[ \Vh^{\Surface_h}_{\Omega} \big]^N$ with $N=3^n$. To avoid instabilities due to small cuts of $\Surface_h$ in the triangulation $\Omega^{\Surface_h}$ one uses a so-called normal derivative volume stabilization \cite{BHLM2018Cut, GLR2018Analysis}. Again $\bm{n}_h$ denotes the piecewise normal field on $\Surface_h$, $\bm{P}_h = \bm{I} - \bm{n}_h \otimes \bm{n}_h$, and the discrete tensor projection operator $\PP_h$ is defined as in Subsection \ref{sec:sfem}. We now describe the method for the scalar case $n=0$. The stabilization is then given by $s_h(\bm{u}_h,\bm{v}_h)\colonequals\big(\bm{n}_h \cdot \nabla \bm{u}_h, \bm{n}_h \cdot \nabla \bm{v}_h\big)_{\Omega^{\Surface_h}}$. The discrete problem is as follows, cf. \Cref{prob:discrSFEM1}: \begin{problem}[Scalar problem]\label{prob:discrTraceFEM1} Find $\bm{u}_h=\bm{u}_h^{(0)}\in C^1(0,T; \Vh^{\Surface_h}_{\Omega})$ such that % \begin{equation} \big(\partial_t\bm{u}_h(t), \bm{v}_h \big)_{\Surface_h} + \big(\bm{\nabla}_{\Surface_h}\bm{u}_h(t), \bm{\nabla}_{\Surface_h}\bm{v}_h \big)_{\Surface_h} + \beta' h^{-1} s_h(\bm{u}_h(t),\bm{v}_h) = 0 \end{equation} % for all $\bm{v}_h\in \Vh^{\Surface_h}_{\Omega}$ and all $t\in(0,T]$, subject to an initial condition ${\bm{u}}_h(0)=\textbf{I}_\Omega^{\Surface_h}\Ext{\bm{u}}^0$. Here $\textbf{I}_\Omega^{\Surface_h}$ denotes the nodal interpolation operator in the finite element space $\Vh^{\Surface_h}_{\Omega}$. \end{problem} Note that compared to \Cref{prob:discrSFEM1} we use a different finite element space and add the stabilization term $s_h(\cdot,\cdot)$. We now consider $n \geq 1$. In the same spirit as for the SFEM, cf. \Cref{prob:SFEMdiscr2}, the discretization is based on the variational formulation in \Cref{prob:Problem2}. The normal derivative volume stabilization takes the form \begin{equation*} \bm{s}_h(\bm{u},\bm{v}) \colonequals \int_{\Omega^{\Surface_h}} \left(\nabla\bm{u} \Tdot{n+1} \bm{n}_h \right) \cdot \left(\nabla\bm{v} \Tdot{n+1} \bm{n}_h \right) \,\text{d}\bm{x}\,, \end{equation*} and we obtain the following discretization of \eqref{eq:variational-formulation-embedded}: \begin{problem} \label{prob:TraceFEMdiscr2} Take $n\geq 1$. Find ${\bm{u}}_h={\bm{u}}^{(n)}_h \in C^1(0,T;\VVh^{\Surface_h}_{\Omega})$ such that % \begin{equation} \label{eq:tracefem-time-discretization} \begin{split} & \big(\partial_t\PPh {\bm{u}}_h(t), \PPh {\bm{v}}_h\big)_{\Surface_h} + \big(\bm{\nabla}_{\Surface_h} \bm{u}_h(t) +G_h(\bm{u}_h (t)), \bm{\nabla}_{\Surface_h}{\bm{v}}_h+ G_h(\bm{v}_h)\big)_{\Surface_h} \\ & \quad + \beta h^{-2}\,\big(\QQ_h^+ {\bm{u}}_h(t), \QQ_h^+ {\bm{v}}_h\big)_{\Surface_h} + \beta' h^{-1}\, \bm{s}_h(\bm{u}_h(t),\bm{v}_h) = 0 \quad \text{for all}~{\bm{v}}_h\in\VVh^{\Surface_h}_{\Omega} \end{split} \end{equation} % and for all $t\in(0,T]$ subject to an initial condition ${\bm{u}}_h(0)={\textbf{I}}_\Omega^{\Surface_h} \Ext{\bm{u}}^0$, with $\beta > 0$ a penalty parameter and $\beta' > 0$ a stabilization parameter. \end{problem} As in \Cref{sec:sfem} we use an ``improved'' projection $\QQ_h^+$ based on a higher order normal approximation. A motivation for this improved projection and a construction of an improved normal approximation are given in \cite{JankuhnReusken2020}. As in SFEM, the term $G_h(\cdot)$ is a discrete analogue of $G(\cdot)$ given in \eqref{formeq}. The semi-discrete \Cref{prob:TraceFEMdiscr2} is essentially the same as the SFEM \Cref{prob:SFEMdiscr2}, except for the additional stabilization term $\bm{s}_h(\cdot,\cdot)$. As in \Cref{sec:sfem}, we use a classical BDF-2 scheme for time discretization. \begin{remark}\label{rem_TraceFEM_surface_approximation} To obtain a higher order discretization method an isoparametric mapping $\Theta_h$ is the key ingredient. The main idea and construction of this mapping are explained in \cite{LehrenfeldReusken2017}. It is based on a level set function approximation $\phi_h \in \Vh^k_{\Omega}$ of order $k$. This function implicitly defines a surface approximation. For $k \geq 2$, numerical integration over the implicitly defined surface is hard to realize. To obtain a computationally efficient method a piecewise triangular surface approximation $\Surface^{\text{lin}}$ is used, defined as follows.
Let $\hat{\phi}_h = I^1 \phi_h$ be the linear nodal interpolation of the higher order level set function approximation $\phi_h$. Based on this we define % \begin{equation*} \Surface_{h} \colonequals \Theta_h(\Surface^{\text{lin}}) = \Set{\bm{x} \mid \hat{\phi}_h(\Theta_{h}^{-1}(\bm{x})) = 0}. \end{equation*} % In the same manner, the parametric mapping induces (higher order) finite element spaces. \end{remark} \subsection{Diffuse-Interface Approach (DI)} The DI method, see \cite{RV2006PDEs, LLTVW2009DiffuseInterface, NNPV2018Orientational}, considers an approximation of eq. \eqref{eq:variational-formulation-embedded} that is a classical problem in the embedding space $\R^3$ and thus leads to a setting in which established standard volume FEM can be applied. Similar to \Cref{sec:tracefem} the geometry approximation is based on an implicit description of the surface, but instead of a level set approach a phase field description is used. We define \[ \phi_\epsilon(\bm{x}) \colonequals \frac{1}{2}\left(1 - \tanh\left(\frac{3}{\epsilon}\rho(\bm{x})\right)\right),\quad \delta_\epsilon(\bm{x}) \colonequals \frac{36}{\epsilon}\phi_\epsilon^2(\bm{x})(1 - \phi_\epsilon(\bm{x}))^2\,, \] for $\bm{x}\in U_\delta(\Surface)$ with $0<\epsilon<\delta$ an interface thickness parameter. The phase-field function $\phi_\epsilon$ is based on the signed-distance representation $\rho$ of $\Surface$. With this definition of $\phi_\epsilon$ we obtain $\delta_\epsilon \to \delta_{\Surface}$ for $\epsilon \to 0$, with $\delta_{\Surface}$ the surface delta-function to $\Surface$. We define an extension of scalar-valued fields defined on $\Surface$ to the neighborhood $U_\delta(\Surface)$ by using the closest point projection, $\Ext{f}(\bm{x}) \colonequals f(\pi(\bm{x})) = f(\bm{x} - \rho(\bm{x}) \nabla \rho(\bm{x}))$ for $\bm{x}\in U_\delta(\Surface)$. Into the rest of the domain $\Omega$, the function $f$ is extended in an approximate way, e.g., using fast-marching algorithms, or a Hopf--Lax algorithm \cite{ClaudelBayen2010LaxHopf}. The scalar-valued phase-field function $\phi_\epsilon$ and delta-function $\delta_\epsilon$ are extended with a constant value. Vector and tensor fields are extended by a componentwise extension of the embedded description. As in \Cref{sec:tracefem} let $\Tri{\Omega}$ be a shape regular tetrahedral triangulation of $\Omega$ and $\Vh_{\Omega}$ be the standard finite element space of continuous piecewise linears defined on $\Tri{\Omega}$. We define the tensor finite element space $\VVh^{(n)}_{\Omega} \colonequals \big[\Vh_{\Omega}\big]^N$ as the product of $N=3^n$ scalar finite element spaces. For scalar fields $\bm{u}^{(0)}$ the FEM discretization of the DI approximation of eq. \eqref{eq:variational-formulation} reads \begin{problem}[Scalar diffuse interface approach \cite{RV2006PDEs}] Find $\bm{u}_h = \bm{u}_h^{(0)} \in C^1(0,T; \Vh_{\Omega})$ such that % \begin{equation}\label{eq:di-scalar-time-discretization} \sum_{\Elem\in\Tri{\Omega}}\Inner{\delta_\epsilon \partial_t \bm{u}_h(t)}{\bm{v}_h}_{\Elem} + \Inner{\hat{\delta}_\epsilon \nabla \bm{u}_h(t)}{\nabla \bm{v}_h}_{\Elem} = 0 \end{equation} % for all $\bm{v}_h \in\Vh_{\Omega}$ and for all $t\in(0,T]$, subject to the initial condition $\bm{u}_h(0)=\Ext{\bm{u}}^0$. We denote by $\hat{\delta}_\epsilon = \max\{\delta_\epsilon, \sigma\}$ the regularized surface delta-function approximation, with $\sigma = 10^{-8}$ in the numerical examples. \end{problem}
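\begin{remark} The scaling of $\delta_\epsilon$ can be checked directly: one has $\int_\R \delta_\epsilon \, d\rho = 1$ and $\max \delta_\epsilon = 9/(4\epsilon)$. The following minimal Python sketch (for illustration only) verifies this numerically for the interface thickness used in the benchmark below:
\begin{verbatim}
import numpy as np

eps = 0.125                                    # interface thickness
rho = np.linspace(-1.0, 1.0, 4001)             # signed distance values
phi = 0.5 * (1.0 - np.tanh(3.0 * rho / eps))   # phase-field profile
delta = (36.0 / eps) * phi**2 * (1.0 - phi)**2 # smeared delta function

print(np.trapz(delta, rho))                    # ~1.0
print(delta.max(), 9.0 / (4.0 * eps))          # max(delta_eps) = 9/(4 eps)
\end{verbatim}
\end{remark}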
For vector fields $\bm{u}^{(1)}$ we consider the componentwise reformulation of the surface problem as in eq. \eqref{eq:variational-formulation-embedded}. For such a formulation we apply the scalar DI approach for each component. This requires geometric properties of the surface, namely the normal $\bm{n}$ and the Weingarten map $\bm{H}$ in the $\epsilon$-neighborhood of $\Surface$. To obtain these quantities one can use a numerical approximation of the signed-distance function $\rho^\epsilon$ in $\Omega$ and define $\bm{n}^{\epsilon} \colonequals \nabla \rho^\epsilon$ and $\bm{H}^{\epsilon} \colonequals -\nabla^2 \rho^\epsilon$, as well as the associated projections $\bm{P}^{\epsilon}$, $\PP^{\epsilon}$, and $\QQ^{\epsilon}$ in terms of $\bm{n}^\epsilon$. \begin{problem} Assume $n\geq 1$. Find $\bm{u}_h\in C^1(0,T;\VVh^{(n)}_{\Omega})$ such that % \begin{multline}\label{eq:di-discretization} \sum_{\Elem\in\Tri{\Omega}}\Inner{\delta_\epsilon \partial_t\PP^{\epsilon}\bm{u}_h(t)}{\PP^{\epsilon}\bm{v}_h}_{\Elem} + \Inner{\delta_\epsilon \bm{\nabla}_{\Elem}\PP^{\epsilon}\bm{u}_h(t)}{\bm{\nabla}_{\Elem}\PP^{\epsilon}\bm{v}_h}_{\Elem} + \beta\,\Inner{\delta_\epsilon\QQ^{\epsilon}\bm{u}_h(t)}{\QQ^{\epsilon}\bm{v}_h}_{\Elem} \\ +\sum_{\Elem\in\Tri{\Omega}}\sigma\,\Inner{\left(1 - C_\epsilon \delta_\epsilon \right) \nabla\bm{u}_h(t)}{\nabla \bm{v}_h}_{\Elem} = 0 \end{multline} % for all $\bm{v}_h\in\VVh^{(n)}_{\Omega}$ and for all $t\in(0,T]$ subject to the initial condition $\bm{u}_h(0) = {\Ext{\bm{u}}}^0$, with $\beta>0$ a penalization factor and $C_\epsilon = 1/\max(\delta_{\epsilon})=4\epsilon/9$. \end{problem} The additional term in the second line provides a regularization for the numerical conditioning of the resulting linear system. In the spirit of the scalar case, we add a small amount of ``additional diffusion'' to the domain off the interface. This region is described by the weight $\left(1 - C_\epsilon \delta_\epsilon \right)$. Similar to \Cref{sec:tracefem}, the discrete covariant derivative $\bm{\nabla}_{\Elem}$ is defined componentwise and extended to the embedding space by using the extended geometric quantities $\bm{n}^{\epsilon}$, $\bm{P}^{\epsilon}$, and $\bm{H}^{\epsilon}$, cf. \eqref{eq:gradQu}. \begin{remark}\label{rem:numericsDI} In the considered benchmark problem, see \Cref{sec:NumericalExperiments}, we use an embedding domain $\Omega = [-2,2]^3$ which is discretized by a hierarchically refined tetrahedral mesh. To be computationally efficient and to ensure an appropriate resolution of $\delta_\epsilon$, an adaptive refinement with about 7--11 grid points across the interface, $\phi_\epsilon \in [0.05,0.95]$, should be used, while a very coarse grid in the remaining part of $\Omega$ is sufficient. For $\Tri{\Omega}$ we define the grid size $h$ by the shortest edge length of the smallest elements, typically located at the interface. To approximate the benchmark surface we refine the mesh according to the interface thickness of $\epsilon = 0.125$, resulting in a grid size of $h=0.0156$. On this grid we use the \texttt{meshconv} tool \cite{Meshconv2020} to obtain the approximate distance function $\rho^{\epsilon}$. Obtaining numerical approximations of sufficient quality for the normals and curvatures as derivatives of $\rho^\epsilon$ requires a proper resolution of the considered surface. Such a resolution implies $\epsilon < \delta$, with $\delta$ the smallest curvature radius of the considered surface.
In the benchmark this implies a very small $\epsilon$, yielding an infeasible numerical effort. Therefore we use the analytic descriptions of $\bm{n}$ and $\bm{H}$ and evaluate and extend them componentwise on $\Tri{\Omega}$ using the Hopf--Lax algorithm. We further consider $\beta = 1000$. \end{remark} \subsection{Discussion of the methods}% \label{sec:Comparison} We discuss several issues that are important for the numerical treatment of $n$-tensor surface PDEs, in particular the issues listed in the Introduction. First note that there is the following key difference between ISFEM, SFEM, TraceFEM, and DI. The first three methods are directly based on the (variational) PDEs in Problems 1 and 2, which are \emph{consistent}, in the sense that these have the same solution, which also coincides with the solution of the $n$-tensor heat equation in strong formulation. The DI approach on the other hand is based on an $\epsilon$-dependent PDE (in a small volumetric neighborhood of the surface), the solution of which, restricted to the surface, in general differs from that of Problems 1 and 2. The formulation only formally converges to the $n$-tensor heat equation as $\epsilon \to 0$. \emph{Surface representation}.\space The representation of the surface $\Surface$ is either explicit, in ISFEM and SFEM, or implicit, in TraceFEM and DI. The explicit approach in ISFEM is based on the existence of a parametrization of the surface by an atlas. In SFEM one only needs an approximate surface triangulation. In the ISFEM quadrature one needs (exact or approximated) geometric information from the local chart at the quadrature points. In SFEM the quadrature is very simple because only integrals over flat triangles have to be computed. The implicit description of the surface in TraceFEM and DI is based on a level set description $\phi$ or a phase field description $\phi_\epsilon$ of $\Surface$, respectively. In TraceFEM, based on a piecewise linear approximation of $\phi$, a surface approximation $\Surface_h$ consisting of triangles is constructed. This requires techniques for computing cuts of tetrahedra with zero levels of linear functions. Due to the fact that the resulting triangulation is in general not shape regular (``small cuts''), one needs a stabilization (the normal derivative volume stabilization term). As in SFEM, quadrature is very simple because only integrals over triangles (and tetrahedra) have to be computed. Whereas in TraceFEM an explicit reconstruction $\Surface_h$ of the implicit surface is determined, the surface remains implicit in the DI method. In the discrete variational problems of DI only integrals over tetrahedra are involved. Hence, quadrature is straightforward. Information about the surface enters (only) via the signed-distance function $\rho$ that is needed in the phase-field function $\phi_\epsilon$. This distance computation requires an additional preprocessing step. \emph{Representation of the gradient operator and geometry information}.\space On surfaces there are different natural representations of differential operators, e.g., gradient and divergence. In ISFEM the intrinsic representation of the gradient based on local coordinates is used. One then needs a basis of the tangent spaces (at discrete points on the surface). In ISFEM the orthogonal basis $\{\BasisOrth_1, \BasisOrth_2\}$ is employed. The other three methods SFEM, TraceFEM, and DI use a representation of the surface gradient based on the standard gradient in $\R^3$.
We now briefly discuss important differences concerning geometric information between $n=0$ and $n\geq 1$. In the ISFEM, for $n=0$ one needs the metric tensor (at discrete points on the surface) and for $n \geq 1$ one in addition needs information about derivatives of the metric coefficients. For SFEM and TraceFEM in the case $n=0$ one needs (only) the discrete normal $\bm{n}_h$, whereas for $n\geq 1$ a more accurate normal approximation (used in $\QQ_h^+$) and an approximation $\bm{H}_h$ of the Weingarten map (used in $G_h(\cdot)$) are needed. In the DI method, for $n=0$ we need (approximate) evaluations of the signed distance function $\rho$ and for $n \geq 1$ we in addition need (approximate) evaluations of $\nabla \rho$. For $n \geq 1$, due to the different representations used, there is the following difference between ISFEM and the other three methods. The methods SFEM, TraceFEM, and DI represent the $n$-tensor fields in the embedding space as elements of $\R^{3^n}$. For $n \geq 1$ the number of tensor components in the embedding space is larger than in the intrinsic representation used in ISFEM, and this discrepancy grows with increasing tensorial rank $n$. \emph{Tangentiality condition}.\space A further significant difference between $n=0$ and $n \geq 1$ comes from the tangentiality condition, which is nontrivial only for $n \geq 1$. In ISFEM this condition is automatically satisfied due to the intrinsic representation used. In SFEM and TraceFEM it is treated by discretization of the augmented variational formulation in \Cref{prob:Problem2}, which involves the consistent penalty term with the projection $\QQ$. Thus, an additional term in the variational form is introduced, and in the discrete setting an appropriate scaling of this term is essential. In DI a volumetric variant $\QQ^\epsilon$ of $\QQ$ is introduced to satisfy the tangentiality condition approximately. Note that on the continuous level, in \Cref{prob:Problem2}, due to the additional penalty term the tangentiality condition is satisfied exactly, whereas this is not the case for the continuous formulation used in the DI method. Finally we briefly comment on parameters used in the different methods. In all four methods we have the mesh size parameter $h$, which in ISFEM and SFEM refers to an (approximate) surface triangulation, whereas in TraceFEM and DI this $h$ corresponds to the mesh size of a tetrahedral triangulation of a volumetric domain that contains the surface. In all four methods we have a time step discretization parameter $\Delta t$. In all four methods one can choose the degree $k$ of the finite elements used. In the presentation above we restricted to $k=1$. In ISFEM we have no further parameters. In SFEM and TraceFEM there is a penalty term scaled with $\beta h^{-2}$, hence in these methods we have the penalty parameter $\beta$. In TraceFEM we in addition have a stabilization term scaled by $\beta' h^{-1}$, hence in this method there is the stabilization parameter $\beta'$. In DI a penalty term with a corresponding penalty parameter $\beta$ also occurs. A key parameter in this method is $\epsilon >0$, which quantifies the interface thickness. The DI method also has a regularization term that contains the parameter $\sigma$. The specific parameter values that we use are given below in \Cref{sec:NumericalExperiments}. \section{Numerical experiments}% \label{sec:NumericalExperiments} In this section we present results of a numerical experiment.
We consider an $n$-tensor heat equation, $n=0,1,2$, on a relatively simple surface consisting of a large flat part and a localized bump. The height of this bump is varied; the resulting surfaces have Gaussian curvature of both signs in the bump region, with small values for small bump heights and (very) large values for larger bump heights. The initial condition is essentially a regularized Dirac delta function with support disjoint from the bump support. In \Cref{sec:Experiment} we give a precise description of the problem setting. The four methods described in the sections above are applied to this model problem and some numerical results are presented. The numerical results illustrate that curvature can drastically affect the solution behavior. Specific curvature related phenomena will be discussed in the Sections~\ref{sec:Phenomenon1} -- \ref{sec:Phenomenon3}, for $n=0,1,2$, respectively. \subsection{Formulation of a tensor diffusion model problem}% \label{sec:Experiment} Let $\Surface$ be the graph of a function $f$, \[\Surface = \Set{ \bm{x}=(\hat{x}^1,\hat{x}^2,f(\hat{x}^1,\hat{x}^2))^T \mid \hat{\bm{x}}=(\hat{x}^1,\hat{x}^2)^T\in \hat{\Domain}\subset\R^2}\,.\] We want to study a flat surface with an isolated bump defining a region with negative and positive Gaussian curvature. The bump is described by $f(\hat{\bm{x}}) \colonequals \alpha \eta(\Norm{\hat{\bm{x}} - \hat{\bm{p}}}/r)$, with $\alpha \geq 0$ a scaling factor, $\hat{\bm{p}}\in\hat{\Domain}$ the center point of the bump, and $r > 0$ its radius. The function $\eta:\R\to\R$ represents a cut-off compressed Gaussian, i.e., \[ \eta(d) = \eta(d; \delta) \colonequals \left\{\begin{array}{ll} \exp{\left(-\frac{1}{1 - d^2}\right)} & \text{if }d < 1-\delta \\ 0 & \text{otherwise}\,, \end{array}\right. \] with threshold value $\delta = 0.025$. See \Cref{fig:domain} for a visualization of $\Surface$. \begin{figure}[ht] \centering \begin{tikzpicture} \node[right] at (0.0,0.0) {\includegraphics{image_domain.pdf}}; \node[right] at (7.0,-2.0){\includegraphics{image_curve_alpha0.0.pdf}}; \node[right] at (7.0,0.0) {\includegraphics{image_curve_alpha1.0.pdf}}; \node[right] at (7.0,2.0) {\includegraphics{image_curve_alpha2.0.pdf}}; \end{tikzpicture} \caption{\label{fig:domain}(Color online) Left: Sketch of the domain with origin colored in black, the outer radius of the bump centered at $\hat{\bm{p}}$ with radius $r$, the initial solution radius $\varepsilon$, and the three evaluation points $\hat{\bm{x}}_0, \hat{\bm{x}}_1$, $\hat{\bm{x}}_2$ highlighted in three different colors. The overall domain size of $\hat{\Domain}$ in the numerical computations is chosen to be $[-2,2]^2$. Right: Plot of the bump surfaces along the $\hat{\bm{x}}_0$-axis for $\alpha\in\{0.0,1.0,2.0\}$. Highlighted are the highest and lowest Gaussian curvature $\GaussCurvature$.} \end{figure} Let $\bm{p}\in\Surface$ represent a center point and $\bm{u}_p\in\TensorBundle^n_{\bm{p}}\Surface$ a (tangential) tensor at $\bm{p}$; then we set as initial condition \[ \bm{u}^0(\bm{x}) = \delta_\varepsilon(d_\Surface(\bm{x}, \bm{p}))\,\bm{u}_p\,,\text{ for }\bm{x}\in\Surface\,, \] with $\delta_\varepsilon(\cdot)$ a regularized Dirac delta function. For simplicity we choose a point $\bm{p}$ in a flat region away from the bump, so that $d_\Surface(\bm{x}, \bm{p})=\Norm{\bm{x}-\bm{p}}$. The Dirac delta function is approximated by a single bump of radius $\varepsilon$ around the origin, scaled by $\varepsilon^{-2}$, such that $\delta_\varepsilon(\Norm{\bm{x}}) = \varepsilon^{-2} \eta(\Norm{\bm{x}}/\varepsilon)$.
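\begin{remark} For reproducibility, the following minimal Python sketch evaluates the profile $\eta$, the bump height function $f$, and the regularized delta function $\delta_\varepsilon$ (function and variable names are our own; the default parameter values are those of the benchmark specified here and below):
\begin{verbatim}
import numpy as np

def eta(d, delta=0.025):
    # cut-off compressed Gaussian profile
    d = np.asarray(d, dtype=float)
    out = np.zeros_like(d)
    inside = d < 1.0 - delta
    out[inside] = np.exp(-1.0 / (1.0 - d[inside] ** 2))
    return out

def bump_height(xhat, alpha=1.0, phat=np.array([-0.5, 0.0]), r=0.25):
    # graph description f of the bump surface
    return alpha * eta(np.linalg.norm(xhat - phat, axis=-1) / r)

def delta_eps(dist, eps=0.2):
    # regularized Dirac delta used for the initial condition
    return eta(dist / eps) / eps ** 2
\end{verbatim}
\end{remark}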
The Dirac delta function is approximated by a single bump of radius $\varepsilon$ around the origin, scaled such that $\delta_\varepsilon(\Norm{\bm{x}}) = \varepsilon^{-2} \eta(\Norm{\bm{x}}/\varepsilon)$. We consider $\hat{\bm{p}} = (-0.5, 0.0)^T$ and $r = 0.25$ with varying $\alpha\in[0.0, 2.0]$. For the initial condition, we set $\varepsilon=0.2$ and
\[ \bm{u}_p^{(0)} = 1, \quad \bm{u}_p^{(1)} = (-1,0,0)^T, \quad \bm{u}_p^{(2)} = \bm{u}_p^{(1)}\otimes\bm{u}_p^{(1)} \]
for the scalar, vector and tensor problem, respectively. The heat equation is solved in the time interval $t\in[0,1]$ and on the surface $\Surface$ with $\hat{\Domain} = [-2,2]^2$. To illustrate the solution behavior we define three evaluation points in the parameter domain: $\hat{\bm{x}}_0=\hat{\bm{p}}$, $\hat{\bm{x}}_1=0.25\,(-\sqrt{2},\sqrt{2})^T$ and $\hat{\bm{x}}_2=(0.0,0.5)^T$, all on the circle with radius $0.5$ around the origin in $\hat{\Domain}$, see \Cref{fig:domain}. For the evaluation of the (discrete) solution $\bm{u}_h$, these points need to be lifted to the discrete surface $\Surface_h$. The discretization parameters are summarized in \Cref{tab:parameters}.
\begin{table}[ht!]
\centering
{\renewcommand{\arraystretch}{1.2}
\begin{tabular}{c|c|c|c|c|c|c|c}
 & $h$ & $\dt$ & $k$ & $\beta$ & $\beta'$ & $\epsilon$ & $\sigma$ \\ \hline
Reference (SFEM) & $0.0027$ & $10^{-4}$ & 2 & $10$ & --- & --- & --- \\ \hline
ISFEM & $0.011$ & $10^{-3}$ & 1 & --- & --- & --- & --- \\
SFEM & $0.011$ & $10^{-3}$ & 1 & $10$ & --- & --- & --- \\
TraceFEM & $0.0078$ & $10^{-3}$ & 1 & $0.01$ & $1$ & --- & --- \\
DI & $0.0156$ & $10^{-3}$ & 1 & $10^3$ & --- & $0.125$ & $10^{-8}$
\end{tabular}}
\caption{\label{tab:parameters}Numerical parameters used in the different methods. Note that for the TraceFEM and DI method, the grid size corresponds to the 3d element grid size. The polynomial order $k$ represents the Lagrange polynomial order of the discrete function spaces.}
\end{table}
With the parameters listed in the table all four methods determine approximate solutions within reasonable time on standard hardware. Previous comparisons of the different methods have shown advantageous properties of SFEM with respect to accuracy and computational effort, see \cite{BJPRV2022Finite}. In order to provide numerical reference data, we use SFEM with a higher resolution in space and time and a higher polynomial order of the solution space. We set as finest space resolution on the bump the grid size $h\approx0.0027$, time step size $\dt=10^{-4}$, and polynomial degree $k=2$. In order to reduce the numerical influence of the surface approximation, we have chosen $\ChartMap_h\equiv\ChartMap$ for the SFEM reference computations.

\subsection{Results for the scalar case}%
\label{sec:Phenomenon1}

Starting from the initial delta peak at the origin, the scalar heat $\bm{u}^{(0)}$ diffuses over the surface. In flat regions this diffusion is symmetric. For bump strength $\alpha=0$ this corresponds to the whole domain and thus the maximal heat remains at the initial position. Classical properties can be observed, as described already in \Cref{sec:ProblemDefinition}. Not surprisingly, all four methods represent the flat case equally well, see \Cref{fig:scalar}.
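For the flat case there is also a simple analytic benchmark: assuming unit diffusivity, neglecting boundary effects on $\hat{\Domain}$, and replacing the regularized initial condition by an exact Dirac delta at $\bm{p}$, the solution of the scalar heat equation in the plane is the heat kernel
\[
u(\bm{x},t) = \frac{1}{4\pi t}\,\exp\!\left(-\frac{\Norm{\bm{x}-\bm{p}}^2}{4t}\right),
\]
which is radially symmetric around $\bm{p}$ and keeps its maximum there for all times.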
\begin{figure}[ht]
\centering
\includegraphics{image_scalar_alpha_t0-1}%
\includegraphics{image_scalar_alpha_t1-0} \\
\includegraphics{image_scalar_time_x0}%
\hspace*{0.2cm}\includegraphics{image_scalar_time_x1}
\caption{\label{fig:scalar}(Color online) Plot of scalar values $\Abs{\bm{u}^{(0)}(\bm{x}_i)}$ over $\alpha$ (top) and over time $t$ (bottom). Solid lines correspond to a reference solution. Colors correspond to $\bm{x}_0, \bm{x}_1$, and $\bm{x}_2$ and $\alpha\in\{0.0,1.0,2.0\}$.}
\end{figure}
For $\alpha > 0$, the variation of the curvature introduces non-symmetric and anisotropic diffusion into the system. \Cref{fig:scalar} (top) shows that at early times, $t=0.1$, when comparing the solution at the three points, the maximal heat value is at $\bm{x}_2$, while at $t=1.0$ this changes and the maximum value is at $\bm{x}_0$ (on top of the bump). The difference between this maximum value at $\bm{x}_0$ and the values at the other two points increases for larger $\alpha$ values. Plotted over time in \Cref{fig:scalar} (bottom), there is a transition time point where the maximum changes. It can thus clearly be seen that the diffusion depends not only on the geodesic distances on the surface but also on the surface curvature. The differences at $\bm{x}_1$ and $\bm{x}_2$, points located in the flat region and having the same distance to the origin, also demonstrate that nearby curved regions influence the solution in the flat part. On curved surfaces the simple relation involving the (geodesic) distance only holds for sufficiently short times.

An explanation of the phenomenon observed in this experiment can be given using eq. \eqref{eq:heat-kernel}. At the bottom of the bump, we have \emph{negative} curvature, leading to \emph{fast} diffusion around the bump, whereas in a small region that contains the bump center the \emph{positive} curvature \emph{slows down} the diffusion, leading to an accumulation of heat in the bump region. The heat diffuses out of the bump region when the difference between the heat values in this region and the region outside the bump is sufficiently large. These local differences also influence nearby regions with zero curvature. The solution behavior is accurately resolved by all four numerical methods. The three consistent methods yield results that (in the ``eye norm'') can hardly be distinguished from the reference solution, whereas the inconsistent DI method is less accurate (due to a too large $\epsilon$ value).

\subsection{Results for the vector case}%
\label{sec:Phenomenon2}

\begin{figure}[ht]
\centering
\includegraphics{image_vector_norm_alpha_t0-1}%
\includegraphics{image_vector_norm_alpha_t1-0}\\
\includegraphics{image_vector_norm_time_x0}%
\hspace*{0.2cm}\includegraphics{image_vector_norm_time_x1}
\caption{\label{fig:vector-norm}(Color online) Plot of the vector norm $\|\bm{u}^{(1)}(\bm{x}_i)\|$ over $\alpha$ (top) and over time $t$ (bottom). Solid lines correspond to a reference solution. Colors correspond to $\bm{x}_0, \bm{x}_1$, and $\bm{x}_2$ and $\alpha\in\{0.0,1.0,2.0\}$.}
\end{figure}
For the vector case not only the norm but also the direction of $\bm{u}^{(1)}$ is of interest. We thus measure at the three reference points the magnitude of the solution $\Norm{\bm{u}^{(1)}}$ and the angle between the vector and the positive $x^1$-axis, i.e., $\angle(\bm{u}^{(1)},\bm{e}_1)\colonequals\arccos\scalarProd{\bm{u}^{(1)}/\Norm{\bm{u}^{(1)}}}{\bm{e}_1}$, see \Cref{fig:vector-norm} and \Cref{fig:vector-angle}, respectively.
\begin{figure}[ht!]
\centering
\includegraphics{image_vector_angle_alpha_t0-1}%
\includegraphics{image_vector_angle_alpha_t1-0}\\
\includegraphics{image_vector_angle_time_x0}%
\hspace*{0.2cm}\includegraphics{image_vector_angle_time_x1}
\caption{\label{fig:vector-angle}(Color online) Plot of the angle between the vector $\Emb{\bm{u}}^{(1)}(\bm{x}_i)$ and the positive $x$-axis $\bm{e}_1=(1,0,0)^T$ over $\alpha$ (top) and over time $t$ (bottom). Solid lines correspond to a reference solution. Colors correspond to $\bm{x}_0, \bm{x}_1$, and $\bm{x}_2$ and $\alpha\in\{0.0,1.0,2.0\}$.}
\end{figure}
At early times, $t=0.1$, the norm behaves qualitatively similarly to the scalar case, but at later times, $t=1.0$, the behavior is very different. While scalar heat diffuses over the whole domain, in the vector case the norm stays close to zero in the bump center for large $\alpha$, see \Cref{fig:vector-norm}. It does not increase significantly over a very long time. In the scalar case with $t=1.0$ and $\alpha \in [1,2]$ we see that there is a distinct maximal heat value at the top of the bump, corresponding to $\bm{x}_0$, cf. \Cref{fig:scalar} (b). In the vector case with $t=1.0$ and $\alpha \in [1,2]$ the opposite happens: the norm values at the top of the bump are much smaller than at the other two points, cf. \Cref{fig:vector-norm} (b). It appears that there is a strong influence of the additional tangentiality constraint and of the interaction with the transport of the direction.

\Cref{fig:vector-angle} shows that the initial \emph{direction} $\bm{u}^0$ is instantaneously extended to the whole domain only in the case of $\alpha=0$. For $\alpha > 0$ this direction extension property only holds close to the origin. This coincides with the findings in \cite{SS2019Vector} where the limit $t\to 0$ is considered and a vector parallel transport is reconstructed from the vector heat flow solution. For larger times, the curvature of the surface causes a violation of this property. We see in \Cref{fig:vector-angle} (top) that even at the points in the flat region, i.e., at $\bm{x}_1$ and $\bm{x}_2$ with a flat geodesic to the origin, the ideal angle $\pi$ is missed for $\alpha > 0$. Due to the symmetry of the problem setup, on the bump at $\bm{x}_0$ the angle of the solution is the same as the initial angle. Note that due to the small vector norms on the bump, the evaluation of the angle is poorly conditioned and thus a small deviation from the $x^1$-axis results in large deviations in the evaluated angle.

Interpreted as a minimization problem of the Dirichlet energy, $\int_{\Surface}\Norm{\bm{\nabla}_{\Surface}\bm{u}}^2\,\textrm{d}\bm{x}\to\text{min}$, the vector heat equation minimizes gradients in the magnitude and gradients in the angle. For strongly curved domains, the curvature-enforced violation of the angle is compensated by reducing the norm of the vector. This leads to increased differences in the norm at the three reference points compared with the scalar case. All methods show qualitatively the same behavior. However, for all methods the differences with the reference solution are (significantly) larger than in the scalar case. These differences increase for larger $\alpha$ values. This loss of accuracy compared to the scalar case is caused by the significantly higher numerical problem complexity for $n\geq 1$, cf. discussion in \Cref{sec:Comparison}.
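Since the angle evaluation plays a central role in \Cref{fig:vector-angle}, we make it explicit. The following is a minimal sketch (our own helper, not part of any of the four codes) of evaluating $\angle(\bm{u}^{(1)},\bm{e}_1)$, with the argument of $\arccos$ clipped to $[-1,1]$ to guard against round-off:
\begin{verbatim}
import numpy as np

def angle_to_e1(u, tol=1e-12):
    # Angle between a 3d vector u and e1 = (1,0,0)^T.
    norm = np.linalg.norm(u)
    if norm < tol:
        return np.nan  # direction undefined for (near-)zero vectors
    c = u[0] / norm    # <u/|u|, e1>
    return np.arccos(np.clip(c, -1.0, 1.0))
\end{verbatim}
The clipping removes only the round-off issue; the conditioning problem itself remains, since for $\Norm{\bm{u}}\approx 0$ small perturbations of $\bm{u}$ can change the computed angle arbitrarily.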
Depending on the method, the vector case requires the evaluation of derivatives of the projection (SFEM, TraceFEM, DI) or derivatives of the metric coefficients (ISFEM), and thus becomes more sensitive to the approximation of the geometry. This is also seen in the large variations of the vector angle. These large variations are in addition due to the low evaluation accuracy in the point $\bm{x}_0$ and the sensitivity of the evaluation to small perturbations.

\subsection{Results for the tensor case}%
\label{sec:Phenomenon3}

For the tensor case we again consider the norm $\Norm{\bm{u}^{(2)}}$ and the angle with the positive $x^1$-axis. The tensor angle is defined as follows: $\angle(\bm{u}^{(2)},\bm{e}_1)\colonequals\arccos\scalarProd{\bm{u}^{(2)}/\Norm{\bm{u}^{(2)}}}{\bm{e}_1\otimes\bm{e}_1}$. We again measure these quantities at the three reference points. Due to the increased complexity we here only show results for SFEM and DI, without the corresponding reference solution, see \Cref{fig:tensor-norm} and \Cref{fig:tensor-angle}, respectively. The results are qualitatively similar to the vector case and can be explained by the same reasoning.
\begin{figure}[ht]
\centering
\includegraphics{image_tensor_norm_alpha_t0-1}%
\includegraphics{image_tensor_norm_alpha_t1-0}\\
\includegraphics{image_tensor_norm_time_x0}%
\hspace*{0.2cm}\includegraphics{image_tensor_norm_time_x1}
\caption{\label{fig:tensor-norm}(Color online) Plot of the tensor norm $\Norm{\Emb{\bm{u}}^{(2)}(\bm{x}_i)}$ over $\alpha$ (top) and over time $t$ (bottom). Solid lines correspond to a reference solution. Colors correspond to $\bm{x}_0, \bm{x}_1$, and $\bm{x}_2$ and $\alpha\in\{0.0,1.0,2.0\}$.}
\end{figure}
The qualitative behavior can be resolved by both methods. However, the differences between the methods further increase.

\subsection{Summary}%
\label{sec:Summary}

While some analytical results exist for the diffusion of tangential tensor fields, see \Cref{sec:ProblemDefinition}, quantitative results which allow one to test numerical algorithms on simple benchmark problems have been missing. We here provided such a setup. We have considered four different numerical methods, ISFEM, SFEM, TraceFEM, and DI, all based on finite element discretizations. They are briefly described and compared with each other. The methods differ with respect to the surface representation, the representation of the gradient operator and the geometric information, and the tangentiality condition. The methods were applied to a benchmark problem with a relatively simple surface geometry. We observe that for not too small curvature values the solution behavior is strongly influenced by the geometry. Furthermore, the results show a stronger coupling with geometric properties and an increased sensitivity to the resolution of these properties for increasing tensor degree. Due to this, there is a significant increase in numerical complexity when going from tensor degree $n=0$ to $n \geq 1$.

A wealth of applications exists in materials science and biology which make use of the influence of curvature in thin structures. Modeling such effects often requires tangential vector or tensor fields. We suggest first testing numerical methods for such applications on the provided setup, to ensure a proper resolution of the geometric influence.

\paragraph{Acknowledgment}
The authors wish to thank the German Research Foundation (DFG) for financial support within the Research Unit ``Vector- and Tensor-Valued Surface PDEs'' (FOR 3013) with project no.
RE 1461/11-1 and VO 899/22-1. We further acknowledge computing resources provided by ZIH at TU Dresden and within project PFAMDIS at FZ J{\"u}lich. \begin{figure}[ht] \centering \includegraphics{image_tensor_angle_alpha_t0-1}% \includegraphics{image_tensor_angle_alpha_t1-0}\\ \includegraphics{image_tensor_angle_time_x0}% \hspace*{0.2cm}\includegraphics{image_tensor_angle_time_x1} \caption{\label{fig:tensor-angle}(Color online) Plot of the angle between the tensor $\bm{u}^{(2)}(\bm{x}_i)$ and the positive $x$-axis $\bm{e}_1=(1,0,0)^T$ over $\alpha$ (top) and over time $t$ (bottom). Solid lines correspond to a reference solution. Colors correspond to $\bm{x}_0, \bm{x}_1$, and $\bm{x}_2$ and $\alpha\in\{0.0,1.0,2.0\}$.} \end{figure} \bibliographystyle{abbrv_title}
\section{Introduction}
\label{sec:intro}

\subsection{Elderly}
\label{sub:elder}

Countries around the world consider an individual to be elderly at different ages. The elderly are divided into different stages, for example, 65-74, 75-84, and those above 85. The Thailand Act on the Elderly (2003) defines an elderly person as an individual who is 60 or above (\citealt{ministry_of_social_development_and_human_security_act_2003}). Currently, the elderly population is rising around the globe. By 2050, people aged 60 and above will reach 2.4 billion (\citealt{oecd_ageing_2015}). Declining fertility and increasing life expectancy are the main causes of an aging society (\citealt{institute_for_population_and_social_research_situation_2019}; \citealt{phijaisanit_how_2016}). Thai society has been an aging society since the year 2000. By 2010, the population aged 60 and above in Thailand amounted to 8.4 million, which is 13.2 percent of the total population (\citealt{wattanasaovaluk_economic_2021}; \citealt{wongboonsin_aging_2017}). It is estimated that in 2021 Thailand will enter into a complete aged society, which means that the Thai population 60 years and above will account for twenty percent of the total population, and by 2031 Thailand will become a super-aged society with the population 60 and above amounting to twenty-eight percent of the total population (\citealt{National_Statistical_Office}; \citealt{srisuchart_promotion_2019}).

\subsection{Retirement}
\label{sub:retirement}

As the elderly population in Thailand is on the rise, it is important that we examine retirement trends. Retirement is an important topic to examine as the timing of retirement has a crucial impact on a country. It has an effect on the labor market, the social security payouts, the caregiving industry, as well as the overall productivity of a country's economy (\citealt{arkornsakul_labor_2020}; \citealt{bloom_macroeconomic_2015}). Retirement for the elderly varies from country to country and also varies for men and women (\citealt{oecd_pensions_2013}). Moreover, it changes over time. In the United States, for instance, in 1950 men on average retired at 70, in 1970 the average dropped to 65, and by 1985 it dropped to 62 (\citealt{quinn_changing_2002}). Across 34 countries in the Organization for Economic Co-operation and Development (OECD), men's average retirement age in 1949 was 64.3 years, in 1999 it decreased to 62.4 years, and in 2012 it increased to 64.2 years. Women's average retirement age in 1949 was 62.9 years, in 1999 it decreased to 61.1 years, and in 2012 it increased to 63.1 years (\citealt{oecd_pensions_2013}). In Thailand, the retirement age for the public sector is sixty years of age; for the private sector, however, the retirement age varies depending on the occupation and the business industry (\citealt{Office_of_the_Council_of_State}; \citealt{Srawooth_Paitoonpong_2018}; \citealt{Labor_Protection_Act_2017}). Retirement is not a one-time discrete event; rather it is a process in which individuals transition from full employment to full retirement (\citealt{beehr_process_1986}; \citealt{denton_what_2009}; \citealt{wang_employee_2010}). Some workers engage in bridge employment, that is, individuals continue working for pay after they retire from a career job. Bridge employment can take multiple forms; for example, reducing work hours on a current job or moving to a less demanding job (\citealt{beehr_working_2015}). Bridge employment is becoming a common phenomenon.
Seventy percent of current workers plan to work for pay after retirement (\citealt{quinn_work_2010}). Other workers engage in unretirement, which is the transition from full retirement back to part-time or full-time employment (\citealt{maestas_back_2010}). Elderly workers retire for several reasons, such as declines in health status, negative job conditions, family caregiving responsibilities, and desire for leisure pursuits (\citealt{fisher_retirement_2016}; \citealt{hansson_successful_1997}; \citealt{phijaisanit_how_2016}). One of the most cited reasons for early retirement is poor health (\citealt{mcgarry_health_2004}; \citealt{park_health_2010}; \citealt{van_rijn_influence_2014}). Health conditions that lead to early retirement include cardiovascular conditions (e.g. heart problems and stroke), musculoskeletal conditions (e.g. back pains), and mental illness (e.g. anxiety and depression) (\citealt{fisher_retirement_2016}).

\subsection{Do elderly want to work?}
\label{sub:want}

Many countries around the world are now embracing the concept of “Active Aging.” Active Aging is a policy that supports people as they age to remain in charge of their own lives and encourages their continuing contribution to the economy and society. This concept underscores that the elderly should remain active participants in social, economic, cultural, and civic activities (\citealt{world_health_organization_active_2002}). One important implication of such a concept is the elderly's desire to remain active in the labor market. The elderly should be able to participate in the job market according to their individual needs and abilities (\citealt{world_health_organization_active_2002}). Existing literature (e.g. \citealt{kubickova_active_2018}) indicates that the elderly's decision to stay on in the labor market is due to various factors such as income, health condition, social contacts, government policies, and living conditions. Income and wealth are factors that determine one's decision to continue to work or retire. Some studies (e.g. \citealt{schils_early_2008}) find that workers with more accumulated wealth were more likely to retire as they feel financially secure and can choose between continuing to work and increasing leisure time. Yet, other studies (e.g. \citealt{parker_retirement_2007}) find that individuals with higher incomes, due to high opportunity costs, may prefer to continue earning, thus retiring later. \citet{gustman_employer_1994} find that workers delay retirement until the date at which they become eligible for health benefits after retirement and are more likely to retire once eligible. In the United States, for instance, eligibility for Medicare at age 65 is an important factor in retirement decisions. Workers who had employer-provided health insurance (EHI) but not retiree health insurance (RHI) were especially likely to retire when they turned 65 (\citealt{coe_how_2013}). In Thailand, retirement pension and healthcare eligibility are two important factors that affect the elderly's decision to remain in the job market. Thai elderly can expect to receive different types of monetary and health welfare depending on their lifelong career. The elderly can be categorized into three groups based on their past career: 1) elderly who worked in the public sector, 2) elderly who worked in the private sector, and 3) elderly who worked as freelancers.
Elderly who worked in the public sector receive a government pension and, under certain conditions, free access to public hospitals until they die (\citealt{the_comptroller_generals_department_civil_2021}). Elderly who worked in the private sector have employer-sponsored pension plans and healthcare through the Thailand Social Security Scheme (SSS). Yet, the healthcare coverage ends upon retirement. The most vulnerable group is the elderly who worked as freelancers. Most freelancers have neither a pension plan nor a healthcare plan; rather, they must rely on their own savings.\footnote{There are exceptions, as some elderly do plan their retirement, set up private pension funds, and buy their own private healthcare plans.} Further, many Thai elderly work in the “informal” sector, agriculture, for instance, where there is no specific retirement age. Many elderly continue to work until they are not able to. While it is true that the elderly in Thailand, like all Thai citizens with Thai national identification numbers, are entitled to a free healthcare program, a universal healthcare coverage often known as “Bat Thong,”\footnote{Some exclusions apply, for example, those who are eligible for other forms of healthcare services, such as government officials.} this free healthcare program is widely known for its delays and inferior medical treatment. This could be an explanation as to why Thai elderly choose to remain in the labor market to continue receiving private healthcare provided by their employers.

As seen in Table \ref{tab:eldnum}, many Thai elderly continue to work. In 2012, 3,403,873 elderly worked, and by 2020, 4,704,477 elderly worked. The percentage of elderly in the labor market ranged between 39.46 and 42.14 percent from 2012 to 2020, reflecting that more than a third of Thai elderly still hold a job at retirement age (\citealt{national_statistical_office_demographic_2021}).
\begin{table}[H]
\begin{center}
\includegraphics[width=5.5in]{eldnum.jpg}
\end{center}
\caption{Number and percentage of elderly in the labor market in Thailand}
\label{tab:eldnum}
\end{table}
This article examines the factors that are associated with the elderly's decision to remain in the labor market after retirement age. Doing so, we are able to forecast accurately, using statistical models, which elderly want to continue working. This will allow us to design a labor market that can respond to the needs of the elderly in the near future.

\section{Data}
\label{sec:data}

\subsection{Sampling}
\label{sub:sampling}

We are using data from the National Statistical Office, Ministry of Digital Economy and Society. The 2017 Survey of the Older Persons in Thailand is the sixth survey collected by the National Statistical Office of Thailand regarding the elderly in Thailand. The data was collected countrywide from June to August 2017. The sample survey uses stratified two-stage sampling. There are 77 strata based on the province; each stratum is then divided into two sub-strata: municipal areas and non-municipal areas. This leads to 5,970 enumeration areas, which then results in a total of 83,880 households with a total of 217,818 respondents. We then intentionally pick observations of those who are above 60\footnote{{We exclude 60-year-old people as retirement in the public sector in Thailand considers the year of birth for retirement. Therefore, some 60-year-olds may still be working.}}; this leads to a total of 38,551 observations. After taking into account missing values, we are left with 31,190 observations.
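Schematically, this selection step looks as follows in Python/pandas (a hedged sketch; the file name and column names are hypothetical placeholders, not the survey's actual coding):
\begin{verbatim}
import pandas as pd

# Hypothetical file/column names; the survey's actual coding differs.
survey = pd.read_csv("older_persons_2017.csv")  # 217,818 respondents
elderly = survey[survey["age"] > 60]            # strictly above 60: 38,551 rows
elderly = elderly.dropna()                      # drop missing values: 31,190 rows
\end{verbatim}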
The 2017 Survey of the Older Persons in Thailand collects data on demographic, economic, social and health characteristics, and household types of the elderly. The data has 186 variables that are categorized into eleven groups: identification, general information, children status, job status, income and assets, living condition, support from children, health condition, participation in elder activities, caretaker, and residential condition. We predict the elderly's desire to work (yes/no) based on these fourteen variables: age, gender, education level, marital status, number of children, number of family members in the household (excluding him/herself), number of grandchildren in the household, percentage of elderly in the household, children contact frequency, total assets\footnote{The total assets variable was selected over the income variable as most elderly who do not want to work have zero income. Therefore, if the income variable were selected, the models would be very accurate, yet other factors would not be detected.}, health condition\footnote{Health condition ranges from 1 to 5. One indicates that the elderly is in a very bad health condition, and five indicates that the elderly is in a very good health condition. The health condition is based on the elderly's own evaluation.}, happiness level\footnote{Happiness level ranges from 0 to 10. Zero indicates that the elderly is depressed and ten indicates that the elderly is exuberant. The happiness level is based on the elderly's own evaluation.}, healthcare eligibility, and residential type.

From the 31,190 observations, the number of elderly who desire to work is 8,885, or 28.49 percent. Trained on all 31,190 observations, a classification model predicts all observations as “no desire to work.” This model has a 28.49 percent error; however, we did not investigate it further as it does not provide useful information and it is not useful in making predictions. To understand how the fourteen variables influence the elderly's decision to work, we intentionally undersample the elderly who do not wish to work by uniformly choosing 8,885 in this class from the 22,305 observations. The final sample has 17,770 observations consisting of 8,885 who desire to work and 8,885 who do not desire to work. We use the undersampling technique rather than an oversampling technique such as the Synthetic Minority Over-sampling Technique (SMOTE) as the sample is large and most of the variables are categorical. Since our undersampled data has an equal fifty percent of elderly who wish and do not wish to work, our model will not be biased towards either outcome. This then leads to effective models in terms of prediction and important variable selection.\footnote{The undersampled data is used in the lasso logistic regression and random forest analysis. The pre-undersampled data with 31,190 observations is used in a cross tabulation analysis.}

\subsection{Response and predictors}

The response variable is the desire to work of each elderly, which can be categorized as ``does not desire to work'' and ``desire to work.'' The variable is a binary response of 0 and 1, respectively. The original data before undersampling consists of 28.49 percent of elderly who desire to work and 71.51 percent who do not desire to work.
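A minimal sketch of this undersampling step (pandas, continuing the hypothetical column names above; the seed is our own choice for reproducibility):
\begin{verbatim}
import pandas as pd

want = elderly[elderly["desire_to_work"] == 1]     # 8,885 rows
no_want = elderly[elderly["desire_to_work"] == 0]  # 22,305 rows

# Uniformly undersample the majority class to the minority-class size.
no_want_sub = no_want.sample(n=len(want), random_state=42)
balanced = pd.concat([want, no_want_sub])          # 17,770 rows, 50/50 classes
\end{verbatim}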
The variables predicting the desire to work for this study include:
\begin{enumerate}
\item Age ($x_1$)
\item Gender ($x_2$)
\item Education level ($x_3$)
\item Marital status ($x_4$)
\item Number of children ($x_5$)
\item Number of family members in the household ($x_6$)
\item Number of grandchildren in the household ($x_7$)
\item Percent of elderly in the household ($x_{8}$)
\item Children contact frequency ($x_{9}$)
\item Total assets ($x_{10}$)
\item Health condition ($x_{11}$)
\item Happiness level ($x_{12}$)
\item Healthcare eligibility ($x_{13}$)
\item Residential type ($x_{14}$)
\end{enumerate}
Previous studies, for example, \citet{arkornsakul_labor_2020}, \citet{bai_financial_2020}, \citet{coe_how_2013}, \citet{kim_factors_2016}, \citet{matthews_family_2013}, \citet{turner_factors_1994} and \citet{wattanasaovaluk_economic_2021}, utilize only some of these variables. We strengthen our study by incorporating all of these variables and using a different statistical approach to see how each variable influences the elderly's desire to work.
\begin{table}[H]
\begin{center}
\includegraphics[width=6in]{descriptive.jpg}
\end{center}
\caption{Predictor Descriptions, Metrics, Means, Medians, and Standard Deviations}
\label{tab:descriptive}
\end{table}
In Table \ref{tab:descriptive}, the variables are listed. The variables $x_1,x_5,x_6,x_7$ and $x_8$ are quantitative. The variables $x_3, x_9,x_{10},x_{11}$ and $x_{12}$ are collected as categorical but are orderable as detailed. The nominal variables include $x_2, x_4, x_{13}$ and $x_{14}$. The descriptive statistics of all the predictors are shown in Table \ref{tab:descriptive}. For quantitative variables, the mean, median and SD are shown, and for categorical variables, the categorical percentages are shown. Note that this dataset contains the current job and wage of the elderly who are still working but not their prior jobs. Therefore, we do not include these variables as we consider all elderly in the dataset. Based on the dataset, the elderly who are still working mostly still want to work.

Next we report the generalized variance inflation factors (GVIF) (\citealt{fox1992generalized}) to show that these predictors have no collinearity issue. Recall that GVIF is the generalized version of the variance inflation factor (VIF) that can be used for categorical variables. We consider $\left(\texttt{GVIF}^{1/(2\cdot \texttt{DF})}\right)^2$, which can use the same rule of thumb as VIF. Values close to 1 indicate no collinearity issue and values greater than 5 indicate too much collinearity. The GVIF and $\left(\texttt{GVIF}^{1/(2\cdot \texttt{DF})}\right)^2$ of the 14 predictors in Table \ref{tab:vif} below show no sign of collinearity.
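For readers reproducing this check outside of R's \texttt{car::vif}, the Fox--Monette GVIF can be computed directly from the correlation matrix of the design columns. A minimal sketch (our own helper; it assumes a drop-first dummy coding so that the correlation matrix is nonsingular):
\begin{verbatim}
import numpy as np
import pandas as pd

def gvif(design, groups):
    # design: dummy-coded design matrix (DataFrame, drop-first coding).
    # groups: dict mapping predictor name -> list of its design columns.
    R = design.corr().to_numpy()
    cols = list(design.columns)
    out = {}
    for name, members in groups.items():
        i1 = [cols.index(c) for c in members]
        i2 = [i for i in range(len(cols)) if i not in i1]
        r11 = R[np.ix_(i1, i1)]
        r22 = R[np.ix_(i2, i2)]
        # Fox-Monette (1992): GVIF = det(R11) * det(R22) / det(R).
        out[name] = np.linalg.det(r11) * np.linalg.det(r22) / np.linalg.det(R)
    return pd.Series(out)
\end{verbatim}
The scaled value $\left(\texttt{GVIF}^{1/(2\cdot \texttt{DF})}\right)^2$ is then obtained with $\texttt{DF}$ equal to the number of design columns of the predictor.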
\begin{table}[h]
\renewcommand{\arraystretch}{0.7}
\caption{The GVIF table.}
\centering
\scriptsize
\resizebox{\columnwidth}{!}{%
\begin{tabular}{|c| c c c c c c c c c c c c c c |}
\hline
 & $x_{1}$ & $x_{2}$ & $x_{3}$ & $x_{4}$ & $x_{5}$ & $x_{6}$ & $x_{7}$ & $x_{8}$ & $x_{9}$ & $x_{10}$ & $x_{11}$ & $x_{12}$ & $x_{13}$ & $x_{14}$ \\ [0.5ex]
\hline \\ [-0.3ex]
GVIF & 1.18 & 1.21 & 1.45 & 2.40 & 1.59 & 3.41 & 1.84 & 2.82 & 1.73 & 1.07 & 1.28 & 1.29 & 1.36 & 1.08 \\ [0.5ex]
DF & 1 & 1 & 1 & 5 & 1 & 1 & 1 & 1 & 4 & 1 & 1 & 1 & 10 & 3 \\ [0.5ex]
$\left(\texttt{GVIF}^{1/(2\cdot \texttt{DF})}\right)^2$ & 1.18 & 1.21 & 1.45 & 1.19 & 1.59 & 3.41 & 1.84 & 2.82 & 1.15 & 1.07 & 1.28 & 1.29 & 1.03 & 1.03 \\ [1ex]
\hline
\end{tabular}%
}
\label{tab:vif}
\end{table}
While many researchers focus on work participation of the elderly and the factors that influence the elderly's choice, most use either linear regression or ordinary logistic regression (for example \citealt{haider_elderly_2001}; \citealt{jean-olivier_distance_2010}; \citealt{kubickova_active_2018}), rather than shrinkage logistic regression (lasso logistic regression) and decision trees (random forests). Lasso logistic regression is widely used in science, especially in the medical field (for example, see \citealt{dai_use_2016} and \citealt{lee_application_2014}), but this is not the case in social science. Lasso logistic regression has many advantages over other logistic regressions as it reduces the variance of the model and allows for variable selection. Several social science studies have begun to incorporate this method (for example, see \citealt{elyasiani_determinants_2019}). A study by \citet{molee_study_2019} attempted to use a decision tree (random forest) in studying the elderly; however, their study focused on the risk of catching a disease among the elderly. The decision tree's main advantage is that it is easily visualizable, and random forests provide a ranking of variable importance.

\section{Methods}
\label{sec:method}

\subsection{Cross tabulation}

Prior to analyzing the data using lasso logistic regression and random forest, cross tabulation is used to visualize the relationship between each variable and the desire to work based on the pre-undersampled data of 31,190 observations.\footnote{The results from the cross tabulation are merely for preliminary analysis. No conclusive results are established in this paper based on the cross tabulation as there could be a correlation between the predictors. For example, in a cross tab, a widower may show up as not wanting to work, yet the actual reason could be that most widowers have the tendency to be older than other elderly.} Note that we do not report the chi-square test of independence as our sample is very large and thus any small difference will appear statistically significant.

\subsection{Random forest}

The idea of random forest was first introduced by \citet{ho1995random}, and an extension of it, which was termed random forest, was registered as a trademark in 1998 by Breiman and Cutler (see \citealt{breiman2001random} or \citealt{james_introduction_2013} for reviews of the methods). We apply random forest to the undersampled data of 17,770 observations to obtain models to predict the elderly's desire to work. Random forest is one of the tree-based methods that separates the predictor space into sub-regions where each region predicts the same outcome.
Random forest is used to overcome the weakness of the basic decision tree, which has a very high variance, by generating many decision trees and predicting the majority outcome over the trees in each sub-region. Random forest also allows us to select the number of variables considered at each step; by selecting a smaller random subset of variables, it helps detect the importance of each variable without biasing towards only the most important ones. The most popular choice of the number of variables considered at each split is $\sqrt{p}$, where $p$ is the total number of variables. In this research, we generate 100 trees and select four variables randomly at each split. For an observation, each tree predicts 0 or 1 and we take the majority of either 0 or 1 as the final outcome. In this work, trees are grown based on the Gini index, which is defined as
\begin{eqnarray*}
G=\hat{p}_{m0}(1-\hat{p}_{m0})+\hat{p}_{m1}(1-\hat{p}_{m1})=2\hat{p}_{m1}(1-\hat{p}_{m1})
\end{eqnarray*}
where $\hat{p}_{m0}$ and $\hat{p}_{m1}$ denote the proportions of training observations in the $m^{th}$ region that are from class 0 and 1, respectively. It is clear that values of $\hat{p}_{m1}$ close to either 0 or 1 will result in a small Gini index.

The random forest algorithm also provides the importance ranking of predictors. The well-known importance ranking is based on either mean decrease Gini (MDG) or mean decrease accuracy (MDA). The MDG of each predictor is defined as the difference between the means of the Gini index from all regions separated by the random forest model with and without that predictor. MDA is defined similarly, with the Gini index replaced by prediction accuracy. If the model without a particular predictor results in a much larger Gini index or much lower accuracy, then the MDG and MDA of this predictor are large, which implies that the predictor is important. Nevertheless, MDG and MDA do not necessarily lead to the same outcome. A variable may have a good MDA but not a good MDG, and vice versa.

\subsection{Lasso logistic regression}

The idea of lasso was introduced by \citet{santosa1986linear}, and later \citet{tibshirani1996regression} coined the term ``lasso.'' The binary logistic regression model (see \citealt{hastie_elements_2009} for a review of the methods) is of the form
\begin{eqnarray*}
\log\left(\frac{P(Y=1)}{P(Y=0)}\right) = \beta_0+\beta_1X_1+\beta_2X_2+\cdots+\beta_pX_p
\end{eqnarray*}
where $P(Y=0)$ and $P(Y=1)$ denote the probability that the outcome is 0 and 1, respectively, and $X_1,X_2,\ldots,X_p$ denote the $p$ predictors. The method solves for the $\beta_j$ that maximize the binomial likelihood function
\begin{eqnarray*}
L(\beta) = \prod_{i:y_i=1} P(Y=1|\vec{x}_i) \prod_{i:y_i=0} P(Y=0|\vec{x}_i)
\end{eqnarray*}
where
\begin{eqnarray*}
P(Y=1|\vec{x})=\frac{e^{\beta_0+\beta_1 x_1+\beta_2 x_2 + \cdots +\beta_p x_p}}{1+e^{\beta_0+\beta_1 x_1+\beta_2 x_2+\cdots+\beta_p x_p}}
\end{eqnarray*}
and
\begin{eqnarray*}
P(Y=0|\vec{x})=\frac{1}{1+e^{\beta_0+\beta_1 x_1+\beta_2 x_2+\cdots+\beta_p x_p}}.
\end{eqnarray*}
Note that the same fit is obtained if we minimize the negative log-likelihood $-\log L(\beta)$ instead. Lasso logistic regression is the model that solves for the $\beta_j$ that minimize
\begin{eqnarray}
\label{lassoMLE}
-\log L(\beta) + \lambda \sum_{j=1}^p|\beta_j|
\end{eqnarray}
with $\lambda\ge 0 $ as a tuning parameter. The second term is called the penalty term; it forces the $\beta_j$ to shrink towards zero as $\lambda$ grows.
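In practice such a fit is obtained with standard software. A minimal sketch with scikit-learn (our own wrapper; note that scikit-learn's \texttt{C} is an inverse penalty strength, so it corresponds to $1/\lambda$ only up to the scaling convention of the loss term):
\begin{verbatim}
from sklearn.linear_model import LogisticRegression

# X: dummy-coded design matrix with scaled numeric columns; y: 0/1 response.
def fit_lasso_logit(X, y, lam):
    # liblinear supports the L1 penalty; larger lam shrinks more
    # coefficients to exactly zero.
    model = LogisticRegression(penalty="l1", solver="liblinear", C=1.0 / lam)
    return model.fit(X, y)
\end{verbatim}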
Therefore, choosing an appropriate $\lambda$ reduces the variance of the logistic regression model and selects only the most important factors in the final model. We use this model as a classification tool by assigning individuals with $P(Y=1|\vec{x})>0.5$, or 50\%, to the class ``have a desire to work'' throughout the paper. We remark here that in some applications it is possible that 50\% is not the best threshold. The lasso logistic regression is fitted to the undersampled data of 17,770 observations to obtain models to predict the elderly's desire to work.

\subsection{$k$-fold cross validation}

In this work, we apply $k$-fold cross validation to each model in order to compare accuracy. This method first splits the original dataset into $k$ non-overlapping sets. Then, for each of the $k$ folds, it fits the model on $k-1$ sets and leaves the remaining set for testing accuracy. The $k$-fold cross validation accuracy is the average accuracy over the $k$ folds. The popular choice of $k$ in machine learning is $10$; we thus use $10$ throughout this work. The main advantage of this method is that it provides an accurate estimate of accuracy, since repeating the process $10$ times and averaging reduces the variance of the estimate. In lasso logistic regression, we apply $10$-fold cross validation to compare the models with different tuning parameters $\lambda$ in order to select an appropriate $\lambda$. We also use the $10$-fold cross validation technique to compare the final models from random forest and lasso logistic regression based on their accuracy.

\section{Results and discussions}
\label{sec:result}

\subsection{Cross tabulation}

First, we perform cross tabulation analysis as a preliminary evaluation of the relationship between each of the fourteen predictors and the elderly's desire to work. The results are shown in Table \ref{tab:cttable}. We also plot bar graphs in Figure \ref{fig:bar}, which illustrate the percentage of elderly who desire to work in each category of each predictor compared to the average of all observations.

Figure \ref{fig:bar}(A) shows the obvious, that is, as one ages the desire to work decreases. This is probably due to the physical stamina and the health condition of the elderly. This finding is in line with \citeauthor{wattanasaovaluk_economic_2021}'s study (\citeyear{wattanasaovaluk_economic_2021}) in that elderly exit the labor market because they are too old to work. As Table \ref{tab:eldnum} illustrates, in the past few years, the elderly's intentions to work have been increasing for both genders (\citealt{national_statistical_office_demographic_2021}). Our study, Figure \ref{fig:bar}(B), shows that elderly men are more likely than elderly women to continue work after retirement. This is consistent with \citeauthor{arkornsakul_labor_2020}'s study (\citeyear{arkornsakul_labor_2020}), which claims that men, considered to be the head of the household, need to earn money to take care of the family, and that men are physically stronger than women, which encourages men to stay in the labor market longer. For the education level variable (Figure \ref{fig:bar}(C)), elderly with no education and those with more than high school education have less tendency to want to work than the average. Only elderly with below primary school, primary school, and junior high as their highest educational attainment have the desire to work more than the average pool.
Our findings are consistent with those of \citeauthor{kim_factors_2016}'s research (\citeyear{kim_factors_2016}) and \citeauthor{arkornsakul_labor_2020}'s study (\citeyear{arkornsakul_labor_2020}), which show that elderly workers with a higher education level were less likely to maintain employment. This perhaps is because those with high educational attainment are more likely to have a well-paid job during their working years and thus have enough savings to retire. On the other hand, elderly with no education at all are mostly very old and thus have no desire to work.\footnote{In 1921, the Compulsory Education Act required that every child from seven to fourteen attend school. Throughout 1921 to 1960, the number of years of compulsory education increased; however, it was not strictly enforced. From 1960 to 1978, four years of primary education was required, and in 1978, six years of primary schooling was made compulsory (\citealt{curran_boys_2002}; \citealt{keyes_proposed_1991}).}

Figure \ref{fig:bar}(D) shows that married elderly and elderly who are separated are more likely to have the desire to be in the labor market. This is because elderly who are widowers tend to be much older than those of other marital status, hence old age prevents them from working. As for the other marital status categories, the total count is too small to have any statistical significance. The number of children plays an important role in whether or not an elderly person wants to continue working. Figure \ref{fig:bar}(E) shows that the more children an elderly person has, the more likely he/she does not wish to work. This possibly reflects the Thai culture in which children often give their parents money as they age. Hence, elderly with lots of children may be able to cover their entire expenses without the need to work (\citealt{knodel_familial_1992}; \citealt{knodel_intergenerational_2008}; \citealt{phijaisanit_how_2016}).

\citeauthor{arkornsakul_labor_2020}'s study (\citeyear{arkornsakul_labor_2020}) finds that the elderly's intention to continue work depends on the number of household members. This is the case for agricultural careers, as they require a large workforce and physical effort. If there is a decrease in household members, the elderly will decide to continue working so as to maintain their income level. While our study did not examine the elderly at a regional level as \citeauthor{arkornsakul_labor_2020}'s study (\citeyear{arkornsakul_labor_2020}) did, the big picture in Thailand, as Figure \ref{fig:bar}(F) illustrates, is that elderly who have one or two family members in the household, excluding the elderly themselves, have the tendency to want to work more than the average pool. This possibly reflects that the elderly may not have enough earnings to keep up the household and thus need to continue working (\citealt{phijaisanit_how_2016}). Alternatively, it could be that these elderly choose to keep their employment as it maintains their well-being and ensures that they can support themselves in the future (\citealt{srisuchart_promotion_2019}). Elderly with many family members have less desire to work than the average pool; this perhaps is because the family members help pitch in to make ends meet. Interestingly, elderly who live by themselves are less likely than other elderly to want to continue work. This could be because they no longer need to financially support anyone but themselves and thus do not see the point of holding a job.
This contradicts \citeauthor{santiphop_analysis_2016}'s study (\citeyear{santiphop_analysis_2016}), which finds that elderly who lived alone have to remain employed as it may be equivalent to survival. Figure \ref{fig:bar}(G) shows that elderly with no grandchildren have the highest desire to continue working. This reflects that these elderly do not need to help pitch in with childcare and thus can decide to work. On the other hand, elderly with any number of grandchildren have less than the average desire to work, as they may be asked to help out with childcare. Globally, and especially so in Thailand, it is quite common for children to ask their parents to help take care of their kids while they are at work. Some grandparents take care of their grandchildren full time (\citealt{hank_grandparents_2009}; \citealt{hayslip_jr_grandparents_2005}; \citealt{kamnuansilpa_grandparents_2005}; \citealt{komonpaisarn2019providing}; \citealt{mehta_introduction_2012}; \citealt{nanthamongkolchai_physical_2012}).

Figure \ref{fig:bar}(H) shows that in households where most, but not all, members are seniors, more elderly than average tend to want to work. Perhaps, living with other elderly, they still need to work as they do not have enough money to pay for their expenses, and maybe they need to support other dependents within the households as well. However, elderly in households with fewer elderly or households with one hundred percent elderly may have other family members support their cost of living or may not need to support dependents, respectively. This is consistent with \citeauthor{arkornsakul_labor_2020}'s study (\citeyear{arkornsakul_labor_2020}) and \citet{bank_of_thailand_aging_2018}, which indicated that elderly leave the labor market as they have to take care of family members who are children, seniors, and/or sick.

Activity theory holds that the more activities and socialization an elderly person gets, the happier he/she will be (\citealt{havinghurst_patterns_1968}; \citealt{little_aging_2016}). Children's contact frequency thus plays an important role in determining whether or not the elderly wish to work. Figure \ref{fig:bar}(I) demonstrates that elderly who have no contact with their children or have few contacts with their children per year still have the desire to work more than the average. Work for these elderly may be a way to keep themselves busy and/or a source of happiness. Elderly who have frequent contact with their children, on the other hand, are more likely to want to retire because they are able to seek a new source of happiness, that is, doing activities and spending time with their loved ones. Continuity theory explains that elderly do not change their ways of living in a sudden manner; rather, they progressively make a rational choice about their future based on their social roles (\citealt{atchley_continuity_1989}; \citealt{von_bonsdorff_continuity_2013}). Figure \ref{fig:bar}(J) shows that the percentage of elderly with no assets or little to moderate assets (0 to 100,000 baht) who desire to work is below average. This conceivably is because there is no motivation in working as the compensation from work is rather low. The percentage of elderly with lots of assets (above 200,000 baht), on the other hand, who desire to work is above average. This could be due to the high monetary returns. The figure further shows that there is an obvious peak at 700,000-1,000,000 baht.
Due to a data restriction (the survey does not differentiate among elderly with more than three million baht in assets), we cannot directly infer the work-desire trends of the wealthiest. Nevertheless, due to the decreasing trend thereafter, we believe that people with very high assets will have less desire to work. \citet{santiphop_analysis_2016} explain health condition as one of the main factors that affect the elderly's employment status. Our study (Figure \ref{fig:bar}(K)) shows that elderly with better health are more likely to remain in the labor market. This is consistent with \citeauthor{haseen_self-assessed_2010}'s study (\citeyear{haseen_self-assessed_2010}), which finds that elderly who did not work were more likely to report poor health than those who worked.
\begin{table}[H]
\begin{center}
\includegraphics[width=4.3in]{crosstabtable.jpg}
\end{center}
\caption{Cross tabulation table}
\label{tab:cttable}
\end{table}
\begin{figure}[H]
\begin{center}
\includegraphics[width=3.7in]{crosstabgraph.jpg}
\end{center}
\caption{Cross tabulation bar graphs. (The blue column represents the percentage of elderly who desire to work in each category. The red horizontal line represents the overall percentage of elderly who desire to work.)}
\label{fig:bar}
\end{figure}
Figure \ref{fig:bar}(L) shows that elderly who evaluate themselves as depressed do not want to work, whereas those who are happy tend to work more than the average. The figure shows that the elderly's desire to work peaks at happiness level 8; this possibly reflects that those who still want to work decide to work and thus are happy. Interestingly, those who are extremely happy (above 8) have less tendency to want to work. This may be because their happiness is not contingent only upon work; their means to happiness can come from other sources (\citealt{chyi_determinants_2012}; \citealt{gray_inner_2008}; \citealt{kehn_predictors_1995}; \citealt{nanthamongkolchai_physical_2012}).

As mentioned earlier, eligibility for healthcare plays a great role in the elderly's decision to remain in the job market. Thai elderly receive different types of healthcare upon retirement depending on their lifelong career. In Figure \ref{fig:bar}(M), we can see two separate trends. Elderly who worked for the government are more likely to have less desire to continue working after retirement, precisely because, having worked for the government, they receive healthcare coverage even after retirement (\citealt{the_comptroller_generals_department_civil_2021}). On the other hand, for elderly who worked in the private sector, healthcare coverage stops once they retire. Therefore, to have access to healthcare, many elderly who worked in the private sector intentionally choose to continue working. Figure \ref{fig:bar}(N) shows that elderly who live in single houses and town houses have less desire than the average to continue work. On the other hand, elderly who live in commercial buildings have the desire to continue work. These elderly may explicitly choose to live in such a residential type as they can continue their work, selling merchandise, for instance.

\subsection{Random forest}

We first apply the random forest technique that grows 500 trees with 4 random candidate predictors at each split. The random forest model results in a 10-fold cross validation accuracy of 68.19 percent. Figure \ref{fig:imp} shows the variables and their importance in predicting the elderly's decision to work based on MDG and MDA.
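A hedged sketch of this step with scikit-learn (continuing the \texttt{balanced} data frame from Section \ref{sec:data}; scikit-learn's impurity-based importance corresponds to MDG, while MDA is obtained by permutation):
\begin{verbatim}
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import cross_val_score

X = pd.get_dummies(balanced.drop(columns="desire_to_work"))
y = balanced["desire_to_work"]

# 500 trees, 4 random candidate predictors at each split.
rf = RandomForestClassifier(n_estimators=500, max_features=4, random_state=0)
cv_acc = cross_val_score(rf, X, y, cv=10).mean()  # ~0.68 reported above

rf.fit(X, y)
mdg = rf.feature_importances_                     # impurity-based (MDG-type)
mda = permutation_importance(rf, X, y, n_repeats=10,
                             random_state=0).importances_mean
\end{verbatim}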
\citet{strobl2007bias} finds that MDG is biased towards continuous variables and categorical variables with more categories. Since our predictors include both quantitative and categorical variables with different numbers of categories, MDA is more appropriate. As illustrated in Table \ref{tab:cttable}, for instance, men clearly have more desire to work than women, but MDG suggests that this variable is the least important. Although the following predictors: age, education level, healthcare eligibility and marital status, have MDA above 1 percent and show noticeably logical trends in the cross tabulation results in Figure \ref{fig:bar}, no conclusion can be made that the two results are consistent, as random forest does not provide any directional impact information.
\begin{figure}[H]
\begin{center}
\includegraphics[width=5.8in]{imprf2.jpg}
\end{center}
\caption{Variables ranking in predicting the elderly's decision to work based on mean Gini index decrease (A) and mean decrease accuracy (B)}
\label{fig:imp}
\end{figure}

\subsection{Lasso logistic regression}

While the random forest model provides the importance ranking of each variable, it has two disadvantages. First, the random forest model does not show the direction of each variable's impact. Second, it does not show which category of the categorical variables has the most impact. To overcome this weakness, we perform lasso logistic regression and compare the accuracy of the two models. From Figure \ref{fig:imp}, children contact frequency, happiness level and number of grandchildren in the household are less impactful predictors (below 0.3 percent based on MDA) of the elderly's desire to work according to the random forest model. We first remove them before performing lasso logistic regression. Note that we actually did some experiments and found that including these three variables in the lasso logistic regression model lowers the 10-fold cross validation accuracy by about 2 percent.
\begin{figure}[h]
\begin{center}
\includegraphics[width=4in]{lambda.jpg}
\end{center}
\caption{Lasso logistic regression accuracy}
\label{fig:lambda}
\end{figure}
Next, we apply lasso logistic regression after scaling\footnote{Scaled means subtracting the mean and dividing by the standard deviation.} all numeric variables. We create a sequence of tuning parameters $\lambda$, as stated in \eqref{lassoMLE}, of the form $\lambda=10^k$, where $k$ ranges from $-4$ to $0$, and seek an appropriate $\lambda$. Assigning an individual with $P(Y=1|x)>0.5$ to the class ``have a desire to work,'' it turns out that the model with $\lambda=0.0001$ is the best model based on the 10-fold cross validation accuracy of 69.67 percent. The coefficients of all the predictors from this model are nonzero, which departs from our intention of using lasso logistic regression to also select important predictors. We thus intentionally move to the local peak, as seen in Figure \ref{fig:lambda} based on accuracy, at $\lambda = 0.0087$, which yields a 10-fold cross validation accuracy of 69.58 percent. The logit equation is
\begin{eqnarray*}
\log\left(\frac{P(Y=1)}{P(Y=0)}\right) &=& 5.819-0.697x_1+0.477x_{2,1}-0.252x_3+0.031x_{4,2} \\
&& \hspace{1pt} -0.386x_{4,4}-0.054x_5+0.186x_{10}+0.243x_{11} \\
&& \hspace{1pt}-0.436x_{13,3}+0.034x_{13,7}-0.345x_{14,2}+0.295x_{14,3} .
\end{eqnarray*}
where $x_{a,b}$ equals 1 if the predictor $x_a$ is in category $b$ and 0 otherwise. The coefficients, ranked by magnitude, are summarized in Table \ref{tab:coef} below.
\begin{table}[h]
\renewcommand{\arraystretch}{0.7}
\caption{Lasso logistic regression coefficients}
\centering
\scriptsize
\begin{tabular}{|c|c|}
\hline
\textbf{Intercept} & \textbf{Coefficient} \\ [1ex]
\hline
Intercept & 5.819 \\ [1ex]
\hline
\textbf{Positive predictor} & \textbf{Coefficient} \\ [1ex]
\hline
Gender: Male & 0.477 \\ [1ex]
Residential type : commercial building & 0.295 \\ [1ex]
Health condition & 0.243 \\ [1ex]
Total assets & 0.186 \\ [1ex]
Healthcare eligibility : private insurance & 0.034 \\ [1ex]
Marital status: married (living together) & 0.031 \\ [1ex]
\hline
\textbf{Negative predictor} & \textbf{Coefficient} \\ [1ex]
\hline
Age & -0.697 \\ [1ex]
Healthcare eligibility : government support & -0.436 \\ [1ex]
Marital status: widowed & -0.386 \\ [1ex]
Residential type : townhouse & -0.345 \\ [1ex]
Education level & -0.252 \\ [1ex]
Number of children & -0.054 \\ [1ex]
\hline
\end{tabular}
\label{tab:coef}
\end{table}
From Table \ref{tab:coef}, there are six predictors that have positive impacts on the desire to work: gender (male), residential type (commercial building), health condition, total assets, healthcare eligibility (private insurance), and marital status (married (living together)). On the other hand, the following predictors: age, healthcare eligibility (government support), marital status (widowed), residential type (townhouse), education level, and number of children, have negative impacts on the desire to work. The results here are consistent with those of the cross tabulation regarding the directional impact on the desire to work. Yet, as shown in Table \ref{tab:coef}, lasso logistic regression goes a step further, as it allows us to understand not only the prediction but also the ranking of the factors that influence the elderly's decision to work. A larger coefficient magnitude means more influence on the elderly's decision to work. We remark here that marital status (never married), healthcare eligibility (no insurance) and residential type (single house) are baseline categories for the multiple categorical variables, which affect the intercept of 5.819. We cannot conclude the actual impact of these baseline categories; nevertheless, we know from the cross tabulation result in Table \ref{tab:cttable} that they have positive, negative and neutral impact, respectively.

\section{Conclusion and implication}
\label{sec:discussion}

Thai society has been an aging society since the year 2000 and is predicted to become a super-aged society by 2031 (\citealt{National_Statistical_Office}; \citealt{srisuchart_promotion_2019}; \citealt{wattanasaovaluk_economic_2021}). This change in the country's population demographic has important implications for the country in various dimensions, for example, the caregiving industry, the labor market, the social security payouts and the country's economic productivity (\citealt{arkornsakul_labor_2020}; \citealt{bloom_macroeconomic_2015}). While many elderly consider retirement, another group of elderly choose to continue working. Using the 2017 Survey of the Older Persons in Thailand, this quantitative study uses cross tabulation, random forest with variable importance measures and lasso logistic regression to highlight factors that are associated with the elderly's decision to remain in the labor market and to build predictive elderly labor market models to respond to the aging society.
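To make the fitted model concrete for prediction, the logit equation above can be evaluated directly. A hedged sketch (the feature keys are our own shorthand for the categories in Table \ref{tab:coef}; numeric inputs must be standardized with the training means and standard deviations, since the model was fitted on scaled variables):
\begin{verbatim}
import math

COEF = {
    "age": -0.697, "male": 0.477, "education": -0.252, "married": 0.031,
    "widowed": -0.386, "n_children": -0.054, "assets": 0.186,
    "health": 0.243, "hc_gov": -0.436, "hc_private": 0.034,
    "res_townhouse": -0.345, "res_commercial": 0.295,
}
INTERCEPT = 5.819

def p_desire_to_work(x):
    # x: dict of standardized numeric and 0/1 dummy features; missing keys -> 0.
    logit = INTERCEPT + sum(c * x.get(k, 0.0) for k, c in COEF.items())
    return 1.0 / (1.0 + math.exp(-logit))
\end{verbatim}
For instance, the age coefficient implies that one standard deviation of additional age multiplies the odds of desiring to work by $e^{-0.697}\approx 0.50$.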
While previous literature (see \citealt{arkornsakul_labor_2020}, \citealt{bai_financial_2020}, \citealt{coe_how_2013}, \citealt{kim_factors_2016}, \citealt{matthews_family_2013}, and \citealt{turner_factors_1994}, for instance) has examined factors that influence the elderly's decision to retire, our study advances the literature by incorporating more variables from different studies. Doing so makes our research more robust and allows us to focus on which factors to take into consideration when implementing policy regarding the elderly labor market. Another key difference from previous research is the use of random forest and shrinkage logistic regression. Random forest provides a variable importance ranking: our study shows that age, education level, healthcare eligibility, and marital status have the most impact on the elderly's decision to remain in the market, with MDA above 1 percent, while health condition, total assets, gender, residential type, percentage of elderly in the household and number of children have some impact on the decision-making, with MDA above 0.4 percent. Yet, as the random forest model shows neither the direction of each variable's impact nor which category of a categorical variable has the most impact, we overcome this weakness by using lasso logistic regression. While lasso logistic regression is widely used in the sciences, it is less common in social science research; past research on the elderly's work participation and the factors that influence their choice used either linear regression or ordinary logistic regression (for example \citealt{haider_elderly_2001}; \citealt{jean-olivier_distance_2010}; \citealt{kubickova_active_2018}). Lasso logistic regression has advantages over ordinary logistic regression, as it reduces the variance of the model and allows for variable selection. Using lasso logistic regression, our study illustrates that gender (male), residential type (commercial building), health condition, total assets, healthcare eligibility (private insurance) and marital status (married and living together) have a positive impact on the elderly's decision to work, while age, healthcare eligibility (government support), marital status (widowed), residential type (townhouse), education level and number of children have a negative impact. This is consistent with the cross tabulation, as the positive (negative) impact quantitative and ordinal variables tend to have upward (downward) trends in the cross tabulation results, while the percentages of elderly who still want to work in the positive (negative) impact categories of the nominal variables are above (below) average. We achieve the ambitious goal of predicting the elderly's decision to work with 68.19 and 69.58 percent accuracy using the random forest and lasso logistic regression models, respectively. By understanding which factors contribute to the elderly's wish to continue working, policymakers can use these models to plan a future labor market that can accommodate the elderly in Thailand. Policymakers can apply our models (lasso logistic regression and random forest) to a well-collected sample of the population aged 50 to 60 to predict whether they are likely to continue working after sixty, and can then make a long-term plan for the elderly labor market based on these individuals' characteristics and backgrounds.
Since 2017, the Thailand Revenue Department has encouraged companies to hire the elderly by qualifying those companies for an income tax exemption (\citealt{revenue_department_news_revenue_2017}). Several government agencies, for instance the Department of Employment and the Department of Older Persons, have introduced measures to accommodate the elderly in the labor market, including upskilling and reskilling the elderly through vocational training and information technology skills (\citealt{Department_of_Employment_2018}; \citealt{Department_of_Older_Persons_2021}). By adapting our model to each economic sector, we can help both the public and private sectors narrow down which skills should be highlighted, enabling the elderly to acquire the necessary skills and adapt to the current labor market. Further, for the elderly in the goods and services sectors, there should be avenues reserved specifically for the elderly in order to minimize competition, which can ease the stress on the elderly in the economic sector. Moreover, following our approach, a new model can be reconstructed each time a new data set becomes available, so that we always have an up-to-date model that accurately reflects the elderly's desire to work at a particular time. An important step for future research on the elderly's desire to work is to improve measures of work type. This study was limited by the data set utilized, as the National Statistical Office of Thailand does not record details of the work type. As mentioned, elderly who do not retire do not necessarily desire to work full-time: some engage in bridge employment (\citealt{beehr_working_2015}; \citealt{quinn_work_2010}) while others engage in unretirement (\citealt{maestas_back_2010}). Further, this study could be strengthened if other variables were incorporated in the survey, such as the last job the elderly held, years of work, and desired jobs (if the answer is ``desire to work''), for instance.
\section*{Acknowledgements} We are grateful for the dataset from the National Statistical Office of Thailand.
\section*{Declarations} \begin{flushleft} \textbf{Funding:} This work was financially supported by the Office of the Permanent Secretary, Ministry of Higher Education, Science, Research and Innovation (Grant number: RGNS 63-217). \textbf{Conflicts of interest/Competing interests:} No potential conflict of interest was reported by the authors. \textbf{Availability of data and material:} This dataset is under the license of the National Statistical Office of Thailand. \textbf{Code availability:} Available upon request. \textbf{Authors' contributions:} Both authors have equal authorship of this article. \textbf{IRB approval:} This research has IRB approvals; the certification IDs are SWUEC/E-346/2564 and KMUTT-IRB-COE-2021-020. \end{flushleft}
\begin{document} \title[]{Lectures on Applied $\ell$-adic Cohomology} \author{\'Etienne Fouvry} \address{Laboratoire de Math\'ematiques d'Orsay, Universit\'e Paris-Sud, CNRS, Universit\'e Paris-Saclay, \linebreak[4] 91405 Orsay, France} \email{etienne.fouvry@u-psud.fr} \author{Emmanuel Kowalski} \address{ETH Z\"urich -- D-MATH\\ R\"amistrasse 101\\ CH-8092 Z\"urich\\ Switzerland} \email{kowalski@math.ethz.ch} \author{Philippe Michel} \address{EPFL/SB/TAN, Station 8, CH-1015 Lausanne, Switzerland} \email{philippe.michel@epfl.ch} \author{Will Sawin} \address{Mathematics Department, Rm 411, MC 4439, 2990 Broadway, New York, NY 10027, Columbia University, USA} \email{will.sawin@columbia.edu} \subjclass[2000]{Primary } \date{\today}
\begin{abstract} We describe how a systematic use of the deep methods from $\ell$-adic cohomology pioneered by Grothendieck and Deligne, and further developed by Katz and Laumon, helps make progress on various classical questions from analytic number theory. This text is an extended version of a series of lectures given during the 2016 Arizona Winter School. \end{abstract}
\maketitle \setcounter{tocdepth}{1} \tableofcontents
\section{Introduction}
One of the most basic questions in number theory is to understand how various sets of integers behave when restricted to (i.e.~intersected with) {\em congruence classes}, a notion that goes back at least to Euclid, was exposed systematically by Gauss in his 1801 {\em Disquisitiones Arithmeticae} (following works of Fermat, Euler, Wilson, Lagrange, Legendre and their predecessors from the middle ages and antiquity), and is fundamental to number theory. Let us recall that given an integer $q\in\mathbf{Z}-\{0\}$, a {\em congruence class} (a.k.a.\ an {\em arithmetic progression}) modulo $q$ is a subset of $\mathbf{Z}$ of the shape $$a\mods q=a+q\mathbf{Z}\subset \mathbf{Z}$$ for some integer $a$. The set of congruence classes modulo $q$ is denoted $\mathbf{Z}/q\mathbf{Z}$; it is a finite ring of cardinality $q$ (with addition and multiplication induced by those of $\mathbf{Z}$). In number theory, especially analytic number theory, one is interested in studying the behaviour of a given arithmetic function along congruence classes, for instance in order to determine whether a set of integers has finite or infinite intersection with some congruence class. The analysis of such problems, which may involve quite sophisticated manipulations, often involves certain specific classes of functions on $\mathbf{Z}/q\mathbf{Z}$.
When studying such functions, it is natural to invoke the {\em Chinese Remainder Theorem} $$\mathbf{Z}/q\mathbf{Z}\simeq \prod_{p^\alpha\|q}\mathbf{Z}/p^\alpha\mathbf{Z},$$ which largely reduces the study to the case of prime power moduli; in many instances, the deepest case is that of $q$ a prime. The ring $\mathbf{Z}/q\mathbf{Z}$ is then a finite field, denoted ${\mathbf{F}_q}$, and the functions that occur are often what we will call {\em trace functions}. The objective of these lectures is utilitarian: our aim is to describe these trace functions and many examples of them, to present their theory and, most importantly, to explain how they are handled when they occur in analytic number theory. Indeed, the mention of ``\'etale'' or ``$\ell$-adic cohomology'', ``sheaves'', ``purity'', ``functors'', ``local systems'' or ``vanishing cycles'' sounds forbidding to the working analytic number theorist and often prevents him/her from embracing the subject and making full use of the powerful methods that Deligne, Katz and Laumon have developed for us. It is our hope that after these introductory lectures, any of the remaining readers will feel ready for, and at ease with, more serious activities such as reading the wonderful series of orange books by Katz, and eventually will be able to tackle by him/herself any trace function that nature has laid in front of him/her.
\subsection*{Acknowledgements} These expository notes are an expanded version of a series of lectures given by Ph.M. and W.S. during the 2016 Arizona Winter School and based on our recent joint works. We would like to thank the audience for its attention and its numerous questions during the daily lectures, as well as the teams of students who engaged in the research activities that we proposed during the evening sessions, for their enthusiasm. Big thanks are also due to Alina Bucur, Bryden Cais and David Zureick-Brown for the perfect organization, making this edition of the AWS a memorable experience. We would also like to thank the referees for correcting many mistakes and typos in earlier versions of this text.
\section{Examples of trace functions}
Unless stated otherwise, we now assume that $q$ is a prime number.
\subsection{Characters} {\em Trace functions modulo $q$} are special classes of $\mathbf{C}$-valued functions on ${\mathbf{F}_q}$ of geometric origin. Perhaps the first significant example, beyond the constant function $1$, is the {\em Legendre symbol} (for $q\geq 3$) $$\left(\frac{\cdot}q\right): x\in{\mathbf{F}_q}\to \begin{cases}0& \hbox{ if $x=0$}\\ +1&\hbox{ if $x\in({\mathbf{F}^\times_q})^2$}\\ -1&\hbox{ if $x\in{\mathbf{F}^\times_q}-({\mathbf{F}^\times_q})^2$} \end{cases}$$ which detects the squares modulo $q$, and whose arithmetic properties (especially the {\em quadratic reciprocity law}) were studied by Gauss in the {\em Disquisitiones}. The class of trace functions was further enriched by P. G. Dirichlet: on his way to proving his famous theorem on primes in arithmetic progressions, he introduced what are now called {\em Dirichlet characters}, i.e.~the homomorphisms of the multiplicative group $$\chi: (\mathbf{Z}/q\mathbf{Z})^\times\to \mathbf{C}^\times$$ (with $\chi(0)$ defined to be $0$ for $\chi$ non-trivial).
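As an entirely elementary aside (our own addition, not part of the original lectures), the Legendre symbol can be computed in practice via Euler's criterion $x^{(q-1)/2}\equiv\left(\frac{x}q\right)\mods q$, valid for $q$ an odd prime; a minimal sketch in Python:
\begin{verbatim}
# Euler's criterion: for q an odd prime, x^((q-1)/2) mod q equals
# 1 for a nonzero square, q-1 for a nonsquare, and 0 for x = 0.
def legendre(x: int, q: int) -> int:
    x %= q
    if x == 0:
        return 0
    return 1 if pow(x, (q - 1) // 2, q) == 1 else -1  # fast modular power

# The nonzero squares modulo 7 are {1, 2, 4}:
assert [legendre(x, 7) for x in range(7)] == [0, 1, 1, -1, 1, -1, -1]
\end{verbatim}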
Another significant class of trace functions are the additive characters $$\psi:(\mathbf{Z}/q\mathbf{Z},+)\to \mathbf{C}^\times.$$ These are all of the shape $$x\in\mathbf{Z}/q\mathbf{Z}\mapsto \mathrm{e}_q(ax):=\exp\left(2\pi i\frac{\tilde a\tilde x}q\right)$$ for some $a\in\mathbf{Z}/q\mathbf{Z}$, where $\tilde a$ and $\tilde x$ denote integer lifts of the congruence classes $a\mods q$ and $x\mods q$. Both additive and multiplicative characters satisfy the important {\em orthogonality relations} $$\frac{1}{q}\sum_{x\in{\mathbf{F}_q}}\psi(x)\ov{\psi'(x)}=\delta_{\psi=\psi'},\qquad \frac{1}{q-1}\sum_{x\in{\mathbf{F}^\times_q}}\chi(x)\ov{\chi'(x)}=\delta_{\chi= \chi'},$$ and we will see later a generalization of these relations to arbitrary trace functions. Additive and multiplicative characters can be combined (by means of a Fourier transform) to form the (normalized) {\em Gauss sums} $$\varepsilon_\chi(a)=\frac{1}{q^{1/2}}\sum_{x\in{\mathbf{F}^\times_q}}\chi(x)\mathrm{e}_q(ax),$$ but these are not really new functions of $a$: by the change of variable $x\mapsto a^{-1}x$, one has $$\varepsilon_\chi(a)=\ov{\chi}(a)\varepsilon_\chi(1)$$ for $a\in{\mathbf{F}^\times_q}$. For $\chi$ non-trivial, Gauss proved that $$|\varepsilon_\chi(1)|=1.$$
\subsection{Algebraic exponential sums} Another important source of trace functions comes from the study of diophantine equations \begin{equation}\label{eqdiophantine} Q(\uple{x})=0,\ \uple{x}=(x_1,\ldots,x_n)\in\mathbf{Z}^n,\ Q(X_1,\ldots,X_n)\in\mathbf{Z}[X_1,\ldots,X_n]. \end{equation} For instance, the analysis of the {\em major arcs} in the {\em circle method} of Hardy--Littlewood (cf.~\cite[Chap. 4]{Vaughan}) leads to the following algebraic exponential sums on $(\mathbf{Z}/q\mathbf{Z})^n$, obtained by Fourier transform: $$(a,\uple{x})\in (\mathbf{Z}/q\mathbf{Z})^{n+1}\mapsto \frac{1}{q^{n/2}}\sum_{\uple{y}\in(\mathbf{Z}/q\mathbf{Z})^n}\mathrm{e}_q(aQ(\uple{y})+\uple{x}.\uple{y}).$$ In 1926, while studying the case of a positive definite homogeneous polynomial $Q$ of degree $2$ in four variables (a positive definite integral quaternary quadratic form), and introducing a new variant of the circle method, Kloosterman \cite{Kloost} defined the so-called (normalized) {\em Kloosterman sums} $$\Kl_2(a;q)=\frac{1}{q^{1/2}}\sum_\stacksum{x,y\in{\mathbf{F}^\times_q}}{xy=a}\mathrm{e}_q(x+y).$$ This is another example of a trace function, and indeed one that is defined via a Fourier transform. By computing their fourth moment (see \cite[(4.26)]{IwTopics}), Kloosterman was able to obtain the first non-trivial bound for Kloosterman sums, namely $$|\Kl_2(a;q)|\leq 2q^{1/4}.$$ This estimate proved crucial for the study of equation \eqref{eqdiophantine} in the case of quaternary positive definite quadratic forms. In the 1940's, this bound was improved by A. Weil who, as a consequence of his proof of the Riemann hypothesis for curves over finite fields, proved the best possible individual upper bound (see \cite[\S 11.7]{IwKo}): $$|\Kl_2(a;q)|\leq 2.$$
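Weil's bound is easy to test numerically. The following sketch (our own illustration, written in Python) evaluates $\Kl_2(a;q)$ directly from the definition, writing the condition $xy=a$ as $y=ax^{-1}$ with $x^{-1}=x^{q-2}\mods q$, and checks the bound for a small prime:
\begin{verbatim}
# Numerical sanity check of the Weil bound |Kl_2(a; q)| <= 2.
# Kl_2(a; q) = q^{-1/2} * sum over x in F_q^* of e_q(x + a * x^{-1}),
# where x^{-1} = x^(q-2) mod q by Fermat's little theorem.
import cmath, math

def kl2(a: int, q: int) -> complex:
    e_q = lambda t: cmath.exp(2j * cmath.pi * (t % q) / q)
    return sum(e_q(x + a * pow(x, q - 2, q))
               for x in range(1, q)) / math.sqrt(q)

q = 101
assert all(abs(kl2(a, q)) <= 2 + 1e-9 for a in range(1, q))
\end{verbatim}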
In 1939, Kloosterman sums appeared again in the work of Petersson, who related them to Fourier coefficients of modular forms.\footnote{In fact, Poincar\'e had already written them down in one of his last papers, published posthumously.} Since then, via the works of Selberg, Kuznetsov, Deshouillers--Iwaniec and many others, Kloosterman sums play a fundamental role in the analytic theory of automorphic forms.\footnote{The double occurrence of Kloosterman sums, in the context of quadratic forms and of modular forms, is explained by the theta correspondence.} A further important example of trace functions are the (normalized) {\em hyper-Kloosterman sums}. These are higher dimensional generalisations of Kloosterman sums, given, for any integer $k\geq 1$, by $$\Kl_k(a;q)=\frac{1}{q^{(k-1)/2}}\sum_\stacksum{x_1,\ldots, x_{k}\in{\mathbf{F}^\times_q}}{x_1.x_2.\ldots.x_k=a}\mathrm{e}_q(x_1+x_2+\ldots +x_k).$$ Hyper-Kloosterman sums were introduced by P. Deligne, who also established the following generalization of the Weil bound: $$|\Kl_k(a;q)|\leq k.$$ Hyper-Kloosterman sums can be interpreted as inverse (discrete) Mellin transforms of powers of Gauss sums, and can therefore be used to study the distribution of Gauss sums. As was noted by Katz in \cite{Sommes}, this fact and Deligne's bound imply the following.\footnote{See \cite{KatzConvol} for a considerable generalisation of this theorem.}
\begin{theorem} As $q\rightarrow\infty$, the set of (normalized) Gauss sums $$\{\varepsilon_\chi(1),\ \chi\mods q\hbox{ non-trivial }\}$$ becomes equidistributed on the unit circle $\mathbf{S}^1\subset \mathbf{C}^\times$ with respect to the uniform (Haar) probability measure. \end{theorem}
Hyper-Kloosterman sums also occur in the theory of automorphic forms; for instance, Luo, Rudnick and Sarnak used the fact that powers of Gauss sums occur in the root number of the functional equation of certain automorphic $L$-functions, together with the inverse Mellin transform property and Deligne's bound, to obtain non-trivial estimates for the Langlands parameters of automorphic representations on $\GL_n$ (giving in particular the first improvement of Selberg's famous $3/16$ bound for the Laplace eigenvalues of Maass cusp forms). In addition, just as for the classical Kloosterman sums, hyper-Kloosterman sums also occur in the spectral theory of $\GL_k$ automorphic forms. There are many more examples of trace functions, and we will describe some below, along with ways to construct new trace functions from old ones.
\section{Trace functions and Galois representations}
Let $\mathbf{P}^1_{{\mathbf{F}_q}}$ be the projective line, $\mathbf{A}^1_{{\mathbf{F}_q}}\subset \mathbf{P}^1_{{\mathbf{F}_q}}$ the affine line, and $K={\mathbf{F}_q}(X)$ the field of functions of $\mathbf{P}^1_{{\mathbf{F}_q}}$. In the sequel we fix a prime $\ell\not=q$, an algebraic closure $\ov{\mathbf{Q}_{\ell}}$ of the field of $\ell$-adic numbers $\mathbf{Q}_{\ell}$, and an embedding $\iota\colon\ov{\mathbf{Q}_{\ell}}\hookrightarrow \mathbf{C}$ into the complex numbers. Trace functions modulo $q$ are $\ov{\mathbf{Q}_{\ell}}$-valued functions\footnote{Hence $\mathbf{C}$-valued via the fixed embedding $\iota$.} defined on the set of ${\mathbf{F}_q}$-points of the affine line $\mathbf{A}^1({\mathbf{F}_q})\simeq {\mathbf{F}_q}$.
They are obtained from {\em constructible} $\ell$-adic sheaves (often denoted $\mathcal{F}$) for the \'etale topology on $\mathbf{P}^1_{{\mathbf{F}_q}}$. All these notions are quite forbidding at first; fortunately, the category of constructible $\ell$-adic sheaves on $\mathbf{P}^1_{{\mathbf{F}_q}}$ can be rather conveniently described in terms of the category of representations of the Galois group of $K$. Following \cite{Sommes,GKM}, we will start from this viewpoint. Let $K^{\mathrm{sep}}\supset K$ be a separable closure of $K$, and $\ov\eta$ the associated geometric generic point (i.e.~$\Spec(K^{\mathrm{sep}})=\ov\eta$). Let $\ov{{\mathbf{F}_q}}\subset K^{\mathrm{sep}}$ denote the separable (or algebraic) closure of ${\mathbf{F}_q}$ in $K^{\mathrm{sep}}$. We denote by $$G^{\mathrm{geom}}:=\Gal(K^{\mathrm{sep}}/\ov{{\mathbf{F}_q}}.K)\subset G^{\mathrm{arith}}=\Gal(K^{\mathrm{sep}}/K)$$ the {\em geometric}, resp.~{\em arithmetic}, Galois group. By restricting the action of an element of $G^{\mathrm{arith}}$ to $\ov{{\mathbf{F}_q}}$ we obtain the exact sequence \begin{equation} 1\rightarrow G^{\mathrm{geom}}\rightarrow G^{\mathrm{arith}}\rightarrow \Gal(\ov{{\mathbf{F}_q}}/{\mathbf{F}_q})\rightarrow 1.\label{exactgalois} \end{equation}
\begin{definition} Let $U\subset \mathbf{A}^1_{\mathbf{F}_q}$ be a non-empty open subset of $\mathbf{A}^1_{\mathbf{F}_q}$ defined over ${\mathbf{F}_q}$. An $\ell$-adic sheaf lisse on $U$, say $\mathcal{F}$, is a continuous finite-dimensional Galois representation $$\rho_\mathcal{F}:G^{\mathrm{arith}}\to \GL(V_\mathcal{F}),$$ where $V_\mathcal{F}$ is a finite-dimensional $\ov{\mathbf{Q}_{\ell}}$-vector space, which is unramified at every closed point $x$ of $U$. The dimension $\dim V_\mathcal{F}$ is called the rank of $\mathcal{F}$ and is denoted $\rk(\mathcal{F})$. The vector space $V_{\mathcal{F}}$ is also denoted $\mathcal{F}_{\ov\eta}$. \end{definition}
\subsection{Closed points on the affine line} In this section we spell out the meaning of the phrase ``unramified at every closed point $x$ of $U$''. Let us recall that the datum of a closed point of $\mathbf{P}^1_{{\mathbf{F}_q}}$ is equivalent to the datum of an embedding $\mathcal{O}_x\hookrightarrow K$ of a local ring\footnote{A PID with a unique prime ideal \cite[Chap. 1]{Serre}.} $\mathcal{O}_x$ (the ring of rational functions defined in a neighborhood of $x$) whose field of fractions is $K$.
Given such an embedding, we denote by $\mathfrak{p}_x$ its unique prime ideal, by $\pi_x$ a generator of $\mathfrak{p}_x$ (a uniformizer) and by $v_x:K\rightarrow\mathbf{Z}\cup\{\infty\}$ the associated discrete valuation (normalized so that $v_x(\pi_x)=1$): we have $$\mathcal{O}_x=\{f\in K,\ v_x(f)\geq 0\}\supset \mathfrak{p}_x=\{f\in K,\ v_x(f)> 0\}.$$ We denote by $k_x=\mathcal{O}_x/\mathfrak{p}_x$ the residue field, by $q_x=|k_x|$ its cardinality, and by $\deg x$ the degree of $x$, defined by $q_x=q^{\deg x}$. The set of closed points of the projective line $\mathbf{P}^1_{{\mathbf{F}_q}}$ is the union of the set of closed points of the affine line $\mathbf{A}^1_{{\mathbf{F}_q}}$, which is indexed by the set of monic, irreducible (non-constant) polynomials of ${\mathbf{F}_q}[X]$, and the point $\infty$.
\begin{itemize} \item For $\pi$ irreducible, monic and non-constant, the local ring $\mathcal{O}_\pi$ is the localization of ${\mathbf{F}_q}[X]$ at the prime ideal $(\pi) \subseteq {\mathbf{F}_q}[X]$: $$\mathcal{O}_\pi=\{P/Q,\ P,Q\in{\mathbf{F}_q}[X],\ \pi\!\!\not|Q\}\supset \mathfrak{p}_\pi=\{P/Q,\ P,Q\in{\mathbf{F}_q}[X],\ \pi|P,\ \pi\!\!\not|Q\};$$ the valuation $v_\pi$ is the usual one: for any polynomial $P\in {\mathbf{F}_q}[X]$, $v_\pi(P)$ is the exponent of the highest power of $\pi$ dividing $P$, extended to $K$ by setting $v_\pi(P/Q)=v_\pi(P)-v_\pi(Q)$; and the degree of the corresponding closed point is $\deg \pi$. \item For $\infty$, $$\mathcal{O}_\infty=\{P/Q,\ P,Q\in{\mathbf{F}_q}[X],\ \deg P\leq\deg Q\}\supset \mathfrak{p}_\infty=\{P/Q,\ P,Q\in{\mathbf{F}_q}[X],\ \deg P<\deg Q\};$$ the valuation is minus the degree of the rational fraction, $v_\infty(P/Q)=\deg(Q)- \deg(P)$, and the degree of $\infty$ is $1$. \end{itemize}
\begin{remark}\label{closedpointsident} We denote by $\mathbf{P}^1({\mathbf{F}_q})$ the set of closed points of degree $1$ and set $\mathbf{A}^1({\mathbf{F}_q})=\mathbf{P}^1({\mathbf{F}_q})-\{\infty\}$. Note that $\mathbf{A}^1({\mathbf{F}_q})$ is identified with ${\mathbf{F}_q}$ by identifying $x\in{\mathbf{F}_q}$ with the degree $1$ (irreducible) polynomial $X-x$. Similarly, a non-empty open set $U\subset \mathbf{A}^1_{{\mathbf{F}_q}}$ is the open complement of the closed set $Z_Q\subset \mathbf{A}^1_{{\mathbf{F}_q}}$ of zeros of some non-zero polynomial $Q\in{\mathbf{F}_q}[X]$, i.e.~of the set defined by the equation $Q(x)=0$. The ``closed points of $U$'' are the closed points associated with the irreducible monic polynomials $\pi\in{\mathbf{F}_q}[X]$ coprime to $Q$, and the set of closed points of degree $1$ is identified with the complement of the set of roots of $Q$ contained in ${\mathbf{F}_q}$: $$U({\mathbf{F}_q})\simeq \{x\in{\mathbf{F}_q},\ Q(x)\not=0\}\subset{\mathbf{F}_q}.$$ \end{remark}
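To make these notions concrete, here is a small worked example (our own interpolation). Take $q=3$ and $\pi=X^2+1$, which is irreducible over $\mathbf{F}_3$ since $-1$ is not a square modulo $3$. The corresponding closed point $x_\pi$ of $\mathbf{A}^1_{\mathbf{F}_3}$ has residue field $$k_{x_\pi}=\mathbf{F}_3[X]/(X^2+1)\simeq \mathbf{F}_9,$$ so that $\deg x_\pi=2$ and $q_{x_\pi}=9$. For the polynomial $f=X^3+X=X(X^2+1)$ one has $$v_\pi(f)=1,\qquad v_\infty(f)=0-3=-3,$$ so $f$ lies in $\mathfrak{p}_\pi$ and has a pole of order $3$ at $\infty$.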
\subsubsection{Decomposition group, inertia and Frobenius} The valuation $v_x$ can be extended (in multiple ways) to a ($\mathbf{Q}$-valued) valuation on $K^{\mathrm{sep}}$, and the choice of one such extension (denoted $v_{\{x\}}$) determines a decomposition and an inertia subgroup in the arithmetic Galois group, $$I_{\{x\}}\subset D_{\{x\}}\subset G^{\mathrm{arith}},$$ fitting in the exact sequence \begin{equation}\label{eqlocalgal} 1\rightarrow I_{\{x\}}\rightarrow D_{\{x\}}\rightarrow \Gal(\ov{{\mathbf{F}_q}}/k_x)\rightarrow 1. \end{equation} Let us also recall that $\Gal(\ov{{\mathbf{F}_q}}/k_x)$ is topologically generated by the {\em arithmetic Frobenius} $$\Frob^{\mathrm{arith}}_{k_x}\colon\map{\ov{{\mathbf{F}_q}}}{\ov{{\mathbf{F}_q}}}{u}{u^{q_x}}.$$ In the sequel we will denote by $\Frob^{\mathrm{geom}}_{k_x}$ its inverse, also called the {\em geometric Frobenius}. The lifts of the (geometric) Frobenius therefore define a (left) $I_{\{x\}}$-class in the decomposition subgroup, which we denote by $$\Fr_{\{x\}}\subset D_{\{x\}}$$ and call the Frobenius class at $\{x\}$.
\begin{remark}\label{abuse} The choice of a different extension $v_{\{x\}'}$ of $v_x$ yields a priori other decomposition and inertia subgroups and another Frobenius class, $D_{\{x\}'}, I_{\{x\}'}, \Fr_{\{x\}'}$, but these are conjugate to $D_{\{x\}}, I_{\{x\}}, \Fr_{\{x\}}$, because $G^{\mathrm{arith}}$ acts transitively on the set of extensions. As we will see, the various quantities that we will discuss in relation to these sets are conjugacy-invariant and therefore depend only on $x$, not on the choice of $\{x\}$; we will accordingly use the index $x$ instead of $\{x\}$. Sometimes, to simplify notation, we will implicitly assume the choice of an extension $\{x\}$ without mentioning it and simply write $D_x, I_x, \Fr_x$. \end{remark}
We can now explain the term unramified. \begin{definition} Given a closed point $x$ of $\mathbf{P}^1_{{\mathbf{F}_q}}$, a $G^{\mathrm{arith}}$-module $V$ is unramified (or lisse) at $x$ if for one (or equivalently any) extension $\{x\}$, the corresponding inertia subgroup $I_{\{x\}}$ acts trivially on $V$. Otherwise $V$ is ramified at $x$. If $V$ is unramified at $x$, all the elements in the Frobenius class $\Fr_{\{x\}}$ act by the same automorphism of $V$, which we denote by $(\Fr_{\{x\}}|V)$. Moreover, if we change the extension $\{x\}$, we obtain an automorphism which is $G^{\mathrm{arith}}$-conjugate to $(\Fr_{\{x\}}|V)$. We denote by $(\Fr_x|V)$ this conjugacy class. \end{definition}
It follows from this discussion that for any sheaf $\mathcal{F}$ there is a non-empty open subset on which $\mathcal{F}$ is unramified and which is maximal for this property. We will denote this open set by $U_\mathcal{F}$.
\subsection{The trace function attached to a lisse sheaf} Let $\mathcal{F}$ be an $\ell$-adic sheaf lisse on $U\subset\mathbf{A}^1_{{\mathbf{F}_q}}$ and let $$\rho_\mathcal{F}\colon G^{\mathrm{arith}}\to \GL(V_\mathcal{F})$$ be the corresponding representation. For $x\in U({\mathbf{F}_q})$ a closed point of degree $1$ at which the representation $\rho_\mathcal{F}$ is unramified, we have associated in the previous section a Frobenius conjugacy class $(\Fr_{x}|V_\mathcal{F})$, namely the union of all the $(\Fr_{\{x\}}|V_\mathcal{F})$. By conjugacy, the trace of the automorphisms $(\Fr_{\{x\}}|V_\mathcal{F})$ is constant within that class: we denote this common value by $$\tr(\Fr_{x}|V_\mathcal{F})$$ and call it the Frobenius trace of $\mathcal{F}$ at $x$.
\begin{definition} Given an $\ell$-adic sheaf $\mathcal{F}$ lisse on $U\subset \mathbf{A}^1_{{\mathbf{F}_q}}$, the trace function $K_\mathcal{F}$ associated to this situation is the function on $U({\mathbf{F}_q})$ given by $$x\in U({\mathbf{F}_q})\mapsto K_\mathcal{F}(x)=\tr(\Frob_x|V_\mathcal{F}).$$ This is a priori a $\ov{\mathbf{Q}_{\ell}}$-valued function, but it can be considered complex-valued via the fixed embedding $\iota\colon\ov{\mathbf{Q}_{\ell}}\hookrightarrow \mathbf{C}$. \end{definition}
\begin{remark} As we have seen in Remark \ref{closedpointsident}, $U({\mathbf{F}_q})$ is identified with $$\{x\in{\mathbf{F}_q},\ Q(x)\not=0\}\subset {\mathbf{F}_q},$$ and therefore $K_\mathcal{F}$ can be considered as a function defined on a subset of ${\mathbf{F}_q}$. \end{remark}
\begin{remark}\label{remextension} There are several ways in which one could extend $K_\mathcal{F}$ to the whole of $\mathbf{A}^1({\mathbf{F}_q})$. The simplest is extension by zero outside $U({\mathbf{F}_q})$; another possible extension (called the {\em middle extension}) is to set, for any $x\in\mathbf{A}^1({\mathbf{F}_q})$, $$K_\mathcal{F}(x):=\tr(\Frob_{\{x\}}|V_\mathcal{F}^{I_{\{x\}}}),$$ where $V_\mathcal{F}^{I_{\{x\}}}\subset V_\mathcal{F}$ is the subspace of $I_{\{x\}}$-invariant vectors: the action of the Frobenius class $\Frob_{\{x\}}$ on $V_\mathcal{F}^{I_{\{x\}}}$ is well-defined and its trace does not depend on $\{x\}$. For our purposes, either of the two extensions would work (cf.~Remark \ref{remdeligneext}). \end{remark}
\subsection{Trace functions over $U({\mathbf{F}_{q^n}})$} In fact, an $\ell$-adic sheaf lisse on $U_{\mathbf{F}_q}$ gives rise to a whole family of trace functions. For any $n\geq 1$, consider the finite extension ${\mathbf{F}_{q^n}}$ and base change the whole situation to that field: this amounts to replacing $\mathbf{P}^1_{{\mathbf{F}_q}}$ by $\mathbf{P}^1_{{\mathbf{F}_{q^n}}}$, $K={\mathbf{F}_q}(X)$ by $K_n={\mathbf{F}_{q^n}}(X)$, and the arithmetic Galois group $G^{\mathrm{arith}}$ by $G_n^{\mathrm{arith}}=\Gal(K^{\mathrm{sep}}/K_n)$ (notice that the geometric Galois group does not change). The group $G_n^{\mathrm{arith}}$ is a normal subgroup of $G^{\mathrm{arith}}$ (with quotient $\Gal({\mathbf{F}_{q^n}}/{\mathbf{F}_q})$), so we may restrict our initial Galois representation to it: in that way we obtain another $\ell$-adic sheaf, denoted $\mathcal{F}_n$, $$\rho_{\mathcal{F}_n}\colon G_n^{\mathrm{arith}}\rightarrow \GL(V_\mathcal{F}),$$ and another trace function $$K_{\mathcal{F},n}\colon\map{U({\mathbf{F}_{q^n}})}{\mathbf{C}}{x}{\tr(\Fr_x|V_\mathcal{F})},$$ where $U({\mathbf{F}_{q^n}})$ now denotes the set of closed points of $\mathbf{P}^1_{\mathbf{F}_{q^n}}$ of degree $1$ which are contained in $U$: this set is identified with the set of irreducible monic polynomials of degree $1$ coprime to $Q$, and is therefore identified with $$\{x\in\mathbf{F}_{q^n},\ Q(x)\not=0\}.$$ As we will see below, the existence of this sequence of auxiliary functions is very important: for instance, by the Chebotarev density theorem, the full sequence $(K_{\mathcal{F},n})_{n\geq 1}$ characterizes the representation $\rho_\mathcal{F}$ up to semi-simplification.
\begin{remark}\label{remwarning} As we have already remarked, one has the identifications $$U({\mathbf{F}_q})\simeq \{x\in{\mathbf{F}_q},\ Q(x)\not=0\},\ U({\mathbf{F}_{q^n}})\simeq \{x\in{\mathbf{F}_{q^n}},\ Q(x)\not=0\}.$$ However, the inclusion $$\{x\in{\mathbf{F}_q},\ Q(x)\not=0\}\subset \{x\in{\mathbf{F}_{q^n}},\ Q(x)\not=0\}$$ does NOT imply that the function $K_\mathcal{F}$ is ``the restriction'' of $K_{\mathcal{F},n}$ to $U({\mathbf{F}_q})$.
More precisely, if we denote by $x$ the closed point in $U({\mathbf{F}_q})$ associated with the polynomial $X-x\in{\mathbf{F}_q}[X]$ and by $x_n$ the closed point in $U({\mathbf{F}_{q^n}})$ associated with the same polynomial $X-x\in{\mathbf{F}_{q^n}}[X]$, one has the formula $$K_{\mathcal{F},n}(x_n)=\tr(\Frob_{x_n}|V_\mathcal{F})=\tr(\Frob^n_{x}|V_\mathcal{F}).$$ More generally, for $d$ dividing $n$, let $\pi\in{\mathbf{F}_q}[X]$ be a monic irreducible polynomial of degree $d$ coprime to $Q$. Then $\pi$ defines a closed point $x_\pi$ of $U$ of degree $d$. Since $d|n$, the polynomial $\pi$ splits in ${\mathbf{F}_{q^n}}$, $$\pi(X)=\prod_{i=1}^d(X-x_i),$$ and any of its roots $x_i$ defines a closed point in $U({\mathbf{F}_{q^n}})$ (corresponding to the polynomial $X-x_i\in{\mathbf{F}_{q^n}}[X]$); we then have, for $i=1,\ldots,d$, \begin{equation}\label{eqwarning} K_{\mathcal{F},n}(x_i)=\tr(\Frob_{x_i}|V_\mathcal{F})=\tr(\Frob_{\pi}^{n/d}|V_\mathcal{F}). \end{equation} \end{remark}
\begin{remark} There is, a priori, no reason to limit ourselves to the affine line: if $\mathcal{C}_{{\mathbf{F}_q}}$ is any smooth geometrically connected curve over ${\mathbf{F}_q}$ with function field $K_\mathcal{C}$ (a finite extension of ${\mathbf{F}_q}(X)$), an $\ell$-adic sheaf $\mathcal{F}$ on $\mathcal{C}$, lisse on some non-empty dense open subset $U\subset\mathcal{C}$ defined over ${\mathbf{F}_q}$, is a continuous representation $$\rho_\mathcal{F}\colon\Gal(K^{\mathrm{sep}}_\mathcal{C}/K_\mathcal{C})\rightarrow \GL(V_\mathcal{F})$$ which is unramified at every closed point of $U$. \end{remark}
\subsection{The language of representations} The definition of sheaves and trace functions in terms of Galois representations enables us to use the vocabulary of representation theory consistently. For instance: \begin{itemize} \item An $\ell$-adic sheaf is {\em irreducible} (resp.~{\em isotypic}) if the representation $\rho_\mathcal{F}$ is. \item An $\ell$-adic sheaf is {\em geometrically irreducible} (resp.~{\em geometrically isotypic}) if the restriction of $\rho_\mathcal{F}$ to the geometric Galois group $G^{\mathrm{geom}}$ is. \item An $\ell$-adic sheaf is {\em trivial} if the representation $\rho_\mathcal{F}$ is; its trace function is then constant, equal to $1$. \item An $\ell$-adic sheaf is {\em geometrically trivial} if the restriction of $\rho_\mathcal{F}$ to the geometric Galois group $G^{\mathrm{geom}}$ is. In view of \eqref{exactgalois}, its trace function is then a constant, say $K_\mathcal{F}(x)=\alpha$, and for any $n\geq 1$, $$K_{\mathcal{F},n}(x)=\alpha^n.$$ \end{itemize} One can also create new sheaves and trace functions from old ones: \begin{itemize} \item The {\em dual sheaf} $D(\mathcal{F})$ is the contragredient representation $D(\rho_\mathcal{F})$ acting on the dual space $\Hom(V_\mathcal{F},\ov{\mathbf{Q}_{\ell}})$.
This sheaf is also lisse on $U$ and its trace function is given, for $x\in U({\mathbf{F}_q})$, by $$K_{D(\mathcal{F})}(x)=\tr(\Frob_x^{-1}|V_\mathcal{F}).$$ \item Given another sheaf $\mathcal{G}$ lisse on some $U'$, one can form the {\em direct sum sheaf} $\mathcal{F}\oplus\mathcal{G}$, whose representation is $\rho_{\mathcal{F}\oplus\mathcal{G}}=\rho_\mathcal{F}\oplus\rho_\mathcal{G}$; this sheaf is lisse (at least) on $U\cap U'$, of rank the sum of the ranks, and its trace function is given, for $x\in U({\mathbf{F}_q})\cap U'({\mathbf{F}_q})$, by the sum $$K_{\mathcal{F}\oplus\mathcal{G}}(x)=K_{\mathcal{F}}(x)+K_{\mathcal{G}}(x).$$ \item Given another sheaf $\mathcal{G}$ lisse on some $U'$, one can form the {\em tensor product sheaf} $\mathcal{F}\otimes\mathcal{G}$, whose representation is $\rho_{\mathcal{F}\otimes\mathcal{G}}=\rho_\mathcal{F}\otimes\rho_\mathcal{G}$; this sheaf is lisse (at least) on $U\cap U'$, of rank the product of the ranks, and its trace function is given, for $x\in U({\mathbf{F}_q})\cap U'({\mathbf{F}_q})$, by the product $$K_{\mathcal{F}\otimes\mathcal{G}}(x)=K_{\mathcal{F}}(x)K_{\mathcal{G}}(x).$$ \item As a special case, one constructs the {\em sheaf of homomorphisms} between $\mathcal{F}$ and $\mathcal{G}$ and the {\em sheaf of endomorphisms} of $\mathcal{F}$, $$\mathrm{Hom}(\mathcal{F},\mathcal{G})=D(\mathcal{F})\otimes\mathcal{G},\ \mathrm{End}(\mathcal{F})=D(\mathcal{F})\otimes \mathcal{F}.$$ \item Let $H\subset\GL(V_\mathcal{F})$ be an algebraic group containing $\rho_\mathcal{F}(G^{\mathrm{arith}})$ and let $r\colon H\to \GL(V')$ be a finite-dimensional continuous $\ell$-adic representation; the composite representation $r\circ \rho_\mathcal{F}$ defines an $\ell$-adic sheaf, denoted $r\circ\mathcal{F}$, which is lisse on $U$ and has rank $\dim V'$. Its trace function is given, for $x\in U({\mathbf{F}_q})$, by $$K_{r\circ \mathcal{F}}(x)=\tr(r(\Frob_x|V_\mathcal{F})|V').$$ \item Let $f\in{\mathbf{F}_q}(X)$ be non-constant; we can view $f$ as a non-constant morphism $\mathbf{P}^1_{{\mathbf{F}_q}}\to \mathbf{P}^1_{{\mathbf{F}_q}}$. The Galois subgroup corresponding to this covering, $$\Gal(K^{\mathrm{sep}}/{\mathbf{F}_q}(f(X)))\subset G^{\mathrm{arith}},$$ is isomorphic to $G^{\mathrm{arith}}$, and therefore the restriction of $\rho_{\mathcal{F}}$ to $\Gal(K^{\mathrm{sep}}/{\mathbf{F}_q}(f(X)))$ defines an $\ell$-adic sheaf on $\mathbf{P}^1_{{\mathbf{F}_q}}$, lisse on $f^{-1}(U)$, which is denoted $f^*\mathcal{F}$ and is called the {\em pull-back} of $\mathcal{F}$ by $f$. Its rank equals the rank of $\mathcal{F}$ and its trace function is given, for $x\in f^{-1}(U)({\mathbf{F}_q})-\{\infty\}$, by $$K_{f^*\mathcal{F}}(x)=K_\mathcal{F}(f(x)).$$ \item In the sequel, we will use this pull-back construction in particular for {\em fractional linear transformations}: given $\gamma=\begin{pmatrix}a&b\\c&d \end{pmatrix}\in\PGLd({\mathbf{F}_q})$ (the group of automorphisms of $\mathbf{P}^1_{{\mathbf{F}_q}}$), one defines the automorphism $$[\gamma]\colon x\to \frac{ax+b}{cx+d}.$$ We denote the pull-back sheaf by $\gamma^*\mathcal{F}$. In particular, for $\gamma=n(b)=\begin{pmatrix}1&b\\0&1 \end{pmatrix}$ we obtain the additive translation map $[+b]\colon x\to x+b$, and for $\gamma=t(a)=\begin{pmatrix}a&0\\0&1 \end{pmatrix}$, $a\not=0$, we obtain the multiplicative translation map $[\times a]\colon x\to ax$. \end{itemize}
\subsection{Purity} We will be interested in the size of trace functions; for this we need the notion of {\em purity}.
\begin{definition} Let $w\in\mathbf{Z}$. An $\ell$-adic sheaf $\mathcal{F}$, lisse on $U$, is punctually pure of weight $w$ if, for any closed point $x$ of $U$, the eigenvalues of $(\Frob_x|V_\mathcal{F})$ are complex numbers\footnote{Via the fixed embedding $\ov{\mathbf{Q}_{\ell}}\hookrightarrow\mathbf{C}$.} of modulus $q_x^{w/2}$. It is mixed of weights $\leq w$ if (as a representation) it is a successive extension of sheaves punctually pure of weights $\leq w$. In particular, if $\mathcal{F}$ is mixed of weights $\leq w$, one has, for any $x\in U({\mathbf{F}_q})$, \begin{equation}\label{infinitybound} |K_\mathcal{F}(x)|\leq \rk(\mathcal{F})q^{w/2}. \end{equation} \end{definition}
\begin{remark} It is always possible to reduce to the case of $\ell$-adic sheaves of weight $0$. For any $w\in\mathbf{Z}$ there exists an $\ell$-adic sheaf, denoted $\ov{\mathbf{Q}_{\ell}}({w/2})$, of rank $1$, lisse on $\mathbf{P}^1_{\mathbf{F}_q}$, whose restriction to $G^{\mathrm{geom}}$ is trivial and such that $\Fr_x$ acts by multiplication by $q_x^{-w/2}$ (in particular, $\ov{\mathbf{Q}_{\ell}}({w/2})$ is pure of weight $-w$). Given $\mathcal{F}$ of some weight $w'$, the tensor product $$\mathcal{F}(w/2):=\mathcal{F}\otimes \ov{\mathbf{Q}_{\ell}}({w/2})$$ has weight $w'-w$ and has trace function given by $$x\mapsto q^{-w/2}K_\mathcal{F}(x).$$ \end{remark}
In the sequel, unless stated otherwise, we will always assume that trace functions are associated with sheaves which are mixed of weights $\leq 0$.
\begin{remark}\label{remdeligneext} Deligne proved (\cite[Lemme (1.8.1)]{WeilII}) that for a sheaf punctually pure of weight $w$ and any closed point $x\in\mathbf{P}^1_{\mathbf{F}_q}$, the eigenvalues of $(\Frob_x|V_\mathcal{F}^{I_x})$ have modulus $\leq q_x^{w/2}$; in particular, $$|\tr(\Frob_x|V_\mathcal{F}^{I_x})|\leq \rk(\mathcal{F})q_x^{w/2}.$$ Consequently (assuming $w=0$), the $\ell^\infty$-norm of the difference between the extension by $0$ of $K_\mathcal{F}$ from $U({\mathbf{F}_q})$ to $\mathbf{A}^1({\mathbf{F}_q})$ and the middle extension (described in Remark \ref{remextension}) is bounded by $$\rk(\mathcal{F})\,|\mathbf{A}^1(\ov{{\mathbf{F}_q}})-U(\ov{{\mathbf{F}_q}})|.$$ As we will see, we will be interested in situations where this quantity is bounded by an absolute constant (independent of $q$), so that whichever of the two extensions we choose, it won't make much of a difference. \end{remark}
\subsection{Other functions} There are other functions on ${\mathbf{F}_q}$ of great interest which do not qualify as trace functions under our current definition, for instance the Dirac function at some point $a\in{\mathbf{F}_q}$, $$\delta_a(n)=\begin{cases}1&\hbox{ if }n\equiv a\mods q\\0&\hbox{ otherwise}, \end{cases}$$ which, extended to $\mathbf{Z}$, is the characteristic function of the arithmetic progression $a+q\mathbf{Z}$ (obviously of considerable interest for analytic number theory). It turns out that such functions can be related to trace functions in our sense by very natural transformations, and this will allow us to make some progress on problems from ``classical'' analytic number theory.
\begin{remark} In fact, this function could be interpreted as the trace function of a {\em skyscraper sheaf} supported at the closed point $a$, but we will not do this here.
\end{remark}
\subsection{Local monodromy representations} Given an $\ell$-adic sheaf $\mathcal{F}$, let $$D^{ram}_{\mathcal{F}}\subset \mathbf{P}^1(\ov{{\mathbf{F}_q}})-U(\ov{{\mathbf{F}_q}})$$ be the set of geometric points where the representation $\rho_\mathcal{F}$ is ramified, that is, where the inertia group $I_x$ acts non-trivially. The restricted representation $$\rho_{\mathcal{F}|I_x}=\rho_{\mathcal{F},x}$$ is called the local monodromy representation of $\mathcal{F}$ at $x$ (cf.~Remark \ref{abuse} for the abuse of notation). Although $D^{ram}_{\mathcal{F}}$ is disjoint from $U(\ov{{\mathbf{F}_q}})$, this finite set of representations is fundamental for the study of $\mathcal{F}$ and its trace function. Let us recall the exact sequence \cite[Chap. 1]{GKM} $$1\rightarrow P_x\rightarrow I_x\rightarrow I_x^{tame}\rightarrow 1,$$ where $I_x^{tame}$ is the {\em tame inertia quotient}, isomorphic to $\prod_{p\not= q}\mathbf{Z}_p$, while $P_x$ is the $q$-Sylow subgroup of $I_x$, called the wild inertia subgroup. \begin{definition} The sheaf is tamely ramified at $x$ if $P_x$ acts trivially on $V_\mathcal{F}$ (so that $\rho_{\mathcal{F},x}$ factors through $I_x^{tame}$), and is called wildly ramified otherwise. \end{definition}
\subsubsection{The Swan conductor} If the representation is wildly ramified, one can measure how deep the wild ramification is by means of a numerical invariant: the {\em Swan conductor}. The inertia group $I_x$ is equipped with the decreasing {\em upper numbering filtration} $(I_x^{(\lambda)})$, indexed by non-negative real numbers $\lambda\geq 0$, such that $$P_x=I_x^{(>0)}=\bigcup_{\lambda>0}I_x^{(\lambda)}.$$ Given $V=V_\mathcal{F}$ as above, there is a $P_x$-stable direct sum decomposition $$V=\bigoplus_{\lambda\in \mathrm{Break}(V)}V(\lambda)$$ indexed by a finite set of rational numbers $\mathrm{Break}(V)\subset \mathbf{Q}_{\geq 0}$ (the set of {\em breaks} of the $I_x$-module $V$) such that $$V(0)=V^{P_x},\ V(\lambda)^{I_x^{(\lambda)}}=0,\ V(\lambda)^{I_x^{(\lambda')}}=V(\lambda)\hbox{ for }\lambda'>\lambda$$ (see \cite[Chap. 1]{GKM}). The {\em Swan conductor} is defined as $$\swan_x(\mathcal{F})=\sum_{\lambda\in \mathrm{Break}(V)}\lambda\dim V(\lambda)$$ and turns out to be an integer \cite[Prop. 1.9]{GKM}. In the decomposition $$V=V(0)\oplus \bigoplus_\stacksum{\lambda\in \mathrm{Break}(V)}{\lambda>0}V(\lambda)=V(0)\oplus V(>0):=V^{tame}\oplus V^{wild},$$ the first summand is called the {\em tame part} and the second the {\em wild part}.
\section{Summing trace functions over ${\mathbf{F}_q}$}
Let $K_\mathcal{F}$ be the trace function associated to a sheaf $\mathcal{F}$ lisse on $U_{\mathbf{F}_q}$. It is a function on $U({\mathbf{F}_q})$, which we may extend by zero to $\mathbf{A}^1({\mathbf{F}_q})\simeq{\mathbf{F}_q}=\mathbf{Z}/q\mathbf{Z}$. The Grothendieck--Lefschetz trace formula provides an alternative expression for the sum of $K_\mathcal{F}$ over the whole of $U({\mathbf{F}_q})$.
\begin{theorem}[Grothendieck--Lefschetz trace formula] Let $\mathcal{F}$ be lisse on $U$; there exist three finite-dimensional $\ell$-adic representations of $\Gal(\ov{{\mathbf{F}_q}}/{\mathbf{F}_q})$, denoted $H^i_c(U_{\ov{{\mathbf{F}_q}}},\mathcal{F})$ for $i=0,1,2$, such that \begin{equation}\label{GLformula} \sum_{x\in U({\mathbf{F}_q})}K_\mathcal{F}(x)=\sum_{x\in U({\mathbf{F}_q})}\tr(\frob_x|\mathcal{F})=\sum_{i=0}^2(-1)^i\tr(\Frob_q|H^i_c(U_{\ov{{\mathbf{F}_q}}},\mathcal{F})). \end{equation} More generally, for any $n\geq 1$, $$\sum_{x\in U({\mathbf{F}_{q^n}})}K_{\mathcal{F},n}(x)=\sum_{x\in U({\mathbf{F}_{q^n}})}\tr(\frob_x|\mathcal{F})=\sum_{i=0}^2(-1)^i\tr(\Frob^n_q|H^i_c(U_{\ov{{\mathbf{F}_q}}},\mathcal{F})).$$ \end{theorem}
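Before analysing the right-hand side, it may help to see numerically the kind of cancellation that such complete sums exhibit. For Kloosterman sums, opening the definition gives $$\sum_{a\in{\mathbf{F}^\times_q}}\Kl_2(a;q)=\frac{1}{q^{1/2}}\Bigl(\sum_{x\in{\mathbf{F}^\times_q}}\mathrm{e}_q(x)\Bigr)^2=\frac{1}{q^{1/2}},$$ of size $q^{-1/2}$ instead of the trivial bound $2(q-1)$. The sketch below (our own illustration in Python, not part of the original text) confirms this:
\begin{verbatim}
# Complete sums of trace functions show massive cancellation: the sum
# over a in F_q^* of Kl_2(a; q) equals exactly q^{-1/2}, although each
# term can individually be as large as 2 in absolute value.
import cmath, math

def kl2(a: int, q: int) -> complex:
    e_q = lambda t: cmath.exp(2j * cmath.pi * (t % q) / q)
    return sum(e_q(x + a * pow(x, q - 2, q))
               for x in range(1, q)) / math.sqrt(q)

q = 101
total = sum(kl2(a, q) for a in range(1, q))
assert abs(total - 1 / math.sqrt(q)) < 1e-6
\end{verbatim}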
The $\ov{\mathbf{Q}_{\ell}}$-vector spaces $H^i_c(U_{\ov{{\mathbf{F}_q}}},\mathcal{F})$ are the so-called compactly supported \'etale cohomology groups of $\mathcal{F}$; they can also be considered as $\ell$-adic sheaves over the point $\Spec({\mathbf{F}_q})$. The above formula reduces the evaluation of averages of trace functions to that of the three summands $$\tr(\Frob_q|H^i_c(U_{\ov{{\mathbf{F}_q}}},\mathcal{F})),\ i=0,1,2;$$ we therefore need to control both the dimensions of these spaces and the sizes of the Frobenius eigenvalues. We start with the former.
\subsection{Bounding the dimension of the cohomology groups} The extremal cohomology groups have a simple interpretation. First, $$H^0_c(U_{\ov{{\mathbf{F}_q}}},\mathcal{F})=\begin{cases}0&\hbox{ if $U\not=\mathbf{P}^1_{{\mathbf{F}_q}}$}\\ V_\mathcal{F}^{G^{\mathrm{geom}}}&\hbox{ if $U=\mathbf{P}^1_{{\mathbf{F}_q}}$}. \end{cases}$$ As a $\Gal(\ov{{\mathbf{F}_q}}/{\mathbf{F}_q})$-representation, one has an isomorphism \begin{equation}\label{H2isom} H^2_c(U_{\ov{{\mathbf{F}_q}}},\mathcal{F})\simeq V_{\mathcal{F},G^{\mathrm{geom}}}(-1) \end{equation} (i.e.~$H^2_c(U_{\ov{{\mathbf{F}_q}}},\mathcal{F})$ is isomorphic to the quotient of $G^{\mathrm{geom}}$-coinvariants of $V_{\mathcal{F}}$, twisted by $\ov{\mathbf{Q}_{\ell}}(-1)$). In particular, if $\mathcal{F}$ is geometrically irreducible (and not geometrically trivial), or more generally geometrically isotypic (with non-trivial underlying geometric irreducible representation), one has $$H^2_c(U_{\ov{{\mathbf{F}_q}}},\mathcal{F})=0.$$ In any case, one has $$\dim H_c^0(U_{\ov{{\mathbf{F}_q}}},\mathcal{F}),\ \dim H_c^2(U_{\ov{{\mathbf{F}_q}}},\mathcal{F})\leq\rk(\mathcal{F}).$$ The dimension of the middle cohomology group is then determined by the following formula.
\begin{theorem}[The Grothendieck--Ogg--Shafarevich formula] One has $$\chi(U_{\ov{\mathbf{F}_q}},\mathcal{F})=\sum_{i=0}^2(-1)^i\dim H^i_c(U_{\ov{{\mathbf{F}_q}}},\mathcal{F})=\rk(\mathcal{F})(2-|\mathbf{P}^1(\ov{{\mathbf{F}_q}})-U(\ov{{\mathbf{F}_q}})|)-\sum_{x\in D^{ram}_{\mathcal{F}}(\ov{{\mathbf{F}_q}})}\swan_x(\mathcal{F}).$$ \end{theorem}
Observe that the quantities occurring here are local geometric data associated to the sheaf, yet this collection of local data provides global information. We now define the following ad hoc numerical invariant, which serves as a measure of the complexity of the sheaf $\mathcal{F}$. \begin{definition} The conductor of $\mathcal{F}$ is defined by the formula $$C(\mathcal{F})=\rk(\mathcal{F})+|\mathbf{P}^1(\ov{{\mathbf{F}_q}})-U(\ov{{\mathbf{F}_q}})|+\sum_{x\in D^{ram}_{\mathcal{F}}(\ov{{\mathbf{F}_q}})}\swan_x(\mathcal{F}).$$ \end{definition} In view of this definition, we have \begin{equation}\label{eqdimbound} \sum_{i=0}^2\dim H^i_c(U_{\ov{{\mathbf{F}_q}}},\mathcal{F})\ll C(\mathcal{F})^2. \end{equation}
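As a worked example (our own interpolation, using the properties of the Kloosterman sheaf $\mathcal{K}\ell_k$ recorded in the next subsection: it is geometrically irreducible and non-trivial, lisse on $\mathbf{G}_{m,{\mathbf{F}_q}}$, of rank $k$, with $\swan_0(\mathcal{K}\ell_k)=0$ and $\swan_\infty(\mathcal{K}\ell_k)=1$), take $U=\mathbf{G}_{m,{\mathbf{F}_q}}$, so that $|\mathbf{P}^1(\ov{{\mathbf{F}_q}})-U(\ov{{\mathbf{F}_q}})|=2$. The Grothendieck--Ogg--Shafarevich formula gives $$\chi(U_{\ov{{\mathbf{F}_q}}},\mathcal{K}\ell_k)=k(2-2)-(0+1)=-1,$$ and since $H^0_c$ vanishes (as $U\not=\mathbf{P}^1$) and $H^2_c$ vanishes (by geometric irreducibility and non-triviality), we conclude that $$\dim H^1_c(\mathbf{G}_{m,\ov{{\mathbf{F}_q}}},\mathcal{K}\ell_k)=1.$$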
\end{equation} \subsection{Examples} \subsubsection{The trivial sheaf} The trivial representation $\ov{\mathbf{Q}_{\ell}}$ is everywhere lisse, pure of weight $0$, of rank $1$ and conductor $1$ and $$K_{\ov{\mathbf{Q}_{\ell}}}(x)=1.$$ \subsubsection{Kummer sheaf \cite{sga4h}} For any non-trivial Dirichlet character $\chi\colon({\mathbf{F}^\times_q},\times)\to \mathbf{C}^\times$ there exists an $\ell$-adic sheaf (a Kummer sheaf) denoted $\mathcal{L}_\chi$ which is of rank $1$, pure of weight $0$, lisse on $\mathbf{G}_{m,{\mathbf{F}_q}}=\mathbf{P}^1_{\mathbf{F}_q}-\{0,\infty\}$ with trace function $$K_{\mathcal{L}_\chi}(x)=\chi(x),\ K_{\mathcal{L}_\chi,n}(x)=\chi(\nr_{{\mathbf{F}_{q^n}}/{\mathbf{F}_q}}(x))=:\chi_n(x)$$ and conductor $$C(\mathcal{L}_\chi)=3;$$ indeed $\swan_0(\mathcal{L}_\chi)=\swan_\infty(\mathcal{L}_\chi)=0$. \subsubsection{Artin-Schreier sheaf \cite{sga4h}} For any additive character $\psi\colon({\mathbf{F}_q},+)\to \mathbf{C}^\times$ there exists an $\ell$-adic sheaf (an Artin-Schreier sheaf) denoted $\mathcal{L}_\psi$ which is of rank $1$, pure of weight $0$, lisse on $\mathbf{A}^1_{{\mathbf{F}_q}}=\mathbf{P}^1_{\mathbf{F}_q}-\{\infty\}$ with trace function $$K_{\mathcal{L}_\psi}(x)=\psi(x),\ K_{\mathcal{L}_\psi,n}(x)=\psi(\tr_{{\mathbf{F}_{q^n}}/{\mathbf{F}_q}}(x))=:\psi_n(x)$$ and conductor (if $\psi$ is non-trivial) $$C(\mathcal{L}_\psi)=3.$$ (indeed $\swan_\infty(\mathcal{L}_\psi)=1$). If $f\in{\mathbf{F}_q}(X)-{\mathbf{F}_q}$, the pull-back sheaf $\mathcal{L}_{\psi(f)}$ is geometrically irreducible and has conductor $$1+\hbox{ number of poles of $f$}+\hbox{ sum of multiplicities of the poles of $f$}.$$ More generally any character $\psi$ of $({\mathbf{F}_{q^n}},+)$ is of the shape $$x\mapsto \psi_1(\tr_{{\mathbf{F}_{q^n}}/{\mathbf{F}_q}}(ax))$$ for $\psi_1$ a non-trivial character of $({\mathbf{F}_q},+)$ and $a\in{\mathbf{F}_{q^n}}$, and associated to each such character is an Artin-Schreier sheaf $\mathcal{L}_\psi$. \subsubsection{(hyper)-Kloosterman sheaves \cite{GKM}} Hyper-Kloosterman sums are formed by multiplicative convolution out of additive characters. 
Given $K_1,K_2\colon{\mathbf{F}^\times_q}\rightarrow\mathbf{C}$ one defines their (normalized) multiplicative convolution: $$K_1\star K_2\colon x\in{\mathbf{F}^\times_q}\to \frac{1}{q^{1/2}}\sum_\stacksum{x_1,x_2\in{\mathbf{F}^\times_q}}{x_1.x_2=x}K_1(x_1)K_2(x_2)=\frac{1}{q^{1/2}}\sum_{x_1\in{\mathbf{F}^\times_q}}K_1(x_1)K_2(x/x_1).$$ Similarly for any $n\geq 1$ one defines the multiplicative convolution of $K_{1,n},K_{2,n}\colon {\mathbf{F}^\times_{q^n}}\rightarrow\mathbf{C}$ as $$K_{1,n}\star K_{2,n}\colon x\in{\mathbf{F}^\times_{q^n}}\to \frac{1}{q^{n/2}}\sum_\stacksum{x_1,x_2\in{\mathbf{F}^\times_{q^n}}}{x_1.x_2=x}K_{1,n}(x_1)K_{2,n}(x_2).$$ Now, given a non-trivial additive character $\psi$ of ${\mathbf{F}_q}$ and $k\geq 2$, the hyper-Kloosterman sums can be expressed as the $k$-fold multiplicative convolutions of $\psi$: $$\Kl_{k,\psi}(x;q)=\star_{k\hbox{ times}}\psi(x)=\frac{1}{q^{({k-1})2}}\sum_\stacksum{x_1,\ldots,x_k\in{\mathbf{F}^\times_q}}{x_1.\ldots.x_k=x}\psi(x_1+\ldots+x_k)$$ and more generally, one defines hyper-Kloosterman sums over ${\mathbf{F}^\times_{q^n}}$ $$\Kl_{k,\psi}(x;q^n)=\star_{k\hbox{ times}}\psi_n(x)=\frac{1}{q^{n({k-1})2}}\sum_\stacksum{x_1,\ldots,x_k\in{\mathbf{F}^\times_{q^n}}}{x_1.\ldots.x_k=x}\psi_n(x_1+\ldots+x_k).$$ These are in fact trace functions: their underlying sheaves were constructed by Deligne and were subsequently studies in depth by Katz \cite{GKM}: \begin{theorem} For any $k\geq 2$, there exists an $\ell$-adic sheaf (the Kloosterman sheaf) denoted $\mathcal{K}\ell_{k,\psi}$, of rank $k$, pure of weight $0$, geometrically irreducible, lisse on $\mathbf{G}_{m,{\mathbf{F}_q}}$ with trace function $$K_{\mathcal{K}\ell_{k,\psi}}(x)=\Kl_{k,\psi}(x;q)$$ and more generally, for any $n\geq 1$ $$K_{\mathcal{K}\ell_{k,\psi},n}(x)=\Kl_{k,\psi}(x;q^n).$$ One has $\swan_0(\mathcal{K}\ell_{k,\psi})=0$ and $\swan_\infty(\mathcal{K}\ell_{k,\psi})=1$ so that the conductor of that sheaf equals $$C(\mathcal{K}\ell_{k,\psi})=k+2+1.$$ The Kloosterman sheaves have trivial determinant $$\det\mathcal{K}\ell_k=\ov{\mathbf{Q}_{\ell}}$$ and if (and only if) $k$ is even, the Kloosterman sheaf $\mathcal{K}\ell_k$ is self-dual: $$D(\mathcal{K}\ell_k)\simeq \mathcal{K}\ell_k.$$ \end{theorem} \begin{remark} When $\psi(\cdot)=\mathrm{e}_q(\cdot)$ we will not mention the additive character $\mathrm{e}_q$ in the notation. \end{remark} \subsection{Deligne's Theorem on the weight} Now that we control the dimension of the cohomology groups occurring in the Grothendieck-Lefschetz trace formula, it remains to control the size of their Frobenius eigenvalues. Suppose that $\mathcal{F}$ is pure of weight $0$ so that $$|K_\mathcal{F}(x)|\leq \rk(\mathcal{F}).$$ As we have seen, as long as $U\not=\mathbf{P}^1$, $H^0_c(U_{\ov{{\mathbf{F}_q}}},\mathcal{F})=0$. By \eqref{H2isom}, the eigenvalues of $\Frob_q$ acting on $H^2_c(U_{\ov{{\mathbf{F}_q}}},\mathcal{F})$ are of the form $$q\alpha_i,\ i=1,\ldots,\dim(V_{\mathcal{F},G^{\mathrm{geom}}})\hbox{ with }|\alpha_i|=1.$$ The trace of the Frobenius on the middle cohomology group $\tr(\Frob_q|H^1_c(U_{\ov{{\mathbf{F}_q}}},\mathcal{F}))$ is much more mysterious but fortunately we have the following theorem of Deligne \cite{WeilII}. \begin{theorem}[The Generalized Riemann Hypothesis for finite fields]\label{delignethm} The eigenvalues of $\Frob_q$ acting on $H^1_c(U_{\ov{{\mathbf{F}_q}}},\mathcal{F})$ are complex numbers of modulus $\leq q^{1/2}$. 
\end{theorem} We deduce from this \begin{corollary}\label{delignecor} Let $\mathcal{F}$ be an $\ell$-adic sheaf lisse on some $U$ pure of weight $0$; one has $$\sum_{x\in{\mathbf{F}_q}}K_\mathcal{F}(x)-{\tr(\Frob_q|H^2_c(U_{\ov{{\mathbf{F}_q}}},\mathcal{F}))}\ll C(\mathcal{F})^2q^{1/2}.$$ More generally for any $n\geq 1$ $$\sum_{x\in{\mathbf{F}_{q^n}}}K_{\mathcal{F},n}(x)-{\tr(\Frob^n_q|H^2_c(U_{\ov{{\mathbf{F}_q}}},\mathcal{F}))}\ll C(\mathcal{F})^2q^{n/2}.$$ In particular if $\mathcal{F}$ is geometrically irreducible or isotypic with no trivial component, one has $$\sum_{x\in{\mathbf{F}_q}}K_\mathcal{F}(x)\ll C(\mathcal{F})^2q^{1/2}.$$ Here, the implied constants are absolute. \end{corollary} In practical applications we will be faced with situations where we have a sequence of sheaves $(\mathcal{F}_q)_q$ indexed by an infinite set of primes (with $\mathcal{F}_q$ a sheaf over the field ${\mathbf{F}_q}$) such that the sequence of conductors $(C(\mathcal{F}_q))_q$ remains uniformly bounded (by $C$ say). In such situation, the above formula represents an asymptotic formula as $q\rightarrow\infty$ for the sum of $q-O(1)$ terms $$\sum_{x\in U({\mathbf{F}_q})}K_\mathcal{F}(x)$$ with main term ${\tr(\Frob_q|H^2_c(U_{\ov{{\mathbf{F}_q}}},\mathcal{F}))}$ (possibly $0$) and an error term of size $\ll C^2q^{1/2}$. \section{Quasi-orthogonality relations} We will often apply the trace formula and Deligne's theorem to the following sheaf: given $\mathcal{F}$ and $\mathcal{G}$ two $\ell$-adic sheaves both lisse on some non-empty open set $U\subset\mathbf{A}^1_{{\mathbf{F}_q}}$ and both pure of weight $0$; consider the tensor product $\mathcal{F}\otimes D(\mathcal{G})$. This sheave is also lisse on $U$ and pure of weight $0$, moreover from the definition of the conductor (see \cite[Chap. 1]{GKM}) one sees that \begin{equation}\label{conductortensor} C(\mathcal{F}\otimes D(\mathcal{G}))\leq C(\mathcal{F})C(\mathcal{G}). \end{equation} The trace functions of $\mathcal{F}\otimes D(\mathcal{G})$ are given for $x\in U({\mathbf{F}_{q^n}})$ by $$x\mapsto K_{\mathcal{F}\otimes D(\mathcal{G}),n}(x)=K_{\mathcal{F},n}(x)\ov{K_{\mathcal{G},n}(x)}.$$ Therefore the trace formula can be used to evaluate the correlation sums between the trace function of $\mathcal{F}$ and $\mathcal{G}$, $$\mathcal{C}(\mathcal{F},\mathcal{G}):=\frac{1}{q}\sum_{x\in {\mathbf{F}_q}}K_{\mathcal{F}}(x)\ov{K_{\mathcal{G}}(x)};$$ more generally for any $n\geq 1$ we set $$\mathcal{C}_n(\mathcal{F},\mathcal{G}):=\frac{1}{q^n}\sum_{x\in {\mathbf{F}_{q^n}}}K_{\mathcal{F},n}(x)\ov{K_{\mathcal{G},n}(x)}.$$ Indeed, by Corollary \ref{delignecor}, one has \begin{equation}\label{corrn} \mathcal{C}_n(\mathcal{F},\mathcal{G})=\tr(\Frob^n_q|V_{\mathcal{F}\otimes D(\mathcal{G}),G^{\mathrm{geom}}})+O(\frac{C(\mathcal{F})C(\mathcal{G})}{q^{n/2}}). \end{equation} In particular if $C(\mathcal{F})C(\mathcal{G})$ are bounded while $q^n\rightarrow\infty$, one obtains an asymptotic formula whose main term is given by the trace of the powers of Frobenius acting on the coinvariants of $\mathcal{F}\otimes D(\mathcal{G})\simeq \Hom(\mathcal{G},\mathcal{F})$. 
\subsection{Decomposition of sheaves and trace functions} Using first a weaker version of the formula (with an error term converging to $0$ as $n\rightarrow\infty$), Deligne, on his way to the proof of Theorem \ref{delignethm}, established that any $\ell$-adic sheaf pure of weight $0$ is geometrically semi-simple (the representation $\rho_{\mathcal{F}|G^{\mathrm{geom}}}$ decomposes into a direct sum of irreducible representations (of $G^{\mathrm{geom}}$)) \cite[Th\'eor\`eme (3.4.1)]{WeilII}; the irreducible components occurring in the decomposition of $\rho_{\mathcal{F}|G^{\mathrm{geom}}}$ are called the {\it geometric irreducible components of $\mathcal{F}$.} This is not exactly valid for the arithmetic representation, but considering its semi-simplification, one obtains a decomposition $$\rho^{ss}_{\mathcal{F}}=\bigoplus_{i\in I}\rho_{\mathcal{F}_i}$$ where the $\rho_{\mathcal{F}_i}$ are arithmetically irreducible (and pure) and lisse on $U$. Regarding geometric reducibility, each $\rho_{\mathcal{F}_i}$ is either geometrically isotypic or is induced from a representation of $\Gal(K^{\mathrm{sep}}/k.K)$ for $k$ some finite extension of ${\mathbf{F}_q}$. Since semi-simplification does not change the trace function, we obtain a decomposition of the trace function $$K_\mathcal{F}=\sum_i K_{\mathcal{F}_i}.$$ Moreover a computation shows that whenever $\mathcal{F}_i$ is induced one has $K_{\mathcal{F}_i}\equiv 0$ on $U({\mathbf{F}_q})$. Therefore we obtain \begin{proposition}\label{propdecomp} The trace function associated to some punctually pure sheaf $\mathcal{F}$ lisse on $U$ can be decomposed into the sum of $\leq C(\mathcal{F})$ trace functions associated to sheaves $\mathcal{F}_i$, that are lisse on $U$, punctually pure of weight $0$, geometrically isotypic with conductors $C(\mathcal{F}_i)\leq C(\mathcal{F})$. \end{proposition} This proposition reduces the study of trace functions to trace functions associated to geometrically isotypic or (most of the time) geometrically irreducible sheaves. From now on (unless stated otherwise) we will assume that the trace functions are associated to sheaves that are punctually pure of weight $0$ and geometrically isotypic. To ease notations, we say that such sheaves are "isotypic" or "irreducible" omitting the mention "geometrically" and likewise will speak of isotypic or irreducible trace functions. In such situation, using Schur lemma, the formula for \eqref{corrn} specializes to the \begin{theorem}[Quasi-orthogonality relations]\label{thmcorrelation} Supppose that $\mathcal{F}$ and $\mathcal{G}$ are both geometrically isotypic with $n_\mathcal{F}$ copies of the irreducible component $\ov\mathcal{F}_{irr}$ for $\mathcal{F}$ and $n_\mathcal{G}$ copies of the irreducible component $\ov\mathcal{G}_{irr}$ for $\mathcal{G}$. There exists $n_\mathcal{F}.n_\mathcal{G}$ complex numbers $\alpha_{i,\mathcal{F},\mathcal{G}}$ of modulus $1$ such that \begin{equation}\label{quasiorth1} \mathcal{C}_n(\mathcal{F},\mathcal{G})=(\sum_{i=1}^{n_\mathcal{F} n_\mathcal{G}}\alpha^n_{i,\mathcal{F},\mathcal{G}})\delta_{\ov\mathcal{F}\sim_{geom}\mathcal{G}}+O(C(\mathcal{F})^2C(\mathcal{G})^2q^{-n/2}). \end{equation} In particular if $\mathcal{F}$ and $\mathcal{G}$ are both geometrically irreducible there exist $\alpha_{\mathcal{F},\mathcal{G}}\in\mathbf{S}^1$ such that \begin{equation}\label{quasiorth2} \mathcal{C}_n(\mathcal{F},\mathcal{G})=\alpha^n_{\mathcal{F},\mathcal{G}}\delta_{\ov\mathcal{F}\sim_{geom}\mathcal{G}}+O(C(\mathcal{F})^2C(\mathcal{G})^2q^{-n/2}). 
\end{equation} In both \eqref{quasiorth1} and \eqref{quasiorth2} the implicit constants are independent of $n$. \end{theorem} \begin{remark} Observe that for $\mathcal{F}$ and $\mathcal{G}$ either the Kummer or Artin-Schreier sheaves these equalities correspond to the orthogonality relations of characters. \end{remark} \begin{remark} If two geometrically irreducible sheaves $\mathcal{F},\mathcal{G}$ are geometrically isomorphic, then their trace functions are proportional: more precisely one has for any $n$ $$K_{\mathcal{F},n}=\alpha^n_{\mathcal{F},\mathcal{G}}K_{\mathcal{G},n}$$ where $\alpha_{\mathcal{F},\mathcal{G}}$ is the complex number of modulus $1$ introduced in the previous statement. \end{remark} When $q^n$ is large compared to $C(\mathcal{F})^2C(\mathcal{G})^2$, the above formula gives a useful criterion to detect whether $\mathcal{F}$ and $\mathcal{G}$ have geometric irreducible components in common. While our focus is on the case $n=1$ and $q\rightarrow\infty$ (while $C(\mathcal{F})^2C(\mathcal{G})^2$ remains bounded), the case $n\rightarrow\infty$ will also prove useful. We start with the following easy lemma \begin{lemma} Given $\alpha_1,\ldots,\alpha_d\in\mathbf{S}^1$, arbitrary complex numbers of modulus $1$, one has $$\limsup_{n\rightarrow\infty}(\alpha_1^n+\ldots+\alpha_d^n)=d.$$ \end{lemma} Using this lemma together with the decomposition into irreducible representations, one obtains the following \begin{corollary}[Katz's Diophantine criterion for irreducibility] Let $\mathcal{F}$ be an $\ell$-adic sheaf lisse on $U$ pure of weight $0$ with decomposition into geometrically irreducible subsheaves denoted $$\mathcal{F}^{geom}=\bigoplus_i\ov{\mathcal{F}}_{i}^{\oplus n_{i}}.$$ Then $$\limsup_{n\rightarrow\infty}\mathcal{C}_n(\mathcal{F},\mathcal{F})=\sum_{\ov\mathcal{F}_{i}}n_{i}^2.$$ In particular, $\mathcal{F}$ is geometrically irreducible if and only if $$\limsup_{n\rightarrow\infty}\mathcal{C}_n(\mathcal{F},\mathcal{F})=1.$$ \end{corollary} \subsection{Counting trace functions} The above orthogonality relations lead to upper bounds for the number of geometric isomorphism classes of $\ell$-adic sheaves of bounded conductor (see \cite{MRL} for the proof): \begin{theorem} Let $C\geq 1$, the number of geometric isomorphism classes of geometrically irreducible $\ell$-adic sheaves of conductor $\leq C$ is finite and bounded by $$q^{O(C^6)}$$ where the implied constant is absolute. \end{theorem} \proof The principle of the proof is as follows: the sheaf-to-trace-function map $\mathcal{F}\to t_\mathcal{F}$ associates to the geometric isomorphism class of some sheaf a line in the $q$-dimensional Hermitian space $\mathbf{C}^{{\mathbf{F}_q}}$ of complex-valued functions on ${\mathbf{F}_q}$ with inner product $$\peter{K,K'}=\frac{1}{q}\sum_{x\in{\mathbf{F}_q}}K(x)\ov{K'}(x).$$ The quasi-orthogonality relations show that these different lines are almost orthogonal to one another and so one obtains a number of almost orthogonal (circles of) unit vectors in the corresponding unit sphere. A sphere-packing argument for high-dimensional hermitian spaces (see \cite{kale}) implies that the number of such vectors cannot be too large.\qed \section{Trace functions over short intervals}\label{Secshort} In the next few sections, we discuss the correlations between trace functions and other classical arithmetic functions. 
Indeed given a trace function $$K_\mathcal{F}\colon \mathbf{A}^1({\mathbf{F}_q})={\mathbf{F}_q}\rightarrow \mathbf{C}$$ (extended from $U({\mathbf{F}_q})$ to $\mathbf{A}^1({\mathbf{F}_q})$ either by zero or by the middle-extension) we obtain a $q$-periodic function on $\mathbf{Z}$ (which we also denote by $K_\mathcal{F}$) via the $\mods q $-map $$K=K_\mathcal{F}\colon \mathbf{Z}\to \mathbf{Z}/q\mathbf{Z}=\mathbf{A}^1({\mathbf{F}_q})\rightarrow \mathbf{C}.$$ Given some other arithmetic function $\lambda\colon \mathbf{N}\rightarrow \mathbf{C}$ it is natural to compare them by evaluating their correlation sums $$\sum_{n\leq N} K(n)\ov{\lambda(n)}$$ as $N\rightarrow\infty$ (in suitable ranges of interest depending on $C(\mathcal{F})$ and $\lambda$). \subsection{The P\'olya-Vinogradov method} We start with the basic case where $\lambda=1_I$ is the characteristic function of an interval $I$ of $\mathbf{Z}$ (which we may assume is contained in $[0,q-1]$). We want to evaluate non-trivially the sum $$S(K;I):=\sum_{n\in I}K(n).$$ Remember that we mayand do assume that $\mathcal{F}$ is geometrically isotypic and that if $I= [0,q-1]$ such sum can be dealt with by Deligne's theorem. By Parseval, one has $$S(K;I)=\sum_{y\in {\mathbf{F}_q}}\widehat K(y)\ov{\widehat{1_I}}(y)$$ where \begin{equation}\label{eqfourier} \widehat K(y)=\frac{1}{q^{1/2}}\sum_{x\in{\mathbf{F}_q}}K(x)\mathrm{e}_q(xy) \end{equation} and $$\widehat{1_I}(y)=\frac{1}{q^{1/2}}\sum_{x\in I}\mathrm{e}_q(xy)$$ are the (normalized) Fourier transforms of $K$ and $1_I$ (for the abelian group $({\mathbf{F}_q},+)$). One has $$|\widehat{1_I}(y)|\ll \frac{1}{q^{1/2}}\min(|I|,\|\frac yq\|^{-1}) \ll \frac{1}{q^{1/2}}\min(|I|,\frac q{|y|})$$ (here $\|y/q\|$ denote the distance to the nearest integer) which implies that $$\|\widehat{1_I}\|_1\ll \frac{|I|}{q^{1/2}}+q^{1/2}\log q.$$ Therefore one has $$\sum_{n\in I}K(n)\ll \|\widehat K\|_\infty q^{1/2}\log q.$$ This leads us to look at the size of the Fourier transform $y\mapsto \widehat K(y)$. If $K$ is of the shape $\mathrm{e}_q(ax)$ for some $a\in{\mathbf{F}_q}$, its Fourier transform is a Dirac function $$\widehat K(y)=q^{1/2}\delta_{y=a\mods q}$$ and is therefore highly concentrated. To avoid this we make the following \begin{definition} An isotypic sheaf $\mathcal{F}$ is Fourier if its geometric irreducible component is not (geometrically) isomorphic to any Artin-Schreier sheaf $\mathcal{L}_\psi$. \end{definition} In particular, if $K$ is Fourier of conductor $C(\mathcal{F})$, it follows from Theorem \ref{thmcorrelation} that for any $y\in{\mathbf{F}_q}$ $$\widehat K(y)\ll C(\mathcal{F})^2.$$ In that way we obtain the \begin{theorem}[P\'olya-Vinogradov bound] Let $\mathcal{F}$ be a Fourier sheaf of conductor $C(\mathcal{F})$ and $K$ its associated trace function. For any interval $I$ of length $\leq q$, one has $$\sum_{x\in I}K(x)\ll C(\mathcal{F})^2q^{1/2}\log q;$$ here the implicit constant is absolute. \end{theorem} \begin{remark} This statement was obtained for the first time by P\'olya and Vinogradov, independently, in the case of Dirichlet characters $\chi$. In that case the Fourier transform is the normalized Gauss sum $$\widehat\chi(y)=\varepsilon_\chi(y)=\frac{1}{q^{1/2}}\sum_{x\in{\mathbf{F}_q}}\chi(x)\mathrm{e}_q(xy)$$ which is bounded in absolute value by $1$. 
\end{remark} Observe that this bound is better than the trivial bound $$|\sum_{x\in I}K(x)|\leq C(\mathcal{F})|I|$$ as long as $$|I|\gg_{C(\mathcal{F})}q^{1/2}\log q.$$ This range is called the {\em P\'olya-Vinogradov range} and the question of bounding non-trivially for as many trace functions as possible over shorter intervals is a fundamental problem in analytic number theory with many striking applications. At this moment, the problem is solved only in a very limited number of cases. One important example is the celebrated work of Burgess on Dirichlet characters \cite{Bur} which we discuss in \S \ref{Bursec}. A lot of the forthcoming lectures will indeed be concerned with breaking this barrier in specific cases or in different contexts, and to give some applications. \subsubsection{Bridging the P\'olya-Vinogradov range} The following argument of Fouvry, Kowalski, Michel, Rivat, Soundararajan and Raju improves slightly the P\'olya-Vinogradov range: \begin{theorem}\cite{FKMRRS} Let $\mathcal{F}$ be a Fourier sheaf of conductor $C(\mathcal{F})$ and $K$ its associated trace function. For any interval $I$ of length $\sqrt q<|I|\leq q$, we have $$\sum_{x\in I}K(x)\ll C(\mathcal{F})^2q^{1/2}(1+\log(|I|/q^{1/2})).$$ \end{theorem} \proof Given $r\in\mathbf{Z}$, let $I_r=r+I$; this is again an interval and $S(K;I)$ and $S(K;I_r)$ differ only by $O(\|K\|_\infty r)$, which is a useful bound when $r$ is not too large. Moreover $$\widehat{1_{I_r}}(y)=\mathrm{e}_q(ry)\widehat{1_I}(y).$$ We have therefore $$S(K;I)=\sum_{|y|\leq q/2}\widehat K(y)\ov{\widehat{1_I}(y)}\frac{1}{R}\sum_{0\leq r\leq R-1}\mathrm{e}_q(-ry).$$ We choose $R=[q^{1/2}]+1$; using the bounds $$|\widehat{1_I}(y)|\ll q^{-1/2}\min(|I|,q/|y|),\ \sum_{0\leq r\leq R-1}\mathrm{e}_q(-ry)\ll \min(R,q/|r|)$$ and $$\|K\|_\infty+\|\widehat K\|_\infty\ll C(\mathcal{F})^2$$ we obtain the result. \qed \subsection{A smoothed version of the P\'olya-Vinogradov method} Often in analytic number theory one is not faced with summing a trace function over an interval but instead against some smooth compactly supported function, for instance one has to evaluate sums of the shape $$\sum_{n\in\mathbf{Z}}K(n)V(\frac{n}N),\ V\in C_c^\infty(\mathbf{R})\hbox{ fixed}.$$ By the Poisson summation formula one has the identity \begin{equation}\label{eqpoisson} \sum_{n\in\mathbf{Z}}K(n)V(\frac{n}N)=\frac{N}{q^{1/2}}\sum_{n\in\mathbf{Z}}\widehat K(n)\widehat V(\frac{n N}q) \end{equation} where $$\widehat V(y)=\int_\mathbf{R} V(x)e(xy)dx$$ is the Fourier transform of $V(x)$ (over $\mathbf{R}$). Observe that $\widehat V(y)$ is not compactly supported but at least is of rapid decay: $$\forall A\geq 0,\ \widehat V(y)\ll_{V,A}(1+|y|)^{-A}.$$ Therefore the dual sum in \eqref{eqpoisson} decays rapidly for $n\gg q/N$ and we obtain \begin{proposition} We have \begin{equation}\label{eqsmoothPV} \sum_{n\in\mathbf{Z}}K(n)V(\frac{n}N)\ll_{V} q^{1/2}\|\widehat K\|_\infty\ll_{V,C(\mathcal{F})} q^{1/2}. \end{equation} \end{proposition} \subsection{The Deligne-Laumon Fourier transform} The Fourier transform $$K\mapsto \widehat K\colon y\to \frac{1}{q^{1/2}}\sum_{x\in{\mathbf{F}_q}}K(x)\mathrm{e}_q(-xy)$$ is a well-known and very useful operation on the space of function on $(\mathbf{Z}/q\mathbf{Z},+)$. It serves to realize the spectral decomposition of the functions on $\mathbf{Z}/q\mathbf{Z}$ in terms of eigenvectors of the irreducible representations (characters) of $\mathbf{Z}/q\mathbf{Z}$. 
Let us recall that \begin{itemize} \item The Fourier transform is essentially involutive: $$\widehat{\widehat K}(x)=K(-x);$$ stated otherwise, one has the Fourier inversion formula: $$K(x)=\sum_{y\in{\mathbf{F}_q}}\widehat K(y)\mathrm{e}_q(yx).$$ \item The Fourier transform is an isometry on $L^2(\mathbf{Z}/q\mathbf{Z})$; stated otherwise, one has the Plancherel formula $$\sum_{x\in{\mathbf{F}_q}}K(x)\ov{K'(x)}=\sum_{y\in{\mathbf{F}_q}}\widehat K(y)\ov{\widehat{K'}(y)}.$$ \item The Fourier transform behaves well with respect to to additive and multiplicative shifts: for $a\in{\mathbf{F}_q},\ z\in{\mathbf{F}^\times_q}$, $$\widehat{[+a]K}(y)=\mathrm{e}_q(ay)\widehat K(y),\ \widehat{[\times z]K}(y)=[\times z^{-1}]\widehat K(y)=\widehat K(y/z).$$ \end{itemize} A remarkable fact, due to Deligne is that, to the Fourier transform for trace functions corresponds a "geometric Fourier transform" for sheaves. The following theorem is due to G. Laumon \cite{laumon87}: \begin{theorem} Let $\mathcal{F}$ be a Fourier sheaf, lisse on $U$ and pure of weight $0$. There exists a Fourier sheaf $\widehat \mathcal{F}$, lisse on some open set $\widehat U$, pure of weight $0$, such that if $K_{\mathcal{F},n}$ denotes the (middle-extension of the) trace function of $\mathcal{F}$, the (middle extension of the) trace function of $\widehat\mathcal{F}$ is given by the Fourier transform $\widehat{K_{\mathcal{F},n}}$ where $$\widehat {K_{\mathcal{F},n}}(x)=\frac{1}{q^{n/2}}\sum_{y}K_{\mathcal{F},n}(y)\mathrm{e}_q(\tr_{\mathbf{F}_{q^n}/{\mathbf{F}_q}}(xy)).$$ The map\footnote{This is in fact a functor in the derived category of constructible $\ell$-adic sheaves.} $\mathcal{F}\mapsto \widehat\mathcal{F}$ is called the geometric Fourier transform. The geometric Fourier transform satisfies (for $a\in{\mathbf{F}_q},\ z\in{\mathbf{F}^\times_q}$) $$\widehat{\widehat\mathcal{F}}=[\times -1]^*\mathcal{F},\ \widehat{[+a]^*\mathcal{F}}=\mathcal{L}_{\mathrm{e}_q(a).}\otimes \widehat\mathcal{F},\ \widehat{[\times z]^*\mathcal{F}}=[\times z^{-1}]^* \widehat\mathcal{F}.$$ \end{theorem} In addition, Laumon also defined local versions of the geometric Fourier transform making possible the computation of the local monodromy representations of $\widehat\mathcal{F}$ in terms of those of $\mathcal{F}$; using these results one deduces \begin{proposition} Given $\mathcal{F}$ as above, one has $$C(\widehat \mathcal{F})\leq 10 C(\mathcal{F})^2.$$ \end{proposition} Also the Fourier transform preserves irreducibility: \begin{proposition} The Fourier transform maps irreducible (resp.~isotypic) sheaves to irreducible (resp.~isotypic) sheaves. \end{proposition} \proof Given $\mathcal{F}$ a geometrically irreducible sheaf pure of weight $0$, to prove that $\widehat\mathcal{F}$ is irreducible, it is enough to show (by Katz's irreducibility criterion) that $$\limsup_n \mathcal{C}_n(\widehat\mathcal{F},\widehat\mathcal{F})=\limsup_n \frac{1}{q^n}\sum_{x\in{\mathbf{F}_{q^n}}}|\widehat {K_{\mathcal{F},n}}(x)|^2=1$$ but by the Plancherel formula $$\frac{1}{q^n}\sum_{x\in{\mathbf{F}_{q^n}}}|\widehat {K_{\mathcal{F},n}}(x)|^2= \frac{1}{q^n}\sum_{y\in{\mathbf{F}_{q^n}}}| {K_{\mathcal{F},n}}(y)|^2$$ and $$\limsup_n \frac{1}{q^n}\sum_{y\in{\mathbf{F}_{q^n}}}| {K_{\mathcal{F},n}}(y)|^2=1$$ by Katz's irreducibility criterion applied in the reverse direction. 
\qed \begin{xca} Prove that the hyper-Kloosterman sheaves are geometrically irreducible ( hint: observe that the hyper-Kloosterman sums $\Kl_{k+1}$ can be expressed in terms of the Fourier transform of $\Kl_{k}$). \end{xca} \section{Autocorrelation of trace functions; the automorphism group of a sheaf} The next couple of appplications we are going to discuss involve a special type of correlation sums between a trace function and its transform by an automorphism of the projective line. Let $\mathcal{F}$ be an $\ell$-adic sheaf lisse on $U\subset\mathbf{P}^1_{\mathbf{F}_q}$, pure of weight $0$, geometrically irreducible but non trivial, with conductor $C(\mathcal{F})$. Let $\gamma$ be an automorphism of $\mathbf{P}^1_{\mathbf{F}_q}$: $\gamma$ is a fractional linear transformation: $$\gamma\colon z\to \gamma\cdot z=\frac{az+b}{cz+d},\ \begin{pmatrix}a&b\\c&d \end{pmatrix}\in\PGL_2({\mathbf{F}_q}). $$ Let $\gamma^*\mathcal{F}$ be the associated pull-back sheaf; it is lisse on $\gamma^{-1}\cdot U$ and its trace function is $$\gamma^*K(z)=K(\gamma\cdot z)=K(\frac{az+b}{cz+d}).$$ Moreover since $\gamma$ is an automorphism of $\mathbf{P}^1_{\mathbf{F}_q}$, one has $C(\gamma^*\mathcal{F})=C(\mathcal{F})$. The correlations sums we will consider are those of $K$ and $\gamma^*K(z)$ $$\mathcal{C}(\mathcal{F},\gamma):=\mathcal{C}(K,\gamma^*K)=\frac{1}q\sum_{z}K(z)\ov{K(\gamma\cdot z)}$$ and $$\mathcal{C}_n(\mathcal{F},\gamma):=\mathcal{C}_n(K,\gamma^*K)=\frac{1}{q^n}\sum_{z\in{\mathbf{F}_{q^n}}}K_n(z)\ov{K_n(\gamma\cdot z)}$$ which are associated to the tensor product sheaf $$\mathcal{F}\otimes \gamma^*D(\mathcal{F})$$ which is lisse on $U_\gamma=U\cap \gamma^{-1}\cdot U.$ \subsection{The automorphism group} The question of the size of the sums $\mathcal{C}(\mathcal{F},\gamma)$ is largely determined by the following invariant of $\mathcal{F}$ (see \cite{FKM1,FKM2}) \begin{definition} Given $\mathcal{F}$ as above, the group of automorphisms of $\mathcal{F}$, denoted $\Aut_{\mathcal{F}}({\mathbf{F}_q})\subset\PGL_2({\mathbf{F}_q})$, is the group of $\gamma\in \PGL_2({\mathbf{F}_q})$ such that $$\gamma^*\mathcal{F}\simeq_\mathrm{geom}\mathcal{F}.$$ The group $\Aut_{\mathcal{F}}({\mathbf{F}_q})$ is the group of ${\mathbf{F}_q}$-points of an algebraic subgroup, $\Aut_{\mathcal{F}}\hookrightarrow\PGL_{2}$ defined over ${\mathbf{F}_q}$. Let $B\subset \PGL_2$ the subgroup generated by upper-triangular matrices; we define $$B_\mathcal{F}:=\Aut_{\mathcal{F}}\cap B$$ the subgroup of upper-triangular matrices of $\Aut_{\mathcal{F}}$ and $B_\mathcal{F}({\mathbf{F}_q})$ the group of ${\mathbf{F}_q}$-points. \end{definition} The relevance of this notion for the above correlations sums is the following \begin{proposition} For $\gamma\not\in \Aut_{\mathcal{F}}({\mathbf{F}_q})$, one has $$\mathcal{C}(K,\gamma)=O_{C(\mathcal{F})}(q^{-1/2}).$$ \end{proposition} In view of this proposition it is important to determine $\Aut_{\mathcal{F}}({\mathbf{F}_q})$ and $B_\mathcal{F}({\mathbf{F}_q})$. \begin{example} Obviously any element of $\Aut_{\mathcal{F}}$ has to leave $\mathbf{P}^1(\ov{\mathbf{F}_q})-U(\ov{\mathbf{F}_q})$ invariant and all the points in the same orbit have isomorphic local monodromies. This may impose very strong constraints on $\Aut_{\mathcal{F}}$. \begin{itemize} \item If $\mathcal{F}$ is geometrically trivial then $\Aut_{\mathcal{F}}=\PGL_2$. \item If $\psi\colon ({\mathbf{F}_q},+)\rightarrow\mathbf{S}^1$ is non trivial then $G_{\mathcal{L}_\psi}=N=\{\begin{pmatrix}1&x\\&1 \end{pmatrix}\subset\PGL_2 \}$. 
\item If $\chi\colon ({\mathbf{F}_q},+)\rightarrow\mathbf{S}^1$ is non trivial, then $$G_{\mathcal{L}_\chi}=T^{0,\infty}=\{\begin{pmatrix}a&0\\0&d \end{pmatrix}\subset\PGL_2 \}$$ is the diagonal torus, unless $\chi$ is quadratic in which case $G_{\mathcal{L}_\chi}=N(T^{0,\infty})$ is the normalizer of the diagonal torus. \item For the Kloosterman sheaves, one can show that $\mathcal{G}_{\mathcal{K}\ell_k}$ is trivial: since $\mathcal{K}\ell_k$ is not lisse at $0$ and $\infty$, with Swan conductor $0$ at $0$ and $1$ at $\infty$, one has $\mathcal{G}_{\mathcal{K}\ell_k}\subset T^{0,\infty}$. One can then show (see \cite{MiDMJ}) that $[\times a]^*{\mathcal{K}\ell_k}\simeq_\mathrm{geom}{\mathcal{K}\ell_k}$ iff $a=1$. \end{itemize} \end{example} Given $x\not=y\in\mathbf{P}^1(\ov{\mathbf{F}_q})$, we denote by $T^{x,y}$ the pointwise stabilizer of the pair $(x,y)$ (this is a maximal torus defined over some finite extension of ${\mathbf{F}_q}$) and $N(T^{x,y})$ its normalizer. The torus $T^{x,y}$ is defined over ${\mathbf{F}_q}$ if $x,y$ belong to $\mathbf{P}^1({\mathbf{F}_q})$ or if $x,y$ belong to $\mathbf{P}^1(\mathbf{F}_{q^2})$ and are Galois conjugates. \begin{proposition}\label{thmautogroup} Suppose $q\geq 7$. Given $\mathcal{F}$ as above, at least one of the following holds: \begin{itemize} \item $C(\mathcal{F})>q$. \item $q$ does not divide $|\Aut_{\mathcal{F}}({\mathbf{F}_q})|$ and either $\Aut_{\mathcal{F}}({\mathbf{F}_q})$ is of order $\leq 60$ or is a subgroup of the normalizer of some maximal torus $N(T^{x,y})$ defined over ${\mathbf{F}_q}$. \item $q$ divides $|\Aut_{\mathcal{F}}({\mathbf{F}_q})|$ and then $\mathcal{F}\simeq \sigma^*\mathcal{L}_\psi$ for some $\psi$ and $K(x)=\alpha\psi(\sigma.x)$ for for some $\sigma\in\PGL_2({\mathbf{F}_q})$ and $\Aut_{\mathcal{F}}({\mathbf{F}_q})=\sigma N\sigma^{-1}$. \end{itemize} \end{proposition} \begin{remark} Observe that in the last case $$\mathcal{C}(K,\gamma)=|K(0)|^2\mathcal{C}(\psi(\sigma.x),\gamma)$$ \end{remark} Concerning the size of the group $B_\mathcal{F}({\mathbf{F}_q})$, one can show that \begin{theorem}\label{thmautogroupB} Let $\mathcal{F}$ be an isotypic sheaf whose geometric components are not isomorphic to $[+x]^*\mathcal{L}_\chi$ for some $x\in{\mathbf{F}_q}$ and some multiplicative character $\chi$ and such that $$C(\mathcal{F})< q.$$ Then $$|B_\mathcal{F}({\mathbf{F}_q})|\leq C(\mathcal{F}).$$ \end{theorem} The proof of this theorem involves the following rigidity statements \cite[Lemma 2.6.13]{KatzRLS}: \begin{proposition} Let $\mathcal{L} $ be geometrically irreducible. \begin{itemize} \item If for some $x\in{\mathbf{F}^\times_q}$, $[+x]^*\mathcal{L}\simeq\mathcal{L}$, then either $$C(\mathcal{L})>q\hbox{ or }\mathcal{L}\simeq \mathcal{L}_\psi \hbox{ for some } \psi.$$ \item If $\Aut_\mathcal{L}({\mathbf{F}_q})$ contains a subgroup of order $m$ of diagonal matrices then either $$c(\mathcal{L})>m\hbox{ or }\mathcal{L}\simeq\mathcal{L}_\chi \hbox{ for some } \chi.$$ \end{itemize} \end{proposition} \section{Trace functions vs.~primes}\label{Secprimes} Another possible question to consider (natural from the viewpoint of analytic number theory at least) is how trace functions correlate with the characteristic function of the primes. 
In this section, we discuss the structure of the proof of the following result: \begin{theorem}[Trace function vs.~primes, \cite{FKM2}]\label{thmprimesumthm} Let $\mathcal{F}$ be a geometrically isotypic sheaf of conductor $C(\mathcal{F})$ whose geometric components are not of the shape $\mathcal{L}_\psi\otimes\mathcal{L}_\chi$ and let $K$ its associated trace function. For any $V\in C^\infty_c(\mathbf{R}_{>0})$, one has \par \begin{align} \label{primesuminterval} \sum_\stacksum{p\ \text{prime}}{p \leq X}K(p)&\ll X(1+q/X)^{1/12}p^{-\eta/2},\\ \label{primesumsmooth}\sum_{p\ \text{prime}}K(p)V\Bigl(\frac{p}X\Bigr)&\ll X(1+q/X)^{1/6}q^{-\eta}, \end{align} for $X\ll q$ and $\eta<1/24$. The implicit constants depend only on $\eta$, $C(\mathcal{F})$ and $V$. Moreover, the dependency on $C(\mathcal{F})$ is at most polynomial. \end{theorem} \begin{remark} This result exhibits cancellations when summing trace functions along the primes in intervals of length larger than $q^{3/4}$. It is really a pity that Dirichlet characters are excluded by our hypotheses: such a bound in that case would amount to a quasi generalized Riemann hypothesis for the corresponding Dirichlet character $L$-function ! \end{remark} We discuss the proof for $X=q$. \subsection{Combinatorial decomposition of the characteristic function of the primes} As is well-known, the problem is equivalent to bounding the sum $$\sum_{n}\Lambda(n) K(n)V\Bigl(\frac{n}q\Bigr)$$ where $$\Lambda(n)=\begin{cases}\log p&\hbox{ if }n=p^\alpha\ \alpha\geq 1\\ 0&\hbox{ otherwise, } \end{cases}$$ is the von Mangoldt function. A standard method in analytic number theory is a combinatorial decomposition of this function as a sum of Dirichlet convolutions; one way to achieve this is to use the celebrated Heath-Brown identity: \begin{lemma}[Heath-Brown] \label{lemHB} For any integer $J\geq 1$ and $n< 2X$, we have $$ \Lambda(n)=-\sum_{j=1}^J(-1)^j\binom{J}{j} \sum_{m_1,\ldots, m_j\leq Z}\mu(m_1)\ldots\mu(m_j) \sum_{m_1\ldots m_jn_{1}\ldots n_{j}=n}\log n_1, $$ where $Z=X^{1/J}$. \end{lemma} Hence splitting the range of summation of the various variables appearing (using partition of unity) and separating these variables, our preferred sum decomposes (essentially) into $O((\log X)^{2J})$ sums of the shape $$\Sigma(M_1,\ldots,M_{2j})=\sumsum_{m_1,\ldots m_{2j}}\mu(m_1)\ldots\mu(m_j) K(m_1.\ldots.m_{2j})V_{1} \Bigl(\frac{m_{1}}{M_{1}}\Bigr)\ldots V_{2j} \Bigl(\frac{m_{2j}}{M_{2j}}\Bigr)$$ for $j\leq J$; here $V_i,\ i=1,\ldots 2j$ are smooth functions compactly supported in $]1,2[$, and $(M_1,\ldots,M_{2j})$ is a tuple satisfying $$M_i=:q^{\mu_i},\ \forall i\leq j,\ \mu_i\leq 1/J,\ \sum_{i\leq 2j}{\mu_i}=1+o(1);$$ The objective is to show that $$\Sigma(M_1,\ldots,M_{2j})\ll q^{1-\eta}$$ for some fixed $\eta>0$. We will take $J=3$ so that $Z=q^{1/3}$. We may assume that $$\mu_1\leq\ldots\leq \mu_j\leq 1/3,\ \mu_{j+1}\leq \ldots\leq\mu_{2j}.$$ We will bound these sums differently depending on the vector $(\mu_1,\ldots,\mu_{2j})$. Let $0<\delta<1/6$ be some small but fixed parameter to be chosen optimally later. \begin{enumerate} \item Suppose that $\mu_{2j}\geq 1/2+\delta$. Then $m_{2j}$ is a long "smooth variable" (because the weight attached to it is smooth); therefore using \eqref{eqsmoothPV} to sum over $m_{2j}$ while fixing the other variables, we get $$\Sigma(M_1,\ldots,M_{2j})\ll q^{\mu_1+\ldots\mu_{2j-1}}q^{1/2+o(1)}=q^{1-\delta+o(1)}.$$ (In the literature, sum of that shape are called "type I" sums). 
\item We may therefore assume that $$m_{j+1}\leq \ldots\leq \mu_{2j}\leq 1/2+\delta;$$ in other words, there is no long smooth variable. What one can then do is to group variables together to form longer ones: for this one partitions the indexing set into two blocks $$\{1,\ldots,2j\}=\mathcal{I}\sqcup \mathcal{I}',$$ and form the variables $$m=\prod_{i\in \mathcal{I}} m_i,\ n=\prod_{i'\in\mathcal{I}'}m_{i'}$$ so that denoting by $\alpha_m$ the Dirichlet convolutions of either $\mu(\cdot)V(\frac{\cdot}{M_i})$ or $V(\frac{\cdot}{M_i})$ for $i\in\mathcal{I}$ and similarly for $\beta_n$ for $i'\in\mathcal{I}'$, we are led to bound bilinear sums of the shape \begin{equation}\label{eqbilineardef} B(K;\alpha,\beta)=\sumsum_{m\ll M,n\ll N}\alpha_m\beta_n K(mn). \end{equation} where $$M=q^\mu,\ \mu=\sum_{i\in \mathcal{I}}\mu_i,\ N=q^\nu,\ \nu=\sum_{i'\in \mathcal{I}'}\mu_{i'}.$$ The weights $\alpha_m,\beta_n$ are rather irregular and it is difficult to exploit their structure (such sums are called "type II"). Assuming that the irreducible component of $\mathcal{F}$ is not of the shape $\mathcal{L}_\chi\otimes\mathcal{L}_\psi$, we will prove in Theorem \ref{thmbilinear} below the following bound $$\Sigma(M_1,\ldots,M_{2j})=B(K;\alpha,\beta)\ll_{C(\mathcal{F})}\|\alpha_M\|_2\|\beta_N\|_2(MN)^{1/2}(\frac{1}M+\frac{q^{1/2}\log q}{N})^{1/2}.$$ Assuming that $$\mu\geq {\delta}\hbox{ and }\nu\geq {1/2+\delta}$$ we obtain that $$B(K;\alpha,\beta)\ll q^{1-\delta/2+o(1)}.$$ \item It remains to treat the sums for which neither $\mu_{2j}\leq 1/2+\delta$ nor a decomposition as in (2) exist. This necessarily implies that $\sum_{i\leq j}\mu_i\leq 1/3,$ $j\geq 2$ and $\mu_{2j-1}+\mu_{2j}\geq 1-\delta.$ Setting $M=M_{2j-1}$ and $N=M_{2j}$ , denoting $$a=m_1\ldots m_{2j-2}\ll q^{\delta},$$ it will be sufficient to obtain a bound of the shape $$\sum_{m,n\geq 1}K(amn)V(\frac{m}M)W(\frac{n}N)\ll_{V,W} (MN)^{1-\eta}$$ for some $\eta>0$ whenever $MN$ is sufficiently close to $q$. What we have are is a sum involving two smooth variables which are however too short for the P\'olya-Vinogradov method to work, but whose product is rather long. We call these sums "type I$1/2$". We will then use Theorem \ref{cortypeI1/2} below whose proof is discussed in \S \ref{sectracemodular}. Observe that this theorem provides a bound which is non trivial as long as $MN\geq q^{3/4}$. \item Optimizing parameters in these three approaches leads to Theorem \ref{thmprimesumthm}. \end{enumerate} \begin{theorem}\label{cortypeI1/2} Let $\mathcal{F}$ be a geometrically isotypic Fourier sheaf of conductor $C(\mathcal{F})$ and $K$ its associated trace function. For any $V,W\in C^\infty_c(\mathbf{R}_{>0})$, any $M,N\geq 1$ and any $\eta<1/8$, one has $$\sum_{m,n\geq 1}K(mn)V(\frac{m}M)W(\frac{n}N)\ll_{V,W,C(\mathcal{F})} MN(1+\frac{q}{MN})^{1/2}q^{-\eta/2}.$$ \end{theorem} \section{Bilinear sums of trace functions}\label{secbilinear} Let $K$ be a trace function associated to some isotypic sheaf $\mathcal{F}$, pure of weight $0$ and let $(\alpha_m)_{m\leq M}$, $(\beta_n)_{n\leq N}$ be arbitrary complex numbers. In this section, we bound the "type II" bilinear sums encountered in the previous section : $$B(K;\alpha,\beta)=\sumsum_{m\leq M,n\leq N}\alpha_m\beta_n K(mn).$$ Using the Cauchy-Schwarz inequality, the trivial bound is $$|B(K;\alpha,\beta)|\ll_{C(\mathcal{F})}\|\alpha_M\|_2\|\beta_N\|_2(MN)^{1/2}.$$ We wish to improve over this bound. 
\begin{theorem}[Bilinear sums of trace functions]\label{thmbilinear} Notations as above; assume that $1\leq M,N<q$ and that the irreducible component of $\mathcal{F}$ is not of the shape $\mathcal{L}_\chi\otimes\mathcal{L}_\psi$. Then $$B(K;\alpha,\beta)\ll_{C(\mathcal{F})}\|\alpha_M\|_2\|\beta_N\|_2(MN)^{1/2}(\frac{1}M+\frac{q^{1/2}\log q}{N})^{1/2}.$$ \end{theorem} \begin{remark}This bound is non-trivial as soon as $M\gg 1$ and $N\gg q^{1/2}\log q$. \end{remark} \proof By Cauchy-Schwarz, we have \begin{equation}\label{eqfirstCS} |B(K;\alpha,\beta)|^2\leq \|\beta_N\|_2^2\sum_{m_1,m_2\leq M}\alpha_{m_1}\ov{\alpha_{m_2}}\sum_{n\leq N}K(m_1n)\ov K(m_2n). \end{equation} We do not expect to gain anything from the diagonal terms $m_1\equiv m_2\mods q$ (equivalently, $m_1=m_2$ since $M<q$) and the contribution of such terms is bounded trivially by \begin{equation}\label{eqdiagbound} \ll_{C(\mathcal{F})}\|\alpha_M\|_2^2\|\beta_N\|^2_2N. \end{equation} As for the non-diagonal terms, their contribution is $$ \|\beta_N\|_2^2\sum_{m_1\not=m_2\mods q}\alpha_{m_1}\ov{\alpha_{m_2}}\sum_{n\leq N}K(m_1n)\ov K(m_2n).$$ Using the P\'olya-Vinogradov method, we are led to evaluate the Fourier transform of $$n\mapsto K(m_1n)\ov K(m_2n).$$ By the Plancherel formula, this Fourier transform equals \begin{eqnarray*} y\mapsto\frac{1}{q^{1/2}}\sum_{x\in{\mathbf{F}_q}}K(m_1x)\ov K(m_2x)\mathrm{e}_q(-yx)&=& \frac{1}{q^{1/2}}\sum_{z\in{\mathbf{F}_q}}\widehat K((z-y)/m_1)\ov {\widehat K}(z/m_2) \\&=&\frac{1}{q^{1/2}}\sum_{z\in{\mathbf{F}_q}}\widehat K((m_2z-y)/m_1)\ov {\widehat K}(z)\\ &=& \frac{1}{q^{1/2}}\sum_{z\in{\mathbf{F}_q}}\widehat K(\gamma z)\ov {\widehat K}(z) \end{eqnarray*} with $$\gamma=\begin{pmatrix} m_2/m_1&-y/m_1\\0&1 \end{pmatrix}\in B({\mathbf{F}_q}). $$ This sum is $q^{1/2}$ times $\mathcal{C}(\widehat\mathcal{F},\gamma)$, the correlation sum associated to the isotypic sheaves $\widehat\mathcal{F}$ and $\gamma^*\widehat\mathcal{F}$, whose conductors are controlled in terms of $C(\mathcal{F})$. If $\gamma\not\in B_\mathcal{F}({\mathbf{F}_q})$ we have \begin{equation}\label{eqcorB} \mathcal{C}(\widehat\mathcal{F},\gamma)\ll_{C(\mathcal{F})}\frac{1}{q^{1/2}}. \end{equation} The condition that the irreducible component of $\mathcal{F}$ is not of the shape $\mathcal{L}_\chi\otimes\mathcal{L}_\psi$ translates into the irreducible component of $\widehat\mathcal{F}$ not being of the shape $[+x]^*\mathcal{L}_{\ov\chi}$. In that case, by Theorem \ref{thmautogroupB}, there is a set $S_\mathcal{F}\subset {\mathbf{F}^\times_q}$ such that for any $(m_1,m_2,y)\in{\mathbf{F}^\times_q}\times{\mathbf{F}^\times_q}\times{\mathbf{F}_q}$ for which $m_2/m_1\not\in S_\mathcal{F}$ one has $$\mathcal{C}(\widehat\mathcal{F},\gamma)\ll_{C(\mathcal{F})} q^{-1/2}.$$ Returning to \eqref{eqfirstCS}, we bound trivially (by \eqref{eqdiagbound}) the contribution of the $O_\mathcal{F}(M)$ $(m_1,m_2)$ such that the ratio $m_2/m_1\mods q$ is in $S_\mathcal{F}$. For the other terms, we may use the P\'olya-Vinogradov method and bound these terms by $$\ll_{C(\mathcal{F})}\|\alpha_M\|_2^2\|\beta_N\|_2^2Mq^{1/2}\log q.$$ Combining these bounds leads to the final result. \qed \section{Trace functions vs.~modular forms}\label{sectracemodular} In this section we discuss the proof of Theorem \ref{cortypeI1/2}. This theorem is a special case of the resolution of another problem: the question of the correlation between trace functions and the Fourier coefficients $(\rho_f(n))_{n}$ of some modular Hecke eigenform (cf.~\cite[Chap. 
14\&15]{IwKo} and references herein for a quick introduction to the theory modular forms). Given some trace function, we consider the correlation sum $$\mathcal{S}(K,f;X):=\sum_{n\leq X}\rho_f(n)K(n)$$ or its smoothed version $$\mathcal{S}_V(K,f;X):=\sum_{n}\rho_f(n)K(n)V(\frac{n}X).$$ These sums are bounded (using the Rankin-Selberg method) by $$O_{C(\mathcal{F}),f}(X\log^{3} X).$$ It turns out that the problem of bounding $\mathcal{S}(K,f;X)$ and $\mathcal{S}_V(K,f;X)$ non-trivially is most interesting when $N$ is of size $q$ or smaller. In this section, we sketch the proof of the following \begin{theorem}[Trace function vs.~modular forms, \cite{FKM1}]\label{thmKmodular} Let $\mathcal{F}$ be an irreducible Fourier sheaf of weight $0$ and $K$ its associated trace function. Let $(\rho_f(n))_{n\geq 1}$ be the sequence of Fourier coefficients of some modular form $f$ with trivial nebentypus and $V\in C^\infty_c(\mathbf{R}_{>0})$. For $X\geq 1$ and any $\eta<1/8$, we have $$\mathcal{S}(K,f;X)\ll X(1+\frac{q}X)^{1/2}q^{-\eta/2},$$ and $$\mathcal{S}_V(K,f;X)\ll X(1+\frac{q}X)^{1/2}q^{-\eta}.$$ The implicit constants depend only on $\eta$, $f$, $C(\mathcal{F})$ and $V$. Moreover, the dependency on $C(\mathcal{F})$ is at most polynomial. \end{theorem} This result shows the absence of correlation when $X\gg q^{1-1/8}$. The proof, which uses the amplification method and the Petersson-Kuznetzov trace formula, will ultimately be a consequence of Theorem \ref{thmautogroup}. We give below an idea of the proof. To simplify matters, we will assume that $X=q$ and we wish to bound non-trivially the sum \begin{equation}\label{SVdefnoq} \mathcal{S}_V(K,f):=\sum_{n\geq 1}\rho_f(n)K(n)V(\frac{n}q) \end{equation} for $V$ a fixed smooth function. Moreover, to simplify things further, we will assume that $f$ has level $1$ and is cuspidal and holomorphic of very large (but fixed) weight. \subsection{Trace functions vs.~the divisor function} An important special case of Theorem \ref{thmKmodular} is when $f$ is an Eisenstein series, for instance when $$f(z)=\frac{\partial}{\partial s}E(z,s)_{|s=1/2}\hbox{ for }E(z,s)=\frac{1}2\sum_{(c,d)=1}\frac{y^s}{|cz+d|^{2s}}$$ is the non-holomorphic Eisenstein series at the central point. In that case we have $$\rho_f(n)=d(n)$$ the divisor function, and so one has \begin{equation}\label{eqbounddiv} \sum_{m,n\geq 1}K(mn)V(\frac{mn}X)\ll_{V,C(\mathcal{F})} X(1+\frac{q}X)^{1/2}q^{-\eta} \end{equation} whenever $K$ is the trace function of a Fourier sheaf. This bound holds similarly for the unitary Eisenstein series $E(z,s)$ at any $s=\frac12+it$, where the divisor function is replaced by $$d_{it}(n)=\sum_{ab= n}(a/b)^{it}.$$ Such general bounds make it possible to separate the variables $m,n$ in \eqref{eqbounddiv} and eventually to prove Theorem \ref{cortypeI1/2}. \begin{remark} As we will see below, the proof of Theorem \ref{thmKmodular} is not a "modular form by modular form" analysis; instead the proof is global, involving the full automorphic spectrum, and establishes the required bound "for all modular forms $f$ at once", including Eisenstein series and therefore proving Theorem \ref{cortypeI1/2} on the way. \end{remark} \subsection{Functional equations} Our first objective is to understand why the range $X=q$ is interesting. This come from the functional equations satisfied by modular forms as a consequence of their automorphic properties. These equations present themselves in various shapes. 
One is the Voronoi summation formula, which in its simplest form is the following: \begin{proposition}[Voronoi summation formula]\label{Voronoi} Let $f$ be a holomorphic modular form of weight $k$ and level $1$ with Fourier coefficients $(\rho_f(n))_n$. Let $V$ be a smooth compactly supported function, $q\geq 1$ and $(a,q)=1$. We have for $X>0$ $$ \sum_{n\geq 1} \rho_f(n)V\Bigl(\frac{n}X\Bigr)e\Bigl(\frac{an}{q}\Bigr) = \varepsilon(f)\frac{X}{q} \sum_{n\geq 1}\rho_f(n)e\Bigl(-\frac{\overline{a}n}{q}\Bigr) \widetilde V\Bigl(\frac{Xn}{q^2}\Bigr) $$ where $\varepsilon(f)=\pm 1$ denotes the sign of the functional equation of $L(f,s)$, and $$\widetilde V(y)=\int_{0}^\infty V(u)\mathcal{J}_k(4\pi\sqrt{ uy})du,$$ with \begin{equation* \mathcal{J}_k(u) = 2\pi i^kJ_{k-1}(u), \end{equation*} where $$J_{k-1}(x)=\sum_{l=0}^\infty\frac{(-1)^l}{l !(l+k-1) !}(\frac{x}{2})^{2l+k-1}$$ is the Bessel function of order $k-1$. \end{proposition} There are several possible proofs of this proposition: one can proceed classically from the Fourier expansion of the modular form $f$ using automorphy relations (see \cite[Theorem A.4]{KMVDMJ}). Another more conceptual approach is to use the Whittaker model of the underlying automorphic representation; this approach extends naturally to higher rank automorphic forms (see \cite{IT}). One could also point out other related works like \cite{MilSch} as well as the recent paper \cite{KirZhou}. We can extend this formula to general functions modulo $q$. Given $K\colon \mathbf{Z}\rightarrow\mathbf{C}$ a $q$-periodic function, we define its \emph{Voronoi transform} $\bessel{K}$ of $K$ as $$ \bessel{K}(n) = \frac{1}{\sqrt{q}}\sum_{\substack{h\bmod q\\(h,q) =1}} \fourier{K}(h) \mathrm{e}_q ({\overline h n} )=\frac{1}{\sqrt{q}}\sum_{\substack{h\bmod q\\(h,q) =1}} \fourier{K}(h^{-1}) \mathrm{e}_q ({h n} ). $$ Combining the above formula with the Fourier decomposition $$K(n)=\frac{1}{q^{1/2}}\sum_{a\mods q}\widehat K(a)\mathrm{e}_q(-an),$$ we get \begin{corollary}\label{corvoronoi} Notations are above, given $K$ a $q$-periodic arithmetic function, we have for $X>0$ \begin{eqnarray*} \sum_{n\geq 1}\rho_f(n)K(n)V\Bigl(\frac nX\Bigr)&=& \frac{\widehat K(0)}{q^{1/2}}\sum_{n\geq 1} \rho_f(n)V\Bigl(\frac nX\Bigr)+\\ &&\ \varepsilon(f)\frac{X}{q} \sum_{n\geq 1}\rho_f(n)\widecheck K(- n)\widetilde V\Bigl(\frac{nX}{q^2}\Bigr). \end{eqnarray*} \end{corollary} \begin{remark} Another way to obtain such result is to consider the Mellin transform of (the restriction to ${\mathbf{F}^\times_q}$ of) $K$: $$\tilde K(\chi)=\frac{1}{(q-1)^{1/2}}\sum_{x\in{\mathbf{F}^\times_q}}K(x)\chi(x)$$ so that for $x\in{\mathbf{F}^\times_q}$ $$K(x)=\frac{1}{(q-1)^{1/2}}\sum_{\chi}\tilde K(\chi)\chi^{-1}(x).$$ One can then use the (archimedean) inverse-Mellin transform and the functional equation satisfied by the Hecke $L$-function $$L(f\otimes\chi,s)=\sum_{n\geq 1}\frac{\rho_f(n)\chi(n)}{n^{s}}$$ to obtain the formula. For this, one observes that the Mellin transform of $\widecheck{K}_{|{\mathbf{F}^\times_q}}$ is proportional to $$\chi\mapsto \varepsilon(\chi)\tilde K(\chi^{-1})$$ where $\varepsilon(\chi)$ is the normalized Gauss sum. This method extends easily to automorphic forms of higher rank but uses the fact that $q$ is prime (so that ${\mathbf{F}^\times_q}$ is not much smaller that ${\mathbf{F}_q}$). \end{remark} The identity of Corollary \ref{corvoronoi} is formal and has nothing to do whether $K$ is a trace function or not. 
In particular applying it to the Dirac function $\delta_a(n)=\delta_{n\equiv a\mods q}$, for some $a\in{\mathbf{F}^\times_q}$ we obtain $$\widehat{\delta_a}(h)=\frac{1}q^{1/2} \mathrm{e}_q(ah),\ \widecheck \delta_a(n)=\frac{1}{q^{1/2}}\Kl_2(an;q)$$ so that \begin{eqnarray}\label{eqdeltacase} q^{1/2}\sum_{n\equiv a\mods q}\rho_f(n)V\Bigl(\frac nX\Bigr)&=& \frac{1}{q^{1/2}}\sum_{n\geq 1} \rho_f(n)V\Bigl(\frac nX\Bigr)+\\ &&\ \varepsilon(f)\frac{X}{q} \sum_{n\geq 1}\rho_f(n)\Kl_2(-an;q)\widetilde V\Bigl(\frac{nX}{q^2}\Bigr). \nonumber \end{eqnarray} This is an example of a natural transformation which, starting from the elementary function $\delta_a$ produces a genuine trace function ($\Kl_2$). Besides this case we would like to use the formula for $K$ a trace function. We observe that the Voronoi transform $\widecheck K$ is "essentially" the Fourier transform of the function $$h\in{\mathbf{F}^\times_q}\mapsto \widehat K(h^{-1})=\widehat K(w\cdot h)$$ with $w=\begin{pmatrix}0&1\\1&0 \end{pmatrix}$; it is therefore essentially involutive. It would be useful to know that $\widecheck K$ is a trace function. Suppose that $K$ is associated to some isotypic Fourier sheaf $\mathcal{F}$, then $\widecheck K$ is a (isotypic) trace function as long as $w^*\widehat\mathcal{F}$ is a Fourier sheaf. This means that $\widehat\mathcal{F}$ has no irreducible constituent of the shape $w^*\mathcal{L}_\psi$ which (by involutivity of the Fourier transform means that $\mathcal{F}$ has no irreducible constituent isomorphic to some Kloosterman sheaf $\mathcal{K}\ell_2$. This reasoning\footnote{by involutivity of the Voronoi transform} is essentially the reverse of the one leading to \eqref{eqdeltacase}. Let us assume that $\widecheck K$ is also a trace function. Then, integration by parts show that for $V$ smooth and compactly supported, $\widetilde V(x)$ has rapid decay for $x\gg 1$. Hence Corollary \ref{corvoronoi} is an equality between a sum of length $X$ and a sum of length about $q^2/X$ (up to the term $\frac{\widehat K(0)}{q^{1/2}}\sum_{n\geq 1} \rho_f(n)V\Bigl(\frac nX\Bigr)$ which is easy to understand). The two lengths are the same when $X=q$. \subsection{The amplification method} As mentioned above Theorem \ref{thmKmodular} is proven "for all modular forms at one" as a consequence of the amplification method. The principle of the amplification method (invented by H. Iwaniec and which in the special case $K=\chi$ was used first by Bykovskii) consist, in the following. 
For $L\geq 1$ and $(x_l)_{l\leq L}$ real numbers we consider the following average over orthogonal bases of modular forms (holomorphic or general) of level $q$: \begin{equation}\label{momentholk} M_k(K ):=\sum_{g\in\mathcal{B}_k(q)}|A(g )|^2|\mathcal{S}_V(g,K)|^2 \end{equation} (cf.~\eqref{SVdefnoq} for the definition of $\mathcal{S}_V(g,K)$) and \begin{multline}\label{momentdef} M(K ):=\sum_{k \equiv 0 \mods{2},\ k>0} \dot{\phi}(k)(k-1) \sum_{g\in\mathcal{B}_k(q)}|A(g )|^2|\mathcal{S}_V(g,K)|^2 \\ + \sum_{g\in\mathcal{B}(q)} \tilde{\phi}(t_g)\frac{4 \pi }{\cosh(\pi t_g)}|A(g )|^2|\mathcal{S}_V(g,K)|^2\\ + \,\sum_{g\in \mathcal{B}_E(q)}\int_{-\infty}^{\infty}\tilde{\phi}(t)\frac{1}{\cosh(\pi t)} |A(g,t)|^2|\mathcal{S}_V(E_{g}(t),K)|^2\,dt, \end{multline} where $\mathcal{B}_{k}(q),\ \mathcal{B}(q),\ \mathcal{B}_E(q)$ denote orthonormal bases of Hecke eigenforms of level $q$ (either holomorphic of weight $k$ or Maass or Eisenstein series), $\dot{\phi},\ \tilde{\phi}$ are weights constructed from some smooth function, $\phi$, rapidly decreasing at $0$ and $\infty$, which depend only on the spectral parameters of the forms and for each form $g$, $A(g )$ ("A" is for amplifier) is the linear form in the Hecke eigenvalues $(\lambda_g(n))_{(n,q)=1}$ given by $$A(g )=\sum_{l\leq L}x_l\lambda_g(l).$$ The weights $\tilde{\phi}$ are positive while the weight $\dot{\phi}(k)$ is positive at least for $k$ large enough; one can then add to this quantity a finite linear combination of the $M_k(K),\ k\ll 1$ from which one can bound \begin{multline}\label{momentdef2} |M|(K ):=\sum_{k \equiv 0 \mods{2},\ k>0} |\dot{\phi}(k)|(k-1) \sum_{g\in\mathcal{B}_k(q)}|A(g )|^2|\mathcal{S}_V(g,K)|^2 \\ + \sum_{g\in\mathcal{B}(q)} \tilde{\phi}(t_g)\frac{4 \pi }{\cosh(\pi t_g)}|A(g )|^2|\mathcal{S}_V(g,K)|^2\\ + \,\sumsum_{g\in \mathcal{B}_E(q)}\int_{-\infty}^{\infty}\tilde{\phi}(t)\frac{1}{\cosh(\pi t)} |A(g,t)|^2|\mathcal{S}_V(E_{g}(t),K)|^2\,dt. \end{multline} As we explain below one will be able to prove the following bound \begin{equation}\label{Mbound} M(K ), M_{k}(K )\ll_{C(\mathcal{F})} q^{o(1)}(q\sum_{l\leq L}|x_l|^2+q^{1/2}L(\sum_{l\leq L}|x_l|)^2). \end{equation} Now if $f$ is a Hecke-eigenform of level $1$ (of $L^2$ norm $1$ for the usual inner product on the level one modular curve) then $f/(q+1)^{1/2}$ embeds in an orthonormal basis of forms of level $q$. Since all the terms in $|M|(K )$ are non-negative, this sums bounds any of its terms occurring discretely (i.e.~when $f$ is a cusp form). Therefore we obtain $$\frac{1}{q+1}|A(f )|^2|\mathcal{S}_V(f,K)|^2\ll_{C(\mathcal{F}),f} q^{o(1)}(q\sum_{l\leq L}|x_l|^2+q^{1/2}L(\sum_{l\leq L}|x_l|)^2).$$ Now we perform amplification by choosing some bounded sequence $(x_l)_{l\leq L}$ tailor made for $f$ such that $A(f)$ is "large". Specifically, choosing $$x_l=\mathrm{sign}(\lambda_f(l)),$$ we obtain $$|A(f)|\gg L^{1+o(1)}.$$ Dividing by $L$ we obtain $$|\mathcal{S}_V(f,K)|^2\ll q^{o(1)}(q^2/L+q^{3/2}L^2)$$ and the optimal choice is $L=q^{1/6}$ giving us $$\mathcal{S}_V(f,K)\ll q^{1-1/12+o(1)}.$$ \subsection{Computing the moments} We now bound $M(K)$. 
Opening squares and using the multiplicative properties of Hecke eigenvalues, we are essentially reduced to bounding sums of the shape \begin{equation}\label{eqmodelsummodular} \sumsum_{m,n}V(\frac{m}q)V(\frac{n}q)K(m)\ov{K(n)}\Delta_{q,\phi}(lm,n) \end{equation} and \begin{equation}\label{eqmodelsummodular} \sumsum_{m,n}V(\frac{m}q)V(\frac{n}q)K(m)\ov{K(n)}\Delta_{q,k}(lm,n) \end{equation} where $1\leq l\leq L^2$, $$\Delta_{q,k}(lm,n)=\sum_{g\in\mathcal{B}_k(q)}\rho_g(lm)\ov{\rho_g(n)}$$ and \begin{eqnarray*}\label{moment2} \Delta_{q,\phi}(lm,n)&&=\sum_{k \equiv 0 \mods{2},\ k>0} \dot{\phi}(k)(k-1) \sum_{g\in\mathcal{B}_k(q)}\rho_g(lm)\ov{\rho_g(n)} \\ &&\ \ \ + \sum_{g\in\mathcal{B}(q)} \tilde{\phi}(t_g)\frac{4 \pi }{\cosh(\pi t_g)}\rho_g(lm)\ov{\rho_g(n)}\\ &&\ \ \ + \,\sum_{g\in \mathcal{B}_E(q)}\int_{-\infty}^{\infty}\tilde{\phi}(t)\frac{1}{\cosh(\pi t)} \rho_g(lm,t)\ov{\rho_g(n,t)}\,dt. \end{eqnarray*} The Petersson-Kuznetzov formula expresses $\Delta_{q,k}(m,n)$ $\Delta_{q,\phi}(m,n)$ as sums of Kloosterman sums: \begin{equation} \label{Pet} \Delta_{q,k}(m,n)=\delta_{m=n}+2\pi i^{-k}\sum_{c} \frac{1}{cq} S(m, n;cq)J_{k-1}\left(\frac{4\pi\sqrt{mn}}{cq}\right). \end{equation} and \begin{equation} \label{Kuz} \Delta_{q,\phi}(m,n)=\sum_{c} \frac{1}{cq} S(m, n;cq)\phi\left(\frac{4\pi\sqrt{mn}}{cq}\right), \end{equation} where $$S(m, n;cq)=\sum_{(x,cq)=1}e\left(\frac{mx+n\ov x}{cq}\right)$$ is the non-normalized Kloosterman sum of modulus $cq$ (where $x.\ov x\equiv 1\mods{cq}$). In \eqref{eqmodelsummodular}, because $m$ and $n$ are of size $q$ and $\phi$ is rapidly decreasing at $0$, the contribution of the $c\gg l^{1/2}$ is small. We will simplify further by evaluating only the contribution of $c=1$, that is $$\frac1q\sumsum_{m,n}V(\frac{m}q)V(\frac{n}q)K(m)\ov{K(n)}S(lm,n;q)\phi(\frac{4\pi\sqrt{lmn}}{q}).$$ Our next step is to open the Kloosterman sum and apply the Poisson summation formula on the $m$ and $n$ variables. We obtain $$\frac{1}q\frac{q^2}{(q^{1/2})^2}\sumsum_{m^*,n^*}\widehat{W}(m^*,n^*)\sum_{x\in{\mathbf{F}^\times_q}}\widehat K(lx+m^*)\ov{\widehat K(x^{-1}+n^*)}$$ where $$W(x,y)=V(x)V(y)\phi(4\pi\sqrt{lxy}).$$ In particular, the Fourier transform $\widehat{W}(m^*,n^*)$ is very small unless $m^*+n^*\ll l$ so the above sum is over $m^*,n^*\ll l$. Setting $$\gamma_1=\begin{pmatrix}l&m^*\\&1 \end{pmatrix},\ \gamma_2=\begin{pmatrix}n^*&1\\1&0 \end{pmatrix} $$ we see that the $x$-sum is the correlation sum $q\mathcal{C}(K,\gamma_2.\gamma_1^{-1})$ which is $\ll q^{1/2}$ if $\gamma_2.\gamma_1^{-1}$ does not belong to the group of automorphism of ${\widehat\mathcal{F}}$. Using Theorem \ref{thmautogroup} one show that if $l$ is a sufficiently small fixed (positive) power of $q$, the bound $$\sum_{x\in{\mathbf{F}^\times_q}}\widehat K(lx+m^*)\ov{\widehat K(x^{-1}+n^*)}\ll_{C(\mathcal{F})} q^{1/2}$$ holds for most pairs $(m^*,n^*)$. From this we deduce \eqref{Mbound}. \section{The ternary divisor function in arithmetic progressions to large moduli}\label{Secternary} Given some arithmetic function $\lambda=(\lambda(n))_{n\geq 1}$, a natural question in analytic number theory is to understand how well $\lambda$ is distributed in arithmetic progressions: given $q\geq 1$ and $(a,q)=1$ one would like to evaluate the sum $$\sum_\stacksum{n\leq X}{n\equiv a\mods q}\lambda(n)$$ as $X\rightarrow\infty$ and for $q$ as large as possible with respect to $X$. 
It is natural to evaluate the difference $$E(\lambda;q,a):=\sum_\stacksum{n\leq X}{n\equiv a\mods q}\lambda(n)-\frac{1}{\varphi(q)}\sum_\stacksum{n\leq X}{(n,q)=1}\lambda(n)$$ and, assuming that $\lambda$ is "essentially" bounded, the target would be to obtain a bound of the shape \begin{equation}\label{eqEbound} E(\lambda;q,a)\ll_A \frac{X}q(\log X)^{-A} \end{equation} for any $A\geq 0$, as $X\rightarrow+\infty$ and for $q$ as large as possible compared to $X$. The emblematic case is when $\lambda=1_\mathcal{P} $ is the characteristic function of the primes. In that case the problem can be approached through the analytic properties of Dirichlet $L$-functions and in particular the localization of their zeros. The method of Hadamard and de la Vall\'ee Poussin (adapted to this setting by Landau) and the Landau-Siegel theorem show that \eqref{eqEbound} is satisfied for $q\leq (\log X)^{B}$ for any given $B$, while the validity of the generalized Riemann hypothesis would give \eqref{eqEbound} for $q\ll X^{1/2-\delta}$ for any fixed $\delta>0$. Considering averages over $q$, it is possible to reach the GRH range; this is the content of the Bombieri-Vinogradov theorem. \begin{theorem}[Bombieri-Vinogradov] For any $A\geq 0$ there exists $B=B(A)$ such that for $Q\leq X^{1/2}/\log^BX$ $$\sum_{q\leq Q}\mathop{\mathrm{Max}}\limits_{(a,q)=1}|E(1_\mathcal{P};q,a)|\ll X/\log^AX.$$ \end{theorem} Going beyond the GRH/Bombieri-Vinogradov range and reaching the inequality $Q\leq X^{1/2+\eta}$ for some $\eta>0$ is a fundamental problem in analytic number theory with many major applications. For instance, Y. Zhang's breakthrough on the existence of bounded gaps between primes proceeded by establishing a version of the Bombieri-Vinogradov theorem going beyond the $Q=X^{1/2}$ range on average over smooth moduli \cite{YZhang}; we will discuss some of the techniques entering his proof below. Besides the characteristic function of the primes, several other arithmetic functions are of interest. Among the simplest are the divisor functions $$d_k(n)=\sum_{n_1\cdots n_k=n}1.$$ For $k=2$, Selberg and others established the following (still unsurpassed) result. \begin{theorem} [The divisor function in arithmetic progressions to large moduli]\label{thmd2} For every non-zero integer $a$, every $\varepsilon,A>0$, every $X\geq 2$ and every prime $q$, coprime with $a$, satisfying $$q\leq X^{2/3-\varepsilon},$$ we have $$ E(d_2;q,a)\ll \frac{X}{q}(\log X)^{-A}, $$ where the implied constant only depends on $\varepsilon$ and $A$ (and not on $a$). \end{theorem} \proof (Sketch) To simplify matters we consider the problem of evaluating the model sum $$\sum_{n_1n_2\equiv a\mods q}V(\frac{n_1}{N_1})V(\frac{n_2}{N_2})$$ for $N_1N_2=X$ and $V\in\mathcal{C}^\infty_c(]1,2[)$. We apply the Poisson summation formula to the $n_1$ variable and then to the $n_2$ variable. The condition $n_1n_2\equiv a\mods q$ is transformed into $$\delta_{n_1n_2\equiv a\mods q}\rightarrow q^{-1/2}\mathrm{e}_q(an_1\ov{n_2})\rightarrow q^{-1/2}\Kl_2(an_1n_2;q).$$ The ranges $N_1,N_2$ are transformed into $$N_1^*=q/N_1,\ N_2^*=q/N_2$$ and the whole model sum is transformed into a sum of the shape $$MT(a;q)+ET(a;q)$$ where $MT(a;q)$ is a main term which we will not specify (but is of the right order of magnitude), and $ET(a;q)$ is an error term of the shape $$ET(a;q)=\frac{1}{q^{1/2}}\frac{N_1}{q^{1/2}}\frac{N_2}{q^{1/2}}\sum_{n_1,n_2}\Kl_2(an_1n_2;q)\tilde V(\frac{n_1}{N_1^*})\tilde V(\frac{n_2}{N_2^*})$$ where $\tilde V$ is a rapidly decreasing function.
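In more detail (a sketch, ignoring the tails of the rapidly decreasing functions): Weil's bound $|\Kl_2(m;q)|\leq 2$ gives
$$ET(a;q)\ll \frac{N_1N_2}{q^{3/2}}\cdot N_1^*N_2^*=\frac{X}{q^{3/2}}\cdot\frac{q}{N_1}\cdot\frac{q}{N_2}=q^{1/2},$$
up to factors $q^{o(1)}$.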
By Weil's bound for Kloosterman sums, the error term is bounded by $q^{1/2+\epsilon}$, which is smaller than $X(\log X)^{-A}/q$ as long as $q\leq X^{2/3-2\varepsilon}$. \qed \begin{remark}\label{remselberg} Improving the exponent $2/3$ is tantamount to detecting cancellation in the sum of Kloosterman sums above. We have given such an improvement in \eqref{eqbounddiv}; unfortunately, in the present case the range of the variable $n_1n_2$ is $N_1^*N_2^*=q^2/X\leq q^{1/2}$, which is too short with current technology. See however \cite{FoIw} for an improvement beyond the $q=X^{2/3}$ limit on average over a family of moduli $q$ admitting a specific factorisation. \end{remark} We now show how to go beyond the Bombieri-Vinogradov range for the specific case of the ternary divisor function $$d_3(n)=\sum_{n_1n_2n_3=n}1$$ (in fact in a stronger form, because it is not even necessary to average over the modulus $q$!). The very first result of that kind is due to Friedlander-Iwaniec \cite{FrIw} (with $\frac12+\eta=\frac12+\frac1{231}$) and was later improved by Heath-Brown (with $\frac12+\eta=\frac12+\frac1{81}$) \cite{HBActa}. When the modulus $q$ is prime, the best result to date is to be found in \cite{FKM3}: \begin{theorem}[The ternary divisor function in arithmetic progressions to large moduli]\label{thmd3} For every non-zero integer $a$, every $A>0$, every $X\geq 2$ and every prime $q$, coprime with $a$, satisfying $$q\leq X^{\frac{1}2+\frac{1}{47}},$$ we have $$ E(d_3;q,a)\ll \frac{X}{q}(\log X)^{-A}, $$ where the implied constant only depends on $A$ (and not on $a$). \end{theorem} \begin{remark} One may wonder why these higher order divisor functions are so interesting: one reason is that these problems can be considered as approximations to the case of the von Mangoldt function. Indeed, the Heath-Brown identity (Lemma \ref{lemHB}) expresses the von Mangoldt function as a linear combination of arithmetic functions involving higher divisor functions; therefore, studying higher divisor functions in arithmetic progressions to large moduli will enable progress on the von Mangoldt function.\footnote{This was formalised by Fouvry \cite{FouCrelle}.} \end{remark} \proof We consider again a model sum of the shape $$\sum_{n_1n_2n_3\equiv a\mods q}V(\frac{n_1}{N_1})V(\frac{n_2}{N_2})V(\frac{n_3}{N_3})$$ for $N_1N_2N_3=X$ and $V\in\mathcal{C}^\infty_c(]1,2[)$. We apply the Poisson summation formula to the variables $n_1$, $n_2$ and $n_3$. The condition $n_1n_2n_3\equiv a\mods q$ is this time transformed into the hyper-Kloosterman sum $$\frac{1}{q^{1/2}}\Kl_3(an_1n_2n_3;q).$$ The model sum is transformed into a main term (of the correct order of magnitude) and an error term $$ET_3(a;q)=\frac{1}{q^{1/2}}\frac{N_1}{q^{1/2}}\frac{N_2}{q^{1/2}}\frac{N_3}{q^{1/2}}\sum_{n_1,n_2,n_3}\Kl_3(an_1n_2n_3;q)\tilde V(\frac{n_1}{N_1^*})\tilde V(\frac{n_2}{N_2^*})\tilde V(\frac{n_3}{N_3^*})$$ with $$N_i^*=q/N_i,\ i=1,2,3.$$ The objective is to obtain a bound of the shape \begin{equation}\label{d3goal} \Sigma_3:=\sum_{n_1,n_2,n_3}\Kl_3(an_1n_2n_3;q)\tilde V(\frac{n_1}{N_1^*})\tilde V(\frac{n_2}{N_2^*})\tilde V(\frac{n_3}{N_3^*})\ll\frac{q}{\log^Aq} \end{equation} for $X=q^{2-\eta}$ for some fixed (small) $\eta>0$, or equivalently for $$N_1^*N_2^*N_3^*=q^{1+\eta}.$$ We will show that when $\eta=0$, \eqref{d3goal} holds with the stronger bound $\ll q^{1-\delta}$ for some $\delta>0$. A variation of this argument will show \eqref{d3goal} for some positive $\eta$.
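Let us check the numerology behind \eqref{d3goal} (a sketch): the prefactor multiplying the triple sum in $ET_3(a;q)$ equals $q^{-1/2}N_1N_2N_3/q^{3/2}=X/q^2$, so
$$ET_3(a;q)=\frac{X}{q^2}\,\Sigma_3\ll \frac{X}{q}(\log X)^{-A}\Longleftrightarrow \Sigma_3\ll\frac{q}{\log^A X},$$
while the total length of the dual variables is $N_1^*N_2^*N_3^*=q^3/X$, which equals $q^{1+\eta}$ exactly when $X=q^{2-\eta}$.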
Write $$N_i^*=q^{\nu_i},\ i=1,2,3,\ \nu_1+\nu_2+\nu_3=1;$$ we assume that $$0\leq \nu_1\leq\nu_2\leq\nu_3.$$ Suppose that $\nu_3\geq 1/2+\delta$. Then the P\'olya-Vinogradov method, applied to the $n_3$ variable, leads to a bound of the shape $$\Sigma_3\ll q^{1-\nu_3+1/2}\log q\ll q^{1-\delta}\log q.$$ Otherwise we have $\nu_3\leq 1/2+\delta$. We assume now that $\nu_1\geq 2\delta$; then $\nu_1\leq 1/3$, so that grouping the variables $n_2,n_3$ into a single variable $n$ of size $\geq q^{2/3}$ (weighted by a divisor-like function) and applying Theorem \ref{thmbilinear}, we obtain the bound $$\Sigma_3\ll q^{1-\delta}\log^3 q.$$ We may therefore assume that $$\nu_1\leq 2\delta,\ \nu_2+\nu_3\geq 1-2\delta.$$ The $n_2n_3$-sum is similar to the sum in \eqref{eqbounddiv} (for $K(n)=\Kl_3(an_1n;q)$) and indeed the same bound holds, so that for any $\varepsilon>0$, we have $$\Sigma_3\ll_\varepsilon q^{\nu_1+\frac{\nu_2+\nu_3}{2}+\frac12-\frac18+\epsilon}\ll_\varepsilon q^{2\delta+1-\frac18+\epsilon}$$ which gives the required bounds if $\delta$ is chosen $<1/24$. \qed \section{The geometric monodromy group and Sato-Tate laws} In this section we discuss an important invariant attached to an $\ell$-adic sheaf: its geometric monodromy group. This will be crucial in the next section to study more advanced sums of trace functions (multicorrelation sums). Another rather appealing outcome of this notion is the {\em Sato-Tate} type laws which describe the distribution of the set of values of trace functions as $q^n$ grows. \subsection{Sato-Tate laws for elliptic curves} The term "Sato-Tate law" comes from the celebrated {\em Sato-Tate Conjecture} for elliptic curves over $\mathbf{Q}$, which is now a theorem established in a series of papers principally by Clozel, Harris, Shepherd-Barron and Taylor \cite{CHT,HSBT,Tay,BGHT}. Let $E/\mathbf{Q}$ be an elliptic curve defined over $\mathbf{Q}$ with a model over $\mathbf{Z}$ --for instance given by the Weierstrass equation $$E\colon zy^2=x^3+axz^2+bz^3,\ a,b\in\mathbf{Z},\ \Delta(a,b)=4a^3+27b^2\not=0.$$ For any prime $q$ of good reduction, we denote by $E({\mathbf{F}_q})$ the reduction modulo $q$ of $E$; we have (Hasse bound) $$a_q(E):=q+1-|E({\mathbf{F}_q})|\in[-2q^{1/2},2q^{1/2}];$$ we can then define the angle $\theta_{E,q}\in [0,\pi]$ of $E$ at the prime $q$ by the formula $$a_q(E)/q^{1/2}=2\cos(\theta_{E,q}).$$ \begin{theorem}[Sato-Tate law for an elliptic curve]\label{origST} Let $E/\mathbf{Q}$ be a non-CM elliptic curve. As $X\rightarrow\infty$, the multiset of angles $\{\theta_{E,q},\ q\leq X,\ q\ prime\}$ becomes equidistributed on $[0,\pi]$ with respect to the so-called Sato-Tate measure $\mu_{ST}$ whose density is given by $$d\mu_{ST}=\frac{2}\pi\sin^2(\theta)d\theta.$$ In other words, for any interval $I\subset[0,\pi]$, we have $$\frac{|\{q\leq X,\ q\ prime,\ \theta_{E,q}\in I\}|}{\pi(X)}\rightarrow \mu_{ST}(I)=\frac{2}\pi\int_I\sin^2(\theta)d\theta$$ as $X\rightarrow\infty$. \end{theorem} The Sato-Tate measure $\mu_{ST}$ introduced in this statement has a more conceptual description: let $\SU_2(\mathbf{C})$ be the special unitary group in two variables and let $\SU_2(\mathbf{C})^\natural$ be its space of conjugacy classes; that space is identified with $[0,\pi]$ via the map $$\begin{pmatrix}e^{i\theta}&0\\0&e^{-i\theta} \end{pmatrix}^\natural \mapsto \theta\mods \pi.
$$ The Sato-Tate measure $\mu_{ST}$ then corresponds to the direct image of the Haar measure on $\SU_2(\mathbf{C})$ under the natural projection $\SU_2(\mathbf{C})\to \SU_2(\mathbf{C})^\natural$: this follows from the Weyl integration formula. Now let us recall that attached to the elliptic curve $E$ is a Galois representation on its $\ell$-adic Tate module\footnote{which is an $\ell$-adic sheaf over $\Spec(\mathbf{Z})$} $$\rho_E\colon \Gal(\ov\mathbf{Q}/\mathbf{Q})\to \GL(V_\ell(E))$$ which is unramified at every prime $q$ not dividing the discriminant (of the integral model) of $E$, and for such a prime the Frobenius conjugacy class satisfies $$\tr(\Frob_q|V_\ell(E))=a_q(E)=2q^{1/2}\cos(\theta_{E,q})$$ hence defines a complex conjugacy class $$\begin{pmatrix}e^{i\theta_{E,q}}&0\\0&e^{-i\theta_{E,q}} \end{pmatrix}^\natural.$$ The Sato-Tate law for non-CM elliptic curves then states that this collection of Frobenius conjugacy classes becomes equidistributed relative to this measure. \begin{remark} For CM elliptic curves there is also a (different) Sato-Tate law which was established by Hecke much earlier: the angles $\theta_{E,q}$ are equidistributed with respect to the uniform measure. \end{remark} The proof of the Sato-Tate conjecture in the non-CM case is one of the crowning achievements of the Langlands program; several decades before its proof, variants of this conjecture had been established for {\em families} of elliptic curves over finite fields: given $a,b\in{\mathbf{F}_q}$ such that $\Delta(a,b):=4a^3+27b^2\not=0$, the Weierstrass equation $$E_{a,b}\colon y^2=x^3+ax+b$$ defines an elliptic curve over ${\mathbf{F}_q}$; let $$a_q(a,b)=q+1-|E_{a,b}({\mathbf{F}_q})|=2q^{1/2}\cos(\theta_{a,b,q}).$$ Using the Selberg trace formula, Birch \cite{BirchST} established the following variant of the Sato-Tate law for elliptic curves: \begin{theorem} As $q\rightarrow\infty$ the multiset of angles $\{\theta_{a,b,q},\ (a,b)\in\mathbf{F}_q^2,\ \Delta(a,b)\not=0\}$ becomes equidistributed on $[0,\pi]$ with respect to $\mu_{ST}$: for any interval $I\subset[0,\pi]$, we have $$\frac{|\{(a,b)\in\mathbf{F}_q^2,\ \Delta(a,b)\not=0, \theta_{a,b,q}\in I\}|}{|\{(a,b)\in\mathbf{F}_q^2,\ \Delta(a,b)\not=0\}|}\rightarrow \mu_{ST}(I),\ q\rightarrow\infty.$$ \end{theorem} There is another variant, spelled out by Katz, which is a consequence of Deligne's work \cite{WeilII}; it concerns one-parameter families of elliptic curves: let $a(T),b(T)\in\mathbf{Z}[T]$ be polynomials such that $\Delta(T):=4a(T)^3+27b(T)^2\not=0$; for $q$ a sufficiently large prime, the equation over ${\mathbf{F}_q}$, $$E_t\colon y^2=x^3+a(t)x+b(t)$$ defines a family of elliptic curves indexed by the set $U({\mathbf{F}_q}):=\{t\in{\mathbf{F}_q},\ \Delta(t)\not=0\}.$ For any $t\in U({\mathbf{F}_q})$ we set $$\theta_{t,q}:=\theta_{a(t),b(t),q}\in[0,\pi].$$ \begin{theorem}\label{Ellequid} Assume that the $j$-invariant $j(T)=1728\frac{4a(T)^3}{\Delta(T)}$ is not constant; then the multiset $\{\theta_{t,q},\ t\in U({\mathbf{F}_q})\}$ becomes equidistributed on $[0,\pi]$ with respect to $\mu_{ST}$ as $q\rightarrow\infty$.
In other words, for any interval $I\subset[0,\pi]$, we have $$\frac{|\{t\in U(\mathbf{F}_q),\ \theta_{t,q}\in I\}|}{|U({\mathbf{F}_q})|}\rightarrow \mu_{ST}(I),\ q\rightarrow\infty.$$ \end{theorem} \begin{remark} Deligne \cite[Proposition 3.5.7]{WeilII} proved another variant of the Sato-Tate law when the parameter set is $U({\mathbf{F}_{q^n}})$ with $q$ fixed (large enough) and $n\rightarrow\infty$; this is in fact a special case of "Deligne's equidistribution theorem" \cite[Theorem 3.5.3]{WeilII}. \end{remark} Theorem \ref{Ellequid} is a special case of very general Sato-Tate laws for $\ell$-adic sheaves: indeed the function $$t\in U({\mathbf{F}_q})\mapsto \frac{a_q(t)}{q^{1/2}}$$ is the trace function of some geometrically irreducible $\ell$-adic sheaf $\mathcal{E}_{a,b}$, given by \begin{equation}\label{ellsheaf} t\mapsto -\frac{1}{q^{1/2}}\sum_{x\in{\mathbf{F}_q}}(\frac{x^3+a(t)x+b(t)}q), \end{equation} where $\left(\frac{\cdot}{q}\right)$ is the Legendre symbol. A key player in such Sato-Tate laws is the {\em geometric monodromy group} of the underlying sheaf, which we now introduce. \subsection{The geometric monodromy group of a sheaf} \begin{definition}[\cite[Chap.~3]{GKM}] Let $\mathcal{F}$ be a sheaf pure of weight $0$ and let $\rho_\mathcal{F}$ be the associated Galois representation. The geometric (resp.~arithmetic) monodromy group $G_{\mathcal{F},\mathrm{geom}}$ (resp.~$G_{\mathcal{F},\mathrm{arith}}$) is the Zariski closure of $\rho_\mathcal{F}(G^{\mathrm{geom}})$ (resp.~$\rho_\mathcal{F}(G^{\mathrm{arith}})$) inside $\GL(V_\mathcal{F})$; in particular $$G_{\mathcal{F},\mathrm{geom}}\subset G_{\mathcal{F},\mathrm{arith}}.$$ It follows from \cite[Th\'eor\`eme (3.4.1)]{WeilII} that the connected component $G_{\mathcal{F},\mathrm{geom}}^0$ of $G_{\mathcal{F},\mathrm{geom}}$ is semisimple. \end{definition} \begin{example} \begin{itemize} \item In the case of the trace function \eqref{ellsheaf}, Deligne showed \cite[Lemme 3.5.5]{WeilII} that if $q>2$ and the $j$-invariant $j(T)\mods q$ is not constant, one has $$\Ggeomd{\mathcal{E}_{a,b}}=\Garithd{\mathcal{E}_{a,b}}=\SL_2.$$ \item In his numerous books \cite{GKM,ESDE,Katzbull,MMP,TLM,KatzConvol} Katz computed the monodromy groups of various classes of sheaves: for instance, he proved in \cite[Theorem 11.1]{GKM} that for Kloosterman sheaves one has (for $q>2$) $$\Ggeomd{\mathcal{K}\ell_k}=\Garithd{\mathcal{K}\ell_k}=\begin{cases}\SL_k&\hbox{ if $k$ is odd}\\ \Sp_{k}&\hbox{ if $k$ is even}. \end{cases} $$ \end{itemize} \end{example} \subsection{Sato-Tate laws}\label{secSTlaw} In the sequel we make the simplifying hypothesis that \begin{equation}\label{eqGarithincluded} \Ggeomd{\mathcal{F}}=\Garithd{\mathcal{F}}. \end{equation} \subsubsection{Moments of trace functions} Before presenting the Sato-Tate laws in general, let us consider the very specific concrete problem of evaluating the {\em moments} of a trace function $K$. For $l\geq 0$ an integer, the $2l$-th moment of $K$ is the average $$\mathcal{M}_{2l}(K)=\frac{1}{q}\sum_{x\in{\mathbf{F}_q}}|K(x)|^{2l}.$$ The possibility of evaluating these comes from the fact that $x\mapsto |K(x)|^{2l}$ is indeed a trace function (not necessarily, and in fact almost never, irreducible).
Indeed let $\mathrm{Std}\colon \Ggeomd{\mathcal{F}}\hookrightarrow \GL(V_\mathcal{F})$ be the standard representation of the group $\Ggeomd{\mathcal{F}}$ and let $\rho_{l,l}$ be the representation $$\rho_{l,l}=(\mathrm{Std}\otimes \mathrm{Std}^{*})^{\otimes l}.$$ Because of our assumption \eqref{eqGarithincluded}, the composition $$\rho_{l,l}(\mathcal{F})"="\rho_{l,l}\circ\rho_\mathcal{F}$$ is a representation of $\Garithd{\mathcal{F}}$, hence defines an $\ell$-adic sheaf pure of weight $0$ whose trace function is\footnote{at least at the $x$ where it is lisse} $x\mapsto |K(x)|^{2l}.$ The decomposition of this representation into irreducible representations of $\Ggeomd{\mathcal{F}}$ $$\rho_{l,l}=m_1(\rho_{l,l}).1\oplus\bigoplus_{1\not=r\in\mathrm{Irr}(\Ggeomd{\mathcal{F}})}m_r(\rho_{l,l}).r$$ yields a decomposition of $\rho_{l,l}(\mathcal{F})$ into a sum of geometrically irreducible sheaves $$\rho_{l,l}\circ \mathcal{F}=m_1(\rho_{l,l})\ov{\mathbf{Q}_{\ell}}\oplus\bigoplus_{1\not=r\in\mathrm{Irr}(\Ggeomd{\mathcal{F}})}m_r(\rho_{l,l})r\circ \mathcal{F}$$ and a decomposition of $|K(x)|^{2l}$ as a sum of trace functions $$|K(x)|^{2l}=m_1(\rho_{l,l})+\sum_{1\not=r}m_r(\rho_{l,l})K_{r\circ\mathcal{F}}(x).$$ From Deligne's Theorem (Cor. \ref{delignecor}) one deduces that $$\frac{1}{q}\sum_{x}|K(x)|^{2l}=m_1(\rho_{l,l})+O_{C(\mathcal{F}),l}(q^{-1/2})$$ where $m_1(\rho_{l,l})$ is the multiplicity of the trivial representation in the representation $(\mathrm{Std}\otimes \mathrm{Std}^{*})^{\otimes l}$ of $\Ggeomd{\mathcal{F}}$. In the same way, we could evaluate (in terms of the representation theory of the group $\Ggeomd{\mathcal{F}}$) more general moments like $$ \frac{1}{q}\sum_{x\in{\mathbf{F}_q}}|K(x)|^{2l}K(x)^{l'}$$ for integers $l,l'\geq 0$. \subsubsection{Equidistribution of Frobenius conjugacy classes} There is a more conceptual interpretation of these moments. For any $x\in U({\mathbf{F}_q})$, the Frobenius at $x$ acting on $V_\mathcal{F}$ produces a $\rho_\mathcal{F}(G^{\mathrm{arith}})$-conjugacy class $$\rho_\mathcal{F}(\Frob_x)\subset\Garithd{\mathcal{F}}(\mathbf{C})=G_{\mathcal{F},\mathrm{geom}}(\mathbf{C}).$$ The {\em Frobenius conjugacy class} of $\mathcal{F}$ at $x$ is by definition the $G_{\mathcal{F},\mathrm{geom}}(\mathbf{C})$-conjugacy class of its semisimple part (in the sense of the Jordan decomposition) and is denoted $\theta_{x,\mathcal{F}}.$ Let $K$ be any maximal compact subgroup of $\Ggeomd{\mathcal{F}}(\mathbf{C})$ and $K^\natural$ its space of conjugacy classes. As explained in \cite[Chap.~3]{GKM}, the conjugacy class $\theta_{x,\mathcal{F}}$ defines a unique conjugacy class in $K$, also denoted $\theta_{x,\mathcal{F}}\in K^\natural$. The Sato-Tate laws describe the distribution of the set $\{\theta_{x,\mathcal{F}},\ x\in U({\mathbf{F}_q})\}\hbox{ inside $K^\natural$}$ as $q\rightarrow\infty$. More precisely, let $G$ be a connected semisimple algebraic group over ${\ov{\mathbf{Q}_{\ell}}}$ and $K\subset G(\mathbf{C})$ a maximal compact subgroup. Let $\mu^\natural$ be the direct image of the Haar probability measure on $K$ under the projection $K\to K^\natural$. \begin{theorem}[Sato-Tate law] Let $G$ and $K\subset G(\mathbf{C})$ be as above.
Suppose we are given a sequence of primes $q\rightarrow\infty$ and, for each such prime, some $\ell$-adic sheaf $\mathcal{F}$ over ${\mathbf{F}_q}$, satisfying \eqref{eqGarithincluded}, whose conductor $C(\mathcal{F})$ is bounded independently of $q$, such that $$\Ggeomd{\mathcal{F}}=G_{\mathcal{F},\mathrm{arith}} =G.$$ For any such $q$ and $x\in U({\mathbf{F}_q})$ let $\theta_{x,\mathcal{F}}\in K^\natural$ be the conjugacy class of $\mathcal{F}$ at $x$ relative to $K$. As $q\rightarrow\infty$ the sets of conjugacy classes $$\{\theta_{x,\mathcal{F}},\ x\in U({\mathbf{F}_q})\}$$ become equidistributed with respect to the measure $\mu^\natural$: the probability measure $$\frac{1}{|U({\mathbf{F}_q})|}\sum_{x\in U({\mathbf{F}_q})}\delta_{\theta_{x,\mathcal{F}}}$$ converges weakly to $\mu^\natural$. In other words, for any $f\in\mathcal{C}(K^\natural)$ \begin{equation}\label{eqequid} \frac{1}{|U({\mathbf{F}_q})|}\sum_{x\in U({\mathbf{F}_q})}f(\theta_{x,\mathcal{F}})\rightarrow \int_{K^\natural}f(\theta)d\mu^\natural(\theta),\ q\rightarrow\infty. \end{equation} \end{theorem} \proof By the Peter-Weyl theorem, the functions $$\tr(r)\colon \theta\in K^\natural\mapsto \tr(r(\theta))\in\mathbf{C},$$ when $r$ ranges over all the irreducible representations of $G$, form an orthonormal basis of $L^2(K^\natural,\mu^\natural)$ and generate a dense subspace of the space of continuous functions on $K^\natural$. By Weyl's equidistribution criterion it is therefore sufficient to show that for any $r$ irreducible and non-trivial, one has $$\frac{1}{|U({\mathbf{F}_q})|}\sum_{x\in U({\mathbf{F}_q})}\tr(r(\theta_{x,\mathcal{F}}))\rightarrow \mu^\natural(\tr(r))=0.$$ The function $$K_{r,\mathcal{F}}\colon x\in U({\mathbf{F}_q})\mapsto \tr(r(\theta_{x,\mathcal{F}}))$$ is the trace function associated to the sheaf $r\circ\mathcal{F}$ corresponding to the representation $r\circ\rho_{\mathcal{F}}$ of $G_{\mathcal{F},\mathrm{arith}}$ (because of \eqref{eqGarithincluded} this composition is well defined). That sheaf is by construction geometrically irreducible, non-trivial, and its conductor is bounded in terms of $C(\mathcal{F})$ and $r$ only, so it follows from Deligne's Theorem that $$\frac{1}{|U({\mathbf{F}_q})|}\sum_{x\in U({\mathbf{F}_q})}\tr(r(\theta_{x,\mathcal{F}}))\ll_{C(\mathcal{F}),r}q^{-1/2}\rightarrow 0.$$ \qed \subsubsection{The case of Kloosterman sums} As we have seen above, for the Kloosterman sums $\Kl_2(x;q)$, we have $$G=\Sp_2=\SL_2,\ K=\mathrm{SU}_2(\mathbf{C})$$ and, via the identification $K^\natural\simeq[0,\pi]$, the measure $\mu^{\natural}$ is identified with the Sato-Tate measure $\mu_{ST}$. For $x\in{\mathbf{F}^\times_q}$, we define the angle $\theta_{q,x}\in [0,\pi]$ of the Kloosterman sum $\Kl_2(x;q)$ by $$\Kl_2(x;q)=\tr\begin{pmatrix}e^{i\theta_{q,x}}&0\\0&e^{-i\theta_{q,x}} \end{pmatrix}=2\cos(\theta_{q,x}). $$ The Sato-Tate law becomes the following explicit statement (due to Katz): \begin{theorem}[Sato-Tate law for Kloosterman sums] For any interval $I\subset[0,\pi]$ $$\frac{1}{q-1}|\{x\in{\mathbf{F}^\times_q},\ \theta_{q,x}\in I\}|\rightarrow \frac{2}\pi\int_I\sin^2(\theta)d\theta,\ q\rightarrow\infty.$$ \end{theorem} The above Sato-Tate law is called "vertical" as it describes the distribution of Kloosterman sums with varying parameters $x\in{\mathbf{F}^\times_q}$ as $q\rightarrow\infty$; such a law is analogous to the Sato-Tate law of Theorem \ref{Ellequid}.
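As an illustration of the moment computations of \S\ref{secSTlaw}, this law recovers the asymptotic even moments of Kloosterman sums: since $\mathrm{Std}$ is self-dual for $\SU_2(\mathbf{C})$, the multiplicity $m_1(\rho_{l,l})$ is the multiplicity of the trivial representation in $\mathrm{Std}^{\otimes 2l}$, which is the $l$-th Catalan number, so that (a sketch)
$$\frac{1}{q-1}\sum_{x\in{\mathbf{F}^\times_q}}\Kl_2(x;q)^{2l}=\frac{2}{\pi}\int_0^{\pi}(2\cos\theta)^{2l}\sin^2(\theta)d\theta+O_l(q^{-1/2})=\frac{1}{l+1}\binom{2l}{l}+O_l(q^{-1/2}).$$
For instance, for $l=1$ the main term equals $1$.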
In \cite{Sommes}, Katz, in analogy with the original Sato-Tate conjecture (Theorem \ref{origST}), asked about the distribution of the Kloosterman sums for a fixed value of the parameter (say $x=1$) and for a varying prime modulus $q$. He made the following \begin{conjecture}[Horizontal Sato-Tate law for Kloosterman sums]\label{katzconj} As $X\rightarrow\infty$, the multiset of Kloosterman angles $\{\theta_{q,1},\ q\leq X,\ q\ prime\}$ becomes equidistributed with respect to the Sato-Tate measure: for any $[a,b]\subset[0,\pi]$, we have $$\frac{1}{\pi(X)}|\{q\leq X,\ q\ prime,\ \theta_{q,1}\in[a,b]\}|\rightarrow \frac{2}\pi\int_a^b\sin^2(\theta)d\theta$$ as $X\rightarrow\infty$. \end{conjecture} \begin{remark} Some variants of this horizontal equidistribution problem have been established recently: \begin{itemize} \item Heath-Brown and Patterson \cite{HBPatt} have proven that the angles of cubic Gauss sums of varying prime moduli are equidistributed with respect to the uniform measure. \item Even closer to the current discussion, Duke, Friedlander and Iwaniec \cite{DFISalie} have proven the horizontal equidistribution of the angles $\theta^S_{q,1}$ of {\em Sali\'e} sums, defined by $$S(1;q):=\frac{1}{q^{1/2}}\sum_\stacksum{x,y\in{\mathbf{F}^\times_q}}{xy=1}(\frac{x}{q})e\left(\frac{x+y}{q}\right)=:2\cos(\theta^S_{q,1}),$$ again with respect to the uniform measure. \end{itemize} \end{remark} \subsection{Towards the horizontal Sato-Tate conjecture for almost prime moduli} Unlike the original Sato-Tate conjecture, the prospects for a proof of Conjecture \ref{katzconj} seem very distant at the moment. Even the following very basic consequences of this conjecture seem today completely out of reach: \begin{itemize} \item There exist infinitely many primes $q$ such that $|\Kl_2(1;q)|\geq 2017^{-2017}$, \item There exist infinitely many primes $q$ such that $\Kl_2(1;q)>0$ (resp.~$\Kl_2(1;q)<0$). \end{itemize} In this section we will explain how some of the results discussed so far enable one to say something non-trivial at the cost of replacing the prime moduli $q$ by {\em almost prime} moduli (that is, squarefree integers with an absolutely bounded number of prime factors). Recall that for $c\geq 1$ a squarefree integer and $(a,c)=1$ the normalized Kloosterman sum of modulus $c$ and parameter $a$ is $$\Kl_2(a;c)=\frac{1}{c^{1/2}}\sum_{x\in(\mathbf{Z}/c\mathbf{Z})^\times}e\left(\frac{\ov x+ax}c\right).$$ By the Chinese remainder theorem, Kloosterman sums satisfy the {\em twisted multiplicativity} relation: for $c=c_1c_2$, $(c_1,c_2)=1$ one has \begin{equation}\label{twistedmult} \Kl_2(a;c)=\Kl_2(a\ov{c_2}^2;c_1)\Kl_2(a\ov{c_1}^2;c_2) \end{equation} so that by Weil's bound one has $$|\Kl_2(a;c)|\leq 2^{\omega(c)}$$ where $\omega(c)$ is the number of prime factors of $c$.
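Here is a sketch of \eqref{twistedmult}: writing $S(m,n;c)=\sum_{(x,c)=1}e\big(\frac{mx+n\ov x}{c}\big)$ for the non-normalized sum, the Chinese remainder theorem and the factorization $\mathrm{e}_{c_1c_2}(y)=\mathrm{e}_{c_1}(\ov{c_2}y)\,\mathrm{e}_{c_2}(\ov{c_1}y)$ give
$$S(1,a;c_1c_2)=S(\ov{c_2},a\ov{c_2};c_1)\,S(\ov{c_1},a\ov{c_1};c_2)=S(1,a\ov{c_2}^2;c_1)\,S(1,a\ov{c_1}^2;c_2),$$
the last step by the change of variable $x\mapsto c_2x$ (resp.~$x\mapsto c_1x$) in each factor; dividing by $(c_1c_2)^{1/2}$ yields \eqref{twistedmult}.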
We can then define the corresponding Kloosterman angle by $$\cos(\theta_{c,a})=\frac{\Kl_2(a;c)}{2^{\omega(c)}}.$$ It is then natural to make the following \begin{conjecture}[Horizontal Sato-Tate law for Kloosterman sums with composite moduli]\label{katzconjk} Given an integer $k\geq 1$, let $\pi_k(X)$ be the number of squarefree integers $\leq X$ with exactly $k$ prime factors and let $\mu_{ST,k}$ be the Sato-Tate measure of order $k$, defined as the push-forward of the measure $\mu_{ST}^{\otimes k}$ on $[0,\pi]^k$ by the map $$(\theta_1,\ldots,\theta_k)\in [0,\pi]^k\mapsto \arccos(\cos(\theta_1)\times\ldots\times\cos(\theta_k))\in[0,\pi].$$ Then, for any $k\geq 1$, the multiset of Kloosterman angles $$\{\theta_{c,1},\ c\leq X,\ c\hbox{ is squarefree with $k$ prime factors}\}$$ becomes equidistributed with respect to $\mu_{ST,k}$ as $X\rightarrow\infty$. \end{conjecture} For any $k\geq 2$, this conjecture seems as hard as the original one (and is not implied by it). On the other hand it is possible to establish some of its consequences: \begin{theorem} There exists $k\geq 2$ such that \begin{enumerate} \item for infinitely many square-free integers $c$ with at most $k$ prime factors, $$|\Kl_2(1;c)|\geq 2017^{-2017};$$ \item for infinitely many square-free integers $c$ with at most $k$ prime factors, $$\Kl_2(1;c)>0;$$ \item for infinitely many square-free integers $c$ with at most $k$ prime factors, $$\Kl_2(1;c)<0.$$ \end{enumerate} \end{theorem} The first statement above was proven in \cite{Mi1} for $k=2$ (with $2017^{-2017}$ replaced by $4/25$); the second and the third were first proven in \cite{FouMiAnn} for $k=23$; this value was subsequently improved by Sivak, Matom\"aki and P.~Xi, who holds the current record with $k=7$ \cite{Sivak,Matomaki,Xi,Xi2}. \subsubsection{Kloosterman sums can be large} We start with the first statement, which we prove for $c=pq$ a product of two distinct primes. The main idea is to use the twisted multiplicativity relation $$\Kl_2(1;pq)=\Kl_2(\ov p^2;q)\Kl_2(\ov q^2;p)$$ and to establish the existence of some $\kappa>0$ for which there exist infinitely many pairs of distinct primes $(p,q)$ such that $$|\Kl_2(\ov p^2;q)|,\ |\Kl_2(\ov q^2;p)|\geq \kappa.$$ Indeed, for such pairs we have $$|\Kl_2(1;pq)|\geq \kappa^2.$$ Given $X$ large, we will consider pairs $(p,q)$ such that $p,q\in[X^{1/2},2X^{1/2}[$ and will show that for $\kappa$ small enough the two sets $$\{(p,q),\ p\not=q\in[X^{1/2},2X^{1/2}[,\ p,q\hbox{ primes},\ |\Kl_2(\ov p^2;q)|\geq \kappa\}$$ $$\{(p,q),\ p\not=q\in[X^{1/2},2X^{1/2}[,\ p,q\hbox{ primes},\ |\Kl_2(\ov q^2;p)|\geq \kappa\}$$ are large enough to have a non-empty (and in fact large) intersection as $X\rightarrow\infty$. This is a consequence of the following equidistribution statement: \begin{proposition} Given $X\geq 1$, and a prime $q\in[X^{1/2},2X^{1/2}]$, the (multi)set of Kloosterman angles $$\{\theta_{q,\ov p^{2}},\ p\in[X^{1/2},2X^{1/2}[,\ p\hbox{ prime,}\ p\not=q\}$$ is equidistributed with respect to the Sato-Tate measure: for any interval $[a,b]\subset [0,\pi]$ $$\frac{|\{p\in[X^{1/2},2X^{1/2}[,\ p\not=q\hbox{ prime,}\ \theta_{q,\ov p^{2}}\in[a,b]\}|}{|\{p\in[X^{1/2},2X^{1/2}[,\ p\not=q\hbox{ prime}\}|}\rightarrow \frac{2}\pi\int_a^b\sin^2(\theta)d\theta$$ as $X\rightarrow\infty$. \end{proposition} \proof We consider the pull-back sheaf $\mathcal{K}:=[x\mapsto x^{-2}]^*\mathcal{K}\ell_2$ whose trace function is given by $x\mapsto \Kl_2(\ov x^2;q)$.
As a representation of the geometric Galois group, it corresponds to restricting the representation $\mathcal{K}\ell_2$ to a subgroup of index $2$. Since the geometric monodromy group of $\mathcal{K}\ell_2$ is $\SL_2$, the same is true for the pull-back (the algebraic group $\SL_2$, being connected, has no proper closed finite-index subgroups); therefore $$\Ggeomd{\mathcal{K}}=\Garithd{\mathcal{K}}=\SL_2.$$ The non-trivial irreducible representations of $\SL_2$ are the symmetric powers of the standard representation, $\mathrm{Sym}_k(\mathrm{Std}),\ k\geq 1$. Given $k\geq 1$, the composed sheaf $$\mathcal{K}_k=\mathrm{Sym}_k\circ\mathcal{K}$$ is by construction geometrically irreducible, has rank $k+1$ with conductor bounded in terms of $k$ only, and its trace function equals $$K_k(x)=\tr(\mathrm{Sym}_k\begin{pmatrix}e^{i\theta_{q,\ov x^{2}}}&0\\0&e^{-i\theta_{q,\ov x^{2}}} \end{pmatrix})=\sum_{j=0}^ke^{i(k-j)\theta_{q,\ov x^{2}}}e^{-ij\theta_{q,\ov x^{2}}}=\frac{\sin((k+1)\theta_{q,\ov x^{2}})}{\sin(\theta_{q,\ov x^{2}})}.$$ In particular $\mathcal{K}_k$ cannot be geometrically isomorphic to any tensor product of an Artin-Schreier sheaf and a Kummer sheaf (those have rank $1$, while $\mathcal{K}_k$ has rank $k+1\geq 2$). Hence by a simple variant of Theorem \ref{thmprimesumthm} we obtain that $$\frac{1}{\pi(2X^{1/2})-\pi(X^{1/2})}\sum_\stacksum{p\not=q}{p\sim X^{1/2}}K_k(p)\rightarrow 0=\frac{2}\pi\int_0^\pi\frac{\sin((k+1)\theta)}{\sin(\theta)}\sin^2(\theta)d\theta.$$ \qed Averaging over $q$, we deduce the existence of some $\kappa>0$ (e.g.~$\kappa=0.4$) such that for $X$ large enough $$\frac{|\{(p,q),\ p\not=q\in[X^{1/2},2X^{1/2}[,\ p,q\hbox{ primes},\ |\Kl_2(\ov p^2;q)|\geq \kappa\}|}{|\{(p,q),\ p\not=q\in[X^{1/2},2X^{1/2}[,\ p,q\hbox{ primes}\}|}\geq 0.51,$$ hence \begin{equation}\label{lowerKl1} {|\{(p,q),\ p\not=q\in[X^{1/2},2X^{1/2}[,\ p,q\hbox{ primes},\ |\Kl_2(1;pq)|\geq \kappa^2\}|}\geq (0.01+o(1))\frac{X}{(\frac12\log X)^2}. \end{equation} \subsubsection{Kloosterman sums change sign} We now discuss briefly the proof of the remaining two statements. To establish the existence of sign changes, it suffices to prove that, given some non-zero, non-negative smooth function $V\in\mathcal{C}_c^\infty(]1,2[)$, there exists $u>0$ such that, for $X$ large enough, \begin{equation}\label{compareKl} \bigl|\sum_\stacksum{c\geq 1}{p|c\Rightarrow p\geq X^{1/u}}\Kl_2(1;c)V(\frac{c}X)\bigr|<\sum_\stacksum{c\geq 1}{p|c\Rightarrow p\geq X^{1/u}}|\Kl_2(1;c)|V(\frac{c}X). \end{equation} This will prove the existence of sign changes for Kloosterman sums $\Kl_2(1;c)$ whose modulus has at most $u$ prime factors. Using sieve methods, the Petersson-Kuznetsov formulas \eqref{Pet} and \eqref{Kuz} to express sums of Kloosterman sums in terms of Fourier coefficients of modular forms, and the theory of automorphic forms, one can show the following (see \cite{FouMiAnn} for a proof): \begin{proposition} For any $\eta>0$, there exists $u=u(\eta)>0$ such that $$\bigl|\sum_\stacksum{c\geq 1}{p|c\Rightarrow p\geq X^{1/u}}\Kl_2(1;c)V(\frac{c}X)\bigr|\leq \eta\frac{X}{\log X}$$ for $X$ large enough (depending on $\eta$ and $V$). \end{proposition} To conclude, it is sufficient to show that for some $u=u_0$ one has \begin{equation}\label{lowerwish} \sum_\stacksum{c\geq 1}{p|c\Rightarrow p\geq X^{1/u}}\mu^2(c)|\Kl_2(1;c)|V(\frac{c}X)\gg_{V} \frac{X}{\log X} \end{equation} (the left-hand side is an increasing function of $u$, so the above inequality remains valid for any $u\geq u_0$).
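To compare \eqref{lowerKl1} with \eqref{lowerwish} (a sketch): restricting the left-hand side of \eqref{lowerwish} to the moduli $c=pq$ produced by \eqref{lowerKl1} gives only
$$\sum_\stacksum{c=pq\approx X}{|\Kl_2(1;pq)|\geq\kappa^2}|\Kl_2(1;c)|V\Big(\frac{c}X\Big)\gg \kappa^2\,\frac{X}{\log^2X},$$
which is smaller than the required $X/\log X$; the next paragraph quantifies this loss and explains how to recover it.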
The inequality \eqref{lowerKl1} points in the right direction (for $u_0=2$); however, as stated, it is off by a factor $\log X\log\log X$. One can however recover this factor entirely and prove the lower bound $$\sum_\stacksum{c\geq 1}{p|c\Rightarrow p\geq X^{3/8}}\mu^2(c)|\Kl_2(1;c)|V(\frac{c}X)\gg_{V} \frac{X}{\log X}.$$ The reason is that Theorem \ref{thmprimesumthm} applies also when $p$ is significantly smaller than $q$ (if $q\simeq X^{1/2+\delta}$ one can obtain a non-trivial bound in \eqref{primesumsmooth} for $p$ of size $X^{1/2-\delta}$ for $\delta\in[0,1/8[$). The details involve a partition of unity and we leave them to the interested reader. Another possibility (the one followed originally in \cite{FouMiAnn}) is to establish the lower bound \eqref{lowerwish} for a suitable $u$ by restricting to moduli $c$ which are products of exactly three prime factors, using the techniques discussed so far. \section{Multicorrelation of trace functions}\label{multisec} So far we have mainly discussed the evaluation of correlation sums associated to two trace functions $K_1$ and $K_2$ (especially the case $K_1=K$ and $K_2=\gamma^*K$), namely $$\mathcal{C}(K_1,K_2)=\frac{1}q\sum_{x}K_1(x)\ov{K_2(x)}.$$ In many applications, multiple correlation sums occur: sums of the shape $$\mathcal{C}(K_1,K_2,\ldots, K_{L}):=\frac{1}q\sum_{x}K_1(x)K_2(x)\ldots K_L(x)$$ where the $K_i,\ i=1,\ldots, L$ are trace functions; of course, rewriting the inner term of the sum above as a product of two factors reduces the problem to evaluating a double correlation sum, say the one associated to the sheaves $$\mathcal{F}=\mathcal{K}_1\otimes\cdots\otimes\mathcal{K}_{l},\ \mathcal{G}=\mathcal{K}_{l+1}\otimes\cdots\otimes\mathcal{K}_{L},$$ but it would remain to determine whether $\mathcal{F}$ and $\mathcal{G}$ share a common irreducible component, and this may be a hard task. In practice, the multicorrelation sums that occur (due to the application of some H\"older inequality and of the P\'olya-Vinogradov method) are often of the shape $$\mathcal{C}(K,\uple{\gamma},h)=\frac{1}q\sum_{x}K(\gamma_1\cdot x)\ldots K(\gamma_l\cdot x)\ov{K(\gamma'_1\cdot x)\ldots K(\gamma'_l\cdot x)}\mathrm{e}_q(xh)$$ for $K$ the trace function of some geometrically irreducible sheaf $\mathcal{F}$, pure of weight $0$, $${\uple{\gamma}}=(\gamma_1,\ldots,\gamma_l,\gamma'_1,\ldots,\gamma'_l)\in\PGL_2({\mathbf{F}_q})^{2l}$$ and some $h\in{\mathbf{F}_q}$. This sum is the correlation associated to the trace functions of the sheaves $$\gamma_1^*\mathcal{F}\otimes\ldots\otimes\gamma_l^*\mathcal{F}\ \hbox{and }{\gamma'}_1^*\mathcal{F}\otimes\ldots\otimes{\gamma'}_l^*\mathcal{F}\otimes\mathcal{L}_\psi$$ whose conductors are bounded polynomially in terms of $C(\mathcal{F})$. If $\mathcal{F}$ has rank one, the two sheaves above have rank one and it is usually not difficult to determine whether these sheaves are geometrically isomorphic or not. For $\mathcal{F}$ of higher rank, we describe a method due to Katz which has been axiomatized in \cite{FKMSP}; this method rests on the notion of geometric monodromy group which we discussed in the previous section. \subsection{A theorem on sums of products of trace functions} In this section we discuss a general result making it possible to evaluate multicorrelation sums of trace functions of interest for analytic number theory. The method is basically due to Katz and was used on several occasions, for instance in \cite{Mi1,FoMi}. The general result presented here is a special case of the results of \cite{FKMSP}.
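To preview how such sums arise in practice (a sketch, anticipating the application to Dirichlet $L$-functions below): applying H\"older's inequality with exponent $4$ to an amplified sum leads to
$$\sum_{r\in{\mathbf{F}^\times_q}}\Big|\sum_{l\leq L}x_lK(lr)\Big|^4=\sum_{l_1,l_2,l_3,l_4\leq L}x_{l_1}x_{l_2}\ov{x_{l_3}x_{l_4}}\sum_{r\in{\mathbf{F}^\times_q}}K(l_1r)K(l_2r)\ov{K(l_3r)K(l_4r)},$$
and the inner $r$-sum is $q\,\mathcal{C}(K,\uple{\gamma},0)$ for the diagonal matrices $\gamma_i=\mathrm{diag}(l_i,1)$; the tuple $\uple{\gamma}$ fails to be normal (in the sense defined below) only when the $l_i$'s pair up, which happens for few quadruples.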
For this we need to introduce the following variants of the group of automorphisms of a sheaf: one is the group of projective automorphisms $$\Aut^p_\mathcal{F}({\mathbf{F}_q})=\{\gamma\in\PGLd({\mathbf{F}_q}),\ \exists \hbox{ some rank one sheaf $\mathcal{L}$ s.t. }\gamma^*\mathcal{F}\simeq_{geom}\mathcal{F}\otimes\mathcal{L}\},$$ the other is the right-$\Aut^p_\mathcal{F}({\mathbf{F}_q})$-orbit $$\Aut^d_\mathcal{F}({\mathbf{F}_q})=\{\gamma\in\PGLd({\mathbf{F}_q}),\ \exists \hbox{ some rank one sheaf $\mathcal{L}$ s.t. }\gamma^*\mathcal{F}\simeq_{geom}D(\mathcal{F})\otimes\mathcal{L}\}.$$ Let $\mathcal{F}$ be a weight $0$, rank $k$, irreducible sheaf. We assume that \begin{itemize} \item the geometric monodromy group equals $\Ggeomd{\mathcal{F}}=\SL_k\hbox{ or }\Sp_k$ (we then say that $\mathcal{F}$ is of $\SL$- or $\Sp$-{\em type}), \item the equality \eqref{eqGarithincluded} holds, \item $\Aut^p_\mathcal{F}({\mathbf{F}_q})=\{\mathrm{Id}\}$; in particular $\Aut^d_\mathcal{F}({\mathbf{F}_q})$ is either empty or reduced to a single element $\xi_\mathcal{F}$, which is a possibly trivial involution ($\xi_\mathcal{F}^2=\mathrm{Id}$) called the {\em special involution}. \end{itemize} \begin{example} The Kloosterman sheaves $\mathcal{K}\ell_k$ have this property \cite{GKM}. The special involution is either $\mathrm{Id}$ if $k$ is even ($\mathcal{K}\ell_k$ is self-dual) or the matrix $\xi=\begin{pmatrix}-1&0\\0&1 \end{pmatrix}$ for $k$ odd. \end{example} Finally we introduce the following ad-hoc definition: \begin{definition} Given $${\uple{\gamma}}=(\gamma_1,\ldots,\gamma_l,\gamma'_1,\ldots,\gamma'_l)\in\PGL_2({\mathbf{F}_q})^{2l},$$ one says that \begin{itemize} \item ${\uple{\gamma}}$ is normal if there is $\gamma\in\PGLd({\mathbf{F}_q})$ such that $$|\{i,\ \gamma_i=\gamma\}|+|\{j,\ \gamma'_j=\gamma\}|\equiv 1\mods 2.$$ \item For $k\geq 3$, ${\uple{\gamma}}$ is $k$-normal if there exists $\gamma\in\PGLd({\mathbf{F}_q})$ such that $$|\{i,\ \gamma_i=\gamma\}|-|\{j,\ \gamma'_j=\gamma\}|\not\equiv 0\mods k.$$ \item For $k\geq 3$, and $\xi\in\PGLd({\mathbf{F}_q})$ a non-trivial involution, ${\uple{\gamma}}$ is $k$-normal w.r.t. $\xi$ if there exists $\gamma\in\PGLd({\mathbf{F}_q})$ such that $$|\{i,\ \gamma_i=\gamma\}|+|\{j,\ \gamma'_j=\xi\gamma\}|-|\{j,\ \gamma'_j=\gamma\}|-|\{i,\ \gamma_i=\xi\gamma\}|\not\equiv 0\mods{k}.$$ \end{itemize} \end{definition} \begin{theorem}\label{cor-concrete} Let $K$ be the trace function of a sheaf $\mathcal{F}$ as above, $l\geq 1$, $\uple{\gamma}\in\PGL_2({\mathbf{F}_q})^{2l}$ and $h\in{\mathbf{F}_q}$. We assume that either \par \emph{(1)} the sheaf $\sheaf{F}$ is self-dual (so that $K$ is real-valued) and $\uple{\gamma}$ is normal, \par \emph{(2)} the sheaf $\sheaf{F}$ is of $\SL$-type of rank $k\geq 3$, $q>k$, and $\uple{\gamma}$ is $k$-normal or $k$-normal w.r.t. the special involution of $\sheaf{F}$, if it exists, \par \emph{(3)} or $h\not=0$. We have $$\mathcal{C}(K,\uple{\gamma},h)=\frac{1}q\sum_{x}K(\gamma_1\cdot x)\ldots K(\gamma_l\cdot x)\ov{K(\gamma'_1\cdot x)\ldots K(\gamma'_l\cdot x)}\mathrm{e}_q(xh)\ll_{l,C(\mathcal{F})}\frac{1}{q^{1/2}}.$$ \end{theorem} \proof We discuss the proof only in the self-dual case for simplicity. We group together identical $\gamma_i,\gamma_j'$ and the sum becomes $$\frac{1}q\sum_{x}K(\gamma''_1\cdot x)^{m_1}\ldots K(\gamma''_{t}\cdot x)^{m_t}\mathrm{e}_q(xh)$$ where $t\leq 2l$, the $\gamma''_i$ are distinct and by hypothesis one of the $m_i$ is odd.
The above sum is associated to the trace function of the sheaf $$\bigotimes_{i=1}^t \mathrm{Std}(\gamma_i''^*\mathcal{F})^{\otimes{m_i}}\otimes \mathcal{L}_\psi$$ where $\psi(\cdot)=\mathrm{e}_q(h\cdot)$ and $\mathrm{Std}$ is the tautological representation. We decompose each tensor power into irreducible representations $$\rho_{m,0}=\mathrm{Std}(G)^{\otimes m}=\sum_{r}m_r(\rho_{m,0})r$$ and are reduced to considering various sheaves of the shape \begin{equation}\label{eqlesheaf} \bigotimes_{i=1}^t r_i(\gamma_i''^*\mathcal{F})\otimes \mathcal{L}_\psi \end{equation} where $(r_i)_{i\leq t}$ is a tuple of irreducible representations of $G$; by our hypothesis, we know that either $\mathcal{L}_\psi$ is not trivial or at least one of the $r_i$ is not trivial (and necessarily of dimension $>1$). It is then sufficient to show that, under these assumptions, the sheaves \eqref{eqlesheaf} are geometrically irreducible. For this we consider the direct sum sheaf $$\bigoplus_i\gamma_i''^*\mathcal{F}$$ and let $\Ggeomd{\oplus}\subset \prod_i G$ be the Zariski closure of the image of $G^{\mathrm{geom}}$ under the sum of representations. The following very useful criterion is due to Katz: \begin{theorem}[Goursat-Kolchin-Ribet criterion] Let $(\mathcal{F}_i)_i$ be a tuple of geometrically irreducible sheaves lisse on $U\subset \mathbf{A}^1_{{\mathbf{F}_q}}$, pure of weight $0$, with geometric monodromy groups $G_i$. We assume that \begin{itemize} \item For every $i$, $G_i=\Sp_{k_i}$ or $\SL_{k_i}$, \item for any rank $1$ sheaf $\mathcal{L}$ and any $i\not=j$ there is no geometric isomorphism between $\mathcal{F}_i\otimes\mathcal{L}$ and $\mathcal{F}_j$, \item for any rank $1$ sheaf $\mathcal{L}$ and any $i\not=j$ there is no geometric isomorphism between $\mathcal{F}_i\otimes\mathcal{L}$ and $D(\mathcal{F}_j)$. \end{itemize} Then the geometric monodromy group of the sheaf $\bigoplus_i\mathcal{F}_i$ equals $\prod_i G_i$. \end{theorem} Our assumptions (the projective automorphism group of $\mathcal{F}$ is trivial, $\uple{\gamma}$ is normal and the geometric monodromy group is either $\SL$ or $\Sp$) imply that the above criterion applies, and this implies that $$\bigotimes_i r_i(\gamma_i''^*\mathcal{F})\otimes \mathcal{L}_\psi$$ is always geometrically irreducible. \qed \subsection{Application to non-vanishing of Dirichlet $L$-functions} We now discuss a beautiful application, due to R. Khan and H. Ngo \cite{KhNg}, of bounds for multicorrelation sums. It concerns the proportion of non-vanishing of Dirichlet $L$-functions at the central point $1/2$. The interest in this kind of problem in analytic number theory was renewed with the work of Iwaniec and Sarnak in their celebrated attempt to prove the non-existence of a Landau-Siegel zero \cite{IS1}.
Their approach was based on the following general problem: {\em given a family of $L$-functions $$\{L(f,s)=\sum_{n\geq 1}\frac{\lambda_f(n)}{n^s},\ f\in\mathcal{F}\}$$ indexed by a "reasonable" family of automorphic forms $\mathcal{F}$\footnote{A reasonable definition of the notion of "reasonable" can be found in \cite{K,SST}.}, show that for many $f\in\mathcal{F}$, one has $$L(f,1/2)\not=0.$$ } In their work \cite{IS1}, Iwaniec and Sarnak showed specifically that for $\mathcal{F}=\mathcal{S}_2(q)$, the set of holomorphic new-forms of weight $2$ and prime level $q$ (with trivial nebentypus), if one could show that for $q$ large enough at least $(25+2017^{-2017})\%$ of the central $L$-values $L(f,1/2)$ do not vanish (more precisely, that at least $(25+2017^{-2017})\%$ of these central values are larger than $\log^{-2017}q$), then there would be no Landau-Siegel zero. They eventually proved \begin{theorem}[\cite{IS1}] As $q\rightarrow\infty$ along the primes one has $$\frac{|\{f\in \mathcal{S}_2(q),\ L(f,1/2)\geq \log^{-2}q\}|}{|\mathcal{S}_2(q)|}\geq 1/4-o(1).$$ \end{theorem} This is "just" at the limit. The possibility of producing a positive proportion of non-vanishing is not limited to this specific family, and one of the most powerful and general tools to achieve this is the {\em mollification method}. Its principle is as follows: given the family $\mathcal{F}$, one considers, for some parameter $L\geq 1$ and some suitable vector $\uple{x}_L=(x_\ell)_{\ell\leq L}\in\mathbf{C}^{L}$, the linear form \begin{equation}\label{linear} \mathcal{L}(\mathcal{F},\uple{x}_L):=\frac{1}{|\mathcal{F}|}\sum_{f\in\mathcal{F}}L(f,1/2)M(f,\uple{x}_L) \end{equation} and the quadratic form \begin{equation}\label{quadratic} \mathcal{Q}(\mathcal{F},\uple{x}_L):=\frac{1}{|\mathcal{F}|}\sum_{f\in\mathcal{F}}|L(f,1/2)M(f,\uple{x}_L)|^2 \end{equation} where $M(f,\uple{x}_L)$ is the linear form (called the "mollifier") $$M(f,\uple{x}_L)=\sum_{\ell\leq L}\frac{\lambda_f(\ell)}{\ell^{1/2}}x_\ell$$ and the $x_\ell$ are coefficients to be chosen in an optimal way, with the idea of approximating the inverse $L(f,1/2)^{-1}$. Such coefficients are almost bounded, i.e.~satisfy $$x_\ell= |\mathcal{F}|^{o(1)}.$$ By Cauchy's inequality one has $$\frac{|\{f\in\mathcal{F},\ L(f,1/2)\not=0\}|}{{|\mathcal{F}|}}\geq \frac{|\mathcal{L}(\mathcal{F},\uple{x}_L)|^2}{\mathcal{Q}(\mathcal{F},\uple{x}_L)}.$$ For suitable families one can evaluate asymptotically $\mathcal{L}(\mathcal{F},\uple{x}_L)$ and $\mathcal{Q}(\mathcal{F},\uple{x}_L)$ (the hard case being $\mathcal{Q}$) when $L=|\mathcal{F}|^\lambda$ for $\lambda>0$ some fixed constant, and (upon choosing $\uple{x}_L$ so as to minimize $\mathcal{Q}(\mathcal{F},\uple{x}_L)$ relative to $\mathcal{L}(\mathcal{F},\uple{x}_L)$) one usually shows that \begin{equation}\label{ratiomolli} \frac{|\mathcal{L}(\mathcal{F},\uple{x}_L)|^2}{\mathcal{Q}(\mathcal{F},\uple{x}_L)}=F(\lambda)+o(1) \end{equation} for $F$ some increasing rational function with $F(0)=0$.
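For completeness, the displayed inequality above is a direct application of Cauchy's inequality: restricting the sum defining $\mathcal{L}(\mathcal{F},\uple{x}_L)$ to those $f$ with $L(f,1/2)\not=0$,
$$|\mathcal{L}(\mathcal{F},\uple{x}_L)|^2=\Big|\frac{1}{|\mathcal{F}|}\sum_\stacksum{f\in\mathcal{F}}{L(f,1/2)\not=0}L(f,1/2)M(f,\uple{x}_L)\Big|^2\leq \frac{|\{f\in\mathcal{F},\ L(f,1/2)\not=0\}|}{|\mathcal{F}|}\,\mathcal{Q}(\mathcal{F},\uple{x}_L).$$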
In \cite{IS1}, Iwaniec and Sarnak also implemented this strategy for the (simpler) family of Dirichlet $L$-functions of modulus $q$ $$\{L(\chi,s)=\sum_{n\geq 1}\frac{\chi(n)}{n^s},\ \chi\in\widehat{(\mathbf{Z}/q\mathbf{Z})^\times}\}$$ and were able to evaluate \eqref{linear} and \eqref{quadratic} for any $\lambda<1/2$ and to prove \eqref{ratiomolli} with $$F(\lambda)=\frac{\lambda}{\lambda+1},$$ hence: \begin{theorem}[\cite{IS2}] As $q\rightarrow\infty$ along the primes one has $$\frac{|\{\chi\mods q,\ L(\chi,1/2)\not=0\}|}{|\{\chi\mods q\}|}\geq 1/3-o(1).$$ \end{theorem} Thus the proportion of non-vanishing can be arbitrarily close to $33.33\dots\%$. Shortly after, Michel and Vanderkam \cite{MvdK} obtained the same proportion by a slightly different method: taking into account the fact that for a complex character the $L$-function $L(\chi,s)$ is not self-dual ($L(\chi,s)\not=L(\ov\chi,s)$) and has root number $$\varepsilon_\chi=i^{\mathfrak{a}}\frac{\tau(\chi)}{q^{1/2}},\ \mathfrak{a}=\frac{1-\chi(-1)}2,$$ where $\tau(\chi)$ is the Gauss sum, they introduced a symmetrized mollifier of the shape $$M^s(\chi,\uple{x}_L)=M(\chi,\uple{x}_L)+\ov{\varepsilon_\chi}M(\ov \chi,\uple{x}_L)=\sum_{\ell\leq L}\frac{\chi(\ell)+\ov{\varepsilon_\chi}\,\ov\chi(\ell)}{\ell^{1/2}}x_\ell.$$ Because of the oscillation of the root number $\varepsilon_\chi$, they could evaluate \eqref{quadratic} only in the shorter range $\lambda<1/4$. However this weaker range is offset by the fact that the symmetrized mollifier is more effective: indeed the rational function $F(\lambda)$ is then replaced by $$F^s(\lambda)=\frac{2\lambda}{2\lambda+1},$$ which takes the value $1/3$ at $\lambda=1/4$. Recently R. Khan and H. Ngo found a better method to bound the exponential sums considered in \cite{MvdK}, building on Theorem \ref{cor-concrete}, and they increased the allowed range from $\lambda<1/4$ to $\lambda<3/10$ (observe that $F^s(3/10)=\frac{3/5}{8/5}=3/8$): \begin{theorem}[\cite{KhNg}]\label{KhNgthm} As $q\rightarrow\infty$ along the primes one has $$\frac{|\{\chi\mods q,\ L(\chi,1/2)\not=0\}|}{|\{\chi\mods q\}|}\geq 3/8-o(1).$$ \end{theorem} The key step in their proof is the asymptotic evaluation of the second mollified moment \begin{equation}\label{2ndchi} \frac{1}{\varphi(q)}\sum_{\chi\mods q}|L(\chi,1/2)|^2|M^s(\chi,\uple{x}_L)|^2 \end{equation} for $L=q^\lambda$, and any fixed $\lambda<3/10$. By (nowadays) standard methods\footnote{inappropriately called "approximate functional equation"} the $L$-value $L(\chi,1/2)$ can be written as a sum of rapidly converging series (cf.~\cite[Theorem 5.3]{IwKo}): for $q$ prime and $\chi\not=1$ $$|L(\chi,1/2)|^2=2\sum_{n_1,n_2\geq 1}\frac{\chi(n_1)\ov\chi(n_2)}{(n_1n_2)^{1/2}}V(\frac{n_1n_2}{q})$$ where $V$ is a rapidly decreasing function which depends on $\chi$ only through its parity $\chi(-1)=\pm 1$. Plugging this expression into the second moment \eqref{2ndchi} and unfolding, one finds that the key point is to obtain a bound of the following shape\footnote{for simplicity we ignore the dependency of $V$ on the parity of the $\chi$'s} \begin{equation}\label{KhNggoal} \sumsum_\stacksum{l_1,l_2\leq L,\ n_1,n_2}{(l_1l_2n_1n_2,q)=1}\frac{x_{l_1}\ov{x_{l_2}}}{(ql_1l_2n_1n_2)^{1/2}}V(\frac{n_1n_2}q)e\left(\frac{n_2\ov{l_1l_2n_1}}{q}\right)\ll q^{-\delta} \end{equation} for some $\delta=\delta(\lambda)>0$, for any fixed $\lambda<3/10$. This sum can be decomposed into various sub-sums in which the variables are localized to specific ranges.
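Concretely (a sketch): inserting smooth dyadic partitions of unity $1=\sum_{N\ \mathrm{dyadic}}W(n/N)$ in each of the four variables splits the left-hand side of \eqref{KhNggoal} into $O(\log^4q)$ sums in which $l_i\sim L_i\leq L$ and $n_i\sim N_i$ with $N_1N_2\leq q^{1+o(1)}$, each normalized by $(ql_1l_2n_1n_2)^{1/2}\approx(qL_1L_2N_1N_2)^{1/2}$; these are the sums $\Sigma(L_1,L_2,N_1,N_2)$ studied next.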
The problem becomes essentially that of bounding by $O(q^{-\delta})$ the family of bilinear sums $$\Sigma(L_1,L_2,N_1,N_2)=\frac{1}{(qL_1L_2N_1N_2)^{1/2}}\sumsum_\stacksum{l_i\sim L_i,i=1,2}{n_1,n_2}x_{l_1}\ov{x_{l_2}}W(\frac{n_1}{N_1})W(\frac{n_2}{N_2})e\left(\frac{n_2\ov{l_1l_2n_1}}{q}\right)$$ where $W\in\mathcal{C}_c(]1/2,2[)$, $L_1,L_2\leq L$ and $N_1N_2\leq q$. The $n_2$-sum is essentially a geometric series bounded by $$\ll \min(N_2,{\|\ov{l_1l_2n_1}/q\|^{-1}})$$ where $\|\cdot\|$ is the distance to the nearest integer. Hence \begin{align} \nonumber \Sigma(L_1,L_2,N_1,N_2)&\ll \frac{q^\varepsilon}{(qL_1L_2N_1N_2)^{1/2}}\sum_{m\approx L_1L_2N_1}\min(N_2,{\|\ov{m}/q\|^{-1}})\\ &\ll\frac{q^{2\varepsilon}}{(qL_1L_2N_1N_2)^{1/2}}\mathop{\mathrm{Max}}\limits_{1\leq U\leq q/2}\min(N_2,\frac{q}{U})\sumsum_\stacksum{m\approx L_1L_2N_1,\ u\sim U}{um\equiv \pm 1\mods q}1\nonumber\\ \nonumber &\ll\frac{q^{2\varepsilon}}{(qL_1L_2N_1N_2)^{1/2}}\mathop{\mathrm{Max}}\limits_{1\leq U\leq q/2}\min(N_2,\frac{q}{U})(\frac{L_1L_2N_1U}{q}+1)\\ &\ll q^{2\varepsilon}\frac{L}{q^{1/2}}(\frac{N_1}{N_2})^{1/2}.\label{KNbound1} \end{align} (Observe that for $\frac{L_1L_2N_1U}{q}\ll 1$ the congruence $um\equiv\pm 1\mods q$ forces $um=1$, hence has no solution unless $L_1L_2N_1U\ll 1$.) Alternatively, applying the Poisson summation formula to the $n_1$ variable we obtain a sum of the shape $$\Sigma(L_1,L_2,N_1,N_2)=\frac{1}{(qL_1L_2N_1N_2)^{1/2}}\frac{N_1}{q^{1/2}}\sumsum_\stacksum{l_i\sim L_i,i=1,2}{n_1,n_2}x_{l_1}\ov{x_{l_2}}\widetilde W(\frac{n_1}{q/N_1})W(\frac{n_2}{N_2})\Kl_2(\ov{l_1l_2}n_1n_2;q)$$ where $\widetilde W$ is bounded and rapidly decreasing. Bounding this sum trivially (using that $|\Kl_2(m;q)|\leq 2$) yields \begin{equation}\label{trivialbound12} \Sigma(L_1,L_2,N_1,N_2)\ll q^{\varepsilon}L(\frac{N_2}{N_1})^{1/2}. \end{equation} The expression $\min(\frac{L}{q^{1/2}}(\frac{N_1}{N_2})^{1/2},L(\frac{N_2}{N_1})^{1/2})$ is maximal for $\frac{N_1}{N_2}=q^{1/2}$, where it equals $L/q^{1/4}$, which is $O(q^{-\delta})$ if $\lambda<1/4$. The bound \eqref{trivialbound12} did not exploit cancellation from the $n_1,n_2,l_1,l_2$ averaging, and indeed this is not evident, because in the limiting case $N_1=q^{3/4},\ N_2=q/N_1=q^{1/4}$, $L_1=L_2=L=q^{1/4}$, one has $$n_1\approx n_2\approx l_1\approx l_2\approx q^{1/4}$$ which is pretty short. Nevertheless Khan and Ngo were able to detect further cancellation from summing over these short variables. The idea, which we have met already, is to group some of these variables to form longer variables. One possibility could be to group together $n_1$, $n_2$ on the one hand and $l_1$, $l_2$ on the other hand, with the idea of applying the methods of \S \ref{secbilinear}. However, the new variables would have size $q^{1/2}$, which is the P\'olya-Vinogradov range, at which point the standard completion method just fails. Instead, one can group $n_1$, $n_2$ and $l_2$ together and leave $l_1$ alone. The variable $r=n_1n_2\ov{l_2}\mods q$ takes essentially $q^{3/4}$ distinct values, but over all of ${\mathbf{F}^\times_q}$, and does not vary along an interval. To counter this defect, one uses the H\"older inequality instead of Cauchy-Schwarz.
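The step "applying Cauchy's inequality twice" in the next paragraph can be sketched as follows: writing $A(r):=\sum_{l\sim L_1}x_l\Kl_2(\ov lr;q)$ and $\nu(r)$ for the weight counting the representations $r\equiv n_1n_2\ov{l_2}\mods q$ (both are defined precisely just below),
$$\Big|\sum_{r}\nu(r)A(r)\Big|\leq\Big(\sum_r|\nu(r)|\Big)^{1/2}\Big(\sum_r|\nu(r)|^2\Big)^{1/4}\Big(\sum_r|A(r)|^4\Big)^{1/4},$$
so that the fourth moment of the amplified Kloosterman sums $A(r)$ becomes the quantity to bound.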
Proceeding as above, we write $$\Sigma(L_1,L_2,N_1,N_2)=\frac{1}{(qL_1L_2N_1N_2)^{1/2}}\frac{N_1}{q^{1/2}}\sumsum_{r\in{\mathbf{F}^\times_q},l_1}x_{l_1}\nu(r)\Kl_2(\ov{l_1}r;q)$$ where $$\nu(r)=\sumsum_\stacksum{l_2,n_1,n_2}{r=n_1n_2\ov{l_2}(q)}\ov{x_{l_2}}\widetilde W(\frac{n_1}{q/N_1})W(\frac{n_2}{N_2}).$$ Under the assumption \begin{equation}\label{moment2hyp} L_2\frac{q}{N_1}N_2<q/100\Longleftrightarrow L_2\frac{N_2}{N_1}<1/100 \end{equation} we have $$\sum_r|\nu(r)|+\sum_r|\nu(r)|^2\ll q^\varepsilon L_2\frac{q}{N_1}N_2.$$ Indeed under \eqref{moment2hyp} one has $$\ov l_2n_1n_2\equiv \ov l'_2n'_1n'_2\mods q \Longleftrightarrow l'_2n_1n_2\equiv l_2n'_1n'_2\mods q \Longleftrightarrow l'_2n_1n_2= l_2n'_1n'_2$$ and the choice of $l'_2,n_1,n_2$ determines $l_2,n'_1,n'_2$ up to $O(q^\varepsilon)$ possibilities. Hence, applying Cauchy's inequality twice, we obtain $$\Sigma(L_1,L_2,N_1,N_2)\ll\frac{q^\varepsilon}{(qL_1L_2N_1N_2)^{1/2}}\frac{N_1}{q^{1/2}}(L_2\frac{q}{N_1}N_2)^{3/4}\left(\sum_{r\in{\mathbf{F}^\times_q}}|\sum_{l\sim L_1}x_l\Kl_2(\ov lr;q)|^4\right)^{1/4}.$$ Now (using that $\Kl_2(n;q)\in\mathbf{R}$) $$\sum_{r\in{\mathbf{F}^\times_q}}|\sum_{l\sim L_1}x_l\Kl_2(\ov lr;q)|^4\ll q^\varepsilon\sum_\uple{l}|\sum_{r\in{\mathbf{F}^\times_q}}\prod_{i=1}^4\Kl_2(\ov{l_i}r;q)|$$ where $\uple{l}=(l_1,l_2,l_3,l_4)\in[L_1,2L_1[^4$. Theorem \ref{cor-concrete}, applied to the Kloosterman sheaf, gives $$\sum_{r\in{\mathbf{F}^\times_q}}\prod_{i=1}^4\Kl_2(\ov{l_i}r;q)\ll q^{1/2}$$ unless there exists a partition $\{1,2,3,4\}=\{i,j\}\sqcup\{k,l\}$ such that $$l_i=l_j,\ l_k=l_l.$$ In this case, we use the trivial bound $$\sum_{r\in{\mathbf{F}^\times_q}}\prod_{i=1}^4\Kl_2(\ov{l_i}r;q)\ll q.$$ Hence $$\sum_\uple{l}|\sum_{r\in{\mathbf{F}^\times_q}}\prod_{i=1}^4\Kl_2(\ov{l_i}r;q)|\ll L_1^2q+L_1^4q^{1/2}$$ and \begin{align}\nonumber \Sigma(L_1,L_2,N_1,N_2)&\ll\frac{q^\varepsilon}{(qL_1L_2N_1N_2)^{1/2}}\frac{N_1}{q^{1/2}}(L_2\frac{q}{N_1}N_2)^{3/4}(L_1^{1/2}q^{1/4}+L_1q^{1/8})\\ &\ll q^{\varepsilon}L(\frac{N_2}{N_1})^{1/2}(Lq\frac{N_2}{N_1})^{-1/4}(L^{-1/2}q^{1/4}+q^{1/8}). \label{KhNgbound3} \end{align} For $L\geq q^{1/4}$ (the range one would like to improve) one obtains under \eqref{moment2hyp} \begin{equation}\label{KhNgbound3final} \Sigma(L_1,L_2,N_1,N_2) \ll q^{\varepsilon}L(\frac{N_2}{N_1})^{1/2}(Lq^{1/2}\frac{N_2}{N_1})^{-1/4}. \end{equation} Suppose now we are in a limiting case for \eqref{trivialbound12}, namely $L^2N_2/N_1=1$. Then \eqref{moment2hyp} holds as long as $L\gg 1$, and \eqref{KhNgbound3final} improves over \eqref{trivialbound12} by a factor $(L/q^{1/2})^{1/4}$, which is $<1$ as long as $L< q^{1/2}$. A more detailed analysis combining \eqref{KNbound1}, \eqref{trivialbound12} and \eqref{KhNgbound3final} shows that \eqref{KhNggoal} holds for any fixed $\lambda<3/10$, and hence leads to Theorem \ref{KhNgthm}. \section{Advanced completion methods: the $q$-van der Corput method} In this section and the next ones, we discuss general methods to evaluate sums of trace functions along intervals of length smaller than the P\'olya-Vinogradov range discussed in \S \ref{Secshort}. \subsection{The $q$-van der Corput method} One of the most basic techniques encountered in analytic number theory to estimate sums of (analytic) exponentials is the {\em van der Corput method} (see \cite[Chap. 8]{IwKo}). The $q$-van der Corput method is an arithmetic variant, due to Heath-Brown, which replaces archimedean analysis with $q$-adic analysis. That method concerns $c$-periodic functions for $c$ a {\em composite number}.
Suppose (to simplify the presentation) that $c=pq$ for two distinct primes $p$ and $q$ and let $$K_c=K_pK_q\colon \mathbf{Z}/c\mathbf{Z}\to \mathbf{C}$$ be some function modulo $c$ which is the product of two trace functions modulo $p$ and $q$ (of conductor bounded by some constant $C$). We consider the sum $$S_V(K_c,N):=\sum_{n}K_c(n)V(\frac nN)=\sum_{n}K_p(n\mods p)K_q(n\mods q)V(\frac nN)$$ where $V\in\mathcal{C}^\infty(]1,2[)$ and $2N<c=pq$. We will explain the proof of the following result. \begin{theorem}[$q$-van der Corput method] Let $c=pq$ be a product of two distinct primes and $K_c=K_p.K_q$ as above; assume that $K_q$ is the trace function associated with a geometrically irreducible sheaf $\mathcal{F}$ which is not geometrically isomorphic to a linear or quadratic phase (i.e.~not of the shape $[P]^*\mathcal{L}_\psi$ for $P$ a polynomial of degree $\leq 2$). Then for $2N<pq$, we have $$S_V(K_c,N)\ll_C N^{1/2}(p+q^{1/2})^{1/2}.$$ \end{theorem} \begin{remark}This bound is non-trivial as long as $$N\geq \mathop{\mathrm{Max}}\limits(p,q^{1/2}),$$ which is a weaker condition than $N\geq (pq)^{1/2}$ as long as $$1<p<q.$$ We have therefore improved over the P\'olya-Vinogradov range; moreover the range of non-triviality is maximal when $p\approx c^{1/3}$ and $q\approx c^{2/3}$. In that case, one obtains \begin{equation}\label{optimalvdC} S_V(K,N)\ll_C N^{1/2}c^{1/6} \end{equation} which is non-trivial as long as $$N\geq c^{1/3}.$$ \end{remark} \proof The proof makes use of the (semi-)invariance of $K_c$ under translations: $$K_c(n+ph)=K_p(n)K_q(n+ph).$$ For $H\leq N/(100p)$ we have $$S_V(K_c,N)=\frac{1}{2H+1}\sum_{|h|\leq H}\sum_{n}K_p(n)K_q(n+ph)V(\frac {n+ph}N) $$ $$= \frac{1}{2H+1}\sum_{|n|\leq 3N}K_p(n)\sum_{|h|\leq H}K_q(n+ph)V(\frac {n+ph}N)$$ $$\ll \frac{1}{2H+1}N^{1/2}\bigl(\sum_{|n|\leq 3N}\bigl|\sum_{|h|\leq H}K_q(n+ph)V(\frac {n+ph}N)\bigr|^2\bigr)^{1/2}$$ $$\ll\frac{N^{1/2}}{H}\bigl(\sumsum_{|h|,|h'|\leq H}\sum_{n}K_q(n+ph)\ov{K_q(n+ph')}W_{p,h,h'}(\frac{n}N)\bigr)^{1/2}$$ where $$W_{p,h,h'}(\frac{n}N)=V(\frac {n+ph}N)\ov{V(\frac {n+ph'}N)}.$$ We split the $h,h'$-sum into its diagonal and non-diagonal contributions $$\sumsum_{|h|,|h'|\leq H}\ldots=\sumsum_\stacksum{|h|,|h'|\leq H}{h=h'}\ldots+\sumsum_\stacksum{|h|,|h'|\leq H}{h\not=h'}\ldots\ .$$ The diagonal sum contributes $O(NH)$ and it remains to consider the correlation sums $$\mathcal{C}(K_q,h,h'):=\sum_{n}K_q(n+ph)\ov{K_q(n+ph')}W_{p,h,h'}(\frac{n}N)$$ for $h\not=h'$. Observe that this is the sum of a trace function of modulus $q$ over a range of length $\approx N$. By comparison, in the initial sum we had a trace function of modulus ${pq}$ over a range of length $\approx N$, so the relative length of the summation range compared to the modulus has increased! By the P\'olya-Vinogradov method, it is sufficient to determine whether the sheaf $$[+ph]^*\mathcal{F}\otimes [+ph']^*D(\mathcal{F})$$ has an Artin-Schreier sheaf among its irreducible components. This is equivalent to whether one has an isomorphism $$[+p(h-h')]^*\mathcal{F}\simeq \mathcal{F}\otimes\mathcal{L}_\psi$$ for some Artin-Schreier sheaf $\mathcal{L}_\psi$. We will answer this question in a slightly more general form: \begin{definition} For $d$ an integer satisfying $1\leq d<q$, a polynomial phase sheaf of degree $d$ is a sheaf of the shape $[P]^*\mathcal{L}_\psi$ for $P$ a polynomial of degree $d$ and $\psi$ a non-trivial additive character.
It is lisse on $\mathbf{A}^1_{{\mathbf{F}_q}}$, ramified at infinity with Swan conductor equal to $d$ and its trace function equals $$x\mapsto \psi(P(x)).$$ \end{definition} We can now invoke the following \begin{proposition}[\cite{Polymath8a}] Let $d$ be an integer satisfying $1\leq d<q$. Suppose that $\mathcal{F}$ is geometrically irreducible, not isomorphic to a polynomial phase of degree $\leq d$ and that $C(\mathcal{F})\leq q^{1/2}$. Then for any $h\in{\mathbf{F}_q}-\{0\}$ and any non-constant polynomial $P$ of degree $\leq d-1$, $$[+h]^*\mathcal{F}\hbox{ and } \mathcal{F}\otimes [P]^*\mathcal{L}_\psi$$ are not geometrically isomorphic. \end{proposition} \proof We will only give the easiest part of it and refer to \cite[Thm. 6.15]{Polymath8a} for the complete argument. Suppose that $\mathcal{F}$ is ramified at some point $x_0\in\mathbf{A}^1(\ov{{\mathbf{F}_q}})$. Since polynomial phases are ramified only at $\infty$, the isomorphism $$[+h]^*\mathcal{F}\simeq \mathcal{F}\otimes [P]^*\mathcal{L}_\psi$$ restricted to the inertia group $I_{x_0}$ implies that $\mathcal{F}$ is ramified at $x_0-h$; iterating, $\mathcal{F}$ is ramified at $x_0-nh$ for any $n\in\mathbf{Z}$, which would imply that $C(\mathcal{F})\geq q$, which is excluded. It remains to deal with the case where $\mathcal{F}$ is ramified only at $\infty$. \qed Under our assumptions the above proposition implies that for $h\not=h'$ $$\mathcal{C}(K_q,h,h')=O(q^{1/2})$$ and that $$S_V(K_c,N)\ll N^{1/2}(\frac{N}{H}+q^{1/2})^{1/2}$$ and we choose $H=N/(100p)$ to conclude the proof. \qed \subsection{Iterating the method} Suppose more generally that $c$ is a squarefree number and that $$K_c=\prod_{q|c}K_q$$ is a product of trace functions associated to sheaves not containing any polynomial phases. One can repeat the above argument after factoring $c$ into a product of squarefree coprime moduli $rs$ and decomposing accordingly $$K_c=K_rK_s.$$ Thus, we have to bound sums of the shape \begin{equation}\label{ssum} \sum_{n}K_s(n+rh)\ov{K_s(n+rh')}W_{r,h,h'}(\frac{n}{N}). \end{equation} This time we need to be a bit more careful and decompose the $h,h'$ sum according to the gcd $(h-h',s)$. After applying the Poisson summation formula (cf.~\eqref{eqpoisson}) we can factor the resulting Fourier transform modulo $s$ into sums over prime moduli $q|s$: $$\widehat{K_s}(y)=\prod_{q|s}\widehat{K_q}(\ov{s_q}y\mods q),\ y\in\mathbf{Z}/s\mathbf{Z},\ s_q=s/q.$$ If $q\mid h-h'$ we use the trivial bound $\widehat{K_q}(\ov{s_q}y\mods q)\ll q^{1/2}$ and if $q\nmid h-h'$ we use the non-trivial bound $\widehat{K_q}(\ov{s_q}y\mods q)\ll 1$. We eventually obtain (see \cite{Polymath8a}) \begin{theorem} Let $C\geq 1$, let $c$ be squarefree and let $K_c\colon \mathbf{Z}/c\mathbf{Z}\rightarrow\mathbf{C}$ be a product of trace functions $K_q$ such that for any prime $q|c$ the underlying sheaf $\mathcal{F}_q$ is of conductor $\leq C$, is geometrically irreducible and is not geometrically isomorphic to any polynomial phase of degree $\leq 2$. Then for any factorisation $c=rs$ into coprime moduli, $$S_V(K_c,N)\ll_{C,\varepsilon} c^{\varepsilon}N^{1/2}(r+s^{1/2})^{1/2}$$ for any $\varepsilon>0$.
\end{theorem} If $s$ is not a prime, we can iterate: factor $s$ into $s=s_1s_2$ and, instead of applying the P\'olya-Vinogradov completion method to the sum \eqref{ssum}, apply the $q$-van der Corput method with the trace functions $$n\mapsto K_q(n+rh)\ov{K_q(n+rh')},\ q|s_1.$$ This leads us to the quadruple correlation sum $$\mathcal{C}(K_q,\uple{\gamma},\alpha)=\frac{1}q\sum_{x}K_q(\gamma_1\cdot x)K_q(\gamma_2\cdot x)\ov{K_q(\gamma'_1\cdot x)K_q(\gamma'_2\cdot x)}\mathrm{e}_q(\alpha x)$$ where the $\gamma_i,\gamma'_j,\ i,j=1,2$ are unipotent matrices $$\gamma_i=\begin{pmatrix}1&h_i\\0&1 \end{pmatrix},\ \gamma'_j=\begin{pmatrix}1&h'_j\\0&1 \end{pmatrix}. $$ In suitable situations, we can then apply Theorem \ref{cor-concrete} from the previous section. An important example is when $$K_c(n)=\Kl_k(n;c)=\frac{1}{c^{(k-1)/2}}\sum_\stacksum{x_1,\ldots,x_k\in(\mathbf{Z}/c\mathbf{Z})^\times}{x_1.\ldots.x_k=n}e\left(\frac{x_1+\ldots+x_k}{c}\right)$$ is a hyper-Kloosterman sum. For any $q|c$, one has $$K_q(y)=\Kl_k(\ov{c_q}^ky;q)\text{ with }\ c_q=c/q $$ and the underlying sheaf is the multiplicatively shifted Kloosterman sheaf $\mathcal{F}_q=[\times \ov{c_q}^k]^*\mathcal{K}\ell_k$. In that case Theorem \ref{cor-concrete} applies and we eventually obtain the bound $$S_V(\Kl_k(\cdot;c),N)\ll_k c^{\varepsilon}N^{1/2}\left(r+(N^{1/2}(s_1+s_2^{1/2}))^{1/2}\right)^{1/2}$$ for any factorisation $c=rs_1s_2$. In particular, if there exists a factorisation $c=rs_1s_2$ such that $$r\approx c^{1/4},\ s_1\approx c^{1/4},\ s_2\approx c^{1/2}$$ we obtain $$S_V(\Kl_k(\cdot;c),N)\ll_k N^{1-\eta}$$ for some $\eta=\eta(\delta)>0$ as long as $$N\geq c^{1/4+\delta}.$$ Iterating once more we see that for any factorisation $c=rs_1s_2s_3$ one has \begin{equation}\label{vdCiterate3} S_V(\Kl_k(\cdot;c),N)\ll_{k,\varepsilon} c^{\varepsilon}N^{1/2}\left(r+(N^{1/2}(s_1+(N^{1/2}(s_2+s_3^{1/2}))^{1/2}))^{1/2}\right)^{1/2} \end{equation} so if there exists a factorisation $c=rs_1s_2s_3$ such that $$r\approx c^{1/5},\ s_1\approx c^{1/5},\ s_2\approx c^{1/5},\ s_3\approx c^{2/5}$$ then $$S_V(\Kl_k(\cdot;c),N)\ll_{k,\varepsilon} N^{1-\eta}$$ for some $\eta=\eta(\delta)>0$ as long as $$N\geq c^{1/5+\delta}.$$ We can continue this way as long as enough factorisations of $c$ are available. Such availability is guaranteed by the notion of friability: \begin{definition} An integer $c\not=0$ is $\Delta$-friable if $$q|c\ (q\hbox{ prime })\Rightarrow q\leq \Delta.$$ \end{definition} Using the reasoning above, Irving \cite{IrvingIMRN} proved the following result for $k=2$ (in a quantitative form): \begin{theorem} For any $L\geq 2$ there exists $l=l(L)\geq 1$ and $\eta=\eta(L)>0$ such that for $c$ a squarefree integer which is $c^{1/l}$-friable and any $k\geq 2$, one has $$S_V(\Kl_k(\cdot;c),N)\ll_{k,V} N^{1-\eta}$$ whenever $N\geq c^{1/L}$. \end{theorem} Therefore one can obtain non-trivial bounds for extremely short sums of hyper-Kloosterman sums as long as their modulus is friable enough.
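To make the role of friability concrete, here is a minimal illustrative sketch (the code and its parameters are ours, not part of the argument) of the greedy construction implicitly used here: a $\Delta$-friable squarefree $c$ admits a divisor within a factor $\Delta$ of any prescribed target $c^\alpha$, which is what produces the factorisations $c=rs$, $c=rs_1s_2$, etc.\ needed above (applying the splitting repeatedly).
\begin{verbatim}
# Sketch (hypothetical helper names): greedily split a squarefree
# friable c into r*s with r close to a target c**alpha.  Every prime
# factor of c is <= Delta, so the greedy divisor r lands in the window
# (target/Delta, target]: whenever a prime p is rejected, r*p > target.

def prime_factors(c: int) -> list[int]:
    """Prime factors of a squarefree integer c, by trial division."""
    ps, p = [], 2
    while p * p <= c:
        if c % p == 0:
            ps.append(p)
            c //= p
        else:
            p += 1
    if c > 1:
        ps.append(c)
    return ps

def greedy_split(c: int, alpha: float) -> tuple[int, int]:
    """Return (r, s) with r*s = c and c**alpha / Delta < r <= c**alpha."""
    target, r = c ** alpha, 1
    for p in prime_factors(c):
        if r * p <= target:
            r *= p
    return r, c // r

# Example: a 13-friable squarefree modulus, split near c**(1/3).
c = 2 * 3 * 5 * 7 * 11 * 13
print(greedy_split(c, 1 / 3))   # -> (30, 1001); 30 is close to c**(1/3)
\end{verbatim}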
In particular for $k=2$ we have seen in Remark \ref{remselberg} that improving on Selberg's $2/3$-exponent for the distribution of the divisor function in arithmetic progressions to large moduli (Theorem \ref{thmd2}) was essentially equivalent to bounding non-trivially sums of the shape $$\sumsum_{n_1,n_2}\Kl_2(an_1n_2;c)V(\frac{n_1}{N^*_1})V(\frac{n_2}{N^*_2})$$ for $$N^*_1N^*_2\approx c^{1/2}.$$ If $N^*_1N^*_2\approx c^{1/2}$ then $\mathop{\mathrm{Max}}\limits(N^*_1,N^*_2)\gg c^{1/4}$ and we can use \eqref{vdCiterate3} to bound non-trivially the above sum provided that $c$ is friable enough. This leads to the following theorem (compare with Theorem \ref{thmd2} for $c$ a prime): \begin{theorem}[\cite{IrvingIMRN}] There exist $L\geq 4$ and $\eta>0$ such that for any $c\geq 1$ which is squarefree and $c^{1/L}$-friable and any $a$ coprime with $c$, one has for $c\leq X^{2/3+\eta}$ and any $A\geq 0$ $$E(d_2;c,a)\ll_A \frac{X}{c}(\log X)^{-A}.$$ \end{theorem} See \cite{IrvingIMRN2} and \cite{WuPing} for further applications of these ideas. \section{Around Zhang's theorem on bounded gaps between primes } Some of the arguments of the previous sections can be found in Yitang Zhang's spectacular proof of the existence of bounded gaps between the primes: \begin{theorem}[\cite{YZhang}]\label{thmZhang} Let $(p_n)_{n\geq 1}$ be the sequence of primes in increasing order ($p_1=2,p_2=3,p_3=5,\ldots$). There exists an absolute constant $C$ such that $$p_{n+1}-p_n\leq C$$ for infinitely many $n$. \end{theorem} Besides Zhang's original paper, we refer to \cite{Gran,KowBBK1} for a detailed description of Zhang's proof, the methods involved and the historical background. Let us however mention a few important facts: \begin{itemize} \item The question of the existence of small gaps between primes has occupied analytic number theorists for a very long time and has been the motivation for the invention of many techniques, in particular the {\em sieve method} to detect primes with additional constraints. A conceptual breakthrough occurred with the work of Goldston, Pintz and Y\i ld\i r\i m \cite{GYP} who proved the weaker result $$\liminf_{n}\frac{p_{n+1}-p_n}{\log p_n}=0$$ and who on this occasion invented a technique which is also key to Zhang's approach (see Soundararajan's account of their work \cite{SoundGYP}). \item Zhang's theorem can be seen as an approximation to the twin prime conjecture: $$\hbox{\em{There exist infinitely many primes $p$ such that $p+2$ is prime}}.$$ Indeed, Zhang's theorem with $C=2$ is equivalent to the twin prime conjecture. \item A value for the constant $C$ can be given explicitly: Zhang himself gave $$C=7\cdot 10^7$$ and mentioned that this could certainly be improved. Improving the value of this constant was the objective of the Polymath8 project: following and optimizing Zhang's method in several aspects (some to be explained below), the value was reduced to $$C=4680.$$ However, Maynard \cite{Maynard} independently made another conceptual breakthrough, simplifying the whole proof, making it possible to obtain stronger results, and improving the constant to $$C=600.$$ Eventually the Polymath8 project joined with Maynard; optimizing his argument, the value $$C=246$$ was reached (cf.~\cite{Polymath8b}). A side-effect of Maynard's approach is that what we are going to describe now plays no role anymore in this specific application. Nevertheless, it addresses another important question in analytic number theory.
\end{itemize} \subsection{The Bombieri-Vinogradov theorem and beyond} The breakthrough of Goldston, Pintz and Y\i ld\i r\i m that is at the origin of Zhang's work builds on the use of sieve methods to detect the existence of infinitely many pairs of primes at distance $\leq C$ from one another. The fuel to be put into this sieve machine consists of results concerning the distribution of primes in arithmetic progressions to moduli that are large with respect to the size of the primes being sought. In this respect the Bombieri-Vinogradov theorem already discussed in \S \ref{Secternary} is a powerful substitute for GRH: \begin{theorem}[Bombieri-Vinogradov] For any $A>0$ there is $B=B(A)>0$ such that for $x\geq 2$ $$\sum_{q\leq x^{1/2}/\log^B x}\mathop{\mathrm{Max}}\limits_{(a,q)=1}\left|\psi(x;q,a)-\frac{\psi(x;q)}{\varphi(q)}\right|\ll \frac{x}{\log^Ax}.$$ \end{theorem} For the question of the existence of bounded gaps between primes, the exponent $1/2$ appearing in the constraint $q\leq x^{1/2}/\log^B x$ turns out to be crucial. In their seminal work \cite{GYP}, Goldston-Pintz-Y\i ld\i r\i m had pointed out that the Bombieri-Vinogradov theorem with the exponent $1/2$ replaced by any strictly larger constant would be sufficient to imply Theorem \ref{thmZhang}. The possibility of going beyond Bombieri-Vinogradov is not unexpected: the Elliott-Halberstam conjecture predicts that any fixed exponent $<1$ could replace $1/2$. That this conjecture is not wishful thinking comes from the work of Fouvry, Iwaniec and Bombieri-Friedlander-Iwaniec from the 80's \cite{FIActaAr, Fou,BFI} who proved versions of the Bombieri-Vinogradov theorem with exponents $>1/2$ but for "fixed" congruence classes (for instance with the sum involving the difference $|\psi(x;q,1)-\frac{\psi(x;q)}{\varphi(q)}|$ instead of $\mathop{\mathrm{Max}}\limits_{(a,q)=1}|\psi(x;q,a)-\frac{\psi(x;q)}{\varphi(q)}|$). Zhang's groundbreaking insight has been to nail down a beyond-Bombieri-Vinogradov type theorem that could be established unconditionally and would be sufficient to establish the existence of bounded gaps between primes. The following theorem is a variant of Zhang's theorem (\cite[Thm 1.1]{Polymath8a}). Let us recall that an integer $q\geq 1$ is $\Delta$-friable if any prime $p$ dividing $q$ is $\leq \Delta$. \begin{theorem}\label{thmzhang} Let $\uple{a}=(a_p)_{p\in\mathcal{P}}$ be a sequence of integers indexed by the primes such that $a_p$ is coprime with $p$ for all $p$. For any squarefree integer $q$, let $a_q\mods q$ be the unique congruence class modulo $q$ such that $$\forall p|q,\ a_q\equiv a_p\mods p;$$ in particular $a_q\in(\mathbf{Z}/ q\mathbf{Z})^\times$. There exist absolute constants $\theta>1/2$ and $\delta>0$, independent of $\uple{a}$, such that for any $A>0$, $x>2$ one has $$\sum_\stacksum{q\leq x^{\theta}\ \hbox{squarefree}}{q\ x^\delta-\hbox{friable}}|\psi(x;q,a_q)-\frac{\psi(x;q)}{\varphi(q)}|\ll \frac{x}{\log^Ax}.$$ Here the implicit constant depends only on $A$, but not on $\uple{a}$. \end{theorem} \begin{remark} Zhang essentially proved this theorem for $\theta=1/2+1/585$ and in an effort to improve Zhang's constant, the Polymath8 project improved $1/585$ to $7/301$. \end{remark} We will now describe some of the principles of the proof of this theorem, focusing on the points where algebraic exponential sums occur. We refer to the introduction of \cite{Polymath8a} and to E. Kowalski's account in the Bourbaki seminar \cite{KowBBK1}.
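For concreteness, the glueing of the residues $a_p$ into the class $a_q$ in Theorem \ref{thmzhang} is a direct application of the Chinese remainder theorem; here is a minimal sketch (the helper name is ours):
\begin{verbatim}
# Minimal CRT sketch: given a_p mod p for each prime p | q
# (q squarefree), recover the unique a_q mod q with
# a_q = a_p (mod p) for every p | q.

def glue_residues(residues: dict[int, int]) -> tuple[int, int]:
    """residues maps each prime p | q to a_p; returns (a_q, q)."""
    q = 1
    for p in residues:
        q *= p
    a_q = 0
    for p, a_p in residues.items():
        m = q // p                      # m is invertible mod p
        a_q = (a_q + a_p * m * pow(m, -1, p)) % q
    return a_q, q

# Example: a_2 = 1, a_3 = 2, a_5 = 3 gives a_30 = 23.
print(glue_residues({2: 1, 3: 2, 5: 3}))
\end{verbatim}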
Let us write $c(q)$ for $\mu^2(q)$ times the sign of the difference $\psi(x;q,a_q)-\frac{\psi(x;q)}{\varphi(q)}$. The above sum equals $$\sum_\stacksum{q\leq x^{\theta}}{q\ x^\delta-\hbox{friable}}c(q)\sum_{n\leq x}\Lambda(n)\Delta_{a_q}(n)$$ where $$\Delta_{a_q}(n):=\delta_{n\equiv a_q\mods q}-\frac{\delta_{(n,q)=1}}{\varphi(q)}.$$ As is usual when counting prime numbers, the next step is to decompose the von Mangoldt function $\Lambda(n)$ into a sum of convolution of arithmetic functions (for instance by using Heath-Brown's identity Lemma \ref{lemHB} as in \S \ref{Secprimes}): we essentially arrive at the problem of bounding $(\log x)^{O_J(1)}$ model sums of the following shape (for $j\leq J$, where $J$ is a fixed, large integer) $$\Sigma(\mathbf{M};\uple{a},Q):=\sum_\stacksum{q\sim Q}{q\ x^\delta-\hbox{friable}}c(q)\sumsum_{m_1,\ldots, m_{2j}}\mu(m_1)\ldots\mu(m_j) V_{1} \Bigl(\frac{m_{1}}{M_{1}}\Bigr)\ldots V_{2j} \Bigl(\frac{m_{2j}}{M_{2j}}\Bigr)\Delta_{a_q}(m_1\ldots m_{2j}) $$ where $V_i,\ i=1,\ldots, 2j$ are smooth functions compactly supported in $]1,2[$ and $\mathbf{M}=(M_1,\ldots,M_{2j})$ is a tuple satisfying $$Q\leq x^{\theta},\ M_i=:x^{\mu_i},\ \forall i\leq j,\ \mu_i\leq 1/J,\ \sum_{i\leq 2j}{\mu_i}=1+o(1).$$ Our target is the bound \begin{equation}\label{zhangtarget} \Sigma(\mathbf{M};\uple{a},Q) \overset{?}{\ll} \frac{x}{\log^A x}. \end{equation} The most important case is when $$Q= x^\theta=x^{1/2+\varpi}$$ for some fixed sufficiently small $\varpi>0$. The variables with index $j+1\leq i\leq 2j$ are called {\em smooth} because they are weighted by smooth functions and this makes it possible to use the Poisson summation formula on them to analyze the congruence condition mod $q$. This is going to be efficient if the range $M_i$ is sufficiently large relative to $q\sim Q$. The variables with indices $1\leq i\leq j$ are weighted by the M\"obius function but (at least as long as some strong form of the Generalized Riemann Hypothesis is not available) we cannot exploit this information and we will treat the M\"obius functions as arbitrary bounded functions. The tradeoff to non-smoothness is that the range of these variables is rather short, $M_i\leq x^{1/J}$, especially if $J$ is chosen large. As we did before we will aggregate some of the variables $m_i,\ i=1,\ldots,2j$ so as to form two new variables whose ranges are located adequately (similarly to what we did in \S \ref{Secprimes}) and will use different methods to bound the sums depending on the size and the type of these new variables. More precisely, we define $$\alpha_i(m)=\begin{cases}\mu(m)V_{i} \Bigl(\frac{m}{M_{i}}\Bigr)&\ 1\leq i\leq j\\ V_{i}\Bigl(\frac{m}{M_{i}}\Bigr)&\ j+1\leq i\leq 2j . \end{cases} $$ Given some partition of the set of $m$-indices $$\{1,\ldots,2j\}=\uple{I}\sqcup\uple{J}$$ let $$M=\prod_{i\in\uple{I}}M_i,\ N=\prod_{i\in\uple{J}}M_i$$ and $$\mu_\uple{I}:=\sum_{i\in\uple{I}}\mu_i,\ \mu_\uple{J}:=\sum_{i\in\uple{J}}\mu_i.$$ We have $$ \mu_\uple{I}+\mu_\uple{J}=1+o(1),\ M=x^{\mu_{\uple{I}}},\ N=x^{\mu_{\uple{J}}}.$$ In the sequel we will always make the convention that $N\leq M$ or equivalently $\mu_\uple{I}\geq\mu_\uple{J}$.
Finally we define the Dirichlet convolution functions $$\alpha(m):=\star_{i\in\uple{I}}\alpha_i(m),\ \beta(n):=\star_{i\in\uple{J}}\alpha_i(n).$$ We are reduced to bounding sums of the shape \begin{equation}\label{partitionsum} \sum_\stacksum{q\sim Q}{q\ x^\delta-\hbox{friable}}c(q)\sumsum_\stacksum{m\sim M}{n\sim N}\alpha(m)\beta(n)\Delta_{a_{q}}(mn) \overset{?}{\ll} \frac{x}{\log^A x}. \end{equation} Observe that the functions $\alpha,\beta$ are essentially bounded $$\forall\varepsilon>0,\ \alpha(m),\beta(n)\ll_\varepsilon x^\varepsilon$$ so we need only to improve slightly over the trivial bound. \subsection{Splitting into types} The sums \eqref{partitionsum} will be subdivided into three different types, and their treatment will depend on the type the sum belongs to. This subdivision follows from the following simple combinatorial lemma (cf.~\cite[Lem. 3.1]{Polymath8a}): \begin{lemma} Let $1/10<\sigma<1/2$ and let $\mu_i,\ i=1,\ldots,2j$ be some non-negative real numbers such that $$\sum_{i=1}^{2j}\mu_i=1.$$ One of the following holds: \begin{itemize} \item Type 0: there exists $i$ such that $\mu_i\geq 1/2+\sigma$. \item Type II: there exists a partition $$\{1,\ldots,2j\}=\uple{I}\sqcup\uple{J}$$ such that $$1/2-\sigma\leq \sum_{i\in\uple{J}}\mu_i\leq \sum_{i\in\uple{I}}\mu_i<1/2+\sigma.$$ \item Type III: there exist distinct $i_1,i_2,i_3$ such that $$2\sigma\leq \mu_{i_1}\leq \mu_{i_2}\leq \mu_{i_3}\leq 1/2-\sigma\hbox{ and }\mu_{i_1}+\mu_{i_2}\geq 1/2+\sigma.$$ \end{itemize} \end{lemma} \begin{remark}\label{remTypeIII} If $\sigma>1/6$ the Type III situation never occurs since $2\sigma>1/2-\sigma$. \end{remark} Given $\sigma$ such that $$1/10<\sigma<1/2$$ we assume that $J$ is chosen large enough so that \begin{equation}\label{Jcond} 1/J\leq \min(1/2-\sigma,\sigma). \end{equation} We say that a sum \eqref{partitionsum} is of \begin{itemize} \item Type 0, if there exists some $i_0$ such that $\mu_{i_0}\geq 1/2+\sigma$. We choose $$\uple{I}=\{i_0\}\hbox{ and $\uple{J}$ the complement.}$$ Since for any $i\leq j$, one has $\mu_i\leq 1/J<1/2+\sigma$, necessarily $i_0\geq j+1$ corresponds to a smooth variable; the corresponding sum therefore equals \begin{equation} \label{type0}\sum_\stacksum{q\sim Q}{x^\delta-\hbox{friable}}c(q)\sumsum_{m\geq 1, n\sim N}V(\frac{m}{M_{i_0}})\beta(n)\Delta_{a_{q}}(mn). \end{equation} \item Type I/II if one can partition the set of indices $$\{1,\ldots,2j\}=\uple{I}\sqcup\uple{J}$$ in a way that the corresponding ranges $$M=\prod_{i\in\uple{I}}M_i=x^{\mu_\uple{I}}\geq N=\prod_{i\in\uple{J}}M_{i}=x^{\mu_\uple{J}}$$ satisfy \begin{equation}\label{NcondtypeI} {1/2-\sigma}\leq \mu_\uple{J}=\sum_{i\in\uple{J}}\mu_{i}\leq {1/2} \end{equation} \item Type III if we are in neither the Type 0 nor the Type I/II situation: there exist distinct indices $i_1,i_2,i_3$ such that $$2\sigma\leq \mu_{i_1}\leq \mu_{i_2}\leq \mu_{i_3}\leq 1/2-\sigma\hbox{ and }\mu_{i_1}+\mu_{i_2}\geq 1/2+\sigma.$$ We choose $$\uple{I}=\{i_1,i_2,i_3\}\hbox{ and $\uple{J}$ to be the complement.}$$ Again, since $1/J<2\sigma$ by \eqref{Jcond}, the indices $i_1,i_2,i_3$ are associated to smooth variables and the Type III sums are of the shape $$\sum_\stacksum{q\sim Q}{x^\delta-\hbox{friable}}c(q)\sumsum_\stacksum{m_1,m_2,m_3}{n\sim N}V(\frac{m_1}{M_{i_1}}) V(\frac{m_2}{M_{i_2}})V(\frac{m_3}{M_{i_3}})\beta(n)\Delta_{a_{q}}(m_1m_2m_3n).$$ \end{itemize} \begin{remark}In the paper \cite{Polymath8a} the "Type II" sums introduced here were split into two further types that were called "Type I" and "Type II".
These are the sums for which the $N$ variable satisfies \begin{gather*} \hbox{ Type I: }x^{1/2-\sigma}\leq N< x^{1/2-\varpi-c} \\ \hbox{ Type II: }x^{1/2-\varpi-c}\leq N\leq x^{1/2} \end{gather*} for some extra parameter $c$ satisfying $$1/2-\sigma<1/2-\varpi-c<1/2.$$ This distinction was necessary for optimisation purposes and especially to achieve the exponent $1/2+7/301$ in Theorem \ref{thmzhang}. \end{remark} Zhang's theorem now essentially follows from \begin{theorem} There exist $\varpi,\sigma>0$ with $1/10<\sigma<1/2$ such that the bound \eqref{partitionsum} holds for the Type 0, II and III sums. \end{theorem} For the rest of this section we will succinctly describe how each type of sum is handled. The case of Type 0 sums \eqref{type0} is immediate: one applies the Poisson summation formula to the $m$ variable to decompose the congruence $mn\equiv a_q\mods q$. The zero frequency contribution is cancelled up to an error term by the second term of $\Delta_{a_{q}}(mn)$ while the non-zero frequencies contribute a negligible error term as long as the range of the $m$ variable is larger than the modulus, i.e.~ $$1/2+\sigma>1/2+\varpi$$ which can be assumed. \subsection{Treatment of type II sums} \subsubsection{The art of applying Cauchy-Schwarz} The Type II sums are more complicated to deal with because we have essentially no control on the shape of the coefficients $\alpha(m),\beta(n)$ (except that they are essentially bounded). The basic principle is to consider the largest variable $m\sim M$, to make it smooth using the Cauchy-Schwarz inequality and then resolve the congruence $$m\equiv \ov n a_q\mods q$$ using the Poisson summation formula. This is the essence of the {\em dispersion method} of Linnik. When implementing this strategy one has to decide which variables to put "inside" the Cauchy-Schwarz inequality and which to leave "outside". To be more specific, suppose we need to bound a general trilinear sum $$\sumsum_{m\sim M,n\sim N}\sum_{q\sim Q}\alpha_m\beta_n\gamma_q K(m,n,q)$$ and wish to smooth the $m$ variable using Cauchy-Schwarz. There are two possibilities: either $$ \sumsum_{m\sim M,n\sim N}\sum_{q\sim Q}\alpha_m\beta_n\gamma_q K(m,n,q)\ll \|\alpha\|_2\|\gamma\|_2\biggl(\sumsum_{m\sim M,q\sim Q}|\sum_{n\sim N}\beta_n K(m,n,q)|^2\biggr)^{1/2}$$ or $$ \sumsum_{m\sim M,n\sim N}\sum_{q\sim Q}\alpha_m\beta_n\gamma_q K(m,n,q)\ll \|\alpha\|_2\biggl(\sum_{m\sim M}|\sumsum_{n\sim N,q\sim Q}\beta_n\gamma_q K(m,n,q)|^2\biggr)^{1/2} $$ In the first case the inner sum of the second factor equals $$\sumsum_{n_1,n_2\sim N}\beta_{n_1}\ov{\beta_{n_2}}\sumsum_{m\sim M,q\sim Q} K(m,n_1,q)\ov{K(m,n_2,q)} $$ and in the second case $$\sumsum_{n_1,n_2\sim N}\sumsum_{q_1,q_2\sim Q}\beta_{n_1}\gamma_{q_1}\ov{\beta_{n_2}\gamma_{q_2}}\sum_{m\sim M} K(m,n_1,q_1)\ov{K(m,n_2,q_2)}.$$ In either case, one expects to be able to detect cancellation from the $m$-sum, at least when the other variables $(n_1,n_2)$ or $(n_1,n_2,q_1,q_2)$ are not located on the diagonal (i.e.~$n_1=n_2$ or $n_1=n_2,\ q_1=q_2$). If the other variables are on the diagonal, no cancellation is possible but the diagonal is small compared to the space of variables. We are faced with the following trade-off: \begin{itemize} \item For the first possibility, the $m$-sum is simpler (it involves three parameters $n_1,n_2,q$) but the ratio "size of the diagonal"$/$"size of the set of parameters" is $N/N^2=N^{-1}$.
\item For the second possibility, the $m$-sum is more complicated as it involves more auxiliary parameters $n_1,n_2,q_1,q_2$ but the ratio "size of the diagonal"$/$"size of the set of parameters" $NQ/N^2Q^2=1/NQ$ is smaller (hence more saving can be obtained from the diagonal part). \end{itemize} \subsubsection{The Type II sums} We illustrate this discussion in the case of Type II sums. If we apply Cauchy-Schwarz with the $q$ variable outside, the diagonal $n_1=n_2$ would not provide enough saving. If, on the other hand, we apply Cauchy-Schwarz with $q$ inside, then the diagonal is smaller but we have to analyze the congruence $$mn_1\equiv a\mods{q_1},\ mn_2\equiv a\mods{q_2}$$ which is a congruence modulo $[q_1,q_2]$. Assuming we are in the generic case of $q_1,q_2$ coprime, the resulting modulus is $q_1q_2\sim Q^2=x^{1+2\varpi}$ while $m\sim M\leq x^{1/2+\sigma}$, which is too small for the Poisson formula to be efficient. There is fortunately a middle ground: we can use the extra flexibility (due to Zhang's wonderful insight) that our problem involves {\em friable} moduli: by the greedy algorithm (as sketched earlier), one can factor $q\sim Q$ into a product $q=rs$ where $r$ and $s\sim Q/r$ vary over ranges that we can essentially choose as we wish (up to a small indeterminacy of $x^\delta$ for $\delta$ small). In other words, we are reduced to bounding sums of the shape $$\Sigma(M,N;\uple{a},R,S)=\sumsum_\stacksum{r\sim R,\ s\sim S}{rs\ x^\delta-\hbox{friable}}c(rs)\sumsum_\stacksum{m\sim M}{n\sim N}\alpha(m)\beta(n)\Delta_{a_{rs}}(mn)$$ for any factorisation $RS=Q$ that fits with our needs. Now, when applying Cauchy-Schwarz, we have the extra flexibility of having the $r$ variable "out" and the $s$ variable "in". We do this and get \begin{multline*} \sumsum_{r\sim R,s\sim S}c(rs)\sumsum_\stacksum{m\sim M}{n\sim N}\alpha(m)\beta(n)\Delta_{a_{rs}}(mn)=\sum_{r\sim R}\sum_{m\sim M}\alpha(m)\sum_sc(rs)\sum_{n\sim N}\beta(n)\Delta_{a_{rs}}(mn)\\ \ll_\varepsilon R^{1/2}M^{1/2+\varepsilon}\biggl(\sum_r\sumsum_{s_1,s_2,n_1,n_2}c(rs_1)\ov{c(rs_2)}\beta(n_1)\ov{\beta(n_2)}\sum_{m}V(\frac{m}M)\Delta_{a_{rs_1}}(mn_1)\Delta_{a_{rs_2}}(mn_2)\biggr)^{1/2} \end{multline*} for $V$ a smooth function compactly supported in $[M/4,4M]$. We choose $R$ of the shape $$R=Nx^{-\varepsilon}\leq Mx^{-\varepsilon}$$ for $\varepsilon>0$ but small. Expanding the square, we obtain a sum involving four terms. The most important one comes from the product \begin{equation}\label{Deltaproduct} \Delta_{a_{rs_1}}(mn_1)\Delta_{a_{rs_2}}(mn_2)=(\delta_{mn_1\equiv a_{rs_1}\mods{rs_1}}-\frac{\delta_{(n,rs_1)=1}}{\varphi(rs_1)})(\delta_{mn_2\equiv a_{rs_2}\mods {rs_2}}-\frac{\delta_{(n,rs_2)=1}}{\varphi(rs_2)}) . \end{equation} We will concentrate on the contribution of this term from now on. The generic and main case is when $(s_1,s_2)=1$, so that $m$ satisfies a congruence modulo $rs_1s_2\sim RS^2=Mx^{2\varpi+\varepsilon}$ (indeed $RS^2=Q^2/R=x^{1+2\varpi}/(Nx^{-\varepsilon})=Mx^{2\varpi+\varepsilon}$ since $MN=x^{1+o(1)}$), which is not much larger than $M$ if $\varpi$ is small. Observe that $$mn_i\equiv a_{rs_i}\mods{rs_i},\ i=1,2\Longrightarrow \ n_1\equiv n_2\mods r.$$ We can therefore write $n_1=n,\ n_2=n+rl$ with $|l|\ll N/R=x^{\varepsilon}$.
By the Poisson summation formula, we have $$\sum_{m}V(\frac{m}M)\delta_{m\equiv b\mods{rs_1s_2}}=\frac{M}{rs_1s_2}\widehat V(0)+\frac{M}{rs_1s_2}\sum_{h\not=0}\widehat V(\frac{h}{rs_1s_2/M})e\left(\frac{hb}{rs_1s_2}\right)$$ where $b=b(n,l)\mods {rs_1s_2}$ is such that $$b\equiv a_{rs_1s_2}\ov n\mods r,\ b\equiv a_{rs_1s_2}\ov n\mods{s_1}, b\equiv a_{rs_1s_2}\ov{n+lr}\mods{s_2}.$$ The $h=0$ contribution provides a main term which is cancelled up to an admissible error term by the main contributions coming from the other summands of \eqref{Deltaproduct}. The contribution of the frequencies $h\not=0$ will turn out to be error terms. We have to show that $$\sum_r\sumsum_{s_1,s_2,n,l}c(rs_1)\ov{c(rs_2)}\beta(n)\ov{\beta(n+rl)}\frac{M}{rs_1s_2}\sum_{h\not=0}\widehat V(\frac{h}{rs_1s_2/M})e\left(\frac{hb}{rs_1s_2}\right){\ll} \frac{MN^2}Rx^{-\eta}=x^{1-\eta+\varepsilon}$$ for some fixed $\eta>0$. The length of the $h$ sum is essentially $$H=RS^2/M=Q^2N/(xR)=x^{2\varpi+\varepsilon}$$ which is small (if $\varpi$ and $\varepsilon$ are small). We therefore essentially need to prove that \begin{multline}\label{TypeIIinter}\frac{1}{H}\sum_{r\sim R}\sum_{l\ll N/R}\sum_{n}\beta(n)\ov{\beta(n+lr)}\sum_{0\not=h\ll H}\left|\sum_{s_1,s_2}c(rs_1)\ov{c(rs_2)}e\left(h\frac{a_{rs_1s_2}\ov n}{rs_1}+h\frac{a_{rs_1s_2}\ov{n+lr}}{rs_2}\right)\right|\\ {\ll}x^{1-\eta+\varepsilon}. \end{multline} We can now exhibit cancellation in the $n$-sum by smoothing out the $n$ variable using the Cauchy-Schwarz inequality for any fixed $r,l$: letting the $h$ variable "in" we obtain exponential sums of the shape $$\sum_{n\sim N}e\left(h\frac{a_{rs_1s_2}\ov n}{rs_1}-h'\frac{a_{rs'_1s'_2}\ov n}{rs'_1}+h\frac{a_{rs_1s_2}\ov{n+lr}}{rs_2}- h'\frac{a_{rs'_1s'_2}\ov{n+lr}}{rs'_2}\right).$$ The generic case is when $h-h',s_1,s_2,s'_1,s'_2$ are all coprime. In that case the above exponential sum has length $$N\in[x^{1/2-\sigma},x^{1/2}]$$ and the moduli involved are of size $$RS^4=Q^4/R^3=x^{O(\varepsilon)}Q^4/N^3=[x^{1/2+4\varpi+O(\varepsilon)},x^{1/2+4\varpi+3\sigma+O(\varepsilon)}].$$ Therefore if $\sigma,\varpi,\varepsilon$ are small, the length $N$ is not much smaller than the modulus so we could apply the completion method to improve over the trivial bound $O(N)$ for the $n$-sum. If we apply the P\'olya-Vinogradov method, the trivial bound is replaced by $O((RS^4)^{1/2+o(1)})$ and we find that the left-hand side of \eqref{TypeIIinter} is bounded by $$\frac{R}{H}\frac{N}RN^{1/2}(H^2S^4(RS^4)^{1/2+o(1)})^{1/2}=x^{O(\varepsilon)+o(1)}N^{3/2}S^3R^{1/4}=x^{\frac{7}8+3\varpi+\frac{5}4\sigma+O(\varepsilon)+o(1)}$$ which is $\ll x^{1-\eta}$ for some $\eta>0$ whenever $\sigma<1/10$ and $\varpi$ and $\varepsilon$ are small enough. Instead of using the P\'olya-Vinogradov bound, we could take advantage of the fact that the modulus $rs_1s'_1s_2s'_2$ is $x^\delta$-friable (again we can take $\delta>0$ as small as we need) and apply the $q$-van der Corput method from the previous section.
Factoring $rs_1s'_1s_2s'_2$ into a product $r's'$ such that $r'\sim (rs_1s'_1s_2s'_2)^{1/3+O(\delta)}$, $s'\sim (rs_1s'_1s_2s'_2)^{2/3+O(\delta)}$, a suitable variant of \eqref{optimalvdC} bounds the $n$-sum by $O(N^{1/2}(RS^4)^{1/6+O(\delta)+o(1)})$ and the left-hand side of \eqref{TypeIIinter} is bounded by $$\frac{R}H\frac{N}RN^{\frac{1}2}(H^2S^4N^{1/2}(RS^4)^{1/6})^{\frac{1}2+o(1)+O(\delta)}=x^{O(\varepsilon+\delta)+o(1)}N^{7/4}S^{7/3}R^{1/12}=x^{\frac{11}{12}+\frac73\varpi+\frac{1}2\sigma+O(\varepsilon+\delta)+o(1)}$$ which is $\ll x^{1-\eta}$ for some $\eta>0$ whenever $\sigma<1/6$ and $\varpi$ and $\varepsilon$ are small enough. \subsection{Treatment of type III sums} Our objective for the Type III sums is the following bound: for some $\eta>0$, we have \begin{equation}\label{typeIIIgoal} \sum_\stacksum{q\sim Q}{x^\delta-\hbox{friable}}c(q)\sum_{n\sim N}\beta(n)\sum_{m}\tau_{3,\mathbf{M}}(m)\Delta_{a_{q}}(mn){\ll}x^{1-\eta}, \end{equation} where $\mathbf{M}=(M_{i_1},M_{i_2},M_{i_3})$ and $$\tau_{3,\mathbf{M}}(m):=\sum_{m_1m_2m_3=m}V(\frac{m_1}{M_{i_1}}) V(\frac{m_2}{M_{i_2}})V(\frac{m_3}{M_{i_3}})$$ and $M_{i_1},M_{i_2},M_{i_3}$ satisfy $$M=M_{i_1}M_{i_2}M_{i_3}\geq x^{1/2+3\sigma}.$$ The function $$m\mapsto \tau_{3,\mathbf{M}}(m)$$ is basically a smoothed version of the ternary divisor function $m\mapsto \tau_3(m)$ that we have discussed in \S \ref{Secternary}. In fact, while describing the proof of Theorem \ref{thmd3}, we have shown that for $M=x$, and for $q$ a prime satisfying $$q\sim x^{1/2+\varpi},\ \varpi=1/47$$ one has $$ \sum_{m}\tau_{3,\mathbf{M}}(m)\Delta_{a_{q}}(m)\ll \frac{x^{1-\eta}}q$$ for some $\eta>0$. We therefore have the required bound, but for individual moduli rather than on average. As we have observed when discussing Type II sums, the parameter $\sigma$ can be taken as close to $1/6$ as we wish; in particular $M\in [x^{1+3(\sigma-\frac16)},x]$ can be made as close as we wish to $x$ and $N\in [1,x^{3(\frac16-\sigma)}]$ as close as we wish to $1$ (in the logarithmic scale). In particular, this establishes \eqref{typeIIIgoal} for prime moduli $q\sim Q$ for some value of $\sigma$ (close enough to $1/6$), some value of $\varpi$ (close enough to $0$) and some $\eta>0$. The case of $x^\delta$-friable moduli uses similar methods and (besides some elementary technical issues) is maybe simpler than in the prime modulus case because of the extra flexibility provided by the friable moduli. \begin{remark} By a more elaborate treatment, involving different uses of the Cauchy-Schwarz inequality and iterations of the $q$-van der Corput method, it is possible to bound successfully all the Type II sums for some explicit parameter $\sigma>1/6$. As pointed out in Remark \ref{remTypeIII}, this makes the section devoted to Type III sums (and in particular the theory of hyper-Kloosterman sums $\Kl_3(x;q)$) unnecessary. The interest of this remark comes from the fact that the trace functions occurring in the treatment of the sums of Type II are exclusively algebraic exponentials: $$x\mapsto \mathrm{e}_q({f(x)}),\ \hbox{for }f(X)\in{\mathbf{F}_q}(X).$$ For such trace functions, Corollary \ref{delignecor} "only" uses Weil's resolution of the Riemann Hypothesis for curves over finite fields \cite{Weil0} and not the full proof of the Weil conjectures by Deligne \cite{WeilII}.
\end{remark} \section{Advanced completion methods: the $+ab$ shift} In this last section, we describe another method that makes it possible to break the P\'olya-Vinogradov barrier for prime moduli. This method has its origins in the celebrated work of Burgess on short sums of Dirichlet characters \cite{Bur}. \subsection{Burgess's bound}\label{Bursec} Let $q$ be a prime and let $\chi\colon {\mathbf{F}^\times_q}\rightarrow\mathbf{C}^\times$ be a non-trivial multiplicative character. Consider the sum $$S_V(\chi,N):=\sum_{n}\chi(n)V(\frac nN)$$ where $V\in\mathcal{C}^\infty(]1,2[)$. \begin{theorem}[Burgess]\label{Burgessbound} For any $N\geq 1$ and $l\geq 1$ such that \begin{equation}\label{Burgesscond} q^{1/2l}\leq N< \frac12q^{1/2+1/4l} \end{equation} we have $$S_V(\chi,N)\ll_{V,l}q^{o(1)}N(N/q^{1/4+1/4l})^{-1/l}.$$ \end{theorem} \begin{remark} Observe that this bound is non-trivial (sharper than $S_V(\chi,N)\ll N$) whenever $$q^{1/4+1/4l+o(1)}\leq N< \frac12q^{1/2+1/4l}.$$ Moreover, for $N\geq \frac12q^{1/2+1/4l}$, the P\'olya-Vinogradov bound $S_V(\chi,N)\ll q^{1/2}$ is non-trivial; therefore, taking $l$ large enough, we see that Theorem \ref{Burgessbound} yields a non-trivial bound for $S_V(\chi,N)$ as long as $$N\geq q^{1/4+\delta}$$ for some fixed $\delta>0$. \end{remark} \proof Burgess's argument exploits two features in a critical way: the first one is that an interval is "essentially" invariant under sufficiently small additive translations and the second is the multiplicativity of the Dirichlet character. Let $A,B\geq 1$ be parameters such that $AB\leq N/2$; we will also assume that $2B<q$. We have $$S_V(\chi,N)=\frac{1}{AB}\sum_{|n|\leq 2N}\sumsum_{a\sim A,b\sim B}\chi(n+ab)V(\frac{n+ab}N).$$ The next step is to invoke the Fourier inversion formula to separate the variables $n$ and $ab$: one has $$V(\frac{n+ab}N)=\int_\mathbf{R}\widehat V(t)e(\frac{tn}N)e(\frac{tab}N)dt.$$ Plugging this formula into our sum, we obtain \begin{align*} S_V(\chi,N) &=\frac{1}{AB}\int_\mathbf{R}\sum_{|n|\leq 2N}e(\frac{tn}N)\sumsum_{a\sim A,b\sim B}\chi(n+ab)e(\frac{tab}N)\widehat V(t)dt \\ &\leq \frac{1}{AB}\int_\mathbf{R}\sum_{|n|\leq 2N}\sum_{a\sim A}\bigl|\frac{\chi(a)}{a}\widehat V(\frac{t}a)\bigr|\bigl|\sum_{b\sim B}\chi(\ov an+b)e(\frac{tb}N)\bigr| dt\\ &\leq \frac{1}{AB}\int_\mathbf{R} \sum_{|n|\leq 2N}\sum_{a\sim A}\bigl|\sum_{b\sim B}\chi(\ov an+b)e(\frac{tAb}N)\bigr||W(t)|dt \end{align*} for $W$ some bounded rapidly decaying function. \begin{remark}\label{remchi} Observe that the factor $\chi(a)$ coming from the identity \begin{equation}\label{chimult} \chi(n+ab)=\chi(a(\ov an+b))=\chi(a)\chi(\ov an+b) \end{equation} has been absorbed in the absolute value of the first inequality above. \end{remark} The innermost sum can be rewritten $$\sum_{|n|\leq 2N}\sum_{a\sim A}\bigl|\sum_{b\sim B}\chi(\ov an+b)e(\frac{tAb}N)\bigr|=\sum_{r\in{\mathbf{F}^\times_q}}\nu(r)\bigl|\sum_{b\sim B}\eta_b\chi(r+b)\bigr|$$ where $\eta_b=e(\frac{tAb}N)$ and $$\nu(r):=|\{(a,n)\in[A,2A[\times[-2N,2N],\ \ov an=r\mods q\}|.$$ Consider the map $$(a,n)\in[A,2A[\times[-2N,2N]\mapsto \ov an\mods q=r\in {\mathbf{F}_q}.$$ The function $\nu(r)$ is the size of the fiber of that map above $r$. We will show that this map is "essentially injective" (has small fibers on average).
Suppose that $A$ is chosen such that $4AN<q;$ then one has $$\sum_r\nu(r)\ll AN,\ \sum_r\nu^2(r)\ll (AN)^{1+o(1)}$$ where the first bound is obvious while for the second we observe that $$\sum_r\nu^2(r)=|\{(a,a',n,n'), \ a,a'\in[A,2A[,\ |n|,|n'|\ll N,\ a'n\equiv an'\mods q\}|,$$ then use the fact that $4AN<q$ and that the integer $an'$ has at most $(an')^{o(1)}$ decompositions of the shape $an'=a'n$. This map however is not surjective nor even close to being so in general, so that the change of variable $\ov an\leftrightarrow r$ is not very effective. A way to moderate this ineffectiveness is to use H\"older's inequality. Let $l\geq 1$ be some integer parameter. Applying H\"older's inequality with exponents $\frac{2l}{2l-1}$ and $2l$, together with the above estimate, one obtains \begin{align*} \sum_{r\in{\mathbf{F}^\times_q}}\nu(r)\bigl|\sum_{b\sim B}\eta_b\chi(r+b)\bigr|&\leq (\sum_r\nu(r)^{\frac{2l}{2l-1}})^{1-1/2l}(\sum_r \bigl|\sum_{b\sim B}\eta_b\chi(r+b)\bigr|^{2l})^{1/2l}\\ &\ll (AN)^{1-1/2l+o(1)}(\sum_r \bigl|\sum_{b\sim B}\eta_b\chi(r+b)\bigr|^{2l})^{1/2l}. \end{align*} The $r$-sum in the rightmost factor equals $$\sum_{\uple{b}}\eta_\uple{b}\sum_{r\in{\mathbf{F}_q}}\chi(\frac{\prod_{i=1}^l(r+b_i)}{\prod_{i=1}^l(r+b_{l+i})})$$ where $\uple{b}=(b_1,\ldots,b_{2l})\in[B,2B[^{2l}$ and $\eta_\uple{b}=\prod_{i=1}^{2l}\eta_{b_i}$. Consider the fraction $$F_\uple{b}(X):=\frac{\prod_{i=1}^l(X+b_i)}{\prod_{i=1}^l(X+b_{l+i})}\in \mathbf{Q}(X)$$ and the function on ${\mathbf{F}_q}$ $$r\in{\mathbf{F}_q}\mapsto \chi(F_\uple{b}(r))$$ (extended by $0$ for $r=-b_i\mods q,\ i=1,\ldots,2l$). This function is the trace function of the rank one sheaf $[F_\uple{b}]^*\mathcal{L}_\chi$ whose conductor is bounded in terms of $l$ only and (because it is of rank $1$) which is geometrically irreducible if not geometrically constant. If not geometrically constant one has\footnote{It is not necessary to invoke Deligne's main theorem here: this follows from A. Weil's proof of the Riemann hypothesis for curves \cite{Weil0}.} $$\sum_{r\in{\mathbf{F}_q}}\chi(F_\uple{b}(r))\ll_lq^{1/2}.$$ If $q>\mathop{\mathrm{Max}}\limits(l,2B)$ this occurs precisely when $F_\uple{b}(X)$ is neither constant nor a $k$-th power, where $k$ is the order of $\chi$. Hence this holds for $\uple{b}$ outside an explicit set $\mathcal{B}^{bad}\subset [B,2B[^{2l}$ of size bounded by $O(B^{l})$. If $\uple{b}\in \mathcal{B}^{bad}$, we use the trivial bound $$|\sum_{r\in{\mathbf{F}_q}}\chi(F_\uple{b}(r))|\leq q.$$ All in all, we eventually obtain $$\sum_{\uple{b}}\eta_\uple{b}\sum_{r}\chi\left(\frac{\prod_{i=1}^l(r+b_i)}{\prod_{i=1}^l(r+b_{l+i})}\right)\ll |\mathcal{B}^{bad}|q+|\mathcal{B}-\mathcal{B}^{bad}|q^{1/2}\ll B^lq+B^{2l}q^{1/2}.$$ Choosing $B=q^{1/2l}$ (so as to equate the two terms in the bound above) and $A\approx Nq^{-1/2l}$ with the condition $4AN<q$, which is equivalent to \eqref{Burgesscond}, we obtain that $$S_V(\chi,N)\ll_l \frac{q^{o(1)}}{AB}(AN)^{1-1/2l}(q^{3/2})^{1/2l}\ll q^{o(1)}N^{1-1/{l}}q^{3/4l-(1-1/2l)/2l}=q^{o(1)}N(N/q^{1/4+1/4l})^{-1/l}.$$ \qed \subsection{The $+ab$-shift for type I sums} It is natural to try to extend this method to other trace functions; unfortunately the above argument breaks down because the identity \eqref{chimult} is not valid in general. It is however possible to mitigate this problem by introducing an extra average. This technique goes back to Karatsuba and Vinogradov (for the function $x\mapsto \chi(x+1)$).
It has also been used by Friedlander-Iwaniec \cite{FrIw} (for the function $x\mapsto e\left(\frac{\ov x}q\right)$), Fouvry-Michel \cite{FoMi} and Kowalski-Michel-Sawin \cite{KMS,KMS2}. Instead of a single sum $S_V(K,N)$, one considers the following average of multiplicative shifts $$B_V(K,\uple{\alpha},N):=\sum_{m\sim M}\alpha_m\sum_n V(\frac{n}{N})K(mn)$$ where $1\leq M<q$ and $(\alpha_m)_{m\sim M}$ is a sequence of complex numbers of modulus $\leq 1$ (this includes the averaged sum $\sum_{m\sim M}\bigl|\sum_nK(mn)V(\frac{n}{N})\bigr|=\sum_m|S_V([\times m]^*K,N)|$). The objective here is to improve over the trivial bound $$B_V(K,\uple{\alpha},N)\ll \|K\|_\infty MN.$$ Proceeding as above we have \begin{align*} B_V(K,\uple{\alpha},N)&=\frac{1}{AB}\sum_{m}\alpha_m\sum_{n}\sumsum_{a\sim A,b\sim B}K(m(n+ab))V(\frac{n+ab}N)\\ &\leq \frac{1}{AB}\int_\mathbf{R} \sum_{m\sim M}\alpha_m\sum_{|n|\leq 2N}\sum_{a\sim A}\bigl|\sum_{b\sim B}K(am(\ov an+b))e(\frac{tAb}N)\bigr||W(t)|dt. \end{align*} We have $$\sum_{m\sim M}\alpha_m\sum_{|n|\leq 2N}\sum_{a\sim A}\bigl|\sum_{b\sim B}K(am(\ov an+b))e(\frac{tAb}N)\bigr|=\sumsum_{r,s\in{\mathbf{F}_q}}\nu(r,s)\bigl|\sum_{b\sim B}\eta_b K(s(r+b))\bigr|$$ with $$\nu(r,s)=\sum_{m\sim M}\sum_{|n|\leq 2N}\sum_{a\sim A}\alpha_m\delta_{\ov an=r,am=s\mods q}.$$ Assuming that $4AN<q$ and evaluating the number of solutions to the equations $$am=a'm',\ \ov an\equiv \ov{a'}n'\mods q,\ (a,m,n)\in[A,2A[\times[M,2M[\times[N,2N[$$ one finds that $$\sumsum_{r,s\in{\mathbf{F}_q}}|\nu(r,s)|\ll AMN,\ \sumsum_{r,s\in{\mathbf{F}_q}}|\nu(r,s)|^2\ll q^{o(1)}AMN$$ which we interpret as saying that the map $$(a,m,n)\in[A,2A[\times[M,2M[\times[N,2N[\rightarrow (r,s)=(\ov an,am)\in {\mathbf{F}_q}\times [AM,4AM[$$ is essentially injective (i.e.~has small fibers on average). As before, this map is far from being surjective but one can dampen this with H\"older's inequality: $$\sumsum_\stacksum{r\in{\mathbf{F}_q}}{1\leq s\leq 4AM}\nu(r,s)\bigl|\sum_{b\sim B}\eta_b K(s(r+b))\bigr|\ll \big(\sumsum_{r,s}|\nu(r,s)|^{\frac{2l}{2l-1}}\big)^{1-1/2l}\big(\sumsum_{r,s}\bigl|\sum_{b\sim B}\eta_b K(s(r+b))\bigr|^{2l}\big)^{1/2l}$$ $$\ll q^{o(1)}(AMN)^{1-1/2l}\bigl(\sum_{\uple{b}}\eta_{\uple{b}}\sum_{r,s}\prod_{i=1}^lK(s(r+b_i))\ov{K(s(r+b_{i+l}))}\bigr)^{1/2l}.$$ We are now reduced to the problem of bounding the two variable sum \begin{equation}\label{generalKVsum} \sum_{r,s}\prod_{i=1}^lK(s(r+b_i))\ov{K(s(r+b_{i+l}))}=\sum_r\sum_{s}\mathbf{K}(sr,s\uple{b})=\sum_{r}\mathbf{R}(r,\uple{b}) \end{equation} (say) where \begin{equation}\label{KRdef} \mathbf{K}(r,\uple{b}):=\prod_{i=1}^lK(r+b_i)\ov{K(r+b_{i+l})},\ \mathbf{R}(r,\uple{b})=\sum_{s}\mathbf{K}(sr,s\uple{b}). \end{equation} The bound will depend on the vector $\uple{b}\in[B,2B[^{2l}$. To get a feeling of what is going on, let us consider one of the cases treated in \cite{FoMi}: let $$K(x)=\mathrm{e}_q(\ov x+x).$$ We have $$\mathbf{R}(r,\uple{b})=\sum_{s\in{\mathbf{F}^\times_q}}\mathrm{e}_q(\ov s\sum_{i=1}^l(\ov{r+b_i}-\ov{r+b_{i+l}})+s\sum_{i=1}^l({b_i}-{b_{i+l}})).$$ This sum is either \begin{enumerate} \item Equal to $q-1$, if and only if the vector $(b_1,\ldots, b_l)$ equals the vector $(b_{l+1},\ldots, b_{2l})$ up to permutation of the entries. \item Equal to $-1$ if $\uple{b}$ is not as in (1) but is in the hyperplane with equation $\sum_{i=1}^l({b_i}-{b_{i+l}})=0$. \item The Kloosterman sum $$\mathbf{R}(r,\uple{b})=q^{1/2}\Kl_2\left(\frac{\sum_{i=1}^l(\ov{r+b_i}-\ov{r+b_{i+l}})}{\sum_{i=1}^l({b_i}-{b_{i+l}})};q\right)$$ otherwise.
\end{enumerate} The last case is the most interesting. Given $\uple{b}$ as in the last situation, we have to evaluate $$q^{1/2}\sum_r\Kl_2(G_\uple{b}(r);q)$$ where \begin{equation}\label{Gbdef} G_\uple{b}(X)=\frac{\sum_{i=1}^l(\ov{X+b_i}-\ov{X+b_{i+l}})}{\sum_{i=1}^l({b_i}-{b_{i+l}})}. \end{equation} \begin{lemma}For $\uple{b}=(b_1,\ldots,b_{2l})\in{\mathbf{F}_q}^{2l}$ such that \begin{equation}\label{bcond} (b_1,\ldots, b_l)\hbox{ is not equal to }(b_{l+1},\ldots, b_{2l}) \hbox{ up to permutation and }\sum_{i=1}^l({b_i}-{b_{i+l}})\not=0, \end{equation} one has $$\sum_r\Kl_2(G_\uple{b}(r);q)\ll_l q^{1/2}.$$ \end{lemma} \proof The function $$r\mapsto \Kl_2(G_\uple{b}(r);q)$$ is the trace function of the rank $2$ sheaf $[G_\uple{b}]^*\mathcal{K}\ell_2$ obtained by pull-back of the Kloosterman sheaf $\mathcal{K}\ell_2$ under the morphism $$x\mapsto G_\uple{b}(x)$$ which is non-constant by assumption. Moreover, one can show that the conductor of $[G_\uple{b}]^*\mathcal{K}\ell_2$ is bounded in terms of $l$ only, and moreover the geometric monodromy group of $[G_\uple{b}]^*\mathcal{K}\ell_2$ is obtained as the (closure of the) image of the representation $\rho_{\mathcal{K}\ell_2}$ restricted to a finite index subgroup of $\Gal(K^{\mathrm{sep}}/\ov{{\mathbf{F}_q}}.K)$. Since the geometric monodromy group of $\mathcal{K}\ell_2$ is $\SL_2$, which has no proper finite index closed subgroup, the geometric monodromy group of $[G_\uple{b}]^*\mathcal{K}\ell_2$ is $\SL_2$ as well. It follows that the sheaf $[G_\uple{b}]^*\mathcal{K}\ell_2$ is geometrically irreducible (and not geometrically trivial, being of rank $2$) and the estimate follows by Deligne's theorem. \qed It follows from this analysis that $$\sumsum_{r,s}\bigl|\sum_{b\sim B}\eta_b K(s(r+b))\bigr|^{2l}\ll B^{l}q^{2}+B^{2l}q,$$ hence choosing $B=q^{1/l}$, $AB\approx N$ and $A\approx Nq^{-1/l}$ we obtain $$B_V(K,\uple{\alpha},N)\ll\frac{q^{o(1)}}{AB}(AMN)^{1-1/2l}q^{3/2l}=q^{o(1)}MN(\frac{N^2M}{q^{1+1/l}})^{-1/2l}.$$ To summarize, we have therefore proved the following \begin{theorem} Let $K(x)=\mathrm{e}_q(\ov x+x)$, let $M,N,l\geq 1$ and let $(\alpha_m)_{m\sim M}$ be a sequence of complex numbers of modulus bounded by $1$. Assuming that $$q^{1/l}\leq N<\frac12 q^{1/2+1/2l}$$ we have $$\sum_{m\sim M}\alpha_m\sum_{n}V(\frac{n}N)K(mn)\ll q^{o(1)}MN(\frac{N^2M}{q^{1+1/l}})^{-1/2l}.$$ \end{theorem} This bound is non-trivial (sharper than $\ll MN$) as long as\footnote{If $N\geq \frac12 q^{1/2+1/2l}$ the P\'olya-Vinogradov inequality is non-trivial already.} $$N^2M\geq q^{1+1/l}.$$ For instance, if $M=q^\delta$ for some $\delta>0$, the above bound is non-trivial for $l$ large enough and $N\geq q^{1/2-\delta/3}$. Alternatively if $M=N$, this bound is non-trivial as long as $$N=M\geq q^{1/3+\delta}$$ if $l$ is taken large enough. Therefore this method improves the range of non-triviality in Theorem \ref{thmbilinear}. \subsection{The $+ab$-shift for type II sums} With this method, it is also possible to deal with the more general (type II) bilinear sums $$B(K,\uple{\alpha},\uple{\beta})=\sumsum_{m\sim M,n\sim N}\alpha_m\beta_n K(mn)$$ where $(\alpha_m)_{m\sim M}$, $(\beta_n)_{n\sim N}$ are sequences of complex numbers of modulus bounded by $1$. We leave it to the interested reader to fill in the details (or to look at \cite{FoMi,KMS} or \cite{KMS2}).
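In the same spirit, the square-root cancellation of the lemma above is easy to probe numerically; the following sketch (hypothetical small parameters $q=103$ and $\uple{b}=(1,2,3,5)$, which satisfy \eqref{bcond} for $l=2$) evaluates $\sum_r\Kl_2(G_\uple{b}(r);q)$ by brute force.
\begin{verbatim}
# Illustrative numerical check (hypothetical parameters) of the lemma's
# square-root cancellation: sum_r Kl_2(G_b(r); q) has size O(q^{1/2}).

import cmath

def kl2(a: int, q: int) -> float:
    """Normalized Kloosterman sum Kl_2(a;q) = q^{-1/2} sum_x e_q(x + a/x)."""
    s = sum(cmath.exp(2j * cmath.pi * ((x + a * pow(x, -1, q)) % q) / q)
            for x in range(1, q))
    return s.real / q ** 0.5            # Kl_2(a;q) is real

def G(b, r, q):
    """G_b(r) mod q for l = 2 (None at a pole of the fraction)."""
    if any((r + bi) % q == 0 for bi in b):
        return None
    num = sum(s * pow(r + bi, -1, q)
              for bi, s in zip(b, (1, 1, -1, -1)))
    den = (b[0] + b[1] - b[2] - b[3]) % q
    return num * pow(den, -1, q) % q

q, b = 103, (1, 2, 3, 5)   # b satisfies the admissibility condition (bcond)
total = sum(kl2(g, q) for g in (G(b, r, q) for r in range(q)) if g is not None)
print(abs(total), q ** 0.5)             # expect: comparable to q^{1/2}
\end{verbatim}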
The first step is to apply Cauchy-Schwarz to smooth out the $n$ variable: for a suitable smooth function $V$, compactly supported in $[1/2,5/2]$ and bounded by $1$, one has $$\bigl|\sumsum_{m\sim M,n\sim N}\alpha_m\beta_n K(mn)\bigr|\leq N^{1/2}\bigl(\sumsum_{m_1,m_2\sim M}\alpha_{m_1}\ov{\alpha_{m_2}}\sum_{n}V(\frac{n}N)K(m_1n)\ov{K(m_2n)}\bigr)^{1/2}.$$ The next step is to perform the $+ab$-shift on the $n$ variable and to make the change of variables $$(a,m_1,m_2,n)\in[A,2A[\times[M,2M[^2\times[N,2N[\longleftrightarrow (\ov an,am_1,am_2)\mods q=(r,s_1,s_2)\in\mathbf{F}_q^3.$$ Considering the fiber counting function for that map, namely $$\nu(r,s_1,s_2):=\sumsum_\stacksum{(a,n,m_1,m_2)}{a\sim A, |n|\leq 2N,\ m_i\sim M}\alpha_{m_1}\ov{\alpha_{m_2}}\delta_{\ov an=r,\ am_i=s_i\mods q}$$ one shows that for $AN<q/2$ one has $$\sumsum_{(r,s_1,s_2)\in{\mathbf{F}_q}^3}|\nu(r,s_1,s_2)|\ll AM^2N,\ \sumsum_{(r,s_1,s_2)\in{\mathbf{F}_q}^3}|\nu(r,s_1,s_2)|^2\leq q^{o(1)}AM^2N.$$ Applying H\"older's inequality leads us to the problem of bounding the following complete sum indexed by the parameter $\uple{b}$ \begin{equation}\label{typeIIcomplete} \sum_{r\in{\mathbf{F}_q}}|\mathbf{R}(r,\uple{b})|^2-q\sum_{r\in{\mathbf{F}_q}}|\mathbf{K}(r,\uple{b})|^2. \end{equation} We will explain what is expected in general in a moment, but let us see what happens for our previous case $K(x)=\mathrm{e}_q(\ov x+x)$: for $\uple{b}=(b_1,\ldots,b_{2l})\in{\mathbf{F}_q}^{2l}$ satisfying \eqref{bcond} the sum \eqref{typeIIcomplete} equals $$q\sum_\stacksum{r\in{\mathbf{F}_q}}{r\not=-b_i}|\Kl_2(G_\uple{b}(r);q)|^2-q\sum_\stacksum{r\in{\mathbf{F}_q}}{r\not=-b_i}1=q\sum_\stacksum{r\in{\mathbf{F}_q}}{r\not=-b_i}(|\Kl_2(G_\uple{b}(r);q)|^2-1)+O_l(q)$$ where $G_\uple{b}(X)$ is defined in \eqref{Gbdef}. \begin{lemma}For $\uple{b}=(b_1,\ldots,b_{2l})\in{\mathbf{F}_q}^{2l}$ satisfying \eqref{bcond}, one has $$\sum_r(|\Kl_2(G_\uple{b}(r);q)|^2-1)\ll_l q^{1/2}.$$ \end{lemma} \proof This follows from the fact that $[G_\uple{b}]^*\mathcal{K}\ell_2$ is geometrically irreducible with geometric monodromy group equal to $\SL_2$: since the tensor product of the standard representation of $\SL_2$ with itself equals the trivial representation plus the symmetric square of the standard representation, which is non-trivial and irreducible, $$r\mapsto |\Kl_2(G_\uple{b}(r);q)|^2-1$$ is the trace function of a geometrically irreducible sheaf.\qed Using this bound and trivial estimates for $\uple{b}$ not satisfying \eqref{bcond}, one eventually obtains \begin{theorem}\label{kloosfracprop} Let $K(x)=\mathrm{e}_q(\ov x+x)$, let $1\leq M,N<q$ and let $l\geq 1$ be an integer. Assuming that $$N<\frac12 q^{1/2+1/2l},$$ one has $$B(K,\uple{\alpha},\uple{\beta})=\sumsum_{m\sim M,n\sim N}\alpha_m\beta_n K(mn)\ll q^{o(1)}MN(\frac1M+(\frac{MN}{q^{3/4+3/4l}})^{-1/4l})^{1/2}.$$ \end{theorem} \begin{remark} For $l$ large enough, this bound is non-trivial as long as $M\geq q^\delta$ and $MN\geq q^{3/4+\delta}$, again improving on Theorem \ref{thmbilinear} in this specific case. \end{remark} \subsection{The $+ab$-shift for more general trace functions} For applications to analytic number theory, it is highly desirable to extend the method of the previous section to trace functions as general as possible. This method may be axiomatized in the following way.
Let $q$ be a prime, $K\colon {\mathbf{F}_q}\rightarrow \mathbf{C}$ a complex-valued function bounded by $1$ in absolute value, $1\leq M,N<q$ some parameters and $\uple{\alpha}=(\alpha_m)_{m\sim M}$, $\uple{\beta}=(\beta_n)_{n\sim N}$ sequences of complex numbers bounded by $1$. We define the type I sum $$B(K,\uple{\alpha},1_N)=\sumsum_{m\sim M,n\sim N}\alpha_m K(mn)$$ and the type II sum $$B(K,\uple{\alpha},\uple{\beta})=\sumsum_{m\sim M,n\sim N}\alpha_m\beta_n K(mn).$$ For $l\geq 1$ an integer, let $\mathbf{K}(r,\uple{b})$ and $\mathbf{R}(r,\uple{b})$ be the functions of the variables $(r,\uple{b})\in{\mathbf{F}_q}\times{\mathbf{F}_q}^{2l}$ given by \eqref{KRdef}. For $B\geq 1$ we set $$\mathcal{B}=\mathbf{Z}^{2l}\cap[B,2B[^{2l}.$$ An axiomatic treatment of the type I sums $B(K,\uple{\alpha},1_N)$ is provided by the following: \begin{theorem}\label{thmtypeI} With notation as above, let $B,C\geq 1$ and $\gamma\in [0,2]$ be some real numbers. \begin{itemize} \item Let $\mathcal{B}^{\Delta}\subset \mathcal{B}$ be the set of $\uple{b}\in\mathcal{B}$ for which \begin{equation}\label{Rcontrol} \hbox{ there exists }r\in{\mathbf{F}_q}\hbox{ satisfying } |\mathbf{R}(r,\uple{b})|> Cq^{1/2}. \end{equation} \item Let $\mathcal{B}_I^{bad}\subset \mathcal{B}$ be the union of $\mathcal{B}^{\Delta}$ and the set of $\uple{b}\in\mathcal{B}$ such that \begin{equation}\label{RScontrol}\bigl|\sum_{r\in{\mathbf{F}_q}}\mathbf{R}(r,\uple{b})\bigr|> Cq. \end{equation} \end{itemize} Suppose that for any $1\leq B<q/2$ one has \begin{equation}\label{badsetbound1}|\mathcal{B}^{\Delta}|\leq CB^l,\ |\mathcal{B}_I^{bad}|\leq CB^{(2-\gamma)l} . \end{equation} Then, if $N$ satisfies $$q^{1/l}\leq N\leq \frac12q^{1/2+1/2l},$$ one has for any $\varepsilon>0$ \begin{equation}\label{typeIKMS}B(K,\uple{\alpha},1_N)\ll_{C,l,\varepsilon}q^\varepsilon MN(\frac{q^{1+1/l}}{MN^2}+\frac{q^{3/2-\gamma+1/l}}{MN^2})^{1/2l}. \end{equation} \end{theorem} An axiomatic treatment of the type II sums $B(K,\uple{\alpha},\uple{\beta})$ is provided by the following: \begin{theorem}\label{thmtypeII} With notation as above, let $B,C\geq 1$ and $\gamma\in [0,2]$ be some real numbers. \begin{itemize} \item Let $\mathcal{B}^{\Delta}\subset \mathcal{B}$ be the set of $\uple{b}\in\mathcal{B}$ for which $$\hbox{ there exists }r\in{\mathbf{F}_q}\hbox{ satisfying } |\mathbf{R}(r,\uple{b})|> Cq^{1/2}.$$ \item Let $\mathcal{B}_{II}^{bad}\subset \mathcal{B}$ be the union of $\mathcal{B}^{\Delta}$ and the set of $\uple{b}\in\mathcal{B}$ such that \begin{equation}\label{RScontrol2}\bigl|\sum_{r\in{\mathbf{F}_q}}|\mathbf{R}(r,\uple{b})|^2-q\sum_{r\in{\mathbf{F}_q}}|\mathbf{K}(r,\uple{b})|^2\bigr|> C q^{3/2} . \end{equation} \end{itemize} Assume that for any $B\in[1,q/2[$ one has \begin{equation}\label{badsetbound2}|\mathcal{B}^{\Delta}|\leq CB^l,\ |\mathcal{B}_{II}^{bad}|\leq CB^{(2-\gamma)l}. \end{equation} Then, if $N$ satisfies $$q^{3/2l}\leq N\leq \frac12q^{1/2+3/4l},$$ one has for any $\varepsilon>0$, \begin{equation}\label{typeIIKMS} B(K,\uple{\alpha},\uple{\beta})\ll_{C,l,\varepsilon}q^\varepsilon MN\bigl(\frac{1}{M}+(\frac{q^{1-\frac{3}4\gamma+\frac{3}{4l}}}{MN}+\frac{q^{\frac{3}{4}+\frac{3}{4l}}}{MN})^{\frac1{l}}\bigr)^{1/2}. \end{equation} \end{theorem} We conclude these lectures with a few remarks concerning these two theorems: \begin{enumerate} \item In the case $K(x)=\mathrm{e}_q(\ov x+x)$, we have just verified that the conditions \eqref{badsetbound1} and \eqref{badsetbound2} hold with $\gamma=1$.
In \cite{FoMi}, this was shown to hold more generally for the trace functions $$K(x)=\mathrm{e}_q(x^{-k}+ax),\ a\in{\mathbf{F}_q},\ k\geq 1.$$ \item For more general trace functions, the first condition in \eqref{badsetbound1} and \eqref{badsetbound2} can be verified using some variant of the "sums of products" Theorem \ref{cor-concrete} and does not constitute a big obstacle. One should also notice that Theorem \ref{cor-concrete} implies that for any $\uple{b}=(b_1,\ldots,b_{2l})$ on the "first" diagonal (i.e.~$b_1=b_{l+1},\ldots,b_{l}=b_{2l}$) one has $$\mathbf{R}(r,\uple{b})=\sum_{s}\prod_{i=1}^l|K(s(r+b_i))|^{2}=|K(0)|^{2l}+\sum_{s\not=0}\prod_{i=1}^l|K(s(r+b_i))|^{2}\gg_l q$$ and therefore $$|\mathcal{B}^\Delta|\geq B^{l}.$$ It follows that the first bound in \eqref{badsetbound1} and \eqref{badsetbound2} is sharp, and that for the second condition one cannot expect $\gamma$ to be greater than $1$. \item In order to reach the best available bound by the above method, it is not necessary to aim for $\gamma=1$: it is sufficient to establish \eqref{badsetbound1} with $\gamma\geq 1/2$ and \eqref{badsetbound2} with $\gamma\geq 1/3$. In such a case, the bounds of Theorem \ref{thmtypeI} and Theorem \ref{thmtypeII} are non-trivial as long as $$MN^2\geq q^{1+1/l},\ MN\geq q^{3/4+3/4l},$$ respectively. \item Checking the second bound in \eqref{badsetbound1} and \eqref{badsetbound2} for general trace functions is much more difficult. In \cite{KMS}, with specific applications in mind, these bounds have been established for $l=2$ and $\gamma=1/2$ for the hyper-Kloosterman sums $$K(x)=\Kl_k(x;q),\ k\geq 2.$$ Because $l=2$ is too small, this alone is not sufficient to improve over the P\'olya-Vinogradov type bound of Theorem \ref{thmbilinear} (one would have needed $l\geq 4$). A more refined treatment is necessary: instead of letting (somewhat wastefully) the variables $s=am\mods q$ or $s_1=am_1,s_2=am_2 \mods q$ vary freely over the whole interval $[0,q-1]\simeq {\mathbf{F}_q}$, one uses the fact that $s,s_1,s_2$ belong to the shorter interval $[AM,4AM[$. Applying the P\'olya-Vinogradov completion method to detect this inclusion with additive characters, one is led to bounds for complete sums analogous to \eqref{RScontrol} and \eqref{RScontrol2} but for the additively twisted variant of $\mathbf{R}(r,\uple{b})$ defined by $$\mathbf{R}(r,\lambda,\uple{b})=\sum_{s}\mathbf{K}(sr,s\uple{b})e\left(\frac{\lambda s}q\right),\ \hbox{ for }\lambda\in{\mathbf{F}_q}.$$ Specifically, the bounds are: for all $\uple{b}\in\mathcal{B}-\mathcal{B}^{\Delta}$, we have $$\forall r,\lambda\in{\mathbf{F}_q},\ |\mathbf{R}(r,\lambda,\uple{b})|\leq Cq^{1/2},$$ and for all $\uple{b}\in\mathcal{B}-\mathcal{B}_I^{bad}$, we have $$\forall\lambda\in{\mathbf{F}_q},\ |\sum_r\mathbf{R}(r,\lambda,\uple{b})|\leq Cq,$$ and for all $\uple{b}\in\mathcal{B}-\mathcal{B}_{II}^{bad}$, we have $$\forall\lambda,\lambda'\in{\mathbf{F}_q},\ \Bigl|\sum_r\mathbf{R}(r,\lambda,\uple{b})\ov{\mathbf{R}(r,\lambda',\uple{b})}-q\delta_{\lambda=\lambda'}\sum_{s}\prod_{i=1}^l|K(s(r+b_i))|^{2}\Bigr|\leq Cq^{3/2}. $$ In \cite{KMS}, these bounds were established for $l=2$ and $\uple{b}$ outside the sets $\mathcal{B}^\Delta$, $\mathcal{B}_{I}^{bad}$ and $\mathcal{B}_{II}^{bad}$ satisfying $$|\mathcal{B}^\Delta|\leq B^2,\ |\mathcal{B}_{I,II}^{bad}|\leq CB^{3}.$$ \item In the paper \cite{KMS2}, the bounds \eqref{badsetbound1} and \eqref{badsetbound2} are established for the hyper-Kloosterman sums and generalized Kloosterman sums for every $l\geq 2$ and $\gamma=1/2$.
\end{enumerate} \subsection{Some applications of the $+ab$-shift bounds} The problem of estimating bilinear sums of trace functions below the critical P\'olya-Vinogradov range has already had several applications in analytic number theory. We list some of them below with references for the interested remaining reader(s). \begin{itemize} \item This method was used by Karatsuba and Vinogradov, for the function $$K(n)=\chi(n+a)$$ where $(a,q)=1$ and $\chi\mods q$ is a non-trivial Dirichlet character, to bound non-trivially its sum along the primes over short intervals (now a special case of Theorem \ref{thmprimesumthm}). In particular, Karatsuba \cite{Kar} proved, for any $\varepsilon>0$, the bound $$\sum_\stacksum{p\leq x}{p\ \mathrm{ prime}}\chi(p+a)\ll x^{1-\varepsilon^2/1024}$$ whenever $x\geq q^{1/2+\varepsilon}$. This bound is therefore non-trivial in a range which is wider than that established in Theorem \ref{thmprimesumthm} for general trace functions. \item The method was used by Friedlander-Iwaniec for the function $$K(n)=e\left(\frac{\ov n}q\right),\ n\ov n\equiv 1\mods q$$ to show that the ternary divisor function $d_3(n)$ is well distributed in arithmetic progressions to modulus $q\leq x^{1/2+1/230}$, passing for the first time the Bombieri-Vinogradov barrier (see Theorem \ref{thmd3}). \item In the case of the Kloosterman sums $$K(n)=\Kl_2(n;q),$$ the bound established in \cite{KMS} together with \cite{BlMi,BFKMM} leads to an asymptotic formula for the second moment of character twists of modular $L$-functions: for $f$ a fixed primitive cusp form, one has $$\frac{1}{q-1}\sum_{\chi\mods q}|L(f\otimes\chi,1/2)|^2= MT_f(\log q)+O_f(q^{-1/145})$$ for $q$ prime, where $MT_f(\log q)$ is a polynomial in $\log q$ (of degree $\leq 1$) depending on $f$. This completes the work of Young for $f$ an Eisenstein series \cite{Young} and of Blomer-Milicevic for $f$ cuspidal and $q$ suitably composite \cite{BlMi}. \item Using this method, Nunes \cite{nunes} obtained non-trivial bounds, below the P\'olya-Vinogradov range, for the (smooth) bilinear sum $$\sumsum_\stacksum{m\leq M}{n\leq N}K(mn^2)$$ where $K$ is the Kloosterman-like trace function $$K(n;q):=\frac{1}{q^{1/2}}\sum_{x\in{\mathbf{F}^\times_q}}e_q(a\ov x^{2}+bx)$$ (where $a,b$ are some integral parameters such that $(ab,q)=1$). He deduced from this bound that the characteristic function of squarefree integers is well-distributed in arithmetic progressions to prime modulus $$q\leq x^{2/3+1/57}.$$ The previous best result, due to Prachar \cite{Prachar}, was $q\leq x^{2/3-\varepsilon}$ (similar to Selberg's Theorem \ref{thmd2} for the divisor function $d_2(n)$) and dated back to 1958! \end{itemize} \begin{bibdiv} \begin{biblist} \bib{BGHT}{article}{ author={Barnet-Lamb, Tom}, author={Geraghty, David}, author={Harris, Michael}, author={Taylor, Richard}, title={A family of Calabi-Yau varieties and potential automorphy II}, journal={Publ. Res. Inst. Math. Sci.}, volume={47}, date={2011}, number={1}, pages={29--98}, issn={0034-5318}, } \bib{BirchST}{article}{ author={Birch, B. J.}, title={How the number of points of an elliptic curve over a fixed prime field varies}, journal={J. London Math. Soc.}, volume={43}, date={1968}, pages={57--60}, issn={0024-6107}, } \bib{BlMi}{article}{ author={Blomer, V.}, author={Mili{\'c}evi{\'c}, D.}, title={The second moment of twisted modular $L$-functions}, journal={Geom. Funct.
Anal.}, volume={25}, date={2015}, number={2}, pages={453--516}, } \bib{BFKMM}{article}{ author={Blomer, Valentin}, author={Fouvry, \'Etienne}, author={Kowalski, Emmanuel}, author={Michel, Ph.}, author={Mili\'cevi\'c, Djordje}, title={On moments of twisted $L$-functions}, journal={Amer. J. Math.}, volume={139}, date={2017}, number={3}, pages={707--768}, note={\url{arXiv:1411.4467}}, } \bib{BFI}{article}{ author={Bombieri, E.}, author={Friedlander, J. B.}, author={Iwaniec, H.}, title={Primes in arithmetic progressions to large moduli}, journal={Acta Math.}, volume={156}, date={1986}, number={3-4}, pages={203--251}, issn={0001-5962}, } \bib{Bur}{article}{ author={Burgess, D. A.}, title={On character sums and primitive roots}, journal={Proc. London Math. Soc. (3)}, volume={12}, date={1962}, pages={179--192}, issn={0024-6115}, } \bib{CHT}{article}{ author={Clozel, Laurent}, author={Harris, Michael}, author={Taylor, Richard}, title={Automorphy for some $l$-adic lifts of automorphic mod $l$ Galois representations}, note={With Appendix A, summarizing unpublished work of Russ Mann, and Appendix B by Marie-France Vign\'eras}, journal={Publ. Math. Inst. Hautes \'Etudes Sci.}, number={108}, date={2008}, pages={1--181}, issn={0073-8301}, } \bib{WeilII}{article}{ author={Deligne, P.}, title={La conjecture de Weil, II}, journal={Publ. Math. IH\'ES}, volume={52}, date={1980}, pages={137--252}, } \bib{DFISalie}{article}{ author={Duke, W.}, author={Friedlander, J. B.}, author={Iwaniec, H.}, title={Equidistribution of roots of a quadratic congruence to prime moduli}, journal={Ann. of Math. (2)}, volume={141}, date={1995}, number={2}, pages={423--441}, issn={0003-486X}, } \bib{Fou}{article}{ author={Fouvry, \'Etienne}, title={Autour du th\'eor\`eme de Bombieri-Vinogradov}, language={French}, journal={Acta Math.}, volume={152}, date={1984}, number={3-4}, pages={219--244}, issn={0001-5962}, } \bib{FouCrelle}{article}{ author={Fouvry, \'E.}, title={Sur le probl\`eme des diviseurs de Titchmarsh}, language={French}, journal={J. Reine Angew. Math.}, volume={357}, date={1985}, pages={51--76}, issn={0075-4102}, } \bib{FIActaAr}{article}{ author={Fouvry, \'E.}, author={Iwaniec, H.}, title={Primes in arithmetic progressions}, journal={Acta Arith.}, volume={42}, date={1983}, number={2}, pages={197--218}, issn={0065-1036}, } \bib{FoIw}{article}{ author={Fouvry, {\'E}tienne}, author={Iwaniec, Henryk}, title={The divisor function over arithmetic progressions}, note={With an appendix by Nicholas Katz}, journal={Acta Arith.}, volume={61}, date={1992}, number={3}, pages={271--287}, issn={0065-1036}, } \bib{FKM1}{article}{ author={Fouvry, {\'E}.}, author={Kowalski, E.}, author={Michel, Ph.}, title={Algebraic twists of modular forms and Hecke orbits}, journal={GAFA}, volume={25}, note={\url{arXiv:1207.0617}}, date={2015}, number={2}, pages={580--657}, } \bib{MRL}{article}{ author={Fouvry, {\'E}.}, author={Kowalski, E.}, author={Michel, Ph.}, title={Counting sheaves using spherical codes}, journal={Math. Res. Lett.}, volume={20}, date={2013}, number={2}, pages={305--323}, } \bib{FKMSP}{article}{ author={Fouvry, {\' E}.}, author={Kowalski, E.}, author={Michel, Ph.}, title={A study in sums of products}, journal={Philos. Trans. A}, volume={373}, date={2015}, number={2040}, pages={20140309, 26pp.}, note={\url{arXiv:1304.3199}}, } \bib{FKM2}{article}{ author={Fouvry, {\'E}.}, author={Kowalski, E.}, author={Michel, Ph.}, title={Algebraic trace functions over the primes}, journal={Duke Math.
J.}, volume={163}, number={9}, pages={1683--1736}, date={2014}, note={\url{arXiv:1211.6043}}, } \bib{FKM3}{article}{ author={Fouvry, {\'E}.}, author={Kowalski, E.}, author={Michel, Ph.}, title={On the exponent of distribution of the ternary divisor function}, journal={Mathematika}, note={\url{arXiv:1304.3199}}, date={2015}, volume={61}, number={1}, pages={121--144}, } \bib{FoMi}{article}{ author={Fouvry, {\'E.}}, author={Michel, {Ph.}}, title={Sur certaines sommes d'exponentielles sur les nombres premiers}, journal={Ann. Sci. \' Ecole Norm. Sup. (4)}, volume={31}, number={1}, date={1998}, pages={93--130}, } \bib{FouMiAnn}{article}{ author={Fouvry, \'E.}, author={Michel, Ph.}, title={Sur le changement de signe des sommes de Kloosterman}, journal={Ann. of Math. (2)}, volume={165}, date={2007}, number={3}, pages={675--715}, } \bib{FKMRRS}{article}{ author={Fouvry, {\'E}tienne}, author={Kowalski, Emmanuel}, author={Michel, Ph.}, author={Raju, C. S.}, author={Rivat, J.}, author={Soundararajan, K.}, title={On short sums of trace functions}, note={\url{arXiv:1508.00512}}, journal={Ann. Inst. Fourier (Grenoble)}, date={2017}, volume={167}, number={1}, pages={423--449}, } \bib{FrIw}{article}{ author={Friedlander, J.B.}, author={Iwaniec, H.}, title={Incomplete Kloosterman sums and a divisor problem}, note={(with an appendix by B. J. Birch and E. Bombieri)}, journal={Ann. of Math. (2)}, volume={121}, date={1985}, number={2}, pages={319--350}, } \bib{GYP}{article}{ author={Goldston, Daniel A.}, author={Pintz, J\'anos}, author={Y\i ld\i r\i m, Cem Y.}, title={Primes in tuples. I}, journal={Ann. of Math. (2)}, volume={170}, date={2009}, number={2}, pages={819--862}, issn={0003-486X}, } \bib{Gran}{article}{ author={Granville, Andrew}, title={Primes in intervals of bounded length}, journal={Bull. Amer. Math. Soc. (N.S).}, volume={52}, date={2015}, number={2}, pages={171--222}, issn={0273-0979}, } \bib{HSBT}{article}{ author={Harris, Michael}, author={Shepherd-Barron, Nick}, author={Taylor, Richard}, title={A family of Calabi-Yau varieties and potential automorphy}, journal={Ann. of Math. (2)}, volume={171}, date={2010}, number={2}, pages={779--813}, issn={0003-486X}, } \bib{HBPatt}{article}{ author={Heath-Brown, D. R.}, author={Patterson, S. J.}, title={The distribution of Kummer sums at prime arguments}, journal={J. Reine Angew. Math.}, volume={310}, date={1979}, pages={111--130}, issn={0075-4102}, } \bib{HBActa}{article}{ author={Heath--Brown, D.R.}, title={The divisor function $d_3(n)$ in arithmetic progressions}, journal={Acta Arith.}, date={1986}, volume={47}, pages={29--56}, label={HB86}, } \bib{IT}{article}{ author={Ichino, Atsushi}, author={Templier, Nicolas}, title={On the Vorono\u\i \ formula for ${\rm GL}(n)$}, journal={Amer. J. Math.}, volume={135}, date={2013}, number={1}, pages={65--101}, issn={0002-9327}, } \bib{IrvingIMRN}{article}{ author={Irving, Alastair}, title={The divisor function in arithmetic progressions to smooth moduli}, journal={Int. Math. Res. Not. IMRN}, date={2015}, number={15}, pages={6675--6698}, issn={1073-7928}, } \bib{IrvingIMRN2}{article}{ author={Irving, Alastair}, title={Estimates for character sums and Dirichlet $L$-functions to smooth moduli}, journal={Int. Math. Res. Not.
IMRN}, date={2016}, number={15}, pages={4602--4633}, issn={1073-7928}, } \bib{IwTopics}{book}{ author={Iwaniec, Henryk}, title={Topics in classical automorphic forms}, series={Graduate Studies in Mathematics}, volume={17}, publisher={American Mathematical Society, Providence, RI}, date={1997}, pages={xii+259}, isbn={0-8218-0777-3}, } \bib{IwKo}{book}{ author={Iwaniec, H.}, author={Kowalski, E.}, title={Analytic number theory}, publisher={American Mathematical Society Colloquium Publications, American Mathematical Society}, volume={53}, address={Providence, RI}, date={2004}, } \bib{IS1}{article}{ author={Iwaniec, Henryk}, author={Sarnak, Peter}, title={The non-vanishing of central values of automorphic $L$-functions and Landau-Siegel zeros}, journal={Israel J. Math.}, volume={120}, date={2000}, number={part A}, part={part A}, pages={155--177}, issn={0021-2172}, } \bib{IS2}{article}{ author={Iwaniec, H.}, author={Sarnak, P.}, title={Dirichlet $L$-functions at the central point}, conference={ title={Number theory in progress, Vol. 2}, address={Zakopane-Ko\'scielisko}, date={1997}, }, book={ publisher={de Gruyter, Berlin}, }, date={1999}, pages={941--952}, } \bib{kale}{article}{ author={Kabatjanski{\u\i}, G. A.}, author={Leven{\v{s}}te{\u\i}n, V. I.}, title={Bounds for packings on the sphere and in space}, language={Russian}, journal={Problemy Pereda\v ci Informacii}, volume={14}, date={1978}, number={1}, pages={3--25}, issn={0555-2923}, } \bib{Kar}{article}{ author={Karatsuba, A. A.}, title={Sums of characters with prime numbers}, language={Russian}, journal={Izv. Akad. Nauk SSSR Ser. Mat.}, volume={34}, date={1970}, pages={299--321}, issn={0373-2436}, } \bib{Sommes}{book}{ author={Katz, N. M.}, title={Sommes exponentielles}, series={Ast\'erisque}, volume={79}, publisher={Soci\'et\'e Math\'ematique de France}, address={Paris}, date={1980}, pages={209}, } \bib{GKM}{book}{ author={Katz, N. M.}, title={Gauss sums, Kloosterman sums, and monodromy groups}, series={Annals of Mathematics Studies}, volume={116}, publisher={Princeton University Press}, address={Princeton, NJ}, date={1988}, } \bib{ESDE}{book}{ author={Katz, N. M.}, title={Exponential sums and differential equations}, series={Annals of Mathematics Studies}, volume={124}, publisher={Princeton University Press}, address={Princeton, NJ}, date={1990}, } \bib{Katzbull}{article}{ author={Katz, Nicholas M.}, title={Exponential sums over finite fields and differential equations over the complex numbers: some interactions}, journal={Bull. Amer. Math. Soc. (N.S).}, volume={23}, date={1990}, number={2}, pages={269--309}, issn={0273-0979}, } \bib{KatzRLS}{book}{ author={Katz, N. M.}, title={Rigid local systems}, series={Annals of Mathematics Studies}, volume={139}, publisher={Princeton University Press}, address={Princeton, NJ}, date={1996}, } \bib{MMP}{book}{ author={Katz, N. M.}, title={Moments, monodromy, and perversity: a Diophantine perspective}, series={Annals of Mathematics Studies}, volume={159}, publisher={Princeton University Press}, address={Princeton, NJ}, date={2005}, } \bib{TLM}{book}{ author={Katz, N. 
M.}, title={Twisted L-Functions and Monodromy}, series={Annals of Mathematics Studies}, volume={150}, publisher={Princeton University Press}, address={Princeton, NJ}, date={2005}, } \bib{KatzConvol}{book}{ author={Katz, Nicholas M.}, title={Convolution and equidistribution: Sato-Tate theorems for finite-field Mellin transforms}, series={Annals of Mathematics Studies}, volume={180}, publisher={Princeton University Press, Princeton, NJ}, date={2012}, pages={viii+203}, isbn={978-0-691-15331-5}, } \bib{KhNg}{article}{ author={Khan, Rizwanur}, author={Ngo, Hieu T.}, title={Nonvanishing of Dirichlet $L$-functions}, journal={Algebra Number Theory}, volume={10}, date={2016}, number={10}, pages={2081--2091}, issn={1937-0652}, } \bib{KirZhou}{article}{ author={K\i ral, Eren Mehmet}, author={Zhou, Fan}, title={The Voronoi formula and double Dirichlet series}, journal={Algebra Number Theory}, volume={10}, date={2016}, number={10}, pages={2267--2286}, issn={1937-0652}, } \bib{Kloost}{article}{ author={Kloosterman, H. D.}, title={On the representation of numbers in the form $ax^2+by^2+cz^2+dt^2$}, journal={Acta Math.}, volume={49}, date={1927}, number={3-4}, pages={407--464}, issn={0001-5962}, } \bib{K}{article}{ author={Kowalski, E.}, title={Families of cusp forms}, conference={ title={Actes de la Conf\'erence ``Th\'eorie des Nombres et Applications''}, }, book={ series={Publ. Math. Besan\c{c}on Alg\`ebre Th\'eorie Nr.}, volume={2013}, publisher={Presses Univ. Franche-Comt\'e, Besan\c{c}on}, }, date={2013}, pages={5--40}, } \bib{KowBBK1}{article}{ author={Kowalski, Emmanuel}, title={Gaps between prime numbers and primes in arithmetic progressions [after Y. Zhang and J. Maynard]}, journal={Ast\'erisque}, number={367-368}, date={2015}, pages={Exp. No. 1084, ix, 327--366}, } \bib{KMS}{article}{ author={Kowalski, Emmanuel}, author={Michel, Ph.}, author={Sawin, Will}, title={Bilinear forms with Kloosterman sums and applications}, journal={Ann. of Math. (2)}, volume={186}, date={2017}, number={2}, pages={413--500}, note={\tt arXiv:1511.01636}, } \bib{KMS2}{article}{ author={Kowalski, Emmanuel}, author={Michel, Ph.}, author={Sawin, Will}, title={Stratification and averaging for exponential sums : bilinear forms with generalized Kloosterman sums}, note={\url{https://arxiv.org/abs/1802.09849}}, date={2018} } \bib{KMVDMJ}{article}{ author={Kowalski, E.}, author={Michel, Ph.}, author={VanderKam, J.}, title={Rankin--Selberg $L$-functions in the level aspect}, journal={Duke Math. Journal}, volume={114}, date={2002}, pages={123--191}, } \bib{laumon87}{article}{ author={Laumon, G.}, title={Transformation de Fourier, constantes d'\'equations fonctionnelles et conjecture de Weil}, language={French}, journal={Inst. Hautes \'Etudes Sci. Publ. Math.}, volume={65}, date={1987}, pages={131--210}, } \bib{Matomaki}{article}{ author={Matom\"aki, Kaisa}, title={A note on signs of Kloosterman sums}, language={English, with English and French summaries}, journal={Bull. Soc. Math. France}, volume={139}, date={2011}, number={3}, pages={287--295}, issn={0037-9484}, } \bib{Maynard}{article}{ author={Maynard, James}, title={Large gaps between primes}, journal={Ann. of Math. (2)}, volume={183}, date={2016}, number={3}, pages={915--933}, } \bib{Mi1}{article}{ author={Michel, Ph.}, title={Autour de la conjecture de Sato-Tate pour les sommes de Kloosterman. I}, journal={Invent. 
Math.}, volume={121}, date={1995}, number={1}, pages={61--78}, issn={0020-9910}, } \bib{MiDMJ}{article}{ author={Michel, Ph.}, title={Minorations de sommes d'exponentielles}, journal={Duke Math. J.}, volume={95}, date={1998}, number={2}, pages={227--240}, issn={0012-7094}, } \bib{MvdK}{article}{ author={Michel, Ph.}, author={VanderKam, Jeffrey}, title={Non-vanishing of high derivatives of Dirichlet $L$-functions at the central point}, journal={J. Number Theory}, volume={81}, date={2000}, number={1}, pages={130--148}, issn={0022-314X}, } \bib{MilSch}{article}{ author={Miller, Stephen D.}, author={Schmid, Wilfried}, title={Automorphic distributions, $L$-functions, and Voronoi summation for ${\rm GL}(3)$}, journal={Ann. of Math. (2)}, volume={164}, date={2006}, number={2}, pages={423--488}, issn={0003-486X}, } \bib{nunes}{article}{ author={Nunes, R. M.}, title={On the least squarefree number in an arithmetic progression}, journal={Mathematika}, volume={63}, date={2017}, number={2}, pages={483--498} } \bib{Polymath8a}{article}{ author={Polymath, D.H.J.}, title={New equidistribution estimates of Zhang type}, journal={Algebra \& Number Theory}, note={{\tt arXiv:1402.0811}}, volume={8}, pages={2067--2199}, number={9}, date={2014}, } \bib{Polymath8b}{article}{ author={Polymath, D. H. J.}, title={Variants of the Selberg sieve, and bounded intervals containing many primes}, journal={Res. Math. Sci.}, volume={1}, date={2014}, pages={Art. 12, 83}, issn={2522-0144}, } \bib{Prachar}{article}{ author={Prachar, Karl}, title={\"Uber die kleinste quadratfreie Zahl einer arithmetischen Reihe}, language={German}, journal={Monatsh. Math.}, volume={62}, date={1958}, pages={173--176}, } \bib{SST}{article}{ author={Sarnak, Peter}, author={Shin, Sug Woo}, author={Templier, Nicolas}, title={Families of $L$-functions and their symmetry}, conference={ title={Families of automorphic forms and the trace formula}, }, book={ series={Simons Symp.}, publisher={Springer, [Cham]}, }, date={2016}, pages={531--578}, } \bib{Serre}{book}{ author={Serre, Jean-Pierre}, title={Local fields}, series={Graduate Texts in Mathematics}, volume={67}, note={Translated from the French by Marvin Jay Greenberg}, publisher={Springer-Verlag, New York-Berlin}, date={1979}, pages={viii+241}, } \bib{Sivak}{article}{ author={Sivak-Fischler, J.}, title={Crible asymptotique et sommes de Kloosterman}, language={French, with English and French summaries}, journal={Bull. Soc. Math. France}, volume={137}, date={2009}, number={1}, pages={1--62}, issn={0037-9484}, } \bib{SoundGYP}{article}{ author={Soundararajan, K.}, title={Small gaps between prime numbers: the work of Goldston-Pintz-Y\i ld\i r\i m}, journal={Bull. Amer. Math. Soc. (N.S).}, volume={44}, date={2007}, number={1}, pages={1--18}, issn={0273-0979}, } \bib{Tay}{article}{ author={Taylor, Richard}, title={Automorphy for some $l$-adic lifts of automorphic mod $l$ Galois representations. II}, journal={Publ. Math. Inst. Hautes \'Etudes Sci.}, number={108}, date={2008}, pages={183--239}, issn={0073-8301}, } \bib{Vaughan}{book}{ author={Vaughan, R. C.}, title={The Hardy--Littlewood method}, series={Cambridge Tracts in Mathematics}, volume={125}, edition={2}, publisher={Cambridge University Press, Cambridge}, date={1997}, pages={xiv+232}, isbn={0-521-57347-5}, } \bib{Weil0}{article}{ author={Weil, Andr\'e}, title={On the Riemann hypothesis in function fields}, journal={Proc. Nat. Acad. Sci. U. S.
A.}, volume={27}, date={1941}, pages={345--347}, issn={0027-8424}, } \bib{WuPing}{article}{ author={Wu, J.}, author={Xi, P.}, title={ Arithmetic exponent pairs for algebraic trace functions and applications}, note={\url{https://arxiv.org/abs/1603.07060}}, date={2016} } \bib{Xi}{article}{ author={Xi, Ping}, title={Sign changes of Kloosterman sums with almost prime moduli}, journal={Monatsh. Math.}, volume={177}, date={2015}, number={1}, pages={141--163}, issn={0026-9255}, } \bib{Xi2}{article}{ author={Xi, Ping}, title={Sign changes of Kloosterman sums with almost prime moduli, II}, journal={IMRN}, volume={2016}, date={2016}, number={00}, pages={1--28}, } \bib{Young}{article}{ author={Young, {M.}{P.}}, title={The fourth moment of Dirichlet $L$-functions}, journal={Ann. of Math. (2)}, pages={1--50}, date={2011}, volume={173}, number={1}, } \bib{YZhang}{article}{ author={Zhang, Yitang}, title={Bounded gaps between primes}, journal={Ann. of Math. (2)}, volume={179}, date={2014}, number={3}, pages={1121--1174}, } \bib{sga4h}{book}{ author={Deligne, P.}, title={Cohomologie \'etale}, series={Lecture Notes in Mathematics}, volume={569}, note={S\'eminaire de G\'eom\'etrie Alg\'ebrique du Bois-Marie (SGA 4${\textstyle{\frac{1}{2}}}$)}, publisher={Springer-Verlag}, address={Berlin-New York}, date={1977}, pages={iv+312pp}, label={SGA4${\textstyle{\frac{1}{2}}}$}, } \end{biblist} \end{bibdiv} \end{document}
\section{Introduction} The $b\rightarrow s$ flavour-changing neutral-current transition is suppressed in the standard model. Dominant contributions to rare $B$ decays such as $B\rightarrow K^{(*)}l^+l^-$ come from loop diagrams: box and penguin diagrams. These rare $B$ decays are good windows for looking for new physics: new particles beyond the standard model could appear in the loops and change the decay widths of those rare decays. The starting point of theoretical calculations of weak decays of hadrons is the effective weak Hamiltonian. In the standard model, there are ten operators in the effective Hamiltonian for radiative and semileptonic decays. The dominant short-distance contributions are from the effective local operators $Q_7$, $Q_9$ and $Q_{10}$, which come from the penguin and box diagrams. In quantum chromodynamics (QCD), quarks are confined in color singlets. The $b\rightarrow s$ transition happens inside hadrons. Therefore the matrix elements of the above three local operators have to be computed using non-perturbative methods, for example, lattice QCD. Those matrix elements can be parametrized by form factors according to their Lorentz structures. In total, there are ten form factors for the quark currents in $Q_7$, $Q_9$, and $Q_{10}$, and our aim is to calculate these on the lattice with dynamical simulations. More details of our calculation strategy and our definitions of the form factors can be found in Ref.~\cite{Liu:2009dj}. Here we update our progress in the extraction of the form factors. In the next section, we present our lattice setup, and then we show some preliminary results in the last section. \section{Lattice setup} We use configurations from the MIMD\footnote{Multiple Instruction stream, Multiple Data stream.} Lattice Computation (MILC) Collaboration, which are $2+1$ flavour dynamical simulations using $\mathcal{O}(a^2)$ and tadpole improved staggered fermions (AsqTad)~\cite{Bazavov:2009bb}. \begin{table}[hb] \begin{center} \begin{tabular}{cccccc} \hline \hline & $a$(fm) & $am_{sea}$ & Volume & $N_{conf}\times N_{src}$ & $am_{val}$ \\ \hline coarse & $\sim$0.12 & $0.007/0.05$ & $20^3\times64$ & $2109\times8$ & $0.007/0.04$ \\ & & $0.02/0.05$ & $20^3\times64$ & $2052\times8$ & $0.02/0.04$ \\ \hline fine & $\sim$0.09 & $0.0062/0.031$ & $28^3\times96$ & $1910\times8$ & $0.0062/0.031$ \\ \hline \hline \end{tabular} \caption{Parameters of lattices being used in this study. $N_{src}$ is the number of point sources used on each configuration.} \label{tab:lat_parameters} \end{center} \end{table} We currently have data from two lattice spacings. At the coarse lattice spacing we have two different light quark masses. The lightest quark mass gives a pion mass of about 300 MeV. The parameters of our calculation are collected in Table~\ref{tab:lat_parameters}. On each configuration, eight point sources are used to increase statistics. In~\cite{Liu:2009dj}, we found that $Z_2\times Z_2$ random wall sources were inefficient at reducing the statistical errors of correlators for vector mesons and heavy-light mesons (random wall source methods allow one to approximately obtain all-to-all correlators and thus possibly to reduce statistical errors). Therefore we now use several point sources. The light valence quarks are AsqTad fermions, the same as the sea quarks. For the heavy $b$ quark, we use the (moving-)non-relativistic QCD (NRQCD) action~\cite{Horgan:2009ti}, which is expanded up to and including $\mathcal{O}(\Lambda^2_{QCD}/m_b^2)$.
We can work directly at the physical $b$ quark mass, so no extrapolation up to $m_b$ is required. In the lattice heavy-light currents, the heavy quark expansion includes terms through order $1/m_b$. We compute 2-point functions for the heavy-light $B$ meson and the light-light final state mesons as well as 3-point functions with the current operators inserted. Then we fit these correlation functions with the Bayesian fitting method~\cite{Lepage:2001ym}. From the fitted ground state energies and amplitudes, one can extract the matrix elements of the current operators and then the form factors. The detailed formulas can be found in Refs.~\cite{Liu:2009dj,Meinel:2008th}. Matching factors of the lattice current operators to the $\overline{\rm MS}$ scheme were computed perturbatively, to one loop, in Ref.~\cite{Muller:2010kb}. We set $\alpha_s=0.3$ below to obtain the values of these matching factors. Previous lattice calculations of the above form factors were all quenched, and they required an extrapolation of the heavy quark up to the physical $b$ quark mass. See, e.g.,~\cite{Becirevic:2006nm} and the references therein. \section{Preliminary results} \begin{figure}[p] \centering \includegraphics[height=2in,width=0.48\linewidth]{Aee_B2K_q2max} \includegraphics[height=2in,width=0.48\linewidth]{MB} \caption{Comparison of separate Bayesian fitting and simultaneous Bayesian fitting to heavy-light 2-point (hl2pt) and heavy-light 3-point (hl3pt) functions for the ground state amplitude in a 3-point function (left graph) and the $B$ meson energy $aE_B^{lat}$ (right graph) in lattice units. The horizontal axis is the number of (ground and excited) states in the fitting functions.} \label{fig:Aee_MB} \end{figure} We tried both separate fitting and simultaneous fitting of the 2- and 3-point functions. The simultaneous fitting, where common energy parameters are used, gives better results with smaller statistical errors. Examples are shown in Fig.~\ref{fig:Aee_MB} for the ground state amplitude in a 3-point function for $B\rightarrow K$ and the $B$ meson energy (in lattice units) obtained from different fits. Note that a mass-shift term needs to be added for the $B$ meson energy due to the use of (moving-)NRQCD. In Fig.~\ref{fig:Aee_MB} the horizontal axis is the number of (ground and excited) states in the Bayesian fitting functions. An equal number of opposite-parity states is also included in the fitting functions due to the use of staggered fermions. The fitting results stabilize when more than 6 normal states and 6 opposite-parity states are used. \begin{figure} \begin{minipage}[t]{.48\linewidth} \centerline{\includegraphics[width=\linewidth,height=2in]{B2K_fplus_f0_fT_q2}} \caption{Preliminary results for the form factors $f_0$, $f_+$ and $f_T$ for $B\rightarrow K$ decays, obtained from simultaneous Bayesian fits. The left-most points have $\vec v=(0,0,0)$, $\vec p_B=0$ and $\vec p'_K=2\pi/L\cdot(-2,0,0)$.} \label{fig:B_to_K_f0_fplus_fT} \end{minipage} \hfill \begin{minipage}[t]{.48\linewidth} \centerline{\includegraphics[width=\linewidth,height=2in]{B2Kstar_T2_T1_q2}} \caption{Preliminary results for the form factors $T_1$ and $T_2$ for $B\rightarrow K^*$ decays, obtained from simultaneous Bayesian fits.
The left-most points have $\vec v=(0,0,0)$, $\vec p_B=0$ and $\vec p'_{K^*}=2\pi/L\cdot(-2,0,0)$.} \label{fig:B_to_Kstar} \end{minipage} \end{figure} Preliminary results for some form factors for $B\rightarrow K$ and $B\rightarrow K^*$ are given in Fig.~\ref{fig:B_to_K_f0_fplus_fT} and Fig.~\ref{fig:B_to_Kstar}, respectively. They are from the coarse lattice with light valence quark masses $0.007/0.04$. The $1/m_b$ corrections to the currents are not included yet, but will be soon. A first fit to the 3-point functions shows that these corrections are small. For the other quark mass and for the fine lattice spacing, data analysis is ongoing. We will have more data points at lower $q^2$ after we analyze the correlators with non-zero velocity $\vec v$ for the $B$ meson in the moving NRQCD action. For the radiative decay $B\rightarrow \gamma K^*$, we need the form factor $T(0)=T_1(0)=T_2(0)$ to obtain the branching fraction. To extrapolate our results to the $q^2=0$ limit, we need some ansatz. In Fig.~\ref{fig:T_zero} we perform an extrapolation using the B-K ansatz~\cite{Becirevic:1999kt} \begin{equation} T_1(q^2)=\frac{T(0)}{(1-\tilde q^2)(1-\alpha\tilde q^2)},\quad T_2(q^2)=\frac{T(0)}{1-\tilde q^2/\beta},\quad \tilde q^2=q^2/M^2_{B^*_s}, \label{eq:BK_ansatz} \end{equation} with $M_{B^*_s}=5.4158$ GeV fixed. \begin{figure}[htb] \centering \includegraphics[height=2in,width=0.48\linewidth]{T1_T2_4_dat_BK_ansatz_fix_M_Bs_star} \caption{Extrapolation of $T_1$ and $T_2$ to $q^2=0$ using the B-K ansatz in Eq.~(\ref{eq:BK_ansatz}).} \label{fig:T_zero} \end{figure} This result is only preliminary; we obtain $T(0)=0.168(29)$. The statistical errors will be reduced when more data points at low $q^2$ are included. Eventually we need to extrapolate to the physical pion mass and the continuum limit to compare directly with experiment. In a recent paper by H.~Na et al.~\cite{Na:2010uf} on $D\rightarrow K$ form factors, the extrapolations in $q^2$, the light quark mass and the lattice spacing were done simultaneously using a ``simultaneous modified $z$-expansion". We may try this method in the future.
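As a concrete illustration of how the extrapolation in Eq.~(\ref{eq:BK_ansatz}) can be set up, a minimal Python sketch using \verb|scipy.optimize.curve_fit| is given below. The data points, errors and starting values are synthetic placeholders (not our lattice results), and a full analysis would fit $T_1$ and $T_2$ simultaneously with a shared $T(0)$ and correlated errors.
\begin{verbatim}
import numpy as np
from scipy.optimize import curve_fit

M_BS_STAR = 5.4158  # GeV, the fixed B_s* mass entering qtilde^2 = q^2/M^2

def T2_ansatz(q2, T0, beta):
    # B-K ansatz: T2(q^2) = T(0) / (1 - qtilde^2/beta)
    qt2 = q2 / M_BS_STAR**2
    return T0 / (1.0 - qt2 / beta)

# Synthetic placeholder data: (q^2 [GeV^2], T2, error) -- illustrative only.
q2  = np.array([12.0, 14.0, 16.0, 18.0])
t2  = np.array([0.32, 0.38, 0.47, 0.60])
dt2 = np.array([0.03, 0.03, 0.04, 0.05])

popt, pcov = curve_fit(T2_ansatz, q2, t2, p0=[0.17, 1.0],
                       sigma=dt2, absolute_sigma=True)
print("T(0) = %.3f +/- %.3f" % (popt[0], np.sqrt(pcov[0, 0])))
\end{verbatim}
The same pattern applies to $T_1$ with the two-parameter denominator $(1-\tilde q^2)(1-\alpha\tilde q^2)$; constraining both fits to a common $T(0)$ is what produces a single extrapolated value like the one quoted above.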
\section{INTRODUCTION} \begin{figure} \centering \includegraphics[scale=0.5]{Figure1.jpg} \caption{An image of a truncated magnetoelastic membrane in a precessing magnetic field. The degree of truncation $S=h/2R$, where $h$ is the sagitta length of the removed circular segment, and $R$ is the membrane radius, determines membrane symmetry. The magnetic field $\vec{\bm{H}}$ precesses at the angle $\theta$ around the $z$-axis with a phase given by $\phi=\omega t$, where $\omega$ is the precession frequency and $t$ is time. The field induces an amplitude, $A$, along the membrane perimeter, measured from the $x$-$y$ plane. Coloration indicates z-position (+z in red and -z in blue).} \label{fig:fig1} \end{figure} \begin{figure*} \centering \includegraphics[scale=0.62]{Figure2.jpg} \caption{A circular magnetoelastic membrane in a precessing magnetic field adopts dynamic motion. (a) Transverse waves propagate around the membrane above a critical frequency ($\omega > \omega_c$). (b) A schematic plot showing the phase diagram of a membrane. Above the dotted black curve, the ``wobbling" membrane remains perpendicular to the precession axis and possesses the rotational waves from (a). The wave amplitude maximizes just before the transition. Below this curve, the membrane buckles and rotates asynchronously with the field, hence ``dancers". (c) The bending stiffness controls the shape of the rotational waves. The black arrows indicate the direction of wave propagation. Coloration indicates z-position (+z in red and -z in blue).} \label{fig:fig2} \end{figure*} Magnetically controlled microrobots have applications in drug delivery \cite{klosta2020kinematics,Dreyfus2005micro, jang2015undulatory, bryan2019magneto}, sensing \cite{moerland2019rotating,goubault2003flexible}, micromixing \cite{biswal2004mixing}, detoxification \cite{zhu2015microfish, wu2015nanomotor} and microsurgery \cite{wu2020multifunctional, Vyskocil2020cancer}. Such versatile use of magnetic microrobots is possible because magnetic fields can penetrate organic matter, do not interfere with biological or chemical functions, do not require fuel, and, most importantly, can be externally controlled. These properties allow for non-invasive and precise spatiotemporal execution of desired functions. In particular, superparamagnetic particles are ideal candidates for robotic functions when combined with elastic components \cite{dempster2017contractile,Dreyfus2005micro} due to their lack of residual magnetization, which lowers their propensity to agglomerate, and their lower toxicity compared with ferromagnetic particles \cite{markides2012biocomp}. Magnetoelastic membranes possess a diverse repertoire of possible dynamic motion under time-dependent magnetic fields \cite{brisbois2019actuation,Hu2018small}, making them particularly suited for designing multifunctional microrobots. Navigating a viscous environment requires a magnetoelastic microrobot to use competing magnetic and elastic interactions to induce non-reciprocal motion \cite{purcell1977life}. That is, the sequence of configurations that the robot adopts must break time-reversal symmetry to swim at low Reynolds numbers (Re $\ll 1$). For example, magnetoelastic filaments achieve non-reciprocal motion with a non-homogeneous distribution of magnetic components or with shape asymmetry \cite{Dreyfus2005micro, jang2015undulatory, bryan2019magneto, yang2020reconfig, cebers2005flexible}. These asymmetries induce bending waves that propagate along the chain.
In nature, microscopic organisms such as euglenids \cite{Arroyo2012reverse} swim using self-propagating waves directed along their cellular membrane. G. I. Taylor was the first to model such organisms using a transverse wave traveling along an infinite 2D sheet \cite{taylor1951analysis}. Taylor found that the wave induced motion in the sheet opposite to the propagating wave direction. Subsequent works expanded upon Taylor's findings \cite{lauga2009hydro} and developed a rotational counterpart \cite{Corsi2019neutrally} that produces a hydrodynamic torque on circular membranes with prescribed waves traveling around their perimeter. In this article, we study rotational waves in homogeneous superparamagnetic membranes under precessing magnetic fields. We investigate the dynamic modes of the membrane, separated by a critical precession frequency, $\omega_c$, below which the membrane motion is asynchronous with the field, and above which rotational waves propagate in phase with the field precession. Breaking the membrane's center of inversion symmetry, by removing part of the circle (Fig. 1), allows for locomotion in the fast-frequency phase ($\omega > \omega_c$). Shape asymmetry is needed to disrupt the inversion symmetry of the magnetic forces experienced by a circular membrane. We show that the torque and velocity of the membrane counterintuitively resemble the linear Taylor sheet rather than its rotational analogue. Furthermore, by controlling a magnetoviscous parameter and the membrane shape asymmetry, we demonstrate swimming directed by a programmed magnetic field and diagram its non-reciprocal path through conformation space. The paper is organized as follows. In Sec. II, we establish the phase diagram of a circular magnetoelastic membrane in a precessing magnetic field and determine the transition frequency $\omega_c$. In Sec. III, we introduce hydrodynamic interactions and observe circular locomotion in asymmetric membranes. In Sec. IV, we demonstrate a programmed magnetic field that directs a membrane swimmer along a predetermined path. Finally, we make concluding remarks on the necessary conditions for superparamagnetic swimmers in Sec. V. \section{PHASE SPACE FOR UNTRUNCATED MEMBRANE} \begin{figure} \centering \includegraphics[scale=0.65]{Figure3.jpg} \caption{The synchronous-asynchronous (wobbler-dancer) transition frequency $\omega_c$ for a magnetoelastic membrane. (a) Molecular dynamics calculation of $\omega_c$ as a function of the field precession angle $\theta$. The solid and dashed lines indicate a dipole magnitude of $\mu=$ 2 and $\mu=$ 1, respectively. The inset shows the dimensionless transition frequency $\omega_c/\Omega$, where $\Omega$ is the membrane's characteristic rotation frequency. The green dashed line represents the theoretical transition at $\omega_c/\Omega=2/\pi$, which, near $\theta=$ 90$^\circ$, is independent of bending stiffness ($\kappa=$ 1, orange; $\kappa=$ 100, blue/red). The black squares show the transition calculated from lattice-Boltzmann simulations. (b) Supercritical and subcritical behavior of the total energy U (magnetic + bending). The precession frequency is close to the critical frequency, $0.029 < \omega_c < 0.030$ ($\theta=$ 80$^\circ$). Fourier transform of the rotational wave amplitude (bottom).} \label{fig:fig3} \end{figure} We construct the phase diagram for the dynamic modes of the membrane using molecular dynamics (MD) without hydrodynamics to efficiently search for non-reciprocal actuation relevant to locomotion.
Actuation of magnetoelastic membranes in time-dependent magnetic fields necessitates a model that captures elastic bending in response to magnetic forces, which are imparted by the dipolar interactions of embedded magnetic colloids. The membrane is composed of a hexagonal close-packed monolayer of hard spherical colloids, each of diameter $\sigma$ and possessing a point dipole moment $\bm{\mu}$ at its center. The bonds between the colloids are approximately inextensible, but able to bend with rigidity $\kappa$. We model an implicit, uniform magnetic field by constraining the orientation of the colloids' dipole moments along the direction of the field, $\bm{H}=\bm{\mu}/\chi$, where $\chi$ is the magnetic susceptibility of the material and $\bm{\mu}$ is the dipole moment with magnitude $\mu$. The instantaneous dipole orientation is given by $\bm{\hat{\mu}} = \sin{\theta}\sin{\omega t}~\bm{\hat{i}} + \sin{\theta}\cos{\omega t}~\bm{\hat{j}} + \cos{\theta}~\bm{\hat{k}}$, where $\theta$ is the field precession angle, $\omega$ is the precession frequency, and $t$ is time. All quantities herein are expressed in dimensionless units (see Appendix A). A diverse set of possible dynamic motions develops depending on the radius $R$ of the thin membrane and the magnetic field parameters ($\mu$, $\theta$, $\omega$). While varying these parameters, we solve the equations of motion for an overdamped system with a friction force imparted on each colloid given by $-\xi v(t)$, where $v(t)$ is the colloid velocity and $\xi$ is the damping coefficient. Within this approximation, two dynamic mode regimes develop. At fast frequencies ($\omega > \omega_c$), the membrane motion synchronizes with the field to produce transverse waves that propagate around the membrane (Fig. 2a). At slow frequencies ($\omega < \omega_c$), we observe a collection of modes that are asynchronous with the field. We find a critical frequency, $\omega_c$, where there is an abrupt change in the membrane's dynamic motion (Fig. 2b). As the field precesses, the forces along the membrane perimeter generate internal buckling and create a torque that rotates the membrane around its diameter. If the magnetic field precession is fast ($\omega > \omega_c$), the continuous change in the direction of the axis of rotation leads to the development of a constant-amplitude wave traveling along the membrane perimeter; see Video 1 in Ref. \cite{video1}. On average, the membrane remains perpendicular to the precession axis and simply ``wobbles", synchronous with the field, and with no significant rotation around the precession axis. This state closely resembles acoustically levitated granular rafts \cite{lim2021acoustically}. The direction of the propagating wave matches the handedness of precession because the dipole-dipole forces, which cause buckling, point in the direction of the magnetic field. However, the field polarity does not affect the magnitude or travel direction of the wave since the superparamagnetic dipoles are always oriented in the same direction as the field. Hence, the force due to the dipole-dipole interactions, ${\bm{F}}_{dipole}$, remains unchanged (${\bm{F}}_{dipole} \propto (\bm{\mu} \cdot \bm{r})\bm{\mu} = (-\bm{\mu} \cdot \bm{r})(-\bm{\mu})$, where $\bm{r}$ is the displacement vector between dipoles \cite{yung1998analytical}). In addition to the rotational waves, the wobbling mode also manifests radially propagating (inward) bending waves (Fig. 2c) that terminate at the membrane center.
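To make the driving protocol concrete, the following minimal Python sketch (an illustration in the dimensionless units used here; the parameter values are arbitrary placeholders, and the elastic bonds and hard-core repulsion are omitted for brevity) evolves a pair of dipolar colloids under the overdamped dynamics described above.
\begin{verbatim}
import numpy as np

theta, omega, xi, dt = np.deg2rad(70.0), 0.1, 1.0, 1e-2  # placeholders

def mu_hat(t):
    # Dipole orientation constrained to follow the precessing field.
    return np.array([np.sin(theta) * np.sin(omega * t),
                     np.sin(theta) * np.cos(omega * t),
                     np.cos(theta)])

def dipole_force(rvec, mhat, mu=1.0):
    # Force on the second dipole from the first (separation rvec, both
    # moments mu*mhat); the prefactor 3*mu0/(4*pi) is absorbed into units.
    r = np.linalg.norm(rvec)
    rhat = rvec / r
    c = np.dot(mhat, rhat)
    return (3.0 * mu**2 / r**4) * (2.0 * c * mhat + (1.0 - 5.0 * c**2) * rhat)

x = np.zeros(3)
y = np.array([1.0, 0.0, 0.0])                     # colloids one diameter apart
for n in range(int(2.0 * np.pi / (omega * dt))):  # one precession period
    f = dipole_force(y - x, mu_hat(n * dt))
    x, y = x - f * dt / xi, y + f * dt / xi       # overdamped: xi*v = F
\end{verbatim}
The polarity invariance discussed above is manifest here: \texttt{dipole\_force} is quadratic in the moment, so reversing $\bm{\mu}\rightarrow-\bm{\mu}$ leaves the trajectory unchanged.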
The wave shape weakly depends on the membrane stiffness $\kappa$; the wave form is better defined as $\kappa$ decreases. However, totally compliant membranes ($\kappa\rightarrow 0$) do not transmit bending waves, and therefore this phenomenon exists only for intermediate $\kappa$. \begin{figure*} \centering \includegraphics[scale=0.65]{Figure4.jpg} \caption{Fluid flow around a magnetoelastic membrane in the ``wobbler" regime. The top images show the total force vector for each colloid (blue arrows) alongside the dipole orientation (cyan arrows) for a precessing field ($\mu=$1, $\theta=$ 70$^\circ$, $\omega=$ 0.1). The bottom images show streamlines around the membrane, where the color indicates flow speed (red $>$ blue). (a) A snapshot of a circular membrane. (b) Two snapshots of a truncated circular membrane separated by a shift in the field precession $\Delta\phi=\omega t=6\pi/5$.} \label{fig:fig4} \end{figure*} If the precession is slow ($\omega < \omega_c$), the membrane has enough time to rotate completely parallel to the precession axis and will adopt new configurations due to elastic buckling. How the membrane buckles depends on the magnetoelastic parameter \cite{vazquez2018flexible}, $\Gamma = M L^2 / \kappa$, which characterizes the ratio between the membrane's magnetic and bending energies, where $M$ is the magnetic modulus and $L^2$ is the membrane area. If the magnitude of $\Gamma$ is very small ($\Gamma \ll 1$) or very large ($\Gamma \gg 1$), we observe hard-disk behavior because bending distortions become impossible due to mechanical stiffness or due to unfavorable magnetic interactions, respectively. While not investigated here, strong magnetic coupling \cite{park2020dna,messina2015quant} between colloids will adversely affect membrane synthesis. At intermediate $\Gamma$, membrane edges buckle several times per precession period and produce magnetically stabilized conformations that, while periodic, run out of sync with the field; see Video 2 in Ref. \cite{video2}. Much of this back-and-forth ``dancing" motion is essentially reciprocal and is therefore generally a poor candidate for studying swimming at small Re. Therefore, here we seek to formally define $\omega_c$ and focus on the wobbling regime ($\omega > \omega_c$). To accurately determine the transition frequency $\omega_c$ that separates the wobblers from the dancers, we investigate how the magnetic field parameters (precession angle $\theta$, dipole magnitude $\mu$) and membrane radius $R$ (Fig. 3a) contribute to the characteristic response time $\tau$ of the rotating membrane. When the membrane rotation time $\tau$ increases, a slower field is necessarily required to keep the membrane in the wobbling mode, decreasing $\omega_c$. A larger $\tau$ can be achieved by weakening the magnetic torque ($\theta$ closer to $\pi/2$ or smaller $\mu$) or increasing the drag on the membrane (larger $R$). Similarly, a smaller $\tau$ implies a fast membrane response from a strong field or a small membrane. We observe that $\omega_c$ diverges as $\theta$ approaches the magic angle, partly due to instability of the wobbling phase at angles below the magic angle \cite{cimurs2013dynamics}. The transition to the wobbling state is characterized by the abrupt shift in the membrane's potential energy from a time-dependent function to a constant value (Fig. 3b, top). When the potential energy does not change, this implies that the shape of the membrane conformation becomes invariant in the rotating field reference frame.
This change in the dynamic buckling results in a single Fourier mode for the displacement of the colloids parallel to the precession axis (Fig. 3b, bottom). This resembles the transition between the synchronous and asynchronous motion for oblate magnetic particles \cite{cimurs2013dynamics}. \begin{figure*} \centering \includegraphics[scale=0.65]{Figure5.jpg} \caption{Actuation drives circular locomotion of truncated magnetoelastic membranes through a viscous fluid. (a) The average rotational (``wobble") wave amplitude $A_{avg}$, scaled by the membrane radius $R$, depends inversely on the magnetoviscous parameter $\tau \omega$. Data points from lattice Boltzmann simulations are compared to our analytical model (solid blue line). The coloration of the simulation data notes the degree of truncation $S$. The inset shows the variation in $A/\sigma$ over time based on membrane geometry ($S=$ 0.05, black; $S=$ 0.5, gray), where $\sigma$ is the colloid diameter. (b) The path taken by a membrane in a precessing field. The arrow indicates the travel direction with velocity $V$. The inset shows the radius $\rho$ of this path as a function of $S$. (c) The membrane velocity is proportional to $A_{avg}^2 \propto (\tau \omega)^{-2}$ and scales with $S^{3/2}$ due to changes in the length of the membrane perimeter. The data point shapes are coded by the membrane radius ($R=$ 7, triangle; $R=$ 9, square; $R=$ 12, circle). The line shows our analytical prediction (slope = 1.0). The inset shows the continuous inversion symmetry measure for a flat truncated membrane.} \label{fig:fig5} \end{figure*} When the precession angle approaches $\pi/2$, the membrane motion becomes independent of the stiffness of the membrane; the membrane remains flat at all times and for all values of $\omega$. As the field precesses, the forces perpendicular to the membrane plane vanish near $\theta = \pi/2$, preventing significant radial bending, and, consequently, changing $\kappa$ does not shift $\omega_c$ (Fig. 3a, inset). By solving an Euler-Lagrange equation with Rayleigh dissipation (Appendix B), we derive an equation of motion for a membrane in a field precessing at a large angle. It reveals a characteristic frequency of membrane motion, $\Omega = 6 \zeta(3)\mu_0\mu^2\sin{2\theta} / (\pi^2 \eta R^2 \sigma^4)$, where $\mu_0$ is the magnetic permeability of free space, $R$ is the radius of the membrane, $\eta$ is the viscosity, and $\zeta(x)$ is the Riemann zeta function. The frequency $\Omega$ comes from the magnetic ($\propto \mu_0\mu^2\sin{2\theta} R^2/\sigma^5$) and drag ($\propto \eta R^4/\sigma$) potential functions. The $\omega_c$ curves in Fig. 3a can be scaled by $\Omega$ to obtain a dimensionless transition frequency $\omega_c / \Omega = 2/\pi$ (Fig. 3a, inset). This provides a single number with which to predict the dynamic motion of a membrane and defines the membrane response time $\tau = \Omega^{-1}$. \section{HYDRODYNAMIC EFFECTS ON ``WOBBLING" MEMBRANES} It is useful to investigate the broad range of dynamic motions accessible to a magnetoelastic membrane using a simple overdamped system to highlight relevant transitions in motion. Afterwards, we confirm the dimensionless transition $\omega_c / \Omega$ using the more computationally expensive hydrodynamic simulations (Fig. 3a, black squares) and change the characteristic frequency $\Omega = 27 \zeta(3)\mu_0 \mu^2\sin{2\theta}/(64\eta R\sigma^5)$ to include hydrodynamic interactions (see Appendix B).
Using the same magnetic potential as the previous section, this change in $\Omega$ is due to the torque on the membrane in a viscous fluid ($\propto \eta R^3$). We will use this definition for $\Omega$ hereafter. To observe the effect of the wobbler's non-reciprocal motion on the surrounding fluid, we add hydrodynamic interactions to our simulations by coupling the MD model to the lattice Boltzmann method (LBM) \cite{Mackay2013hydrodynamic}. This technique, which comes from a discretization of the Boltzmann transport equation, reproduces the incompressible Navier-Stokes equation in the macroscopic limit. LBM calculates the evolution of a discrete-velocity distribution function, $f_i$, at each fluid node that fills the simulation box on a square lattice mesh with a spacing of $\Delta x$. The surface of each colloid acts as a boundary and is defined by surface nodes that interact with the fluid using the model developed by Peskin \cite{peskin2002immersed}. Care must be taken when implementing LBM with MD because compliant springs can cause translation of the membrane due to in-plane stretching. This mechanism has been observed in systems of a few colloids \cite{Grosjean2018surface}. Therefore, the stiffness of the springs must be large enough to eliminate this effect for an inextensible membrane model, which requires the use of a smaller simulation timestep. Simultaneously, small Re in LBM is achieved by decreasing the Mach number, set by the speed of sound $c_s=\frac{1}{\sqrt3}\frac{\Delta x}{\Delta t}$ \cite{kruger}. Therefore, we rely on a small timestep that is compatible with both schemes. See Appendix C for a complete description of the model. The fluid flow around the membrane is determined by its symmetry and actuation. The precessing magnetic field induces a torque along the membrane perimeter that circulates fluid around an axis of rotation through the membrane diameter (Fig. 4a). This axis of rotation moves continuously with the field, producing circulating flows above and below the membrane. The rising peaks push the fluid up (+z), and it flows towards the falling peak (-z) on the other side of the membrane. This flow simultaneously resembles analytical predictions for rotating hard disks \cite{tanzosh1996general} and the flow vorticity from Taylor's swimming sheet \cite{lauga2009hydro}. \begin{figure*} \includegraphics[scale=0.5]{Figure6.jpg} \caption{A two-step magnetic field directs a swimming membrane along a path. (a) First, a membrane wobbler moves under a precessing magnetic field. After it rotates a half-turn (\#1), the precession switches to a fast frequency at $\theta=\pi/2$ while the axis rotates to flip the membrane (\#2). (b) We define the angles that the normal vector $\bm{n}$ and the truncation vector $\bm{S}$ make with the $x$-axis as $\zeta_n$ and $\zeta_S$, respectively. (c) The path in conformation space over the two-step field. (d) Repeated cycles from (a) move the membrane against the Brownian motion of a thermalized fluid. The upper panel shows the motion of the membrane in the $x$-$y$ plane. The black arrow indicates the direction of motion. The lower panel shows the displacement in the $z$-direction.} \label{fig:fig6} \end{figure*} The centrosymmetry of a magnetoelastic membrane generates a flow that prevents its center of mass from translating. To induce locomotion, we truncate the membrane by removing a circular segment with a sagitta of length $h$.
We normalize $h$ by the diameter of the circle to define the degree of truncation of the circular membranes as $S = h/2R$. In contrast to the circular membrane case, the shape of the fluid flow in the truncated membrane changes during a single precession period, leading to an asymmetric flow field depending on the relative orientation between the field and the truncation cut (Fig. 4b). The amplitude of propagating waves is particularly relevant for predicting the translational \cite{taylor1951analysis} or rotational \cite{Corsi2019neutrally} velocity of a membrane. Here, the wobble amplitude can be calculated by balancing the magnetic \cite{yung1998analytical} and drag \cite{tanzosh1996general} torques in a viscous fluid (see Appendix D). Under small amplitudes for the rotational wave, we obtain the simple relation \begin{equation} \frac{A}{R}=\frac{C}{\tau\omega} \label{eq:amplitude} \end{equation} where $A$ is the amplitude, and $C=32/(9\pi^2)$ (Fig. 5a). It is reasonable to assume that, in the limit of small deformations, the bending contribution to the torque along the edge is negligible, unless $\kappa \rightarrow \infty$. Furthermore, the amplitude is independent of the membrane size since $\tau \propto R$. However, the membrane is not free to increase in radius arbitrarily. The small Re condition implies that $\nu \gg R^2/\tau$, where $\nu$ is the kinematic viscosity. Obeying this constraint on $\tau$, we can define a magnetoviscous parameter $\tau \omega$ and use it to predict locomotion of the membrane. Asymmetry in the fluid flow due to a degree of truncation $S>0$ leads to locomotion of the membrane. The membrane travels with a net velocity in the direction of the truncation cut. This net motion is due to the decrease in the amplitude of the waves traveling along the truncated edge. Since the truncated edge is closer to the center of mass and $\kappa$ is homogeneous, the membrane will bend to a lesser extent along the truncation. This manifests as a net motion every precession period $2\pi/\omega$, and reversing the handedness of the field reverses the locomotive direction. However, the membrane follows a curved path. While the membrane torque due to the underlying colloidal lattice is negligible, the membrane can rotate significantly by choosing $\omega$ close to $\omega_c$. This rotation emerges exclusively due to the magnetic interactions perpendicular to the wobbling membrane. If the projection of the forces, visualized in Fig. 4, on the $x$-$y$ plane is non-zero, the membrane will rotate. Over many precession periods, the membrane moves in a circular path around a central point (Fig. 5b). The radius $\rho$ of the path depends on $S$ and $A_{avg}$. Untruncated ($S=0$) and fully truncated ($S=1$) membranes do not translate and result in $\rho=0$. Hence, a maximum for $\rho$ exists at intermediate $S$ values (Fig. 5b, inset). Since the membrane is composed of colloids, irregularities in the $\rho \left(S,A_{avg}\right)$ curve appear because the symmetry of the membrane changes in discrete steps. The magnetic field controls how quickly the membrane travels along the circular path and affects its angular velocity. Together with the truncation $S$, the velocity $V$ at which the membrane translates along the path can be determined using a singularity method.
With a nearest-neighbors assumption for the magnetic interactions and treating them as point disturbances, the advective flow through the center of mass leads to the velocity \begin{equation} \frac{V}{R\omega} = \frac{C^2}{12 \zeta(3)} \frac{S^{3/2}}{{(\tau\omega)}^2} \label{eq:velocity} \end{equation} where the $S^{3/2}$ dependence comes from the number of uncompensated point forces formed by truncation. The velocity is normalized by the phase speed $R\omega$. The inverse-squared relation on $\tau \omega$ for the velocity is a result of the dependence on the product of the magnetic force and the wave amplitude, which itself depends on the magnetic force. Here, we recover the velocity dependence on the square of the wave amplitude \cite{taylor1951analysis}, but with a lower velocity ($V \leq V_{Taylor}/6$). Appendix E contains the full derivation. We see a deviation from the relationship obtained in Eq. \ref{eq:velocity} at large values of $S^{3/2}/(\tau \omega)^2$, owing to either a high degree of truncation (a linear polymer) or a small magnetoviscous parameter (``dancer") (Fig. 5c). The direction of $V$ is dictated by the handedness of the precessing field and is an example of magnetically induced symmetry breaking. We find that the continuous symmetry measure \cite{zabrodsky1992continuous} can predict relative changes in the velocity of locomotion. When the inversion asymmetry increases, $V$ increases (Fig. 5c, inset) because the conformational path taken by the membrane widens, leading to greater net work done on the fluid \cite{grosjean2016realization}. \section{MEMBRANE SWIMMING} Here, we give an example of how a programmed magnetic field can produce a non-reciprocal conformational path that results in linear swimming. In Fig. 6a, we show that a precessing field can rotate the membrane 180$^\circ$ from its initial configuration. Then, the precession frequency is increased and $\theta$ is set to $\pi/2$. This keeps the membrane flat in the precession plane while the precession axis is rotated to flip the membrane. This field is on for a period of $\pi/\omega_s$ to flip the membrane orientation, where $\omega_s > \omega$. Once the membrane resembles the starting configuration, the two-step field is repeated. After half the orbit from Fig. 5b is obtained, the membrane's center of mass has shifted $\sim 2 \rho$. The ``flip" from the second field places the membrane back into its original configuration. This recovery stroke moves the membrane back towards its original position, but not entirely, leading to a net translation. The chirality and duration of the magnetic field precession control the displacement in the membrane plane, and the flip direction controls the direction of the out-of-plane displacement. The fastest achievable velocity using this method is $V_{max}=(2/\pi)V$, but it is reduced by the time taken during the recovery step. This cycle forms a closed loop in configuration space based on the two independent degrees of freedom that are defined by the angles the normal vector $\bm{n}$ and the truncation vector $\bm{S}$ make with the $x$-axis as $\zeta_n$ and $\zeta_S$, respectively (Fig. 6b). This configuration loop is in addition to the already present non-reciprocal motion of the wobbling mode, but it is needed since the latter only follows circular paths. Thermalizing the LB fluid to 1 $k_BT$, by the method of Adhikari et al.
\cite{Adhikari2005fluctuating} for $S^{3/2}/(\tau \omega)^2 \approx$ 10$^{-2}$, shows a swimming membrane as $\zeta_n$ and $\zeta_S$ change (Fig. 6d). In this instance, the path during the rotation step, which changes $\zeta_S$, is dominated by Brownian motion. The largest displacements occur during the flipping step, which changes $\zeta_n$. Additionally, each flip shifts the membrane along the $z$-axis, where the traveling direction is determined by the handedness of the flip. By controlling the precession axis orientation, a membrane may be directed along an arbitrary path. The useful swimming regime is bounded by the P\'{e}clet number and the dimensionless transition frequency. In other words, the system parameters, in particular the field frequency $\omega$, must be large enough to maintain the wobbling mode, but not so large as to attenuate the wobble amplitude below an efficient swimming velocity. In practice, this implies operating at a driving frequency just above $\omega_c$. The range for the frequency can be written as $2\Omega/\pi<\omega<C^2\eta R^3S^{3/2}/\left(\sqrt2\,\zeta(3)\tau^2k_BT\right)$. Here, we calculate the P\'{e}clet number using the swimming velocity from Appendix E, the membrane radius as the characteristic length, and set the diffusion coefficient using the radius of gyration of a disk \cite{capuani2006disks}. For example, a membrane of $R=$ 1 $\upmu$m composed of 25 nm magnetite nanoparticles at $25^{\circ}$C in water subject to a 50 mT field \cite{susan2019from} precessing at 80$^\circ$ gives an effective frequency range of 1--10 kHz. \section{SWIMMING IN HOMOGENEOUS MEMBRANES} Homogeneous superparamagnetic membranes require both non-reciprocal motion and shape asymmetry to swim in viscous fluids. While the Scallop Theorem \cite{purcell1977life} makes the necessity for non-reciprocal motion known, implementing such motion without modifying the elastic or magnetic homogeneity implies using a ``non-reciprocal" magnetic field, where the field vector returns to its starting position without retracing its path. Using a field that does not self-retrace imparts a change in membrane conformation that breaks time-reversal symmetry. However, this type of external magnetic field will still generate centrosymmetric forces within a symmetric membrane. Therefore, shape asymmetry is also needed to displace the membrane center of mass during one precession period, where more asymmetry leads to a larger per-period displacement. \acknowledgements{We would like to acknowledge Mykola Tasinkevych and Eleftherios Kyrkinis for helpful discussions. We thank the Sherman Fairchild Foundation for computational support. This project was funded by the Department of Energy's Center for Bio-Inspired Energy Science (DE-SC0000989).}
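For concreteness, a minimal Python sketch of the two-step drive described in Sec. IV is given below (an illustration only: the way the flip is parameterized and the factor setting the fast in-plane precession rate are assumptions made here, not prescriptions from the simulations).
\begin{verbatim}
import numpy as np

def field_direction(t, omega, omega_s, theta, T_rot):
    # Step 1 (0 <= t < T_rot): precession at angle theta about z.
    if t < T_rot:
        return np.array([np.sin(theta) * np.sin(omega * t),
                         np.sin(theta) * np.cos(omega * t),
                         np.cos(theta)])
    # Step 2 (duration pi/omega_s): in-plane field (theta = pi/2) whose
    # precession plane tilts about x by s: 0 -> pi, flipping the membrane.
    s = omega_s * (t - T_rot)
    phase = 20.0 * omega_s * t   # precession much faster than the flip
    e1 = np.array([1.0, 0.0, 0.0])
    e2 = np.array([0.0, np.cos(s), np.sin(s)])
    return np.cos(phase) * e1 + np.sin(phase) * e2
\end{verbatim}
In this parameterization the plane normal $\bm{e}_1\times\bm{e}_2$ rotates from $+\hat{\bm{z}}$ to $-\hat{\bm{z}}$ as $s$ runs from $0$ to $\pi$, so a membrane that stays flat in the precession plane is flipped by $180^\circ$; repeating the pair of steps traces the closed loop in $(\zeta_S,\zeta_n)$ space shown in Fig. 6.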
\section{Introduction} In real-world scenarios, generic object detection always faces severe challenges from variations in viewpoint, background, object appearance, illumination, occlusion conditions, scene change, etc. These unavoidable factors make object detection under domain shift a challenging and newly rising research topic in recent years. Moreover, domain change is a widely recognized, intractable problem that urgently needs to be addressed in real-world detection tasks, such as video surveillance and autonomous driving. \noindent{\textbf{Revisiting Domain-Shift Object Detection.}} Common approaches for tackling domain-shift object detection are mainly in two directions: (i) training a supervised model and then fine-tuning it on the target domain; or (ii) unsupervised cross-domain representation learning. The former requires additional instance-level annotations on target data, which is fairly laborious, expensive and time-consuming. Hence most approaches focus on the latter, which still faces several challenges. The first challenge is that the representations of source and target domain data should be embedded into a common space for matching the object, such as the hidden feature space~\cite{Saito_2019_CVPR,chen2018domain}, input space~\cite{tzeng2018splat,cai2019exploring} or both of them~\cite{kim2019diversify}. The second is that a feature alignment/matching operation or mechanism for source/target domains should be further defined, such as subspace alignment~\cite{raj2015subspace}, $\mathcal{H}$-divergence and adversarial learning~\cite{chen2018domain}, MRL~\cite{kim2019diversify}, Strong-Weak alignment~\cite{Saito_2019_CVPR}, universal alignment~\cite{wang2019towards}, etc. In general, our SCL targets these two challenges; it is also a learning-based alignment method across domains within an end-to-end framework. \begin{figure*}[t] \centering \subfloat[\scriptsize Non-adapted]{\includegraphics[width=0.25\textwidth, keepaspectratio]{figs/pascal_to_clipart_non-adapted_2400_1000}\label{fig:subfig1}} \subfloat[\scriptsize CVPR'18~\cite{chen2018domain}]{\includegraphics[width=0.25\textwidth, keepaspectratio]{figs/cvpr18}\label{fig:subfig2}} \subfloat[\scriptsize CVPR'19~\cite{Saito_2019_CVPR}]{\includegraphics[width=0.25\textwidth, keepaspectratio]{figs/cvpr19}\label{fig:subfig3}} \subfloat[\scriptsize SCL (Ours)]{\includegraphics[width=0.25\textwidth, keepaspectratio]{figs/ours2400}\label{fig:subfig4}} \subfloat[\scriptsize Non-adapted]{\includegraphics[width=0.25\textwidth, keepaspectratio]{figs/city_to_foggy_non-adapted_500_500_new}\label{fig:subfig5}} \subfloat[\scriptsize CVPR'18~\cite{chen2018domain}]{\includegraphics[width=0.25\textwidth, keepaspectratio]{figs/city_to_foggy_CVPR18_500_500}\label{fig:subfig6}} \subfloat[\scriptsize CVPR'19~\cite{Saito_2019_CVPR}]{\includegraphics[width=0.25\textwidth, keepaspectratio]{figs/city_to_foggy_CVPR19_500_500}\label{fig:subfig7}} \subfloat[\scriptsize SCL (Ours)]{\includegraphics[width=0.25\textwidth, keepaspectratio]{figs/city_to_foggy_ours_500_500}\label{fig:subfig8}} \vspace{-0.1in} \caption{Visualization of features from PASCAL to Clipart (first row) and from Cityscapes to FoggyCityscapes (second row) by t-SNE~\cite{maaten2008visualizing}. Red indicates source examples and blue indicates target ones. If source and target features lie at the same position, they appear light blue. All models are re-trained with a unified setting to ensure fair comparisons.
It can be observed that our feature embedding results are consistently much better than previous approaches on either dissimilar domains (PASCAL and Clipart) or similar domains (Cityscapes and FoggyCityscapes). Best viewed in color and zoomed in.} \label{visualization} \vspace{-0.05in} \end{figure*} \noindent{\textbf{Our Key Ideas.}} The goal of this paper is to introduce a simple design that is specific to convolutional neural network optimization and improves training on tasks that adapt across discrepant domains. Unsupervised domain adaptation for recognition has been widely studied by a large body of previous literature~\cite{ganin2016domain,long2016unsupervised,tzeng2017adversarial,panareda2017open,hoffman2018cycada,murez2018image,zhao2019learning,wu2019domain}; our method draws on their merits, such as aligning source and target distributions with adversarial learning (domain-invariant alignment). However, object detection is a technically different problem from classification, since we would like to focus more on the objects of interest (regions). Some recent work~\cite{zhu2019adapting} has proposed to conduct alignment only on local regions so as to improve the efficiency of model learning. However, this operation may discard critical information from the context. Inspired by strong-weak/multi-feature alignment~\cite{Saito_2019_CVPR,zhang2018collaborative,he2019multi}, which proposed to align corresponding local regions on shallow layers with a small receptive field (RF) and align image-level features on deep layers with a large RF, we extend this idea by studying diverse complementary objectives and their potential assembly for the domain adaptive circumstance. We observe that domain adaptive object detection benefits dramatically from deep supervision; however, the diverse supervisions should be applied in a controlled manner, including the cut-in locations, loss types, orders, and updating strategy, which is one of the contributions of this paper. Furthermore, our experiments show that even with existing objectives, after elaborating the different combinations and training strategies, our method can obtain competitive results. By plugging in a new sub-network that learns the context features independently with a gradient detach updating strategy in a hierarchical manner, we obtain the best results on several domain adaptive object detection benchmarks. \noindent{\textbf{The Relation to Complement Objective Training~\cite{chen2018complement} and Deep Supervision~\cite{lee2015deeply}.}} COT~\cite{chen2018complement} proposed to involve an additional loss function that complements the primary objective, which is moderately similar to our goal in spirit. The cross entropy in COT is used as the primary objective $\mathbf{H_p}$: \begin{equation} \begin{aligned} \mathbf{H_p}(\mathbf{y}, \hat{\mathbf{y}}) &=-\frac{1}{N} \sum_{i=1}^{N} \mathbf{y}_{i}^{T} \cdot \log \left(\hat{\mathbf{y}}_{i}\right) \end{aligned} \end{equation} where ${\mathbf{y}_i} \in {\{ 0,1\} ^D}$ is the label of the $i$-th sample in one-hot representation and ${\hat {\mathbf{y}}_i} \in {[0,1]^D}$ denotes the predicted probabilities.
The complement entropy $\mathbf{H_c}$ is defined in COT~\cite{chen2018complement} as the average of sample-wise entropies over complement classes in a mini-batch: \begin{equation} \begin{aligned} \mathbf{H_c}\left(\hat{\mathbf{y}}_{\overline{c}}\right) &=\frac{1}{N} \sum_{i=1}^{N} \mathcal{H}\left(\hat{\mathbf{y}}_{{i} \overline{c}}\right) \end{aligned} \end{equation} where $\mathcal H$ is the entropy function and $\hat{\mathbf{y}}_{\overline{c}}$ is the predicted probabilities of the complement classes $\overline{c}$. The training process is as follows: in each training iteration, 1) update the parameters by $\mathbf{H_p}$ first; then 2) update the parameters by $\mathbf{H_c}$. Different from COT, we do not use the alternating strategy but instead update the parameters simultaneously, using a gradient detach strategy with primary and complement objectives. This is because we aim to enable the network to adapt to both source and target domain data while still being able to distinguish objects within them. Thus our complement objective design is quite different from COT. We describe the details in Section~\ref{method}. In essence, our method is closer to the deeply supervised formulation~\cite{lee2015deeply}, in which backpropagation of error proceeds not only from the final layer but also simultaneously from our intermediate complementary outputs. While DSN was primarily proposed to alleviate the ``vanishing gradient'' problem, here we focus on how to adopt these auxiliary losses to promote the mixing of two different domains through domain classifiers for detection. Interestingly, we observe that diverse objectives can lead to better generalization for network adaptation. Motivated by this, we propose {\bf S}tacked {\bf C}omplementary {\bf L}osses (SCL), a simple yet effective approach for domain-shift object detection. Our SCL is fairly easy and straightforward to implement, yet achieves remarkable performance. We conjecture that previous approaches that conduct domain alignment on high-level layers only~\cite{chen2018domain} cannot fully adapt shallow-layer parameters to both source and target domains (even when local alignment is applied~\cite{Saito_2019_CVPR}), which restricts the ability of model learning. Also, gradient detach is a critical part of learning with our complementary losses. We further visualize the features obtained by the non-adapted model, DA~\cite{chen2018domain}, Strong-Weak~\cite{Saito_2019_CVPR} and ours; the features are taken from the last layer of the backbone before being fed into the Region Proposal Network (RPN). As shown in Figure~\ref{visualization}, it is obvious that the target features obtained by our model are more compactly matched with the source domain than those of any other model. \noindent{\textbf{Contributions.}} Our contributions are three-fold. \begin{itemize}[leftmargin=5.5mm] \vspace{-0.05in} \addtolength{\itemsep}{-0.05in} \item We study {\bf the interaction of multi-loss (deep supervision with complement objective) and gradient detach (training strategy for maximizing context information)} in an end-to-end learnable framework for the challenging domain adaptive object detection task. \item We provide a step-by-step ablation study to empirically verify the effectiveness of each component and design in our framework. Thus, this work gives intuitive and practical guidance for building a high-performance framework with multi-objective learning for domain adaptive object detection.
\item To the best of our knowledge, this is pioneering work investigating the influence of diverse loss functions and gradient detach for domain adaptive object detection. Our method achieves the highest accuracy on several domain adaptive or cross-domain object detection benchmarks\footnote{Our code and models are available at: \url{https://github.com/harsh-99/SCL}.}. \end{itemize} \begin{figure*}[t] \centering \hspace{1.1cm} \includegraphics[width=0.89\textwidth]{figs/framework_fix.pdf} \vspace{-0.1in} \caption{Overview of our SCL framework. For more details please refer to Section~\ref{method}.} \label{framework} \end{figure*} \section{Methodology} \label{method} Following the common formulation of domain adaptive object detection, we define a {\em source domain} $\mathcal{S}$ where bounding-box annotations are available, and a {\em target domain} $\mathcal{T}$ where only images, without any labels, can be used during training. Our purpose is to train a robust detector that can adapt well to both source and target domain data, i.e., we aim to learn a {\em domain-invariant} feature representation that works well for detection across the two different domains. \subsection{Multi-Complement Objective Learning} As shown in Figure~\ref{framework}, we focus on complement objective learning and let $\mathcal{S}=\{(\mathbf{x}_i^{(s)}, \mathbf{y}_i^{(s)})\}$, where $\mathbf{x}_i^{(s)} \in \mathcal{R}^n$ denotes an image, $\mathbf{y}^{(s)}_i$ is the corresponding bounding box and category labels for sample $\mathbf{x}^{(s)}_i$, and $i$ is an index. Each label $\mathbf{y}^{(s)}=(y_\mathbf{c}^{(s)},y_\mathbf{b}^{(s)})$ denotes a class label $y_\mathbf{c}^{(s)}$, where $\mathbf{c}$ is the category, and a 4-dimensional bounding-box coordinate $y_\mathbf{b}^{(s)} \in \mathcal{R}^4$. For the target domain we only use image data for training, so $\mathcal{T}=\{\mathbf{x}_i^{(t)}\}$. We define a recursive function for the layers $\mathbf{k}=1,2,\dots,\mathbf{K}$ at which we cut in complementary losses: \begin{equation} \begin{array}{l}{\hat \Theta_{\mathbf{k}}=\mathcal{F}\left(\mathbf{Z}_{\mathbf{k}}\right), \text { and } \mathbf{Z}_{0} \equiv \mathbf{x}} \end{array} \end{equation} where $\hat \Theta_{\mathbf{k}}$ is the feature map produced at layer $\mathbf{k}$, $\mathcal{F}$ is the function that generates the features at layer $\mathbf{k}$, and $\mathbf{Z}_{\mathbf{k}}$ is the input at layer $\mathbf{k}$. We formulate the complement loss of domain classifier $\mathbf{k}$ as follows: \begin{equation}\label{k} \begin{gathered} \mathcal{L}_{\mathbf{k}}\left(\hat{\Theta}^{(s)}_{\mathbf{k}}, \hat{\Theta}^{(t)}_{\mathbf{k}} ; \mathbf{D}_{\mathbf{k}}\right)={\cal L}_\mathbf{k}^{(s)}({{\hat \Theta }^{(s)}_{\mathbf{k}}};{\mathbf{D}_{\bf{k}}}) + {\cal L}_\mathbf{k}^{(t)}({{\hat \Theta }^{(t)}_{\mathbf{k}}};{\mathbf{D}_{\bf{k}}}) \\ = \mathbb{E}\left[\log \left(\mathbf{D}_{\mathbf{k}}\left(\hat{\Theta}^{(s)}_{\mathbf{k}}\right)\right)\right] + \mathbb{E}\left[\log \left(1-\mathbf{D}_{\mathbf{k}}\left(\hat{\Theta}^{(t)}_{\mathbf{k}}\right)\right)\right] \end{gathered} \end{equation} where $\mathbf{D}_\mathbf{k}$ is the $\mathbf{k}$-th domain classifier or discriminator, and $\hat{\Theta}^{(s)}_{\mathbf{k}}$ and $\hat{\Theta}^{(t)}_{\mathbf{k}}$ denote feature maps from the source and target domains, respectively.
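To make Eq.~\ref{k} concrete, the following is a minimal PyTorch sketch of a level-$\mathbf{k}$ domain classifier $\mathbf{D}_\mathbf{k}$ and its complement loss; the classifier architecture, channel widths and feature shapes are illustrative assumptions, not the exact modules used in SCL:
\begin{verbatim}
import torch
import torch.nn as nn

class DomainClassifier(nn.Module):
    """Per-location domain discriminator D_k (illustrative architecture)."""
    def __init__(self, in_ch):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(in_ch, 256, 1), nn.ReLU(),
            nn.Conv2d(256, 1, 1), nn.Sigmoid())  # domain probability per location

    def forward(self, feat):
        return self.net(feat)

def complement_loss_k(D_k, feat_src, feat_tgt, eps=1e-7):
    # L_k = E[log D_k(theta_s)] + E[log(1 - D_k(theta_t))]  (Eq. 4)
    p_src = D_k(feat_src).clamp(eps, 1 - eps)
    p_tgt = D_k(feat_tgt).clamp(eps, 1 - eps)
    return torch.log(p_src).mean() + torch.log(1 - p_tgt).mean()
\end{verbatim}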
Following~\cite{chen2018domain,Saito_2019_CVPR}, we also adopt a gradient reverse layer (GRL)~\cite{ganin2015unsupervised} to enable adversarial training, where a GRL is placed between the domain classifier and the detection backbone network. During backpropagation, the GRL reverses the gradient that passes from the domain classifier to the detection network. For our instance-context alignment loss ${\mathcal{L}_{{\mathbf{ILoss}}}}$, we take the instance-level representations and the context vector as inputs. The instance-level vectors come from the RoI layer, so each vector focuses on the representation of a local object only. The context vector comes from our proposed sub-network that combines hierarchical global features. We concatenate the instance features with the same context vector. Since context information is fairly different from that of objects, jointly training the detection and context networks would mix the critical information from each part; here we propose a better solution that uses a detach strategy to update the gradients. We introduce it in detail in the next section. Aligning instance and context representations simultaneously can help to alleviate the variances of object appearance, part deformation, object size, etc. in the instance vector, and of illumination, scene, etc. in the context vector. We define $d_i$ as the domain label of the $i$-th training image, where $d_i=1$ for the source and $d_i=0$ for the target, so the instance-context alignment loss can be further formulated as: \begin{equation} \begin{aligned} {\mathcal{L}_{{\mathbf{ILoss}}}} = - \frac{1}{{{N_s}}}\sum\limits_{i,j} {(1 - d_i)\log {{\mathbf{P}}_{(i,j)}}} \\ \quad- \frac{1}{{{N_t}}}\sum\limits_{i,j} {d_i\log \left( {1 - {{\mathbf{P}}_{(i,j)}}} \right)} \end{aligned} \end{equation} where $N_s$ and $N_t$ denote the numbers of source and target examples, and $\mathbf{P}_{(i,j)}$ is the output probability of the instance-context domain classifier for the $j$-th region proposal in the $i$-th image. So our total {\bf SCL} objective $\mathcal{L}_{\mathbf{SCL}}$ can be written as: \begin{equation} {\mathcal{L}_{\mathbf{SCL}}} = \sum\limits_{\mathbf{k} = 1}^\mathbf{K} {{\mathcal{L}_{\mathbf{k}}}} + {\mathcal{L}_{\mathbf{ILoss}}} \end{equation} \subsection{Gradients Detach Updating} In this section, we introduce a simple yet effective detach strategy which prevents the flow of gradients from the context sub-network through the detection backbone path. We find this helps to obtain more discriminative context, and we show empirical evidence (see Figure~\ref{heatmaps}) that this path carries information with diversity; hence, suppressing gradients from this path is superior for this task. As aforementioned, we define a sub-network to generate the context information from early layers of the detection backbone. Intuitively, instance and context will focus on perceptually different parts of an image, so the representations from either of them should also be discrepant. However, if we train with the conventional process, the companion sub-network will be updated jointly with the detection backbone, which may lead to indistinguishable behavior between these two parts. To this end, in this paper we propose to suppress gradients during backpropagation and force the representation of the context sub-network to be dissimilar to that of the detection network, as shown in Algorithm~\ref{alg:detach}.
To the best of our knowledge, this may be the first work to show the effectiveness of gradient detach in helping to learn a better context representation for domain adaptive object detection. The details of our context sub-network architecture are illustrated in Appendix~\ref{sec:sfp}. \begin{algorithm2e}[h] \caption{Backward Pass of Our Detach Algorithm} \label{alg:detach} {\bf INPUT:} $\mathbf{G}_{\bf c}$ is the gradient of the context network, $\mathbf{G}_{\bf d}$ is the gradient of the detection network, $\mathcal{L}_{det}$ is the detection objective, $\mathcal{L}_\mathbf{SCL}$ is the complementary objective; \For{$t \gets 1$ \textbf{to} $n_{train\_steps}$} { {1. Update context net by detection and instance-context objectives:~$\mathcal{L}_{det}$(w/o $\mathcal{L}_{rpn}$)+$\mathcal{L}_\mathbf{ILoss}$} {2. $\mathbf{G}_{\bf d} \gets$ stop-gradient($\mathbf{G}_{\bf c}$;$\mathcal{L}_{det}$)} {3. Update detection net by detection and complementary objectives:~$\mathcal{L}_{det}$+$\mathcal{L}_\mathbf{SCL}$} } \end{algorithm2e} \subsection{Framework Overall} Our detection part is based on Faster RCNN~\cite{ren2015faster}, including the Region Proposal Network (RPN) and other modules. This is conventional practice in many adaptive detection works. The objective of the detection loss is summarized as: \begin{equation} {\mathcal{L}_{det}} = {\mathcal{L}_{rpn}} + {\mathcal{L}_{cls}} + {\mathcal{L}_{reg}} \end{equation} where ${\mathcal{L}_{cls}}$ is the classification loss and ${\mathcal{L}_{reg}}$ is the bounding-box regression loss. To train the whole model using SGD, the overall objective function of the model is: \begin{equation} \label{lambda} \min _{\mathcal{F}, \mathbf{R}} \max _{\mathbf{D}} \mathcal{L}_{det}(\mathcal{F}(\mathbf{Z}), \mathbf{R})-\lambda \mathcal{L}_\mathbf{SCL}(\mathcal{F}(\mathbf{Z}), \mathbf{D}) \end{equation} where $\lambda$ is the trade-off coefficient between the detection loss and our complementary loss, and $\mathbf{R}$ denotes the RPN and other modules in Faster RCNN. \section{Empirical Results} \noindent{\textbf{Datasets.}} We evaluate our approach in three different domain-shift scenarios: (1) Similar Domains; (2) Discrepant Domains; and (3) From Synthetic to Real Images. All experiments are conducted on seven domain-shift datasets: Cityscapes~\cite{cordts2016cityscapes} to FoggyCityscapes~\cite{sakaridis2018semantic}, Cityscapes to KITTI~\cite{Geiger2012CVPR}, KITTI to Cityscapes, INIT Dataset~\cite{shen2019towards}, PASCAL~\cite{everingham2010pascal} to Clipart~\cite{inoue2018cross}, PASCAL to Watercolor~\cite{inoue2018cross}, and GTA (Sim 10K)~\cite{johnson2016driving} to Cityscapes. \noindent{\textbf{Implementation Details.}} In all experiments, we resize the shorter side of the image to 600, following~\cite{ren2015faster,Saito_2019_CVPR}, with ROI-align~\cite{he2017mask}. We train the model with an SGD optimizer; the initial learning rate is set to $10^{-3}$ and then divided by 10 after every 50,000 iterations. Unless otherwise stated, we set $\lambda$ to 1.0 and $\gamma$ to 5.0, and we use $\mathbf{K}=3$ in our experiments (the analysis of hyper-parameter $\mathbf{K}$ is shown in Table~\ref{tab:ablation_k}). We report mean average precision (mAP) with an IoU threshold of 0.5 for evaluation. Following~\cite{chen2018domain,Saito_2019_CVPR}, we feed one labeled source image and one unlabeled target image in each mini-batch during training. SCL is implemented on the PyTorch platform; we will release our code and models.
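To make the GRL and gradient-detach mechanics concrete, here is a minimal PyTorch sketch; the module names backbone, context_net and domain_clf are hypothetical placeholders, and the full update schedule follows Algorithm~\ref{alg:detach}:
\begin{verbatim}
import torch

class GradReverse(torch.autograd.Function):
    """Gradient reverse layer: identity forward, negated gradient backward."""
    @staticmethod
    def forward(ctx, x, lamb=1.0):
        ctx.lamb = lamb
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad_out):
        return -ctx.lamb * grad_out, None  # reverse (and scale) the gradient

def grl(x, lamb=1.0):
    return GradReverse.apply(x, lamb)

# Gradient detach: the context sub-network reads backbone features, but no
# gradient from the context branch flows back into the detection backbone.
# feat = backbone(images)                # hypothetical modules
# ctx_vec = context_net(feat.detach())   # detached: context net learns alone
# dom_prob = domain_clf(grl(feat))       # adversarial path via the GRL
\end{verbatim}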
\renewcommand{\arraystretch}{1.03} \setlength{\tabcolsep}{.2em} \begin{table*}[t] \caption{Ablation study (\%) on Cityscapes to FoggyCityscapes (we use 150m visibility, the densest one) adaptation. Please refer to Section~\ref{ablation} for more details.} \label{ablation_foggy} \vspace{-1.8ex} \centering \resizebox{0.95\textwidth}{!}{% \begin{tabular}{l|c|ccc|c|c|cccccccc|c} \toprule[1.5pt] \multirow{2}{*}{} & \multirow{2}{*}{} & \multicolumn{1}{r}{\multirow{2}{*}{}} & \multicolumn{1}{r}{\multirow{2}{*}{}} & \multicolumn{1}{r|}{\multirow{2}{*}{}} & \multirow{2}{*}{} & \multirow{2}{*}{}& \multicolumn{9}{c}{AP on a target domain} \\ Method & Context & \multicolumn{1}{r}{$L_1$} & \multicolumn{1}{r}{$L_2$} & \multicolumn{1}{r|}{$L_3$} & ILoss & Detach & person & rider & car & truck & bus & train & mcycle & bicycle & \bf mAP \\ \hline Faster RCNN (Non-adapted)&&&&&&&24.1&33.1&34.3&4.1&22.3&3.0&15.3&26.5&20.3\\ DA (CVPR'18)&$\checkmark$&&&&&&25.0&31.0&40.5&22.1&35.3&20.2&20.0&27.1& 27.6 \\ MAF~\cite{he2019multi} (ICCV'19)&&&&&& &28.2& 39.5&43.9&23.8& 39.9& 33.3& 29.2& 33.9&34.0\\ Strong-Weak (CVPR'19)&$\checkmark$&&&&& &29.9&42.3&43.5&24.5&36.2&32.6&30.0& 35.3& 34.3\\ Diversify\&match~\cite{kim2019diversify} (CVPR'19)&&&&&& &30.8& 40.5& 44.3& 27.2& 38.4& 34.5& 28.4& 32.2&34.6\\ \hline Strong-Weak (Our impl. w/ VGG16)&$\checkmark$&&&&& &30.0&40.0&43.4&23.2&40.1&34.6&27.8& 33.4&34.1\\ Strong-Weak (Our impl. w/ Res101)&$\checkmark$&&&&&&29.1&41.2&43.8&26.0&43.2&27.0&26.2& 30.6&33.4\\ \hline &\ding{55}&$LS$&$FL$&\ding{55}& \ding{55}&\ding{55}&29.6&42.2&43.4&23.1&36.4&31.5&25.1&30.5&32.7 \\ &\ding{55}&$LS$&$CE$&$FL$& \ding{55}&\ding{55}&29.0&41.4&43.9&24.6&46.5&28.5&27.0&32.8&34.2\\ &\ding{55}&$LS$&$CE$&$FL$&$FL$&\ding{55}&28.6&44.0&44.2&25.2&42.9&31.1&27.4&33.0&34.5\\ \hline &$\checkmark$&$LS$&$FL$&\ding{55}& \ding{55}&\ding{55}&28.5&42.6&43.8&23.2&41.6&24.9&28.3&30.3&32.9 \\ &$\checkmark$&$LS$&$FL$&$FL$& \ding{55}&\ding{55}&28.6&41.8&43.8&27.9&43.3&24.0&28.7&31.3&33.7\\ &$\checkmark$&$LS$&$LS$&$FL$& \ding{55}&\ding{55}&28.8&\bf 45.5&44.3&28.6&44.6&29.1&27.8&31.4&35.0 \\ &$\checkmark$&$LS$&$CE$&$FL$& \ding{55}&\ding{55}&29.6&42.6&42.6&28.4&46.3&31.0&28.4&33.0&35.3\\ &$\checkmark$&$LS$&$CE$&$FL$& \ding{55}&$\checkmark$&30.0&42.7&44.2&30.0&\bf 50.2&34.1&27.1&32.2&36.3\\ \cline{2-16} &$\checkmark$&$LS$&$FL$&$FL$& $FL$& \ding{55}&26.3&42.8&44.2&26.7&41.6&36.4& 29.2&30.9&34.8\\ &$\checkmark$&$LS$&$LS$&$FL$& $FL$&$\checkmark$&29.5&43.2&44.2&27.0&42.1&33.3&29.4&30.6&34.9\\ &$\checkmark$&$LS$&$FL$&$FL$& $FL$&$\checkmark$&29.7&43.6&43.7&26.6&43.8&33.1& 30.7&31.5&35.3 \\ \cline{2-16} &$\checkmark$&$LS$&$CE$&$FL$& $CE$&\ding{55}&28.3&41.9&43.1&25.4&45.1&35.5&26.7&31.6&34.7\\ &$\checkmark$&$LS$&$CE$&$FL$& $FL$&\ding{55}&29.8&43.9&44.0&29.4&46.3&30.0&31.8&31.8&35.8 \\ &$\checkmark$&$LS$&$CE$&$FL$& $CE$&$\checkmark$ &29.0&42.5&43.9&28.9&45.7&42.4&26.4&30.5&36.2 \\ &$\checkmark$& $LS$& $CE$& $FL$&$FL$ & $\checkmark$ & 30.7 & 44.1 & 44.3 & 30.0 & 47.9 &\bf 42.9 & 29.6 & 33.7 & \bf 37.9 \\ \hline Our full model w/ VGG16&$\checkmark$&$LS$&$CE$&$FL$& $FL$&$\checkmark$ &\bf 31.6&44.0&\bf 44.8&\bf 30.4&41.8&40.7&\bf 33.6&\bf 36.2&\bf 37.9\\ \hline Upper Bound~\cite{Saito_2019_CVPR} & -- & -- & -- & -- & -- & -- & 33.2 & 45.9 & 49.7 & 35.6 & 50.0 & 37.4 & 34.7 & 36.2& 40.3 \\ \bottomrule[1.5pt] \end{tabular}} \\ { \em LS: Least-squares Loss; CE: Cross-entropy Loss; FL: Focal Loss; ILoss: Instance-Context Alignment Loss.} \vspace{-.1in} \end{table*} \subsection{How to Choose Complementary Losses in Our Framework} There 
are few pioneering works exploring the combination of different losses for domain adaptive object detection, hence we conduct an extensive ablation study for this part to explore the intuition behind, and the best configuration of, our SCL method. We mainly adopt three losses that have been introduced in prior literature. Since they are not our contribution, we only give brief formulations below. \noindent{\textbf{Cross-entropy (CE) Loss.}} CE loss measures the performance of a classification model whose output is a probability value. It increases as the predicted probability diverges from the actual label: \begin{equation} \mathcal{L}_\mathbf{CE}(p_\mathbf{c})=- \sum\limits_{\mathbf{c} = 1}^\mathbf{C} {{y_\mathbf{c}}} \log \,{p_\mathbf{c}} \end{equation} where $p_\mathbf{c} \in [0,1]$ is the predicted probability for class $\mathbf{c}$ and $y_\mathbf{c}$ is the label for class $\mathbf{c}$. \noindent{\textbf{Weighted Least-squares (LS) Loss.}} Following~\cite{Saito_2019_CVPR}, we adopt the LS loss to stabilize the training of the domain classifier for aligning low-level features. The loss is designed to align each receptive field of the features with the other domain. The least-squares loss is formulated as: \begin{equation} \begin{aligned} {\mathcal{L}_\mathbf{LS}} = \alpha{\mathcal{L}^{(s)}_{loc}} + \beta{\mathcal{L}^{(t)}_{loc}} = \frac{\alpha}{{HW}} {\sum\limits_{w = 1}^W {\sum\limits_{h = 1}^H \mathbf D } } \left( {{{\hat \Theta }^{(s)}}} \right)_{wh}^2 \hfill \\ + \frac{\beta}{{HW}} {\sum\limits_{w = 1}^W {\sum\limits_{h = 1}^H {{{\left( {1 - \mathbf{D}{{\left( {{{\hat \Theta }^{(t)}}} \right)}_{wh}}} \right)}^2}} } } \hfill \\ \end{aligned} \end{equation} where $\mathbf D\left( {{{\hat \Theta }^{(s)}}} \right)_{wh}$ denotes the output of the domain classifier at each location $(w,h)$. $\alpha$ and $\beta$ are balance coefficients, and we set them to 1 following~\cite{Saito_2019_CVPR}. \noindent{\textbf{Focal Loss (FL).}} The focal loss $\mathcal{L}_\mathbf{FL}$~\cite{lin2017focal} is adopted to down-weight easy-to-classify examples and focus on hard-to-classify ones during training: \begin{equation}\label{gamma} \mathcal{L}_\mathbf{FL}\left(p_{\mathrm{t}}\right)=-f\left(p_{\mathrm{t}}\right) \log \left(p_{\mathrm{t}}\right), f\left(p_{\mathrm{t}}\right)=\left(1-p_{\mathrm{t}}\right)^{\gamma} \end{equation} where $p_{\mathrm{t}}=p \text { if } d_i=1, \text{otherwise}, p_{\mathrm{t}}={1-p}$. \subsection{Ablation Studies from Cityscapes to FoggyCityscapes} \label{ablation} We first investigate each component and design of our SCL framework from Cityscapes to FoggyCityscapes. Both source and target datasets have 2,975 images in the training set and 500 images in the validation set. We design several controlled experiments for this ablation study. A consistent setting is imposed on all experiments, except when particular components or structures are being examined. In this study, we train models with ImageNet~\cite{deng2009imagenet} pre-trained ResNet-101 as the backbone; we also provide results with a pre-trained VGG16 model. The results are summarized in Table~\ref{ablation_foggy}. We present several combinations of the four complementary objectives with their loss names and performance. We observe that ``$LS$|$CE$|$FL$|$FL$'' obtains the best accuracy with {\em Context} and {\em Detach}.
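For reference, below are minimal PyTorch sketches of the three losses above; tensor shapes and reductions are illustrative assumptions, with $\gamma=5.0$ following our default setting. Here p is the domain-classifier output in $[0,1]$ and d is the domain label (1 for source, 0 for target):
\begin{verbatim}
import torch
import torch.nn.functional as F

def ls_loss(p_src, p_tgt, alpha=1.0, beta=1.0):
    # weighted least-squares over all spatial locations
    return alpha * (p_src ** 2).mean() + beta * ((1 - p_tgt) ** 2).mean()

def ce_loss(p, d):
    # binary cross-entropy; d must be a float tensor of the same shape as p
    return F.binary_cross_entropy(p, d)

def focal_loss(p, d, gamma=5.0):
    # p_t = p if d = 1, otherwise 1 - p; loss = -(1 - p_t)^gamma * log(p_t)
    p_t = torch.where(d > 0.5, p, 1 - p)
    return -(((1 - p_t) ** gamma) * torch.log(p_t.clamp(min=1e-7))).mean()
\end{verbatim}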
This combination indicates that $LS$ should only be placed on the low-level features (rich spatial information and poor semantic information) and $FL$ should be placed at the high-level locations (weak spatial information and strong semantic information). For the middle location, $CE$ is a good choice. Using $LS$ for the middle/high-level features, or $FL$ on the low-level features, confuses the network when learning hierarchical semantic outputs, so that {\em ILoss+detach} loses its effectiveness under that circumstance. This verifies that domain adaptive object detection relies heavily on deep supervision; however, the diverse supervisions should be adopted in a controlled and correct manner. Furthermore, our proposed method performs much better than the baseline Strong-Weak~\cite{Saito_2019_CVPR} (37.9\% {\em vs.} 34.3\%) and other state-of-the-art methods. \vspace{-0.1cm} \subsection{Similar Domains} \vspace{-0.05cm} \noindent{\textbf{Between Cityscapes and KITTI.}} In this part, we focus on studying adaptation between two real and similar domains, taking KITTI and Cityscapes as our training and testing data. Following~\cite{chen2018domain}, we use the KITTI training set, which contains 7,481 images. We conduct experiments on both adaptation directions K $\to$ C and C $\to$ K and evaluate our method using the AP of {\em car} as in DA. As shown in Table~\ref{tab:KC}, our proposed method performs much better than the baseline and other state-of-the-art methods. Since Strong-Weak~\cite{Saito_2019_CVPR} did not provide results on this dataset, we re-implemented it and obtained 37.9\% AP on K$\to$C and 71.0\% AP on C$\to$K. Our method is 4\% higher than the former and 1.7\% higher than the latter. Compared to the non-adapted results (source only), our method outperforms them by a huge margin (about 10\% and 20\% higher, respectively). \begin{table}[t] \caption{Adaptation results between KITTI and Cityscapes. We report AP of {\em Car} on both directions: K$\to$C and C$\to$K. We re-implemented DA~\cite{chen2018domain} and Strong-Weak~\cite{Saito_2019_CVPR} based on the same Faster RCNN framework~\cite{ren2015faster}.} \vspace{-.1in} \label{tab:KC} \centering \resizebox{0.25\textwidth}{!}{% \begin{tabular}{l|c|c} \toprule[1.5pt] Method & K$\to$C & C$\to$K \\ \hline Faster RCNN &30.2&53.5\\ \hline DA~\cite{chen2018domain} &38.5&64.1\\ DA (Our impl.)~\cite{chen2018domain} &35.6&70.8\\ WS (Our impl.)~\cite{Saito_2019_CVPR} &37.9&71.0\\ \hline Ours &\bf 41.9&\bf 72.7\\ \bottomrule[1.5pt] \end{tabular} } \vspace{-.05in} \end{table} \begin{table}[t] \centering \caption{Adaptation results on INIT dataset.} \vspace{-.2cm} \label{tab:INIT} \resizebox{0.33\textwidth}{!}{% \begin{tabular}{l|l|c|c|c|c} \toprule[1.5pt] & & Car & Sign & Person & mAP \\ \hline \multirow{3}{*}{s2n} & Faster&63.33 &63.96 &32.00 &53.10 \\ & Strong-Weak&67.43&64.33&\bf 32.53&54.76\\ & Ours& \bf 67.92&\bf 65.89&32.52&\bf 55.44\\ \cline{2-6} & Oracle&80.12&84.68&44.57&69.79\\ \hline \multirow{3}{*}{s2r} & Faster&70.20 &72.71 &36.22 &59.71 \\ & Strong-Weak&\bf 71.56&78.07&39.27&62.97\\ & Ours& 71.41&\bf 78.93&\bf 39.79&\bf 63.37\\ \cline{2-6} & Oracle & 71.83 & 79.42 & 45.21 & 65.49 \\ \hline \multirow{3}{*}{s2c} & Faster&-- &-- &-- &-- \\ &Strong-Weak&\bf 71.32&72.71&43.18&62.40\\ & Ours & 71.28 &\bf 72.91 &\bf 43.79 & \bf 62.66 \\ \cline{2-6} &Oracle & 76.60 & 76.72 & 47.28 & 66.87\\ \bottomrule[1.5pt] \end{tabular} } \end{table} \begin{table*}[t] \caption{Results on adaptation from PASCAL VOC to Clipart Dataset.
Average precision (\%) is evaluated on target images.} \vspace{-2.8mm} \label{tbl:ap_clipart} \centering \scalebox{0.84}{ \tabcolsep=1.5pt \centering \begin{tabular}{l|cccccccccccccccccccc|c} \toprule[1.5pt] Method & aero & bcycle & bird & boat & bottle & bus & car & cat & chair & cow & table & dog & hrs & bike & prsn & plnt & sheep & sofa & train & tv & \bf mAP \\\hline Faster (Non-adapted)& {35.6} & 52.5 & 24.3 & 23.0 & 20.0 & 43.9 & 32.8 & 10.7 & 30.6 & 11.7 & 13.8 & 6.0 & \bf{36.8} & 45.9 & 48.7 & 41.9 &{16.5} & 7.3 & 22.9 & 32.0 & 27.8 \\ BDC-Faster &20.2 & 46.4 & 20.4 & 19.3 & 18.7 & 41.3 & 26.5 & 6.4 & 33.2 & 11.7 &{ 26.0} & 1.7 & 36.6 & 41.5 & 37.7 & 44.5 & 10.6 & 20.4 & 33.3 & 15.5 & 25.6 \\ DA&15.0&34.6&12.4&11.9&19.8&21.1&23.2&3.1&22.1&26.3&10.6&10.0&19.6&39.4&34.6&29.3&1.0&17.1&19.7&24.8&19.8 \\\hline WST-BSR~\cite{Kim_Self_2019}&28.0&\bf 64.5& 23.9& 19.0& 21.9&\bf 64.3&\bf 43.5& 16.4&\bf 42.2& 25.9&\bf 30.5& 7.9& 25.5& 67.6& 54.5& 36.4& 10.3& \bf 31.2& \bf 57.4& 43.5& 35.7\\ Strong-Weak~\cite{Saito_2019_CVPR}&26.2&48.5&32.6&\bf{33.7}&38.5&{54.3}& 37.1&{18.6}&34.8&{58.3}&17.0&12.5&33.8&65.5&\bf 61.6&\bf{52.0}&9.3&{24.9}& 54.1&\bf{49.1}&{38.1}\\ \hline Ours w/$\mathcal{L}_{ILoss}=FL$ &33.4&49.2&36.0&27.1&38.4&55.7&38.7&15.9&39.0&59.2&18.8&23.7&36.9&70.0&60.6&49.7&25.8&34.8&47.2&51.2&40.6 \\ Ours w/$\mathcal{L}_{ILoss}=CE$ &\bf 44.7&50.0&33.6&27.4&\bf 42.2&55.6&38.3&\bf 19.2&37.9&\bf 69.0&30.1&\bf 26.3&34.4&67.3&61.0&47.9&21.4&26.3&50.1&47.3&\bf 41.5 \\ \bottomrule[1.5pt] \end{tabular}} \vspace{-0.15cm} \end{table*} \noindent{\textbf{INIT Dataset.}} The INIT dataset~\cite{shen2019towards} contains 132,201 images for training and 23,328 images for testing. There are four domains (sunny, night, rainy and cloudy) and three instance categories: car, person, and speed limit sign. This dataset was first proposed for the instance-level image-to-image translation task; here we use it for domain adaptive object detection. Our results are shown in Table~\ref{tab:INIT}. Following~\cite{shen2019towards}, we conduct experiments on three domain pairs: sunny$\to$night (s2n), sunny$\to$rainy (s2r) and sunny$\to$cloudy (s2c). Since the rainy domain contains far fewer training images than the sunny domain, for the s2r experiment we randomly sample training data from the sunny set to match the size of the rainy set and then train the detector. It can be observed that our method is consistently better than the baseline method. We do not provide the results of s2c (Faster) because we found that the cloudy images in this dataset are too similar to the sunny ones (nearly the same); thus the non-adapted result is very close to those of the adapted methods. \subsection{Discrepant Domains} In this section, we focus on dissimilar domains, i.e., adaptation from real images to cartoon/artistic images. Following~\cite{Saito_2019_CVPR}, we use the PASCAL VOC dataset (the 2007+2012 training and validation combination) as the source data and Clipart or Watercolor~\cite{inoue2018cross} as the target data. The backbone network is ImageNet pre-trained ResNet-101. \noindent{\textbf{PASCAL to Clipart.}} The Clipart dataset contains 1,000 images in total, with the same 20 categories as in PASCAL VOC. As shown in Table~\ref{tbl:ap_clipart}, our proposed SCL outperforms all baselines. In addition, we observe that replacing $FL$ with $CE$ loss on the instance-context classifier can further improve performance from 40.6\% to 41.5\%. More ablation results are shown in our Appendix~\ref{app_clipart} (Table~\ref{tbl:appendix_clipart}).
\noindent{\textbf{PASCAL to WaterColor.}} The Watercolor dataset contains 6 categories in common with PASCAL VOC and has 2,000 images in total (1,000 for training and 1,000 for evaluation). Results are summarized in Table~\ref{tbl:ap_water}; SCL consistently outperforms other state-of-the-art methods. \subsection{From Synthetic to Real Images} \noindent{\textbf{Sim10K to Cityscapes.}} The Sim10K dataset~\cite{johnson2016driving} contains 10,000 training images generated by the Grand Theft Auto (GTA) game engine. Following~\cite{chen2018domain,Saito_2019_CVPR}, we use Cityscapes as the target domain and evaluate our models on the {\em Car} class. Our results are shown in Table~\ref{tab:Sim10k}; our method consistently outperforms the baselines. \begin{table}[t] \centering\small \setlength{\tabcolsep}{2pt} \caption{Adaptation results from PASCAL to WaterColor.} \vspace{-2mm} \label{tbl:ap_water} \scalebox{0.9}{ \tabcolsep=1.5pt \begin{tabular}{l|cccccc|c} \toprule[1.5pt] & \multicolumn{7}{c}{AP on a target domain}\\ Method & bike & bird & car & cat & dog & prsn &\bf mAP \\\hline Source Only &68.8 & 46.8 & 37.2 & 32.7 & 21.3 & 60.7 & 44.6 \\ \hline BDC-Faster&68.6 & 48.3 & 47.2 & 26.5 & 21.7 & 60.5 & 45.5\\ DA~\cite{chen2018domain} &75.2 & 40.6 & {48.0} & 31.5 & 20.6 & 60.0 & 46.0\\ WST-BSR~\cite{Kim_Self_2019} & 75.6 &45.8 & 49.3 & 34.1 & 30.3 & 64.1 & 49.9\\ Strong-Weak~\cite{Saito_2019_CVPR}& \bf{82.3} & \bf{55.9}&46.5&32.7&{35.5}&\bf{66.7}&{53.3}\\ \hline Ours& 82.2&55.1&\bf{51.8}&\bf{39.6}&\bf{38.4}&64.0&\bf{55.2}\\ \bottomrule[1.5pt] \end{tabular}} \end{table} \begin{table}[t] \caption{Adaptation results on {\em Car} from Sim10k to Cityscapes Dataset (\%). {\em Source Only} indicates the non-adapted results ($\lambda = 0.1$ and $\gamma = 2.0$ are used).} \label{tab:Sim10k} \vspace{-0.15cm} \centering \scalebox{0.8}{ \begin{tabular}{c|c} \toprule[1.5pt] Method & AP on Car \\ \hline Faster & 34.6 \\ DA~\cite{chen2018domain} & 38.9 \\ Strong-Weak~\cite{Saito_2019_CVPR}& 40.1 \\ MAF~\cite{he2019multi} & 41.1 \\ \hline Ours & \bf 42.6 \\ \bottomrule[1.5pt] \end{tabular} } \end{table} \section{Analysis} \noindent{\textbf{Hyper-parameter $\mathbf{K}$.}} Table~\ref{tab:ablation_k} shows the sensitivity results for the hyper-parameter $\mathbf{K}$ in Figure~\ref{framework}. This parameter controls the number of SCL losses and context branches. It can be observed that the proposed method performs best when $\mathbf{K}=3$ on all three datasets.
\begin{table}[h] \vspace{-0.2cm} \caption{Analysis of hyper-parameter $\mathbf{K}$.} \label{tab:ablation_k} \vspace{-0.25cm} \centering \resizebox{0.36\textwidth}{!}{% \begin{tabular}{l|c|c|c} \toprule[1.5pt] Method & $\mathbf{K}$=2 & $\mathbf{K}$=3 & $\mathbf{K}$=4 \\ \hline from Cityscapes to Foggycityscapes & 32.7 & \bf 37.9 & 34.5 \\ \hline from PASCAL VOC to Clipart & 39.0 & \bf 41.5 & 39.3 \\ \hline from PASCAL VOC to Watercolor & 54.7 & \bf 55.2 & 53.4 \\ \bottomrule[1.5pt] \end{tabular} } \vspace{-0.2cm} \end{table} \begin{figure}[h] \centering \includegraphics[width=0.98\columnwidth]{figs/senti__.pdf} \vspace{-0.15cm} \caption{Parameter sensitivity for the value of $\lambda$ (left) and $\gamma$ (right) in adaptation from Cityscapes to FoggyCityscapes and from Sim10k to Cityscapes.} \label{fig:senti} \vspace{-.1cm} \end{figure} \begin{figure*}[h] \centering \subfloat[\scriptsize from Cityscapes to FoggyCityscapes]{\includegraphics[width=0.33\textwidth, keepaspectratio]{figs/clipart}\label{fig:1}} \subfloat[\scriptsize from PASCAL VOC to Clipart]{\includegraphics[width=0.33\textwidth, keepaspectratio]{figs/foggy}\label{fig:2}} \subfloat[\scriptsize from PASCAL VOC to Watercolor]{\includegraphics[width=0.33\textwidth, keepaspectratio]{figs/watercolor}\label{fig:3}} \vspace{-0.1in} \caption{AP (\%) with different IoU thresholds. We show comparisons on three datasets; all results are calculated with different IoU thresholds and illustrated in different colors.} \label{IOU} \vspace{-0.1in} \end{figure*} \begin{figure*}[t] \centering \includegraphics[width=0.99\textwidth]{figs/heatmaps.pdf} \vspace{-0.1in} \caption{Visualization of {\em Attention Maps} on source and target domains. We use the feature maps after {\bf Conv B3} in Figure~\ref{framework} for visualization. Top: input images; Middle: heatmaps from models {\em w/o} gradient detach; Bottom: heatmaps from models {\em w/} gradient detach. The colors (red$\to$blue) indicate values from high to low. It can be observed that with detach training, our models learn a more discriminative representation between object areas and background (context).} \label{heatmaps} \vspace{-0.1in} \end{figure*} \noindent{\textbf{Parameter Sensitivity on $\lambda$ and $\gamma$.}} Figure~\ref{fig:senti} shows the results for the parameter sensitivity of $\lambda$ and $\gamma$ in Eq.~\ref{lambda} and Eq.~\ref{gamma}. $\lambda$ is the trade-off parameter between the SCL and detection objectives, and $\gamma$ controls the emphasis on hard samples in the {\em Focal Loss}. We conduct experiments on two adaptations: Cityscapes $\to$ FoggyCityscapes (blue) and Sim10K $\to$ Cityscapes (red). On Cityscapes $\to$ FoggyCityscapes, we achieve the best performance when $\lambda=1.0$ and $\gamma=5.0$, with a best accuracy of 37.9\%. On Sim10K $\to$ Cityscapes, the best result is obtained when $\lambda=0.1$ and $\gamma=2.0$. \noindent{\textbf{Analysis of IoU Threshold.}} The IoU threshold is an important indicator of detection quality; a higher threshold means better overlap with the ground truth. In our previous experiments, we use 0.5 as the threshold, as suggested in prior literature~\cite{ren2015faster,chen2018domain}. To explore the influence of the IoU threshold on performance, we plot performance {\em vs.} IoU on three datasets. As shown in Figure~\ref{IOU}, our method is consistently better than the baselines across different thresholds, by a large margin in most cases.
\noindent{\textbf{Why Does Gradient Detach Help Our Model?}} To further explore why gradient detach helps to improve performance so substantially and what our model has really learned, we visualize the heatmaps on both source and target images from our models trained {\em w/o} and {\em w/} detach. As shown in Figure~\ref{heatmaps}, the visualization is plotted with the feature maps after {\em Conv B3} in Figure~\ref{framework}. We can observe that the object areas and context from {\em detach}-trained models have stronger contrast than in the {\em w/o} detach model (red and blue areas). This indicates that the {\em detach}-based model can learn more discriminative features for the target object and context. To be more precise, the {\em w/o detach} model attends more to the background (green color); in contrast, the {\em w/ detach} model is mainly activated on the object areas, with less attention on the background (blue color). More visualizations are shown in Appendix~\ref{app_heatmaps} (Figure~\ref{heatmaps_more}). \begin{figure*}[t] \centering \subfloat[FoggyCityscapes]{\includegraphics[width=0.96\textwidth, keepaspectratio]{figs/foggy_det_redu.pdf}\label{det_fig:1}}\\ \vspace{-0.12in} \subfloat[Clipart]{\includegraphics[width=0.96\textwidth, keepaspectratio]{figs/clipart_det_redu.pdf}\label{det_fig:2}}\\ \vspace{-0.12in} \subfloat[Watercolor]{\includegraphics[width=0.96\textwidth, keepaspectratio]{figs/watercolor_det_redu.pdf}\label{det_fig:3}} \vspace{-0.12in} \caption{Detection examples with DA~\cite{chen2018domain}, Strong-Weak~\cite{Saito_2019_CVPR} and our proposed SCL on three datasets. For each group, the first row is the result of DA, the second row is from Strong-Weak and the last row is ours. We show detections with scores higher than a threshold (0.3 for FoggyCityscapes and 0.5 for the other two).} \label{detection} \end{figure*} \noindent{\textbf{Detection Visualization.}} Figure~\ref{detection} shows several qualitative comparisons of detection examples on three test sets with DA~\cite{chen2018domain}, Strong-Weak~\cite{Saito_2019_CVPR} and our SCL models. Our method detects more small and blurry objects in dense scenes (FoggyCityscapes) and suppresses more false positives (Clipart and Watercolor) than the two baselines. \section{Conclusion} We have addressed unsupervised domain adaptive object detection through stacked complementary losses. One of our key contributions is gradient detach training, realized by suppressing gradients flowing from the context sub-network back to the detection backbone. In addition, we proposed to use multiple complementary losses for better optimization. We conduct extensive experiments and ablation studies to verify the effectiveness of each component we proposed. Our experimental results outperform state-of-the-art approaches by a large margin on a variety of benchmarks. Our future work will focus on exploring domain-shift detection from scratch, i.e., without pre-trained models, as in DSOD~\cite{shen2017dsod}, to avoid bias introduced by the pre-training dataset.
\section{Introduction} Image restoration covers some fundamental settings in image processing such as denoising, deblurring and super-resolution. Over the past few years, image restoration methods have demonstrated impressive improvements in both visual quality and distortion measures such as peak signal-to-noise ratio (PSNR) and structural similarity index (SSIM) \citep{wang2004image}. It was noticed, however, that improvement in accuracy, as measured by distortion, does not necessarily lead to improvement in visual quality, referred to as perceptual quality. Furthermore, the lower the distortion of an estimator, the more the distribution of its outputs generally deviates from the distribution of the signals it attempts to estimate. This phenomenon, known as the \emph{perception-distortion tradeoff} \citep{blau2018perception}, has captured significant attention, as it implies that faithfulness to ground truth images comes at the expense of perceptual quality, namely the deviation from statistics of natural images. Several works have extended the perception-distortion tradeoff to settings such as lossy compression \citep{blau2019rethinking} and classification \citep{liu2019classification}. Despite the increasing popularity of performing comparisons on the perception-distortion plane, the exact characterization of the minimal distortion that can be achieved under a given perception constraint remains an important open question. Although \citet{blau2018perception} investigated the basic properties of this \emph{distortion-perception function}, such as monotonicity and convexity, little is known about its precise nature. While a general answer to this question is unavailable, in this paper, we derive a closed-form expression for the distortion-perception (DP) function for the mean squared-error (MSE) distortion and the Wasserstein-$2$ perception index. Our main contributions are: \emph{(i)} We prove that the DP function is \emph{always} quadratic in the perception constraint $P$, regardless of the underlying distribution (Theorem \ref{Thm:=00005Bthe-Distortion-Perception-funct}). \emph{(ii)} We show that it is possible to construct estimators on the DP curve from the estimators at the two extremes of the tradeoff (Theorem~\ref{Thm::extrapol}): the one that globally minimizes the MSE, and a minimizer of the MSE under a perfect perceptual quality constraint. The latter can be obtained as a stochastic transformation of the former. \emph{(iii)} In the Gaussian setting, we further provide a closed-form expression for optimal estimators and for the corresponding DP curve (Theorems \ref{thm:Gaussian1} and \ref{thm:Gaussian_not_unique}). We show that this Gaussian DP curve is a lower bound on the DP curve of any distribution having the same second order statistics. Finally, we illustrate our results, numerically and visually, in a super-resolution setting in Section \ref{sec::numerical}. The proofs of all the theorems in the main text are provided in Appendix \ref{APPsec::proofs}. Our theoretical results shed light on several topics that are subject to much practical activity. Particularly, in the domain of image restoration, numerous works target perceptual quality rather than distortion (\emph{e.g.} \citep{wang2018esrgan,lim2017enhanced,ledig2017photo}). However, it has recently been recognized that generating a single reconstructed image often does not convey to the user the inherent ambiguity in the problem.
Therefore, many recent works target \emph{diverse} perceptual image reconstruction, by employing randomization among possible restorations \citep{lugmayr2020srflow,bahat2020explorable,prakash2021removing,abid2021generative}. Commonly, such works perform sampling from the posterior distribution of natural images given the degraded input image. This is done \emph{e.g.}~using priors over image patches \citep{friedman2021posterior}, conditional generative models \citep{ohayon2021high,prakash2020divnoising}, or implicit priors induced by deep denoiser networks \citep{kawar2021stochastic}. Theoretically, posterior sampling leads to perfect perceptual quality (the restored outputs are distributed like the prior). However, a fundamental question is whether this is optimal in terms of distortion. As we show in Section~\ref{sec::The MSE--Wasserstein-2 tradeoff}, posterior sampling is often not an optimal strategy, in the sense that there often exist perfect perceptual quality estimators that achieve lower distortion. Another topic of practical interest is the ability to \emph{traverse the distortion-perception tradeoff} at test time, without having to train a different model for each working point. Recently, interpolation has been suggested for controlling several objectives at test time. \citet{shoshan2019dynamic} propose using interpolation in some latent space in order to approximate intermediate objectives. \citet{wang2018esrgan} use per-pixel interpolation for balancing perceptual quality and fidelity. Studies of network parameter interpolation are presented by \citet{wang2018esrgan,wang2019deep}. \citet{deng2018enhancing} produces a low-distortion reconstruction and a high-perceptual-quality one, and then uses style transfer to combine them. An important question, therefore, is which strategy is optimal. In Section~\ref{sec::optimalestimators} we show that for the MSE--Wasserstein-2 tradeoff, linear interpolation leads to optimal estimators. We also discuss a geometric connection between interpolation and the fact that estimators on the DP curve form a geodesic in Wasserstein space. \section{Problem setting and preliminaries} \label{Sec::D-P::optimalTransport} \subsection{The distortion-perception tradeoff} Let $X,Y$ be random vectors taking values in $\R^{n_{x}}$ and $\R^{n_{y}}$, respectively. We consider the problem of constructing an estimator $\hat{X}$ of $X$ based on $Y$. Namely, we are interested in determining a conditional distribution $p_{\hat{X}|Y}$ such that $\hat{X}$ constitutes a good estimate of $X$. In many practical cases, the goodness of an estimator is associated with two factors: (i) the degree to which $\hat{X}$ is close to $X$ on average (low distortion), and (ii) the degree to which the distribution of $\hat{X}$ is close to that of $X$ (good perceptual quality).
An important question, then, is \emph{what is the minimal distortion that can be achieved under a given level of perceptual quality?} and \emph{how can we construct estimators that achieve this lower bound?} In mathematical language, we are interested in analyzing the distortion-perception (DP) function (defined similarly to the perception-distortion function of \cite{blau2018perception}) \begin{equation} D(P)=\min_{p_{\hat{X}|Y}}\left\{\EE[d(X,\hat{X})] \;:\; d_{p}(p_X,p_{\hat{X}})\leq P\right\}.\label{eq:D_P::General_definition} \end{equation} Here, $d:\R^{n_x}\times\R^{n_x}\rightarrow \R^{+}\cup\{0\}$ is some distortion criterion, $d_{p}(\cdot,\cdot)$ is some divergence between probability measures, and $p_{\hat{X}}$ is the probability measure on $\R^{n_x}$ induced by $p_{\hat{X}|Y}$ and $p_Y$. We assume that $\hat{X}$ is independent of $X$ given $Y$. As discussed in \citep{blau2018perception}, the function $D(P)$ is monotonically non-increasing and is convex whenever $d_{p}(\cdot,\cdot)$ is convex in its second argument (which is the case for most popular divergences). However, without further concrete assumptions on the distortion measure $d(\cdot,\cdot)$ and the perception index $d_p(\cdot,\cdot)$, little can be said about the precise nature of $D(P)$. Here, we focus our attention on the squared-error distortion $d(x,\hat{x})=\|x-\hat{x}\|^{2}$ and the Wasserstein-2 distance $d_p(p_X,p_{\hat{X}})=W_2(p_X,p_{\hat{X}})$, with which \eqref{eq:D_P::General_definition} reads \begin{equation} D(P)=\min_{p_{\hat{X}|Y}}\left\{\EE[\|X-\hat{X}\|^2] \;:\; W_2(p_X,p_{\hat{X}})\leq P\right\}. \label{eq:MSE::D_P::Definition} \end{equation} Throughout this paper we assume that all distributions have finite first and second moments. In addition, from Theorem \ref{Thm::extrapol} below it will follow that the minimum is indeed attained, so that \eqref{eq:MSE::D_P::Definition} is well defined. It is well known that the estimator minimizing the mean squared error (MSE) without any constraints, is given by $X^{*}=\EE[X|Y]$. This implies that $D(P)$ monotonically decreases until $P$ reaches $P^*\triangleq W_2(p_X,p_{X^*})$, beyond which point $D(P)$ takes the constant value $D^*\triangleq \EE[\|X-X^{*}\|^{2}]$. This is illustrated in Fig.~\ref{fig:DP_func}. It is also known that in this case $D(0)\leq 2D^*$ since the posterior sampling estimator $p_{\hat{X}|Y}=p_{X|Y}$ achieves $W_2(p_X,p_{\hat{X}})=0$ and $\EE[\|X-\hat{X}\|^2]=2D^*$ \citep{blau2018perception}. However, apart for these rather general properties, the precise shape of the DP curve has not been determined to date, and neither have the estimators that achieve the optimum in \eqref{eq:MSE::D_P::Definition}. This is our goal in this paper. \begin{figure} \centering \includegraphics[width=0.44\linewidth]{plot/DP_func_star} \includegraphics[bb=39bp 62bp 420bp 300bp,clip,scale=0.75,width=0.55\linewidth]{plot/geodesic_noraster} \caption{\textbf{Left: The distortion-perception function.} When using the MSE distortion and the Wasserstein-2 perception index, the minimal possible distortion, $D^*$, is achieved by the estimator $X^{*}=E[X|Y]$. The perception index attained by this estimator is $P^*$. 
At the other extreme of the tradeoff, we know that the distortion at $P=0$ is bounded from above by $2D^*$.\label{fig:DP_func} \textbf{Right:} \label{fig:geodesic} The minimal distortion $D(P)$ for a given perception index $P<P^*$ can be achieved by an estimator with a distribution $\gamma_{P}$ lying on a straight line (or geodesic) defined by the geometry of the probabilities space. Given $P$, $\gamma_{P}$ achieves $W_{2}(p_X,\gamma_{P})=P$ and $W_{2}(p_{X^*},\gamma_{P})=P^*-P$, hence $D(P)=D^*+W_2^2(p_{X^*},\gamma_{P})=D^*+(P^*-P)^{2}.$} \end{figure} \subsection{The Wasserstein and Gelbrich Distances} Before we present our main results, we briefly survey a few properties of the Wasserstein distance, mostly taken from \citep{panaretos2020invitation}. The Wasserstein-$p$ ($p\geq 1$) distance between measures $\mu$ and $\gamma$ on a separable Banach space $\mathcal{X}$ with norm $\| \cdot \|$ is defined by \begin{equation}\label{eq:Wp_def} W_{p}^{p}(\mu,\gamma)\triangleq\inf\left\{ \EE_{(U,V)\sim\nu}[\|U-V\|^{p}]\;:\;\nu\in\Pi(\mu,\gamma)\right\}, \end{equation} where $\Pi(\mu,\gamma)$ is the set of all probabilities on $\mathcal{X}\times\mathcal{X}$ with marginals $\mu$ and $\gamma$. A joint probability $\nu$ achieving the optimum in \eqref{eq:Wp_def} is often referred to as \emph{optimal plan}. The Wasserstein space of probability measures is defined as \[ \mathcal{W}_{p}(\mathcal{X})\triangleq\left\{\gamma:\int_{\mathcal{X}}\|x\|^{p}d\gamma<\infty\right\}, \] and $W_p$ constitutes a metric on $\mathcal{W}_{p}(\mathcal{X})$. For any $(m_{1},\Sigma_{1}),(m_{2},\Sigma_{2})\in \R^{d}\times \SSS_{+}^{d}$ (where $\SSS_{+}^{d}$ is the set of symmetric positive semidefinite matrices in $\R^{d\times d})$, the Gelbrich distance is defined as \begin{equation} G^{2}((m_{1},\Sigma_{1}),(m_{2},\Sigma_{2}))\triangleq\Vert m_{1}-m_{2}\Vert_{2}^{2}+\mathrm{Tr}\left\{\Sigma_{1}+\Sigma_{2}-2\left(\Sigma_{1}^{\frac{1}{2}}\Sigma_{2}\Sigma_{1}^{\frac{1}{2}}\right)^{\frac{1}{2}}\right\}.\label{eq:Gelbrich_dist} \end{equation} The root of a PSD matrix is always taken to be PSD. For any two probability measures $\mu_{1},\mu_{2}$ on $\R^{d}$ with means and covariances $(m_{1},\Sigma_{1}),(m_{2},\Sigma_{2})$, from \citep[Thm. 2.1]{gelbrich1990formula} we have that \begin{equation}\label{eq:W2_greater_G2} W_{2}^{2}(\mu_{1},\mu_{2})\geq G^{2}((m_{1},\Sigma_{1}),(m_{2},\Sigma_{2})). \end{equation} When $\mu_1=\mathcal{N}(m_{1},\Sigma_{1})$ and $\mu_2=\mathcal{N}(m_{2},\Sigma_{2})$ are Gaussian distributions on $\R^{d}$, we have that $W_{2}(\mu_{1},\mu_{2})= G((m_{1},\Sigma_{1}),(m_{2},\Sigma_{2}))$. This equality is obvious for non-singular measures but is true for any two Gaussian distributions \citep[p.~18]{panaretos2020invitation}. If $\Sigma_1$ and $\Sigma_2$ are non-singular, then the distribution attaining the optimum in \eqref{eq:Wp_def} corresponds to \begin{equation}\label{eq:optimal_dist_for_gauss} U\sim \mathcal{N}(m_1,\Sigma_1),\;\; V=m_{2}+T_{1\rightarrow 2}(U-m_{1}), \end{equation} where \begin{equation}\label{eq::T1->2:def} T_{1\rightarrow 2}=\Sigma_{1}^{-\frac{1}{2}}\left(\Sigma_{1}^{\frac{1}{2}}\Sigma_{2}\Sigma_{1}^{\frac{1}{2}}\right)^{\frac{1}{2}}\Sigma_{1}^{-\frac{1}{2}} \end{equation} is the optimal transformation pushing forward from $\mathcal{N}(0,\Sigma_{1})$ to $\mathcal{N}(0,\Sigma_{2})$ \citep{knott1984optimal}. This transformation satisfies $\Sigma_{2}=T_{1\rightarrow 2}\Sigma_{1}T_{1\rightarrow 2}.$ For a discussion on singular distributions, please see App.~\ref{Appsec::Supp}. 
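As a small numerical companion to \eqref{eq:Gelbrich_dist} and \eqref{eq::T1->2:def}, the following NumPy sketch computes the Gelbrich distance and the optimal Gaussian transport map; it assumes $\Sigma_1$ is non-singular and is meant as an illustration only:
\begin{verbatim}
import numpy as np
from scipy.linalg import sqrtm

def gelbrich2(m1, S1, m2, S2):
    # squared Gelbrich distance between (m1, S1) and (m2, S2)
    r = np.real(sqrtm(S1))
    cross = np.real(sqrtm(r @ S2 @ r))
    return float(np.sum((m1 - m2) ** 2) + np.trace(S1 + S2 - 2 * cross))

def T_map(S1, S2):
    # optimal map pushing N(0, S1) onto N(0, S2); assumes S1 non-singular
    r = np.real(sqrtm(S1))
    mid = np.real(sqrtm(r @ S2 @ r))
    r_inv = np.linalg.inv(r)
    return r_inv @ mid @ r_inv

# For Gaussians, W2^2 equals the Gelbrich distance, and
# V = m2 + T_map(S1, S2) @ (U - m1) pushes N(m1, S1) onto N(m2, S2).
\end{verbatim}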
\section{Main results} \subsection{The MSE--Wasserstein-2 tradeoff} \label{sec::The MSE--Wasserstein-2 tradeoff} The DP function \eqref{eq:MSE::D_P::Definition} depends, of course, on the underlying joint probability $p_{XY}$ of the signal $X$ and measurements $Y$. Our first key result is that this dependence can be expressed solely in terms of $D^*$ and $P^*$. In other words, knowing the distortion and perception index attained by the minimum MSE estimator $X^*$, suffices for determining $D(P)$ for any $P$. \begin{thm}[The DP function] \label{Thm:=00005Bthe-Distortion-Perception-funct} The DP function \eqref{eq:MSE::D_P::Definition} is given by \begin{equation} D(P)= D^*+\left[(P^*-P)_{+}\right]^{2}, \label{eq:Thm:DP_function::Main_result} \end{equation} where $(x)_+=\max(0,x)$. Furthermore, an estimator achieving perception index $P$ and distortion $D(P)$ can always be constructed by applying a (possibly stochastic) transformation to $X^*$. \end{thm} Theorem~\ref{Thm:=00005Bthe-Distortion-Perception-funct} is of practical importance because in many cases constructing an estimator that achieves a low MSE (\emph{i.e.} an approximation of $X^*$) is a rather simple task. This is the case, for example, in image restoration with deep neural networks. There, it is common practice to train a network by minimizing its average squared error on a training set. Now, measuring the MSE of such a network on a large test set allows approximating $D^*$. We can also obtain an approximation of at least a lower bound on $P^*$ by estimating the second order statistics of $X$ and $X^*$. Specifically, recall that $P^*$ is lower bounded by the Gelbrich distance between $(m_X,\Sigma_X)$ and $(m_{X^*},\Sigma_{X^*})$, which is given by $(G^*)^2\triangleq\mathrm{Tr}\{\Sigma_{X}+\Sigma_{X^*}-2(\Sigma_{X}^{1/2}\Sigma_{X^*}\Sigma_{X}^{1/2})^{1/2}\}$ (see \eqref{eq:W2_greater_G2}). Given approximations for $D^*$ and $G^*$, we can approximate a lower bound on the DP function for any $P$, \begin{equation}\label{eq:Thm2::LwrBound_G_Sx_S*} D(P)\geq D^*+[(G^*-P)_+]^{2}. \end{equation} The bound is attained when $X$ and $Y$ are jointly Gaussian. \paragraph{Uniqueness} A remark is in place regarding the uniqueness of an estimator achieving \eqref{eq:Thm:DP_function::Main_result}. As we discuss below, what defines an optimal estimator $\hat{X}$ is its joint distribution with $X^*$. This joint distribution may not be unique, in which case the optimal estimator is not unique. Moreover, even if $p_{\hat{X} X^{*}}$ is unique, the uniqueness of the estimator is not guaranteed because there may be different conditional distributions $p_{\hat{X}|Y}$ that lead to the same optimal $p_{\hat{X} X^{*}}$. In other words, given the optimal $p_{\hat{X} X^{*}}$, one can choose any joint probability $p_{\hat{X} Y X^* }$ that has marginals $p_{\hat{X} X^{*}}$ and $p_{YX^* }$. One option is to take the estimator $\hat{X}$ to be a (possibly stochastic) transformation of $X^*$, namely $p_{\hat{X}|Y}=p_{\hat{X}|X^*}p_{X^*|Y}$. But there may be other options. In cases where either $Y$ or $\hat{X}$ are a deterministic transformation of $X^*$ (\emph{e.g.} when $X^*$ has a density, or is an invertible function of $Y$), there is a unique joint distribution $p_{\hat{X} Y X^{*}}$ with the given marginals \cite[Lemma 5.3.2]{ambrosio2008gradient}. In this case, if $p_{\hat{X}X^{*}}$ is unique then so is the estimator $p_{\hat{X}|Y}$. 
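A minimal sketch of how Theorem \ref{Thm:=00005Bthe-Distortion-Perception-funct} and the lower bound \eqref{eq:Thm2::LwrBound_G_Sx_S*} would be used in practice; the values of $D^*$ and $P^*$ below are illustrative placeholders that would be estimated from a trained MMSE model and from second order statistics:
\begin{verbatim}
import numpy as np

def dp_curve(P, D_star, P_star):
    # D(P) = D* + [(P* - P)_+]^2; substituting G* for P* gives a lower bound
    return D_star + np.maximum(P_star - P, 0.0) ** 2

P = np.linspace(0.0, 1.5, 6)
print(dp_curve(P, D_star=0.5, P_star=1.0))  # placeholder values of D*, P*
\end{verbatim}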
\paragraph{Randomness} In the setting of image restoration, many methods encourage diversity in their output by adding randomness \citep{lugmayr2020srflow,bahat2020explorable,prakash2021removing}. In our setting, we may ask under what conditions there exists an optimal estimator $\hat X$ which is a deterministic function of $Y$. For example, when $p_Y=\delta_0$ but $X$ has some non-atomic distribution, it is clear that no deterministic function of $Y$ can attain perfect perceptual quality. It turns out that a sufficient condition for the optimal $\hat X$ to be a deterministic function of $Y$ is that $X^*$ have a density. We discuss this in App.~\ref{APPsec::proofs} and explicitly illustrate it in the Gaussian case (see Sec.~\ref{Sec::GaussianSetting}), where if $X^*$ has a non-singular covariance matrix then $\hat X$ is a deterministic function of~$Y$. \paragraph{When is posterior sampling optimal?} Many recent image restoration methods attempt to produce diverse high perceptual quality reconstructions by sampling from the posterior distribution $p_{X|Y}$ \citep{friedman2021posterior, ohayon2021high,kawar2021stochastic}. As discussed in \citep{blau2018perception}, the posterior sampling estimator attains a perception index of $0$ (namely $W_{2}(p_X,p_{\hat{X}})=0$) and distortion $2D^*$. But an interesting question is: when is this strategy optimal? In other words, in what cases do we have that the DP function at $P=0$ equals precisely $2D^*$ and is not strictly smaller? Note from the definition of the Wasserstein distance \eqref{eq:Wp_def} that $(P^*)^2=W_{2}^{2}(p_X,p_{X^{*}})\leq \EE[\|X-X^*\|^2]=D^*$. Using this in \eqref{eq:Thm:DP_function::Main_result} shows that the DP function at $P=0$ is upper bounded by \begin{equation}\label{eq:Thm2::UprBound_2Dmin} D(0)=D^*+(P^*)^2\leq2D^*, \end{equation} and the upper bound is attained when $(P^*)^2=D^*$. To see when this happens, observe that \begin{equation}\label{eq:Sandwitch} \mathrm{Tr}\left\{\Sigma_{X}+\Sigma_{X^*}-2(\Sigma_{X}^{\frac{1}{2}}\Sigma_{X^*}\Sigma_{X}^{\frac{1}{2}})^{\frac{1}{2}}\right\}=(G^*)^2\leq (P^*)^2\leq D^*=\mathrm{Tr}\{\Sigma_X-\Sigma_{X^*}\}. \end{equation} We can see that when $\mathrm{Tr}\{\Sigma_{X^*}\}=\mathrm{Tr}\{(\Sigma_{X}^{1/2}\Sigma_{X^*}\Sigma_{X}^{1/2})^{1/2}\}$, the leftmost and rightmost sides become equal, and thus $(P^*)^2=D^*$. To understand the meaning of this condition, let us focus on the case where $\Sigma_{X}$ and $\Sigma_{X^*}$ are jointly diagonalizable. This is a reasonable assumption for natural images, where shift-invariance induces diagonalization by the Fourier basis \citep{unser1984approximation}. In this case, the condition can be written in terms of the eigenvalues of the matrices, namely $\sum_{i}\lambda_i(\Sigma_{X^*})=\sum_{i} \sqrt{\lambda_i(\Sigma_{X^*})\lambda_i(\Sigma_{X})}$. This condition is satisfied when each $\lambda_i(\Sigma_{X^*})$ equals either $\lambda_i(\Sigma_{X})$ or $0$. Namely, the $i$th eigenvalue of the error covariance of $X^*$, which is given by $\Sigma_X-\Sigma_{X^*}$, is either $\lambda_i(\Sigma_{X})$ or $0$. We conclude that posterior sampling is optimal when there exists a subspace $\mathcal{S}$ spanned by some of the eigenvectors of $\Sigma_X$, such that the projection of $X$ onto $\mathcal{S}$ can be recovered from $Y$ with zero error, but the projection of $X$ onto $\mathcal{S}^\perp$ cannot be recovered at all (there, the optimal estimator is trivial). This is likely not the case in most practical scenarios.
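The eigenvalue condition above is easy to test numerically; the following sketch (ours, with a hypothetical function name) checks it for matched spectra of $\Sigma_X$ and $\Sigma_{X^*}$ in the jointly diagonalizable case:
\begin{verbatim}
# A sketch (ours): in the jointly diagonalizable case, posterior sampling
# is optimal when sum_i lam_i(S_X*) = sum_i sqrt(lam_i(S_X*) lam_i(S_X)),
# i.e. when each lam_i(S_X*) equals the matching lam_i(S_X) or 0.
import numpy as np

def posterior_sampling_optimal(lam_x, lam_xstar, tol=1e-9):
    """lam_x, lam_xstar: matched eigenvalues of Sigma_X and Sigma_X*
    (e.g. Fourier-domain power spectra for stationary images)."""
    lhs = np.sum(lam_xstar)
    rhs = np.sum(np.sqrt(lam_xstar * lam_x))
    return abs(lhs - rhs) <= tol * max(lhs, rhs, 1.0)

# One subspace recovered exactly, its complement not at all -> True.
print(posterior_sampling_optimal(np.array([2.0, 1.0, 3.0]),
                                 np.array([2.0, 0.0, 3.0])))
\end{verbatim}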
Therefore, it seems that \emph{posterior sampling is often not optimal}. That is, posterior sampling can be improved upon in terms of MSE without any sacrifice in perceptual quality. \subsection{Optimal estimators} \label{sec::optimalestimators} While Theorem~\ref{Thm:=00005Bthe-Distortion-Perception-funct} reveals the shape of the DP function, it does not provide a recipe for constructing optimal estimators on the DP tradeoff. We now discuss the nature of such estimators. Our first observation is that since $\hat{X}$ is independent of $X$ given $Y$, its MSE can be decomposed as $\EE[\|X-\hat{X}\|^2]=\EE[\|X-X^*\|^2]+\EE[\|X^*-\hat{X}\|^2]$ (see App.~\ref{APPsec::proofs}). Therefore, the DP function \eqref{eq:MSE::D_P::Definition} can be equivalently written as \begin{equation}\label{eq:MSE::D_P::Alternative} D(P)=D^*+\min_{p_{\hat{X}|Y}}\left\{\EE[\|\hat{X}-X^*\|^2] \;:\; W_2(p_X,p_{\hat{X}})\leq P\right\}. \end{equation} Note that the objective in \eqref{eq:MSE::D_P::Alternative} depends on the MSE between $\hat{X}$ and $X^*$, so that we can perform the minimization on $p_{\hat{X}|X^*}$ rather than on $p_{\hat{X}|Y}$ (once we determine the optimal $p_{\hat{X}|X^*}$ we can construct a consistent $p_{\hat{X}|Y}$ as discussed above). Now, let us start by examining the leftmost side of the curve $D(P)$, which corresponds to a perfect perceptual quality estimator (\emph{i.e.}~$P=0$). In this case, the constraint becomes $p_{\hat{X}}=p_X$. Therefore, \begin{equation} D(0)=D^*+\min_{p_{\hat{X}X^*}}\left\{\EE[\|\hat{X}-X^*\|^2] \;:\; p_{\hat{X}X^*}\in\Pi(p_X,p_{X^*})\right\}, \end{equation} where $\Pi(p_X,p_{X^*})$ is the set of all probabilities on $\R^{n_x}\times\R^{n_x}$ with marginals $p_X,p_{X^*}$. One may readily recognize this as the optimization problem underlying the Wasserstein-2 distance between $p_X$ and $p_{X^*}$. This leads us to the following conclusion. \begin{thm}[Optimal estimator for $P=0$] \label{thm:perfect_perception}Let $\hat{X}_0$ be an estimator achieving perception index $0$ and MSE $D(0)$. Then its joint distribution with $X^*$ attains the optimum in the definition of $W_2(p_X,p_{X^*})$. Namely, $p_{\hat{X}_0 X^*}$ is an optimal plan between $p_X$ and $p_{X^*}$. \end{thm} Having understood the estimator $\hat{X}_0$ at the leftmost end of the tradeoff, we now turn to study optimal estimators for arbitrary $P$. Interestingly, we can show that Problem \eqref{eq:MSE::D_P::Alternative} is equivalent to (see App.~\ref{APPsec::proofs}) \begin{equation}\label{eq:ObjectiveArbitraryP} D(P)=D^*+\min_{p_{\hat{X}}}\left\{ W^2_2(p_{\hat{X}},p_{X^{*}})\;:\; W_{2}(p_{\hat{X}},p_X)\leq P\right\}. \end{equation} Namely, an optimal $p_{\hat{X}}$ is closest to $p_{X^*}$ among all distributions within a ball of radius~$P$ around $p_X$, as illustrated in Fig.~\ref{fig:geodesic}. Moreover, $p_{\hat{X}X^*}$ is an optimal plan between $p_{\hat{X}}$ and $p_{X^*}$. As it turns out, this somewhat abstract viewpoint leads to a rather practical construction for $\hat{X}$ from the estimators $\hat{X}_0$ and $X^*$ at the two extremes of the tradeoff. Specifically, we have the following result, proved in App.~\ref{APPsec::proofs}. \begin{thm}[Optimal estimators for arbitrary $P$] \label{Thm::extrapol}Let $\hat{X}_0$ be an estimator achieving perception index~$0$ and MSE $D(0)$. Then for any $P\in[0,P^*]$, the estimator \begin{equation} \hat{X}_P=\left(1-\frac{P}{P^*}\right)\hat{X}_{0}+\frac{P}{P^*}X^{*}\label{eq:hat_X::extrapolated} \end{equation} is optimal for perception index $P$.
Namely, it achieves perception index $P$ and distortion $D(P)$. \end{thm} Theorem~\ref{Thm::extrapol} has important implications for perceptual signal restoration. For example, in the task of image super-resolution, there exist many deep network based methods that achieve a low MSE \citep{lim2017enhanced,ulyanov2018deep,shocher2018zero}. These provide an approximation for $X^*$. Moreover, there is an abundance of methods that achieve good perceptual quality at the price of a reasonable degradation in MSE (often by incorporating a GAN-based loss) \citep{ledig2017photo,wang2018esrgan,shaham2019singan}. These constitute approximations for $\hat{X}_0$. However, achieving results that strike other prescribed balances between MSE and perceptual quality commonly requires training a different model for each setting. \citet{shoshan2019dynamic} and \citet{navarrete2018multi} tried to address this difficulty by introducing new training techniques that allow traversing the distortion-perception tradeoff at test time. But, interestingly, Theorem~\ref{Thm::extrapol} shows that in our setting such specialized training methods are not required. Given a model that leads to low MSE and one that leads to good perceptual quality, it is possible to construct any other estimator on the DP tradeoff by simply averaging the outputs of these two models with appropriate weights. We illustrate this in Sec.~\ref{sec::numerical}. \subsection{The Gaussian setting} \label{Sec::GaussianSetting} When $X$ and $Y$ are jointly Gaussian, it is well known that the minimum MSE estimator $X^*$ is a linear function of the measurements $Y$. However, it is not \emph{a-priori} clear whether all estimators along the DP tradeoff are linear in this case, and what kind of randomness they possess. As we now show, equipped with Theorem~\ref{Thm::extrapol}, we can obtain closed form expressions for optimal estimators for any $P$. For simplicity, we assume here that $X$ and $Y$ have zero means and that $\Sigma_X,\Sigma_{Y} \succ0$. It is instructive to start by considering the simple case where $\Sigma_{X^*}$ is non-singular (in Theorem~\ref{thm:Gaussian1} below we address the more general case of a possibly singular $\Sigma_{X^*}$). It is well known that \begin{equation}\label{eq:SigmaXstar} X^*=\Sigma_{XY}\Sigma_Y^{-1}Y,\qquad \Sigma_{X^*}=\Sigma_{XY}\Sigma_{Y}^{-1}\Sigma_{YX}. \end{equation} Now, since we assumed that $\Sigma_X,\Sigma_{X^*}\succ0$, we have from Theorem~\ref{thm:perfect_perception} and \eqref{eq:optimal_dist_for_gauss},\eqref{eq::T1->2:def} that \begin{equation} \hat{X}_0 = \Sigma_{X^*}^{-\frac{1}{2}}\left(\Sigma_{X^*}^{\frac{1}{2}}\Sigma_{X}\Sigma_{X^*}^{\frac{1}{2}}\right)^{\frac{1}{2}}\Sigma_{X^*}^{-\frac{1}{2}} X^*. \end{equation} Finally, we know that $P^*=G^*$, where $(G^*)^2$ is the leftmost side of \eqref{eq:Sandwitch}. Substituting these expressions into \eqref{eq:hat_X::extrapolated}, we obtain that an optimal estimator for perception $P\in[0,G^*]$ is given by \begin{equation}\label{eq:GaussXpInvertible} \hat{X}_P=\left(\left(1-\frac{P}{G^*}\right)\Sigma_{X^*}^{-\frac{1}{2}}\left(\Sigma_{X^*}^{\frac{1}{2}}\Sigma_{X}\Sigma_{X^*}^{\frac{1}{2}}\right)^{\frac{1}{2}}\Sigma_{X^*}^{-\frac{1}{2}} +\frac{P}{G^*}I\right) \Sigma_{XY}\Sigma_Y^{-1}Y. \end{equation} As can be seen, this optimal estimator is a deterministic linear transformation of $Y$ for any $P$.
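A numerical sketch of this closed-form estimator (ours, assuming known second-order statistics and a non-singular $\Sigma_{X^*}$; the function name is hypothetical) is:
\begin{verbatim}
# A sketch (ours) of hat{X}_P = ((1-P/G*) T_{X*->X} + (P/G*) I) S_xy S_y^-1 y
# for zero means and non-singular Sigma_X, Sigma_Y, Sigma_X*.
import numpy as np
from scipy.linalg import sqrtm

def gaussian_dp_estimator(y, S_x, S_xy, S_y, P):
    x_star = S_xy @ np.linalg.solve(S_y, y)          # MMSE estimate X*
    S_xstar = S_xy @ np.linalg.solve(S_y, S_xy.T)
    A = np.real(sqrtm(S_xstar))                      # Sigma_X*^{1/2}
    cross = np.real(sqrtm(A @ S_x @ A))
    A_inv = np.linalg.inv(A)
    T = A_inv @ cross @ A_inv                        # T_{X* -> X}
    # Tr of cross equals the cross term in the Gelbrich formula, since
    # both orderings of the product share the same spectrum.
    G_star = np.sqrt(np.trace(S_x + S_xstar - 2 * cross))
    t = P / G_star
    return ((1 - t) * T + t * np.eye(len(S_x))) @ x_star
\end{verbatim}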
The setting just described does not cover the case where $Y$ is of lower dimensionality than $X$, because in that case $\Sigma_{X^*}$ is necessarily singular (it is an $n_x\times n_x$ matrix of rank at most $n_y$; see \eqref{eq:SigmaXstar}). In this case, any deterministic linear function of $Y$ would result in an estimator $\hat{X}$ with a covariance of rank at most $n_y$. Obviously, the distribution of such an estimator cannot be arbitrarily close to that of $X$, whose covariance has rank $n_x$. What is the optimal estimator in this more general setting, then? \begin{thm}[Optimal estimators in the Gaussian case]\label{thm:Gaussian1} Assume $X$ and $Y$ are zero-mean jointly Gaussian random vectors with $\Sigma_X,\Sigma_{Y} \succ0$. Denote $T^{*}\triangleq T_{p_X\rightarrow p_{X^*}}=\Sigma_{X}^{-1/2}(\Sigma_{X}^{1/2}\Sigma_{X^*}\Sigma_{X}^{1/2})^{1/2}\Sigma_{X}^{-1/2}$. Then for any $P\in[0,G^{*}]$, an estimator with perception index $P$ and MSE $D(P)$ can be constructed as \begin{equation} \label{eq::X_P_Gauss:interpolation} \hat{X}_P=\left(\left(1-\frac{P}{G^{*}}\right)\Sigma_{X}^{\frac{1}{2}}\left(\Sigma_{X}^{\frac{1}{2}}\Sigma_{X^*}\Sigma_{X}^{\frac{1}{2}}\right)^{\frac{1}{2}}\Sigma_{X}^{-\frac{1}{2}}\Sigma_{X^*}^{\dagger}+\frac{P}{G^{*}}I\right)\Sigma_{XY}\Sigma_{Y}^{-1}Y+\left(1-\frac{P}{G^{*}}\right)W, \end{equation} where $W$ is a zero-mean Gaussian noise with covariance $\Sigma_{W}=\Sigma_{X}^{1/2}(I-\Sigma_{X}^{1/2}T^{*}\Sigma_{X^*}^{\dagger}T^{*}\Sigma_{X}^{1/2})\Sigma_{X}^{1/2}$, which is independent of $Y,X$, and $\Sigma_{X^*}^{\dagger}$ is the pseudo-inverse of $\Sigma_{X^*}$. \end{thm} Note that in this case, we indeed have a random noise component that shapes the covariance of $\hat{X}_P$ to become closer to $\Sigma_X$ as $P$ gets closer to $0$. It can be shown (see App.~\ref{APPsec::proofs}) that when $\Sigma_{X^*}$ is invertible, $\Sigma_W=0$ and \eqref{eq::X_P_Gauss:interpolation} reduces to \eqref{eq:GaussXpInvertible}. Also note that, as in \eqref{eq:GaussXpInvertible}, the dependence of $\hat{X}_P$ on $Y$ in \eqref{eq::X_P_Gauss:interpolation} is only through $X^*=\Sigma_{XY}\Sigma_{Y}^{-1}Y$. As mentioned in Sec.~\ref{sec::The MSE--Wasserstein-2 tradeoff}, the optimal estimator is generally not unique. Interestingly, in the Gaussian setting we can explicitly characterize a \emph{set} of optimal estimators. \begin{thm}[A set of optimal estimators in the Gaussian case] \label{thm:Gaussian_not_unique} Consider the setting of Theorem~\ref{thm:Gaussian1}. Let $\Sigma_{\hat X_0 Y}\in \R^{n_{x}\times n_{y}}$ satisfy \begin{equation} \Sigma_{\hat X_0 Y}\Sigma_{Y}^{-1}\Sigma_{YX}=\Sigma_{X}^{\frac{1}{2}}(\Sigma_{X}^{\frac{1}{2}}\Sigma_{X^*}\Sigma_{X}^{\frac{1}{2}})^{\frac{1}{2}}\Sigma_{X}^{-\frac{1}{2}},\label{eq:Thm_Gaussian_General::M_cond1-2-1-1} \end{equation} and $W_{0}$ be a zero-mean Gaussian noise with covariance \begin{equation} \Sigma_{W_{0}}=\Sigma_{X}-\Sigma_{\hat X_0 Y}\Sigma_{Y}^{-1}\Sigma_{\hat X_0 Y}^{T}\succeq0 \label{eq:Thm_Gaussian_General::M_cond2-2-1-1} \end{equation} that is independent of $X,Y$. Then, for any $P\in[0,G^*]$, an optimal estimator with perception index $P$ can be obtained by \begin{equation} \hat{X}_P=\left(\left(1-\frac{P}{G^{*}}\right)\Sigma_{\hat X_0 Y}+\frac{P}{G^{*}}\Sigma_{XY}\right)\Sigma_{Y}^{-1}Y+\left(1-\frac{P}{G^{*}}\right)W_{0}.
\end{equation} The estimator given in \eqref{eq::X_P_Gauss:interpolation} is one solution to \eqref{eq:Thm_Gaussian_General::M_cond1-2-1-1}-\eqref{eq:Thm_Gaussian_General::M_cond2-2-1-1}, but is generally not unique. \end{thm} \section{A geometric perspective on the distortion-perception tradeoff} \label{sec::geometric} In this section we provide a geometric point of view on our main results. Specifically, we show that the results of Theorems \ref{Thm:=00005Bthe-Distortion-Perception-funct} and \ref{Thm::extrapol} are a consequence of a more general geometric property of the space $\mathcal{W}_{2}(\R^{n_{x}})$. In the Gaussian case, this simplifies to a geometry of covariance matrices. Recall from \eqref{eq:ObjectiveArbitraryP} that the optimal $p_{\hat{X}}$ is the one closest to $p_{X^*}$ (in terms of Wasserstein distance) among all measures at a distance $P$ from $p_X$. This implies that to determine $p_{\hat{X}}$, we should traverse the geodesic between $p_{X^*}$ and $p_{X}$ until reaching a distance of $P$ from $p_X$. Furthermore, $p_{\hat{X}X^*}$ should be the optimal plan between $p_{\hat{X}}$ and $p_{X^*}$. Interestingly, geodesics in Wasserstein spaces take a particularly simple form, and their explicit construction also turns out to satisfy the latter requirement. Specifically, let $\gamma,\mu$ be measures in $\mathcal{W}_{2}(\R^{d})$, let $\nu\in\Pi(\gamma,\mu)$ be an optimal plan attaining $W_2(\gamma,\mu)$, and let $\pi_{i}$ denote the projection $\pi_{i}:\R^{d}\times \R^{d}\rightarrow \R^{d}$ such that $\pi_{i}((x_{1},x_{2}))=x_{i},\;i=1,2$. Then, the curve \begin{equation} \gamma_{t}\triangleq\left[(1-t)\pi_{1}+t\pi_{2}\right]\#\nu,\quad t\in[0,1]\label{eq:Geodesic:nu} \end{equation} is a constant-speed geodesic from $\gamma$ to $\mu$ in $\mathcal{W}_{2}(\R^{d})$ \citep{ambrosio2008gradient}, where $\#$ is the push-forward operation\footnote{For measures $\gamma,\mu$ on $\mathcal{X},\mathcal{Y}$, we say that a measurable transform $T:\mathcal{X}\rightarrow\mathcal{Y}$ pushes $\gamma$ forward to $\mu$ (denoted $T\#\gamma=\mu$) iff $\gamma(T^{-1}(B))=\mu(B)$ for any measurable $B \subseteq \mathcal{Y}.$}. Particularly, \begin{equation} W_{2}(\gamma_{t},\gamma_{s})=|t-s|W_{2}(\gamma,\mu), \end{equation} and it follows that $\ensuremath{W_{2}(\gamma_{t},\gamma)=tW_{2}(\gamma,\mu)}$ and $\ensuremath{W_{2}(\gamma_{t},\mu)=(1-t)W_{2}(\gamma,\mu)}$. Furthermore, if $\gamma_t,t\in [0,1]$ is a constant-speed geodesic with $\gamma_0=\gamma,\gamma_1=\mu$, then the optimal plans between $\gamma,\gamma_t$ and between $\gamma_t,\mu$ are given by \begin{equation} \left[\pi_1,(1-t)\pi_{1}+t\pi_{2}\right]\#\nu,\quad \left[(1-t)\pi_{1}+t\pi_{2},\pi_2\right]\#\nu, \end{equation} respectively, where $\nu\in\Pi(\gamma,\mu)$ is some optimal plan. Applying \eqref{eq:Geodesic:nu} to $(\hat X_0,X^*)\sim \nu$ with $t=P/P^*$, we obtain \eqref{eq:hat_X::extrapolated}, and one can show that the resulting estimator achieves $\EE[\|\hat {X}_P-X^*\|^2]=(1-t)^2 W^2_2(p_X,p_{X^*})$. This explains the result of Theorem \ref{Thm::extrapol}. It is worth mentioning that this geometric interpretation is simplified under some common settings. For example, when $\gamma$ is absolutely continuous (w.r.t.~the Lebesgue measure), we have a measurable map $T_{\gamma\rightarrow\mu}$ which is the solution to the optimal transport problem with the quadratic cost \citep[Thm 1.6.2, p.16]{panaretos2020invitation}.
The geodesic \eqref{eq:Geodesic:nu} then takes the form \begin{equation} \gamma_{t}=[Id+t(T_{\gamma\rightarrow\mu}-Id)]\#\gamma,\quad t\in[0,1]. \end{equation} Therefore, in our setting, if $\gamma=p_{X^*}$ has a density, then we can obtain $\hat {X}_P$ by the deterministic transformation $[X^*+\left( 1-\frac{P}{P^*} \right) \left(T_{p_{X^*}\rightarrow p_{X}}(X^*)-X^*\right)]$ (see the remark on randomness in Sec.~\ref{sec::The MSE--Wasserstein-2 tradeoff}). Further simplification arises when $\gamma,\mu$ are centered non-singular Gaussian measures, in which case $T_{\gamma\rightarrow\mu}$ is the linear and symmetric transformation \eqref{eq::T1->2:def}. Then, $\gamma_{t}$ is a Gaussian measure with covariance $\Sigma_{\gamma_{t}}=T_{t}\Sigma_{\gamma}T_{t}$, where $T_{t}\triangleq[I+t(T_{\gamma\rightarrow\mu}-I)].$ Therefore, in the Gaussian case, the shortest path (\ref{eq:Geodesic:nu}) between distributions is reduced to a trajectory in the geometry of covariance matrices induced by the Gelbrich distance \citep{takatsu2010wasserstein}. If additionally $\Sigma_{\gamma}$ and $\Sigma_{\mu}$ commute, then the Gelbrich distance is further reduced to the $\ell^2$-distance between matrices, as we discuss in App.~\ref{appsec::commute}. \section{Numerical illustration} \label{sec::numerical} In this section we evaluate $12$ super-resolution algorithms on the BSD100 dataset\footnote{All codes are freely available and provided by the authors. The BSD100 dataset is free to download for non-commercial research.} \citep{martin2001database}. The evaluated algorithms include EDSR \citep{lim2017enhanced}, ESRGAN \citep{wang2018esrgan}, SinGAN \citep{shaham2019singan}, ZSSR \citep{shocher2018zero}, DIP \citep{ulyanov2018deep}, SRResNet variants which optimize MSE and VGG$_{2,2}$, SRGAN variants which optimize MSE, VGG$_{2,2}$ and VGG$_{5,4}$ in addition to an adversarial loss \citep{ledig2017photo}, ENet \citep{sajjadi2017enhancenet} (``PAT'' and ``E'' variants). Low resolution images were obtained by $ 4\times$ downsampling using a bicubic kernel. In Figure \ref{fig:DG} we plot each method on the distortion-perception plane. Specifically, we consider natural (and reconstructed) images to be stationary random sources, and use $9 \times 9 $ patches ($1.6\times 10^6$ patches in total) from the RGB images to empirically estimate the mean and covariance matrix for the ground-truth images, and for the reconstructions produced by each method. We then use the estimated Gelbrich distances \eqref{eq:Gelbrich_dist} between the patch distribution of each method and that of ground-truth images as a perceptual quality index. Recall that this is a lower bound on the Wasserstein distance. We consider EDSR \cite{lim2017enhanced} to be the best MSE estimator $X^*$ since it achieves the lowest distortion among the evaluated methods. We therefore estimate the lower bound \eqref{eq:Thm2::LwrBound_G_Sx_S*} as \[\hat D (P)=D_{\text{EDSR}}+\left[(P_{\text{EDSR}}-P)_+\right]^2,\] where $D_{\text{EDSR}}$ is the MSE of EDSR, and $P_{\text{EDSR}}$ is the estimated Gelbrich distance between EDSR reconstructions and ground-truth images. Note the unoccupied region under the estimated curve in Figure~\ref{fig:DG}, which is indeed unattainable according to the theory. We also present 11 estimators $\hat X_t$ which we construct by interpolation between EDSR and ESRGAN \cite{wang2018esrgan}, $ \hat X_t = tX_{\text{EDSR}} + (1-t)X_{\text{ESRGAN}}$.
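The interpolation experiment itself reduces to a few lines; the sketch below (ours, with hypothetical array inputs standing in for per-image reconstructions, and reusing the \texttt{gelbrich\_distance} sketch above) traces interpolated estimators on the distortion-perception plane:
\begin{verbatim}
# A sketch (ours) of the sweep hat{X}_t = t X_mse + (1-t) X_gan, scoring
# each t by MSE and by the Gelbrich perception proxy used in the text.
import numpy as np

def dp_curve(x_true, x_mse, x_gan, ts):
    """All inputs: arrays of shape (n_images, d)."""
    points = []
    for t in ts:
        x_t = t * x_mse + (1 - t) * x_gan
        mse = np.mean(np.sum((x_true - x_t) ** 2, axis=1))
        G = gelbrich_distance(x_true.mean(0), np.cov(x_true, rowvar=False),
                              x_t.mean(0), np.cov(x_t, rowvar=False))
        points.append((mse, G))
    return points  # e.g. ts = np.linspace(-0.1, 1.0, 12); cf. t < 0 below
\end{verbatim}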
We observe (Figure \ref{fig:DG}) that estimators constructed using these two extreme points are closer to the optimal DP tradeoff than the evaluated methods. Also note that since ESRGAN does not attain a $0$ perception index, we can in practice use negative values $t<0$ to extrapolate estimators with better perceptual quality, $\hat X_{-0.05}$ and $\hat X_{-0.1}$. In Figure \ref{fig:SrganVSintp} we present a visual comparison between SRGAN-VGG$_{2,2}$ \citep{ledig2017photo} and our interpolated estimator $\hat X_{0.12}$. Both achieve roughly the same RMSE distortion ($18.09$ for SRGAN, $18.15$ for $\hat X_{0.12}$), but our estimator achieves a lower perception index. Namely, by using interpolation, we achieve an improvement in perceptual quality without any degradation in distortion. The improvement in visual quality is also apparent in the figure. Additional visual comparisons can be found in the Appendix. \begin{figure}[h] \begin{centering} \includegraphics[bb=10bp 0bp 460bp 307bp,clip,scale=0.85,width=0.80\linewidth]{plot/20210520_22_29_19.pdf} \par\end{centering} \centering{}\caption{\label{fig:DG} \textbf{Evaluation of SR algorithms}. We plot 12 algorithms (Blue) on the Distortion-Perception plane. Here we estimate perception using the Gelbrich distance between empirical means and covariances of the original data and reconstructed data. $\hat D(P)$ (Orange) is the estimated lower bound \eqref{eq:Thm2::LwrBound_G_Sx_S*} where we consider EDSR to be the global minimizer $X^*$. Note the unoccupied region under the estimated curve, which is unattainable. We also plot 11 estimators $\hat X_t$ (Green) created by an interpolation between EDSR and ESRGAN, using different relative weights $t$. Note that estimators constructed using these two extreme estimators are closer to the optimal DP curve than the compared methods.} \end{figure} \begin{figure}[] \begin{centering} \includegraphics[bb=12bp 200bp 1150bp 522bp,clip,scale=0.99,width=.95\linewidth]{plot/vgg22x12_horiz_largefont} \par\end{centering} \centering{}\caption{ \label{fig:SrganVSintp} \textbf{A visual comparison between estimators with approximately the same MSE}. Left: SRGAN-VGG$_{2,2}$. Right: $\hat X_{0.12}$, an interpolation between EDSR and ESRGAN using $t=0.12$. Observe the improvement in perceptual quality, without any significant degradation in distortion.} \end{figure} \section{Conclusion} In this paper we provide a full characterization of the distortion-perception tradeoff for the MSE distortion and the Wasserstein-$2$ perception index. We show that optimal estimators are obtained by interpolation between the minimum MSE estimator and an optimal perfect perception quality estimator. In the Gaussian case, we explicitly formulate these estimators. To the best of our knowledge, this is the first work to derive such closed-form expressions. Our work paves the way towards fully understanding the DP tradeoff under more general distortions and perceptual criteria, and bridging between fidelity and visual quality at test time, without training different models. \setcitestyle{numbers}
\section{Introduction}\label{section:Intro} The study of extreme eigenvalue statistics in Random Matrix Theory (RMT) has attracted much attention during the last twenty years. In particular, the Tracy-Widom (TW) distributions \cite{TW94a,TW96} describing the largest eigenvalue $\lambda_{\max}$ (as well as the smallest one, $\lambda_{\min}$) in the classical Gaussian ensembles, orthogonal (GOE, $\beta =1$), unitary (GUE, $\beta= 2$) and symplectic (GSE, $\beta =4$) -- where $\beta$ is the Dyson index -- have become cornerstones of the theory of extreme value statistics of strongly correlated variables. Quite remarkably, it was shown that the TW distributions, denoted by ${\cal F}_{\beta}$ in the following, appear in a wide variety of problems \cite{Majumdar06}, a priori not directly related to RMT, ranging from the longest increasing sequence of random permutations of integers \cite{Baik99}, stochastic growth and related directed polymer models in the Kardar-Parisi-Zhang \cite{KPZ86} universality class \cite{Johansson00,Prahofer00,Sasamoto10,Calabrese10,Dotsenko10,Amir11} and sequence alignment problems \cite{Majumdar05} to non-intersecting interfaces and Brownian motions \cite{Nadal09,Forrester11,Liechty12} as well as in finance \cite{Biroli07}. While the TW distribution describes the typical fluctuations of $\lambda_{\max}$ and $\lambda_{\min}$, a large body of work has also been devoted to the study of large deviations of extreme eigenvalues in Gaussian ensembles \cite{Majumdar14}. \begin{figure}[h!] \begin{center} \resizebox{120mm}{!}{\includegraphics{Wishart_soft_vs_hard.pdf}} \caption{Plot of the Marchenko-Pastur distribution for $c<1$ ({\bf left}) and for $c=1$ ({\bf right}).} \label{fig:Wishart_soft_vs_hard} \end{center} \end{figure} Another interesting ensemble of random matrices, which we focus on in this paper, is the so-called Wishart-Laguerre ensemble -- here we focus on the case of complex matrices ($\beta = 2$). Let $X$ be an $M \times N$ rectangular matrix with i.i.d. complex Gaussian entries and $M-N=a \geq 0$. The Wishart-Laguerre matrix $W$ is defined as $W=X^\dagger X$, which is thus an $N \times N$ Hermitian matrix, having $N$ real and positive eigenvalues $\lambda_1,\lambda_2,\cdots,\lambda_N$. The joint probability density function (PDF) of these $N$ eigenvalues is given by \cite{Mehta91,Forrester10} \begin{eqnarray}\label{Pjoint} P_{\rm joint}(\lambda_1,\lambda_2,\cdots,\lambda_N)=\frac{1}{Z_N} \prod_{i < j}^N (\lambda_i-\lambda_j)^2 \,\exp{\left(-\sum_{i=1}^N \lambda_i\right)} \prod_{i=1}^N\lambda_i^a, \end{eqnarray} where $\lambda_1$, $\lambda_2$ ... $\lambda_N$ are positive and where $Z_N$ is a normalization constant, depending on $a$. These Wishart-Laguerre matrices play an important role in statistics, in particular in principal component analysis, where the matrix $W$ is a covariance matrix. Hence in this case both $M$ and $N$ are positive integers, and thus $a = M-N$ is a non-negative integer. However, this joint PDF in Eq. (\ref{Pjoint}) is well defined for any real value of $a \geq 0$, and non-integer values of $a$ may also have physical applications. An interesting example is the case of non-intersecting Brownian excursions \cite{Tracy07}, i.e., $N$ non-colliding positive Brownian paths $x_i(t) \ge 0$ on the unit time interval $t\in[0,1]$ constrained to start and end at the origin $x_i(0)=x_i(1)=0$.
The joint PDF of the positions of the $N$ walkers, at a given time $t$, can indeed be written as \cite{Schehr08} \begin{eqnarray}\label{pdf_watermelons} P_{\rm joint}(x_1,\cdots,x_N;t)=\frac{1}{z_N(t)} \prod_{i =1}^N x_i^{2} \prod_{i < j}^N (x_i^2-x_j^2)^2 \,\exp{\left(-\sum_{i=1}^N \frac{x_i^2}{\sigma^2(t)} \right)} \;, \end{eqnarray} where $\sigma^2(t) = 2t(1-t)$ and $z_N(t)$ is a normalization constant. From this expression (\ref{pdf_watermelons}) we obtain that the scaled variables $x_i^2/\sigma^2(t)$ behave statistically like the eigenvalues of random matrices from the Wishart-Laguerre ensemble (\ref{Pjoint}) with a non-integer parameter $a=1/2$. Hence it is physically relevant to study the joint distribution in Eq.~(\ref{Pjoint}) for any real $a \geq 0$. A first important characteristic associated with this ensemble (\ref{Pjoint}) is the mean density of eigenvalues, $\rho(\lambda,N)$, defined by \begin{eqnarray} \rho(\lambda,N)=\frac{1}{N} \sum_{i=1}^{N} \overline{\delta(\lambda_i-\lambda)}, \end{eqnarray} where the overline denotes an average over the different realizations of the random variables $\lambda_i$'s according to the joint PDF in Eq. (\ref{Pjoint}). In the large $N$ limit, it is well known that $\rho(\lambda,N)$ is given by the Marchenko-Pastur (MP) distribution (see Fig. \ref{fig:Wishart_soft_vs_hard}) \begin{eqnarray}\label{density_MP} \rho(\lambda,N) &\underset{N \to \infty}{\to}& \frac{1}{N}\rho_{\rm MP}\left( \frac{\lambda}{N}\right),\\ \rho_{\rm MP}\left(x\right)&=&\frac{\sqrt{(x-x_-)(x_+-x)}}{2 \pi x} , \end{eqnarray} where $x_\pm = (c^{-1/2} \pm 1)^2$ are the right and left edges of the support, with $c=N/M \le 1$. The case where $c<1$ corresponds to the case where $a \sim {\cal O}(N)$. Here we mainly focus on the case $c=1$, which corresponds instead to the case where $a$ is finite while both $N$ and $M$ are large. In this case, the MP distribution takes the particular form (see the right panel of Fig.~\ref{fig:Wishart_soft_vs_hard}) \begin{eqnarray}\label{eq:MP_afinite} \rho_{\rm MP}\left(x\right)&=&\frac{1}{2\pi}\sqrt{\frac{4-x}{x}} \;. \end{eqnarray} At the right edge, near $x_+ = 4$, $\rho_{\rm MP}\left(x\right)$ vanishes as a square-root, $\rho_{\rm MP}\left(x\right) \propto \sqrt{4 - x}$ and therefore the fluctuations near this {\it soft} edge are governed, for large $N$, by the Airy kernel. In particular, the distribution of the largest eigenvalue $\lambda_{\max}$, appropriately shifted and scaled, converges, when $N \to \infty$, to the TW distribution ${\cal F}_2$ mentioned above (the same as for GUE) \cite{Johansson00,Johnstone01}. It is now well known that this distribution can be expressed in terms of a special solution of a Painlev\'e II equation. While this connection to Painlev\'e transcendents was initially obtained by Tracy and Widom using Fredholm operator techniques, Nadal and Majumdar \cite{Nadal11} provided, more recently, a derivation of this result (for $\beta =2$) using semi-classical orthogonal polynomials (OPs), see also Ref. \cite{Chen05}. This method is at the heart of the present paper. On the other hand, at the left edge, $x_- = 0$, $\rho_{\rm MP}\left(x\right)$ has an inverse square-root singularity, $\rho_{\rm MP}\left(x\right) \propto 1/\sqrt{x}$ (see the right panel of Fig. \ref{fig:Wishart_soft_vs_hard}). What about the fluctuations of the smallest eigenvalue $\lambda_{\min}$ in this case?
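Before turning to $\lambda_{\min}$, the MP form \eqref{eq:MP_afinite} can be checked by direct sampling; the short Monte Carlo sketch below (ours, not part of the analysis) draws a complex Wishart matrix with $c=1$ and compares the rescaled spectrum to $\rho_{\rm MP}$:
\begin{verbatim}
# A sketch (ours): eigenvalues of W = X^dag X with i.i.d. standard
# complex Gaussian entries (M = N, so a = 0), rescaled by N, versus
# the c = 1 Marchenko-Pastur density rho(x) = sqrt((4-x)/x)/(2 pi).
import numpy as np

rng = np.random.default_rng(0)
N = M = 400
X = (rng.standard_normal((M, N))
     + 1j * rng.standard_normal((M, N))) / np.sqrt(2)
lam = np.linalg.eigvalsh(X.conj().T @ X) / N

hist, edges = np.histogram(lam, bins=40, range=(0.0, 4.0), density=True)
centers = 0.5 * (edges[1:] + edges[:-1])
rho = np.sqrt((4.0 - centers) / centers) / (2.0 * np.pi)
mask = centers > 0.2   # first bins feel the 1/sqrt(x) hard-edge divergence
print(np.max(np.abs(hist[mask] - rho[mask])))  # small away from the edge
\end{verbatim}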
One can estimate the typical scale of $\lambda_{\min}$, for large $N$, by considering that there is typically one eigenvalue in the interval $[0,\lambda_{\min}]$, i.e., \begin{eqnarray} N \int_0^{\lambda_{\min}} \rho\left(\lambda,N\right) \, {\rm d} \lambda \sim {\cal O}(1)\;, \end{eqnarray} which implies that $\lambda_{\min} \sim {\cal O}(1/N)$. On this scale ${\cal O}(1/N)$, the fluctuations are governed by the Bessel kernel \cite{For93,Ver93}. Furthermore, it has been shown that the distribution of $N \lambda_{\min}$ converges to a limiting form which (i) is different from the standard TW distribution ${\cal F}_2$ and depends continuously on the exponent $a$ in (\ref{Pjoint}) and (ii) can be written in terms of a special solution of a Painlev\'e III equation~\cite{Tracy94}. If one introduces $F_N(t)={\Pr}(\lambda_{\min} \ge t)$ then one has indeed \begin{eqnarray}\label{eq:result_TW} \underset{N \to \infty}{\lim}F_N\!\left(\frac{x}{N}\right) = F_{\infty}(x),\hspace{0.2cm} F_{\infty}(x)=\exp\left( \int_0^x \frac{f(u)}{u} {\rm d}u\right) \;, \end{eqnarray} where $f(x)$ is the unique solution of a Painlev\'e III equation~\cite{Tracy94}: \begin{eqnarray}\label{PIII} (xf'')^2+4f'(1+f')(xf'-f)=(a f')^2 \;, \end{eqnarray} satisfying \begin{eqnarray}\label{asympt_PIII} f(x) \sim - \frac{x^{a+1}}{\Gamma(a+1) \Gamma(a+2)} \;, \; {\rm as} \; x \to 0 \;. \end{eqnarray} This result (\ref{eq:result_TW}) was shown by Tracy and Widom using Fredholm operator techniques~\cite{Tracy94}. Note that for integer values of $a$, the limiting distribution $F_{\infty}(x)$ can be written as an $a \times a$ determinant whose entries are expressed in terms of Bessel functions (see Eq. (\ref{fdetBessel}) below) -- a result which can be obtained by clever manipulations of determinants~\cite{Forrester94}. In particular for $a=0$ the result is extremely simple as $F_N(x) = \exp{(-N\,x)}$ for all $N$, implying $f(x) = -x$, which is obviously a solution of Eq. (\ref{PIII}) with the boundary condition~(\ref{asympt_PIII}). The limiting distribution of $\lambda_{\min}$ for complex Wishart matrices, in the limit $N \to \infty$, is thus well known (\ref{eq:result_TW}) -- we refer the reader to Ref. \cite{akemann} for a recent work on the smallest eigenvalue for real Wishart matrices in the hard edge limit. What about the finite $N$ corrections to this asymptotic form? Such a question is quite natural for practical applications of extreme value statistics (EVS), where one always deals with finite samples -- here matrices of finite size. This issue was recently revisited for EVS of independent and identically distributed random variables using a renormalization group approach \cite{Gyorgyi10}. For EVS of strongly correlated variables, there are actually few cases where these corrections have been worked out explicitly, including random walks \cite{Schehr06}, the {\it largest} eigenvalue of random matrices belonging to various ensembles \cite{karoui2006,johnstone2008,johnstonema2011,Ma12}, non-intersecting Brownian motions \cite{Schehr13} or (Poissonized) random matchings \cite{Baik13}. For {\it real} Wishart matrices, the first corrections to the limiting distribution of the {\it smallest} eigenvalue in the soft edge limit were studied in Ref. \cite{Ma12} where it was shown that corrections to the limiting distribution of $\lambda_{\min}$ and $\lambda_{\max}$ are quite different, although the limiting distributions for both observables are actually the same, namely the TW distribution for GOE, ${\cal F}_1$.
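Both the $a=0$ identity $F_N(t)=e^{-Nt}$ and the general finite-$N$ distribution are easy to evaluate numerically: by the standard Andr\'eief (Heine) identity, $F_N(t)$ is a ratio of Hankel determinants of the moments $\mu_m(t)=\int_t^\infty \lambda^{a+m}e^{-\lambda}\,{\rm d}\lambda=\Gamma(a+m+1,t)$. The sketch below (ours, using this standard identity rather than the method developed in this paper) implements this check:
\begin{verbatim}
# A sketch (ours): F_N(t) = det[Gamma(a+i+j+1, t)] / det[Gamma(a+i+j+1)],
# with 0 <= i, j <= N-1; high precision is used because the
# determinants grow like prod_k Gamma(k+a+1) k!.
import mpmath
mpmath.mp.dps = 50

def F_N(t, N, a):
    Mt = mpmath.matrix(N, N)
    M0 = mpmath.matrix(N, N)
    for i in range(N):
        for j in range(N):
            Mt[i, j] = mpmath.gammainc(a + i + j + 1, t)  # upper incomplete
            M0[i, j] = mpmath.gamma(a + i + j + 1)
    return mpmath.det(Mt) / mpmath.det(M0)

# Sanity check quoted in the text: for a = 0, F_N(t) = exp(-N t).
t = mpmath.mpf('0.3')
print(F_N(t, 6, 0), mpmath.exp(-6 * t))
\end{verbatim}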
What about the corrections to the limiting distribution $F_{\infty}$ in Eq. (\ref{eq:result_TW}) of $\lambda_{\min}$ for complex Wishart matrices in the hard edge limit? This question was recently raised by Edelman, Guionnet and P\'ech\'e \cite{Edelman14} in their study of finite size covariance matrices with non-Gaussian entries. Based on the large $N$ expansion of the exact formulas obtained in Ref.~\cite{Forrester94} for small integer values of $a$, they conjectured the following form of the first $1/N$-correction \begin{eqnarray}\label{conjecture_finite_N} F_N\left(\frac{x}{N}\right) = F_\infty(x) + \frac{a}{2N} x \, F'_\infty(x) + o\!\left(\frac{1}{N}\right) \;. \end{eqnarray} Note that this first $1/N$ correction in Eq. (\ref{conjecture_finite_N}) can be interpreted as a correction to the width, i.e., \begin{eqnarray}\label{eq:correction} F_N\left(\frac{x}{N}\right) = F_{\infty}\left(x\left(1 + \frac{a}{2\,N}\right)\right) + o\!\left(\frac{1}{N}\right) \;. \end{eqnarray} It is interesting to notice \cite{Baik13} that for most of the cases which have been studied in RMT~\cite{karoui2006,johnstone2008,johnstonema2011}, it was found that the first order correction to the limiting distribution of the extreme eigenvalue actually corresponds to a correction of the scaling variable, as in Eq.~(\ref{eq:correction}). One exception concerns the smallest eigenvalue of real Wishart matrices in the soft-edge limit, where the first correction has a more complicated structure~\cite{Ma12}. The main goal of this paper is to provide an explicit computation of this first correction in the hard edge limit and we will show that it has indeed the conjectured form given above in Eq.~(\ref{conjecture_finite_N}). To perform this computation, we will use a method relying on semi-classical OPs, in the spirit of Refs. \cite{Nadal11} and \cite{Chen05,Basor09}. As we will see, our method not only allows us to compute explicitly the first $1/N$ corrections but also provides a rather straightforward derivation of the expression for the limiting distribution $F_{\infty}(x)$ in terms of the solution of a Painlev\'e III equation, without using Fredholm operator theory but relying instead only on the recurrence relations associated with the (semi-classical) OP system. Finally, we will also study the first finite $N$ corrections to the limiting distribution of $\lambda_{\min}$ at the soft edge. Note that after the results obtained in the present paper were presented at a conference \cite{conf_montevideo}, another independent proof of the conjecture in Eq. (\ref{conjecture_finite_N}) was achieved in Ref. \cite{Bornemann15}, using operator theoretic techniques. More recently, yet another independent proof of this conjecture was given in Ref. \cite{najim2015}. \section{Summary of main results and outline of the paper}\label{section:Summary} The distribution of the smallest eigenvalue $\lambda_{\min}=\underset{1 \le i \le N}{\min}\lambda_i$ is given by \begin{equation}\label{def_FN} F_N(t)={\rm Prob}(\lambda_{\min} \ge t)=\int_t^{\infty} \!{\rm d}\lambda_1 \int_t^{\infty} \!{\rm d}\lambda_2 ... \int_t^{\infty} \!{\rm d}\lambda_N P_{\rm joint}(\lambda_1,\lambda_2,\cdots,\lambda_N) \;.
\end{equation} In this paper, we will compute $F_N(t)$ using semi-classical OPs $\{\pi_k(\lambda)\}_{k \in \mathbb{N}}$ which are polynomials of the variable $\lambda$ while $t$ and $a$ are parameters (for notational clarity, this dependence is omitted here): \begin{eqnarray}\label{eq:def_OP} \left\{ \begin{array}{l} \braket{\pi_k}{\pi_{k'}}=\int_t^{\infty} e^{-\lambda} \lambda^a \pi_k(\lambda) \pi_{k'}(\lambda) {\rm d}\lambda = h_k \delta_{k,k'}, \\ \\ \pi_k(\lambda)=\lambda^k + \zeta_k \lambda^{k-1} + ... \end{array} \right. \end{eqnarray} The cumulative distribution $F_N(t)$ can be expressed, using standard manipulations, in terms of the norms $h_k$'s as \begin{eqnarray} F_N(t) = \frac{N!}{Z_N} \prod_{k=0}^{N-1} h_k \;. \end{eqnarray} As we will see, the norms $h_k$'s can be computed from the three-term recurrence relation satisfied by the OPs \begin{eqnarray} \lambda \pi_k=\pi_{k+1}+S_k \pi_k +R_k \pi_{k-1} \;, \end{eqnarray} from which we deduce the following important relations: \begin{eqnarray}\label{important_eq} &&R_k = \frac{h_k}{h_{k-1}}, \\ &&S_k = - t \partial_t \log h_k + 2k + a + 1 \;,\\ && \zeta_k = - \sum_{i=0}^{k-1} S_i \;. \end{eqnarray} Note that the starting point of our analysis is very similar to the one of Basor and Chen in Ref. \cite{Basor09} but the analysis of the recursion relations is different. In particular, we do not make use of ladder operator techniques, which are heavily used in Ref. \cite{Basor09}. In addition, we provide an asymptotic analysis of this OP system (\ref{eq:def_OP}) for large $N$, beyond the leading order. In section \ref{section:OP}, we will study the variables $S_k$, $R_k$, $h_k$ and $\zeta_k$. In particular, we will show that $S_k$ and $R_k$ satisfy a coupled set of equations, named Schlesinger equations in the literature on OPs \begin{eqnarray}\label{schlesinger_intro} \left\{ \begin{array}{ccl} S_k-R_{k+1}+R_{k} &=& t\partial_t S_k,\\ \\ 2-S_{k+1}+S_{k} &=& t \dfrac{\partial_t R_{k+1}}{R_{k+1}} \;. \end{array} \right. \end{eqnarray} This system of equations, together with the initial condition given in Eq. (\ref{CI}) below, uniquely determines the values of $R_k$ and $S_k$ for all values of $k$, as they can be computed by induction. It is however quite difficult to analyze the large $N$ behavior of $R_N$ and $S_N$ using only this set of equations (\ref{schlesinger_intro}). To circumvent this difficulty, it is customary to use another set of relations, called the Laguerre-Freud equations, which we derive here using the method based on Tur\'an determinants, as developed in Ref. \cite{Belmehdi94}. They read \begin{eqnarray}\label{LF_intro} \left\{ \begin{array}{rcl} R_{k+2}-R_k&=&S_{k+1}(2k+4+a+t-S_{k+1})-S_k(2k+a+t-S_k)-2t,\\ \\ S_{k+1}(S_{k+1}-t) &=& R_{k+1}(2k+1+a+t-S_{k+1}-S_k) \\ &-&R_{k+2}(2k+5+a+t-S_{k+2}-S_{k+1}). \end{array} \right. \end{eqnarray} Some details of this derivation have been relegated to \ref{appendix:Turan}. In section \ref{section:PV}, by manipulating these two sets of equations (\ref{schlesinger_intro}) and (\ref{LF_intro}), we show that $F_N(t)$, for finite $N$, is related to a special solution of a Painlev\'e V equation [see Eqs. (\ref{def:Hn}, \ref{Painleve5})], thus recovering a previous result of Tracy and Widom. Section \ref{section:PIII} is devoted to the hard edge scaling limit: \begin{eqnarray}\label{def_hard_edge} N \to \infty , \, t \to 0,\, x=N\, t \in \mathbb{R}^+ \;\;{\rm fixed} \;.
\end{eqnarray} It is known that the limiting distribution $F_{\infty}(x)$ can be obtained by analyzing the large $N$ limit of this Painlev\'e V equation in Eqs. (\ref{def:Hn}, \ref{Painleve5}), obtained for finite $N$, leading to (\ref{eq:result_TW}) and (\ref{PIII}). As we show in Section~\ref{section:PIII}, the finite $N$ corrections are then easily obtained from the Schlesinger equation (\ref{schlesinger_intro}), from which we obtain Eq. (\ref{conjecture_finite_N}). Finally, section \ref{section:PII} is devoted to the soft edge scaling limit, when $a \sim {\cal O}(N)$, for $N$ large. \section{Semi-classical Orthogonal Polynomials}\label{section:OP} To study this OP system (\ref{eq:def_OP}) it is useful \cite{Nadal11} to introduce a deformation parameter $\alpha$ and study the following OP system \begin{eqnarray}\label{eq:OP_deformed} \left\{ \begin{array}{l} \braket{\pi_k}{\pi_{k'}}=\int_t^{\infty} e^{-\alpha\lambda} \lambda^a \pi_k(\lambda) \pi_{k'}(\lambda) {\rm d}\lambda = h_k \delta_{k,k'}, \\ \\ \pi_k(\lambda)=\lambda^k + \ldots \;, \end{array} \right. \end{eqnarray} such that the norms $h_k$'s are defined by \begin{eqnarray} h_k &=& \braket{\pi_k}{\pi_{k}}. \label{def_hn} \end{eqnarray} As we show here, some useful relations can be obtained by varying $\alpha$. Eventually, we will of course set $\alpha =1$ (\ref{eq:def_OP}). The first polynomials can be computed from (\ref{eq:OP_deformed}) to obtain \begin{eqnarray}\label{first_polynomials0} \pi_0(\lambda)&=&1\,,\\ \pi_1(\lambda)&=&\lambda-\frac{1+a}{\alpha}-\frac{e^{-t\alpha}t(t\alpha)^a}{\Gamma(1+a,t\alpha)}\,,\label{first_polynomials1} \end{eqnarray} where $\Gamma(\nu,x) = \int_x^\infty y^{\nu-1} e^{-y} {\rm d}y$ is the upper incomplete gamma function. Obviously, the expression of the OPs $\pi_k$ becomes more and more complicated as $k$ grows. The polynomials $\pi_k$ being OPs, they satisfy a three-term recursion relation, which can be obtained as follows. As $\lambda \pi_k$ is a polynomial of degree $k+1$, we can expand it on the basis of these OPs. Because $\braket{\pi_{k-2-i}}{\lambda \pi_k}= 0 $ if $i \ge 0$, we can write the three term recurrence relation \cite{Szego75}: \begin{eqnarray}\label{3term_rec} \lambda \pi_k=\pi_{k+1}+S_k \pi_k +R_k \pi_{k-1}, \end{eqnarray} where, by definition \begin{eqnarray} S_k h_k &=& \braket{\pi_k}{\lambda\pi_{k}},\\ R_k h_{k-1} &=& \braket{\pi_{k-1}}{\lambda\pi_{k}}. \end{eqnarray} From (\ref{first_polynomials0}) and (\ref{first_polynomials1}), we can compute the first terms \begin{eqnarray}\label{CI0} \left\{ \begin{array}{ccl} h_0 &=& \int_t^{\infty} e^{-\alpha\lambda} \lambda^a {\rm d}\lambda = \dfrac{\Gamma(1+a,\alpha t)}{\alpha^{a+1}},\\ \\ S_0 &=& -\partial_\alpha \log{h_0}=\dfrac{1+a}{\alpha}+\dfrac{e^{-t\alpha}t(t\alpha)^a}{\Gamma(1+a,t\alpha)},\\ \\ R_0 &=& 0,\\ \\ \zeta_0&=& 0. \end{array} \right. \end{eqnarray} \subsection{Schlesinger equations}\label{section:Schlesinger} In this section, we derive a couple of recursion relations called the Schlesinger equations that couple $R_k$ and $S_k$. We first write \begin{eqnarray} R_k h_{k-1} = \braket{\pi_{k-1}}{\lambda\pi_{k}}= \braket{\lambda \pi_{k-1}}{\pi_k} \;. \end{eqnarray} Therefore, using Eq. (\ref{3term_rec}) with the substitution $k \to k-1$, we have $\braket{\lambda \pi_{k-1}}{\pi_k} = \braket{\pi_k}{\pi_k} = h_k$ and finally we obtain the standard relation \begin{eqnarray}\label{R_en_h} R_k = \frac{h_k}{h_{k-1}}.
\end{eqnarray} On the other hand, using the definition of the scalar product in (\ref{eq:OP_deformed}), we have \begin{eqnarray}\label{Snhn} S_k h_k = \braket{\pi_k}{\lambda\pi_{k}} = \int_t^{\infty} e^{-\alpha\lambda} \lambda^a \, \lambda \, \pi_{k}^2(\lambda)\, {\rm d}\lambda = -\partial_\alpha \braket{\pi_k}{\pi_{k}} = -\partial_\alpha h_k, \end{eqnarray} where we used that $\braket{\partial_\alpha \pi_k}{\pi_{k}}=0$, since $\partial_\alpha \pi_k$ is a polynomial of degree $k-1$. From this we deduce the relation between $S_k$ and $h_k$ \begin{eqnarray}\label{S_en_h} S_k = -\partial_\alpha \log{h_k}. \end{eqnarray} By combining Eq. (\ref{S_en_h}) and Eq. (\ref{R_en_h}), we obtain straightforwardly \begin{eqnarray}\label{1_recu} S_{k+1}-S_{k} = -\partial_\alpha \log{\frac{h_{k+1}}{h_{k}}} = -\partial_\alpha \log{R_{k+1}}. \end{eqnarray} We now study the coefficient $\zeta_k$ of the term of degree $k-1$ in the polynomial $\pi_k$ [see Eq. (\ref{eq:def_OP})]: \begin{eqnarray}\label{def_zeta} \pi_k(\lambda)=\lambda^k + \zeta_k \lambda^{k-1}+... \end{eqnarray} Taking the derivative of this equation with respect to (w.r.t.) $\alpha$ we obtain \begin{eqnarray}\label{eq:deriv_zeta} \partial_\alpha\pi_k(\lambda)=\partial_\alpha \zeta_k \lambda^{k-1}+ \ldots \;. \end{eqnarray} Multiplying both sides of Eq. (\ref{eq:deriv_zeta}) by $\lambda$ and projecting on $\pi_k$ yields \begin{eqnarray}\label{eq:deriv_zeta2} \partial_\alpha \zeta_k h_k = \braket{\lambda \partial_\alpha \pi_k}{ \pi_{k}}. \end{eqnarray} Besides, by looking at the term of degree $k$, i.e. $\propto \lambda^k$, in Eq. (\ref{3term_rec}) we find a recursion relation between $S_k$ and $\zeta_k$ \begin{eqnarray}\label{zeta_S} \zeta_k = \zeta_{k+1}+S_k \;, \end{eqnarray} which can be solved for $\zeta_k$, using the initial condition (\ref{CI0}), to get \begin{eqnarray}\label{zeta_k_S} \zeta_k = -\sum_{i=0}^{k-1} S_i. \end{eqnarray} Furthermore, by differentiating Eq. (\ref{Snhn}) w.r.t. $\alpha$ we have \begin{eqnarray} \partial_\alpha (S_k h_k) &=& - \int_t^\infty e^{-\alpha \lambda} \lambda^{a} \, \lambda^2 \, \pi_k^2(\lambda) \, {\rm d}\lambda + 2 \int_t^\infty e^{-\alpha \lambda} \lambda^{a} \, \lambda \,\partial_\alpha \pi_k(\lambda) \pi_k(\lambda) \, {\rm d }\lambda \nonumber \\ &=& -\braket{\lambda \pi_k}{\lambda\pi_{k}} + 2 \braket{\lambda \partial_\alpha \pi_k}{\pi_{k}} \;, \end{eqnarray} where, in the second line, we have simply used the definition of the scalar product in (\ref{eq:def_OP}). Using the three-term recurrence relation (\ref{3term_rec}) to rewrite the first term and Eq. (\ref{eq:deriv_zeta2}) to rewrite the second one, we have \begin{eqnarray}\label{deriv_prod} \partial_\alpha (S_{k} h_k)= -h_{k+1} -S^2_k h_k -R_k^2 h_{k-1} +2 \partial_\alpha \zeta_k h_k \;. \end{eqnarray} We can also write \begin{eqnarray}\label{id_prod} \partial_\alpha (S_{k} h_k)= (\partial_\alpha S_{k}) h_k+S_k (\partial_\alpha h_k) \;. \end{eqnarray} By replacing the left hand side of Eq. (\ref{deriv_prod}) by Eq. (\ref{id_prod}), dividing the resulting equation by $h_k$ and using Eqs. (\ref{R_en_h}) and (\ref{S_en_h}), we obtain a last recursion relation between $R_k$, $S_k$ and $\zeta_k$ \begin{eqnarray}\label{2_recu} R_{k+1} +R_k = -\partial_\alpha S_{k} +2 \partial_\alpha \zeta_k \;. \end{eqnarray} Combining Eq. (\ref{zeta_S}) and (\ref{2_recu}), we have \begin{eqnarray} R_{k+1}+R_{k} = \partial_\alpha \zeta_{k+1}+ \partial_\alpha \zeta_k. \end{eqnarray} Finally, with the initial condition $R_0 = \zeta_0 = 0$ in Eq. (\ref{CI0}) we obtain \begin{eqnarray}\label{eq:rel_zeta_R} \partial_\alpha \zeta_k = R_k \;.
\end{eqnarray} Therefore substituting this relation in Eq. (\ref{2_recu}) we finally obtain a closed system of two coupled recursion relations between $S_k$ and $R_k$ \begin{eqnarray}\label{recu_derivalpha} \left\{ \begin{array}{ccl} R_{k+1}-R_{k} &=& -\partial_\alpha S_k,\\ \\ S_{k+1}-S_{k} &=& -\partial_\alpha \log{R_{k+1}} \;, \end{array} \right. \end{eqnarray} where the second equation was previously obtained in Eq. (\ref{1_recu}). Our goal now is to relate $\alpha$-derivatives to $t$-derivatives, i.e., find a relation between $\partial_\alpha h_k$ and $\partial_t h_k$. Differentiating Eq. (\ref{def_hn}) w.r.t. $t$, using that $\braket{\partial_t \pi_k}{\pi_{k}}=0$ as $\partial_t \pi_k$ is a polynomial of degree $k-1$ (\ref{eq:def_OP}), we have \begin{eqnarray}\label{dot_hn} \partial_t h_k=-e^{-\alpha t}t^{a} \pi_k^2(t). \end{eqnarray} We now start from the expression of $\partial_\alpha h_k$ given in Eq. (\ref{Snhn}) and use integration by parts to obtain \begin{eqnarray} -\partial_\alpha h_k&=& \int_t^\infty{\rm d}\lambda \pi_k^2(\lambda)e^{-\alpha\lambda}\lambda^{a+1}, \nonumber \\ &=&\big[-\frac{1}{\alpha} e^{-\alpha\lambda}\lambda^{a+1} \pi_k^2(\lambda)\big]^{\lambda=\infty}_{\lambda=t}\\ &+&\frac{1}{\alpha}\int_t^\infty{\rm d}\lambda e^{-\alpha\lambda}\lambda^{a} \left((a+1) \pi_k^2(\lambda) + 2 \lambda \pi_k(\lambda) \partial_\lambda\pi_k(\lambda)\right),\nonumber \\ &=&\frac{1}{\alpha} (-t\partial_t h_k+(a+1)h_k+2\braket{ \pi_k}{\lambda\partial_\lambda\pi_{k}}) \;, \end{eqnarray} where we have used Eq. (\ref{dot_hn}). We can easily calculate the last scalar product \begin{eqnarray} \braket{ \pi_k}{\lambda\partial_\lambda\pi_{k}}=\braket{\pi_k}{(k \lambda^k + ...)}=k h_k \;, \end{eqnarray} to obtain the desired relation between $\partial_\alpha h_k$ and $\partial_t h_k$ \begin{eqnarray}\label{deriv_alpha_t_h} \alpha \partial_\alpha h_k=t\partial_t h_k-(2k+a+1)h_k. \end{eqnarray} With this relation (\ref{deriv_alpha_t_h}), it is then straightforward to relate $\partial_\alpha R_k$ and $\partial_\alpha S_k$ to $\partial_t R_k$ and $\partial_t S_k$, using (\ref{R_en_h}) and (\ref{S_en_h}). Note that from now on, we set $\alpha =1$: \begin{eqnarray}\label{S_en_h_sans_alpha} S_k = -t \partial_t \log{h_k}+(2k+a+1). \end{eqnarray} We finally obtain, using Eq. (\ref{recu_derivalpha}), the so-called Schlesinger equations \cite{Magnus95} \begin{eqnarray} \left\{ \begin{array}{ccl}\label{recurrence_dotS} S_k-R_{k+1}+R_{k} &=& t\partial_t S_k,\\ \\ 2-S_{k+1}+S_{k} &=& t \partial_t \log R_{k+1}. \end{array} \right. \end{eqnarray} Note that, for $\alpha =1$, the initial condition (\ref{CI0}) reads \begin{eqnarray}\label{CI} \left\{ \begin{array}{ccl} h_0 &=& \int_t^{\infty} e^{-\lambda} \lambda^a{\rm d}\lambda = \Gamma(1+a,t),\\ \\ S_0 &=& -t \partial_t \log{h_0} +a+1 = \frac{e^{-t}t^{a+1}}{\Gamma(1+a,t)}+a+1,\\ \\ R_0 &=& 0,\\ \\ \zeta_0&=& 0. \end{array} \right. \end{eqnarray} Using the Schlesinger equations (\ref{recurrence_dotS}) and this initial condition (\ref{CI}), we can compute step by step all the terms for arbitrary $k$. We close this section by providing a useful relation between $\zeta_k$ [see Eq. (\ref{def_zeta})] and the cumulative distribution of the smallest eigenvalue $F_N(t)={\rm Prob}(\underset{1 \le i \le N}{\min}\lambda_i \ge t)$. One has \begin{equation}\label{F_k_fonction_de_hn} F_N(t)=\int_t^{\infty} \!{\rm d}\lambda_1 \int_t^{\infty} \!{\rm d}\lambda_2 ...
\int_t^{\infty} \!{\rm d}\lambda_N \, P_{\rm joint}(\lambda_1,\lambda_2,\cdots,\lambda_N) = \frac{N!}{Z_N} \prod_{k=0}^{N-1} h_k, \end{equation} where the last equality is obtained by using the classical trick of replacing the Vandermonde determinant by the determinant built from the OPs $\pi_k$'s (\ref{eq:def_OP}) and then using the Cauchy-Binet formula \cite{Mehta91,Forrester10}. Using Eq. (\ref{zeta_k_S}) with (\ref{S_en_h_sans_alpha}) as well as (\ref{deriv_alpha_t_h}), we can write \begin{eqnarray} \zeta_N=-N(N+a)+t\partial_t\log\left(\prod_{k=0}^{N-1} h_k \right). \end{eqnarray} Therefore, from the expression of $F_N(t)$ given in Eq. (\ref{F_k_fonction_de_hn}), we have \begin{eqnarray}\label{lien_zeta_F} \zeta_N=-N(N+a)+t\partial_t\log \left(F_N(t)\right) \;. \end{eqnarray} This expression thus provides a link between $\zeta_N$, which is the first non-trivial coefficient of the OPs [see Eq. (\ref{def_zeta})], and the cumulative distribution of the smallest eigenvalue. \subsection{Laguerre-Freud equations}\label{section:LF} In this section, we derive another set of recursion relations between the coefficients $R_k$'s and $S_k$'s, the so-called Laguerre-Freud equations, following the procedure used by Belmehdi and Ronveaux in \cite{Belmehdi94}. To derive these equations, we start by searching for two functions $\Psi$ and $\Phi$, which are polynomials in $\lambda$, satisfying for any polynomial~$p$ \begin{eqnarray}\label{def_phi_psi} \braket{\Psi}{p}=\braket{\Phi}{p'} \;, \end{eqnarray} where the polynomials $\Phi$ and $\Psi$ may depend explicitly on the parameters $t$ and $a$ and $p'(\lambda)\equiv \partial_\lambda p(\lambda) $. Denoting by $w(\lambda)=e^{-\lambda} \lambda^a$ the weight associated with the scalar product in Eq. (\ref{eq:def_OP}) and using an integration by parts, this relation (\ref{def_phi_psi}) is satisfied provided that \begin{eqnarray} \Psi w+ (\Phi w)' = 0 \hspace{0.2cm} {\rm and} \hspace{0.2cm} \Phi(\lambda=t)=0. \end{eqnarray} We find that the simplest non-trivial solution to this equation is given by \begin{eqnarray} \left\{ \begin{array}{ccl} \Psi(\lambda)&=&\lambda^2-(2+a+t)\lambda+t(1+a),\\ \\ \Phi(\lambda)&=&\lambda^2-t \lambda \;. \end{array} \right. \end{eqnarray} Then, using this property (\ref{def_phi_psi}) for $p=\pi_k^2$ and for $p=\pi_k\pi_{k+1}$, we can write after expansion, using the three-term recursion relation (\ref{3term_rec}) \begin{eqnarray} \left\{ \begin{array}{ccl} I_{2,k}-(2+a+t)I_{1,k}+t(1+a)I_{0,k}&=& 2(J_{2,k}-tJ_{1,k}),\\ \\ K_{2,k}-(2+a+t)K_{1,k}+t(1+a)K_{0,k}&=& L_{2,k}-tL_{1,k}, \end{array} \right. \end{eqnarray} where we have introduced the four families of integrals \begin{eqnarray} \left\{ \begin{array}{ccl} I_{m,k} &=& \braket{\lambda^m}{\pi_{k}^2},\\ \\ J_{m,k} &=& \braket{\lambda^m}{\pi_{k}\pi'_{k}},\\ \\ K_{m,k} &=& \braket{\lambda^m}{\pi_{k}\pi_{k+1}},\\ \\ L_{m,k} &=& \braket{\lambda^m}{(\pi_{k+1}\pi_k)'}. \end{array} \right. \end{eqnarray} These integrals are all calculated (for $m=0, 1$ and $2$) in \ref{appendix:Turan}. Using these expressions together with Eq. (\ref{R_en_h}), we find the two relations \begin{eqnarray} \left\{ \begin{array}{rcl}\label{LF01} &&R_{k+1}+R_k+S_k(S_k-a-t-2k-2)+t(2k+1+a) = 2 \,\displaystyle{\sum_{i=0}^{k-1} }S_i, \\ \\ &&(S_{k+1}+S_k-3-a-t-2k)R_{k+1} = 2 \,\displaystyle{\sum_{i=1}^k R_i+\sum_{i=0}^k S_i^2 - t \sum_{i=0}^k S_i}. \end{array} \right.
\end{eqnarray} We can rewrite these equations by subtracting the relation at rank $k$ from the one at rank $k+1$, and find the two Laguerre-Freud recurrence equations (which are here of order 2): \begin{eqnarray}\label{laguerre-freud} \left\{ \begin{array}{rcl} R_{k+2}-R_k&=&S_{k+1}(2k+4+a+t-S_{k+1})-S_k(2k+a+t-S_k)-2t,\\ \\ S_{k+1}(S_{k+1}-t) &=& R_{k+1}(2k+1+a+t-S_{k+1}-S_k)\\ &-&R_{k+2}(2k+5+a+t-S_{k+2}-S_{k+1}). \end{array} \right. \end{eqnarray} As we show below, these two sets of equations (\ref{recurrence_dotS}) and (\ref{laguerre-freud}) allow us to (i) derive the connection to the Painlev\'e equation and (ii) perform the asymptotic analysis of these coefficients for large $N$. \section{Painlev\'e V equation for finite $N$}\label{section:PV} In this section, we proceed to the derivation of the Painlev\'e V equation, following the method of Ref. \cite{Basor09}. First, it is useful to introduce the quantities~\cite{Forrester07} \begin{eqnarray} \left\{ \begin{array}{rcl}\label{def_omega_txt} \theta_k&=&2k+1+a-S_k,\\ \\ \omega_k&=&-R_k-\zeta_k. \end{array} \right. \end{eqnarray} Manipulating the Laguerre-Freud equations (\ref{laguerre-freud}) we can prove (see \ref{appendix:LF}) \begin{eqnarray}\label{LF02_bonne_limite} \omega_k^2 -\theta_k (\theta_k-a-2k+t) \omega_k -\theta_k (kt(k+a)+(\theta_k+t)\zeta_k) = 0 \;, \end{eqnarray} which is a simple algebraic relation (and not a recursion relation) between the different variables $\omega_k, \theta_k$ and $\zeta_k$. Using the previous relations with the index $k=N$, we find from Eq. (\ref{lien_zeta_F}), \begin{eqnarray}\label{def:Hn} H_N=t \partial_t \log(F_N(t)) =N(N+a)+ \zeta_N. \end{eqnarray} Summing up the first Schlesinger equation (\ref{recurrence_dotS}) from $k=0$ to $k=N-1$, we find \begin{eqnarray} \sum_{k=0}^{N-1} S_k + \sum_{k=0}^{N-1}(R_k-R_{k+1})= t \partial_t \sum_{k=0}^{N-1} S_k \end{eqnarray} which can be simplified using (\ref{zeta_k_S}), (\ref{CI0}) and (\ref{def_omega_txt}) to yield \begin{eqnarray}\label{zeta:omegan} -t \partial_t \zeta_N = -R_N-\zeta_N =\omega_N . \end{eqnarray} On the other hand, using (\ref{def:Hn}) and (\ref{zeta:omegan}), we have \begin{eqnarray}\label{Hn_to_Rn} t \partial_t H_N = - \omega_N = \zeta_N+R_N = H_N -N(N+a)+R_N \;. \end{eqnarray} Taking a derivative of this equation w.r.t. $t$, we find \begin{eqnarray}\label{partialRn:H''} \partial_t R_N = t \,\partial_t^2 H_N. \end{eqnarray} From the second Schlesinger equation (\ref{recurrence_dotS}), we find \begin{eqnarray} t \partial_t R_N &=& R_N (2-S_N+S_{N-1}) = R_N (\theta_N-\theta_{N-1}) \;, \end{eqnarray} where we have used (\ref{def_omega_txt}). Using the relation derived in \ref{appendix:LF} in Eq. (\ref{theta_km}), we can rewrite this recursion equation as a simple algebraic relation \begin{eqnarray}\label{partialR_k:omegaRnthetan} t \partial_t R_N &=& R_N \theta_N-\frac{\omega_N^2}{\theta_N} \;. \end{eqnarray} Therefore, combining (\ref{partialRn:H''}) and (\ref{partialR_k:omegaRnthetan}), we find \begin{eqnarray}\label{H'':omegaRnthetan} t^2 \partial_t^2 H_N &=& R_N \theta_N-\frac{\omega_N^2}{\theta_N} \;. \end{eqnarray} On the other hand, injecting the definition of $\omega_N$ (\ref{def_omega_txt}) into Eq. (\ref{LF02_bonne_limite}) to eliminate $\zeta_N$, we obtain \begin{eqnarray}\label{LF_modif_kozeta} N(N+a)t-(2N+a)\omega_N-t R_N &=& R_N \theta_N+\frac{\omega_N^2}{\theta_N}.
\end{eqnarray} Adding and subtracting the last two equations (\ref{H'':omegaRnthetan}) and (\ref{LF_modif_kozeta}), we obtain \begin{eqnarray} 2\frac{\omega_N^2}{\theta_N} &=& N(N+a)t-(2N+a)\omega_N-t R_N -t^2 \partial_t^2 H_N, \label{multi1}\\ 2 R_N \theta_N &=& N(N+a)t-(2N+a)\omega_N-t R_N +t^2 \partial_t^2 H_N. \label{multi2} \end{eqnarray} Multiplying these two equations (\ref{multi1}) and (\ref{multi2}) together, we find \begin{eqnarray} 4 R_N \omega_N^2 &=& \left(N(N+a)t-(2N+a)\omega_N-t R_N \right)^2- \left(t^2 \partial_t^2 H_N\right)^2. \end{eqnarray} Finally, by using (\ref{Hn_to_Rn}), we eliminate $\omega_N$ and $R_N$ (by expressing them in terms of $H_N$ and $\partial_t H_N$) and find the equation \begin{equation}\label{Painleve5} \left(t \partial_t^2 H_N\right)^2 = 4\left( \partial_t H_N \right)^2 \left( H_N -N(N+a) -t \partial_t H_N \right) + \left((2N+a-t)\partial_t H_N +H_N\right)^2 \;, \end{equation} which is a Painlev\'e V equation in the Jimbo-Miwa-Okamoto $\sigma$ form \cite{Jimbo81,Okamoto82}. Note that this equation coincides exactly with the equation first found by Tracy and Widom in \cite{Tracy94b}. \section{Large $N$ asymptotic limit at the hard edge: Painlev\'e III and first correction}\label{section:PIII} In this section, we study the behavior of the quantities $h_N$, $R_N$, $S_N$ and $\zeta_N$ in the hard edge limit (\ref{def_hard_edge}), which, in the language of OPs, corresponds to a double scaling limit. Of course, as we are eventually interested in the study of the cumulative distribution of the smallest eigenvalue $F_N(t)$, we could perform this asymptotic analysis directly on the Painlev\'e V equation (\ref{Painleve5}), as done by Tracy and Widom in their original study of the GUE \cite{TW94a}. But it turns out to be much more convenient, especially to extract the $1/N$ corrections, to perform this asymptotic analysis on the Schlesinger (\ref{recurrence_dotS}) and Laguerre-Freud (\ref{laguerre-freud}) equations. To understand the structure of the coefficients $h_N$, $R_N$, $S_N$ and $\zeta_N$ in this double scaling limit (\ref{def_hard_edge}), it is useful to study their behavior for small $t$ (keeping $N$ fixed). For $t=0$, the OPs $\pi_k$'s in Eq. (\ref{eq:def_OP}) can be expressed in terms of the generalized Laguerre polynomials \begin{eqnarray}\label{Laguerre:def} L_{k}^{(a)}(x)= \frac{\Gamma(a+k+1)}{k!} \sum_{i=0}^k \binom{k}{i} \frac{(-x)^i}{\Gamma(a+i+1)}\,. \end{eqnarray} Hence, thanks to standard properties of Laguerre polynomials, we easily obtain these coefficients, for $t=0$, using (\ref{def_hn}), (\ref{R_en_h}) and (\ref{S_en_h_sans_alpha}) as: \begin{eqnarray} \left\{ \begin{array}{ccl}\label{polym0} \left. \pi_k(\lambda) \right|_{t=0} &=& L_k^{(a)}(\lambda) k! (-1)^k,\\ \\ \left. h_k \right|_{t=0} &=&\Gamma(k+a+1)k!,\\ \\ \left. R_k \right|_{t=0} &=&k(k+a),\\ \\ \left. S_k \right|_{t=0} &=&2k+a+1,\\ \\ \left. \zeta_k \right|_{t=0} &=&-k(k+a). \end{array} \right. \end{eqnarray} We now take the index $k=N$. When $t$ is close to $0$, we can use Eq. (\ref{dot_hn}) and find \begin{equation}\label{Eq:dot_h} \partial_t h_N = -t^a e^{- t}\pi_N(t)^2 = -t^a L_N^{(a)}(0)^2 N!^2 +o(t^a)= -t^a \frac{[\Gamma(N+a+1)]^2}{[\Gamma(a+1)]^2} +o(t^a) \;. \end{equation} Using (\ref{polym0}), we can integrate and find the first correction \begin{eqnarray} h_N=\Gamma(N+a+1)N!-\frac{t^{a+1}[\Gamma(N+a+1)]^2}{\Gamma(a+2) \Gamma(a+1)}+o(t^{a+1}).
\end{eqnarray} We are interested in the scaling behavior when $N$ goes to infinity, $t$ goes to $0$ with $x=Nt$ finite (\ref{def_hard_edge}). In this double scaling regime, the expansion above reads \begin{eqnarray}\label{hn_small_x} h_N &=&\Gamma(N+a+1)N!\left[1-\frac{1}{N} \left(\frac{x^{a+1}}{\Gamma(a+2)\Gamma(a+1)}+ o(x^{a+1})\right) + o\left(\frac{1}{N}\right)\right]. \end{eqnarray} This suggests the ansatz \begin{eqnarray}\label{ansatz} h_N=\Gamma(N+a+1)N!\left[1+\frac{1}{N}f(x= N\,t) + o\left(\frac{1}{N}\right)\right], \end{eqnarray} where the function $f$ thus has the small $x$ expansion, read off from Eq.~(\ref{hn_small_x}), \begin{eqnarray}\label{f_dvpt0} f(x) &=& -\frac{x^{a+1}}{\Gamma(a+2)\Gamma(a+1)} + o(x^{a+1}). \end{eqnarray} We can introduce the ansatz (\ref{ansatz}) into the relations (\ref{R_en_h}) and (\ref{S_en_h_sans_alpha}), which give $R_N$ and $S_N$ in terms of the function $f$. We obtain \begin{eqnarray}\label{ansatz_R_S} \left\{ \begin{array}{ccl} R_N&=&N(N+a)+x f'(x) -f(x) + o\left(1\right),\\ \\ S_N&=&2N+a+1-\dfrac{1}{N}x f'(x) + o\left(\dfrac{1}{N}\right). \end{array} \right. \end{eqnarray} The first non-trivial terms in Eq. (\ref{ansatz_R_S}), $xf'(x) - f(x)$ for $R_N$ and $-(1/N)\, x f'(x)$ for $S_N$, are necessary to compute the leading order of the cumulative distribution, $F_\infty(x)$. To compute the first $1/N$ correction to the limiting distribution, we need to expand $R_N$ and $S_N$ in Eq. (\ref{ansatz_R_S}) to the next order in $1/N$. One actually expects the following expansion \begin{eqnarray}\label{R_S_expansion} \left\{ \begin{array}{ccl} R_N&=&N(N+a)+\displaystyle \sum_{i=0}^j r_i(x)N^{-i} + o\left(N^{-j}\right),\\ \\ S_N&=&2N+a+1+\displaystyle \sum_{i=1}^j s_i(x)N^{-i} + o\left(N^{-j}\right) \;, \end{array} \right. \end{eqnarray} where the first terms in this expansion are given in (\ref{ansatz_R_S}), i.e., $r_0(x) = xf'(x)-f(x)$ and $s_1(x) = - xf'(x)$. Besides, using the exact relation $\partial_t h_N = -t^a e^{- t}\pi_N(t)^2$ [see Eq. (\ref{Eq:dot_h})], one can show that $r_i(x) = o(x)$ as well as $s_i(x)=o(x)$ when $x \to 0$. One can check, in principle, the validity of this expansion (\ref{R_S_expansion}) order by order in powers of $1/N$ by injecting it in the Schlesinger equations (\ref{recurrence_dotS}). Proving it rigorously to arbitrary order $j$ is, however, a hard task. Fortunately, here we only need this asymptotic expansion up to order ${\cal O}(1/N^2)$, which can be obtained explicitly as follows. To compute the second correction, we truncate the expansion (\ref{R_S_expansion}) up to the $N^{-2}$ order and write \begin{eqnarray}\label{R_S_expansion2} \left\{ \begin{array}{ccl} R_N&=&N(N+a)+x f'(x)-f(x)+\dfrac{r_1(x)}{N}+\dfrac{r_2(x)}{N^2}+o(N^{-2})\,, \\ \\ S_N&=&2N+a+1-\dfrac{x f'(x)}{N}+\dfrac{s_2(x)}{N^2}+o(N^{-2})\,. \end{array} \right. \end{eqnarray} In the hard edge limit $N \to \infty$, $t \to 0$, keeping $x=N t$ fixed, we can obtain the expansion of $R_{N+1}$ and $S_{N+1}$ at the same order by replacing $N$ by $N+1$ and using $(N+1)t = x(1 + 1/N)$ in (\ref{R_S_expansion2}) \begin{eqnarray}\label{R_S_expansion_Np1} \left\{ \begin{array}{ccl} R_{N+1}&=&(N+1)(N+1+a)+x f'(x)-f(x)+\frac{1}{N}(r_1(x)+x^2 f''(x))\\ &&\!\!\!+\frac{1}{N^2}(r_2(x)+x r_1'(x)-r_1(x)+\frac{1}{2}x^2 f''(x)+\frac{1}{2}x^3 f'''(x))+o(N^{-2})\,, \\ \\ S_{N+1}&=&2N+a+3-\frac{1}{N}x f'(x)+\frac{1}{N^2}(s_2(x)-xf'(x)-x^2f''(x))+o(N^{-2})\,. \end{array} \right.
\end{eqnarray} By injecting these two expansions into the Schlesinger equation (\ref{recurrence_dotS}), we find (using $t \partial_t = x \partial_x$), at the first non-trivial order (${\cal O}(N^{-2})$ for the first Schlesinger equation and ${\cal O}(N^{-3})$ for the second), \begin{eqnarray}\label{syst_r1_s2} \left\{ \begin{array}{ccl} s_2(x)-xs'_2(x)+r_1(x)-xr_1'(x)&=&\frac{1}{2}(x^2 f''(x)+x^3 f'''(x))\,,\\ \\ 2s_2(x)-xs'_2(x)-xr_1'(x)&=&-a x^2 f''(x)+\frac{1}{2}x^3 f'''(x)\,, \end{array} \right. \end{eqnarray} which can be solved as follows. First, by subtracting the first equation of (\ref{syst_r1_s2}) from the second one, one obtains \begin{eqnarray}\label{rel_s2_r1} s_2(x) = r_1(x) - \left(a+\frac{1}{2}\right)x^2 f''(x) \;. \end{eqnarray} By injecting this relation (\ref{rel_s2_r1}) in the first equation of (\ref{syst_r1_s2}), one finds that $r_1$ satisfies the following equation \begin{eqnarray}\label{eq_r1_only} r_1(x) - x r_1'(x) = - \frac{a}{2} \left(x^2 f''(x) + x^3 f'''(x)\right) \;, \end{eqnarray} which can be solved, using that $r_1(x) = o(x)$ as $x \to 0$, yielding \begin{eqnarray}\label{expr_r1} r_1(x) = \frac{a}{2} x^2 f''(x) \;. \end{eqnarray} Consequently, from Eq. (\ref{rel_s2_r1}) one has \begin{eqnarray}\label{expr_s2} s_2(x) = - \frac{a+1}{2} x^2 f''(x) \;. \end{eqnarray} We finally find the first two terms of the expansion of $R_N$ and $S_N$, for large $N$, as \begin{eqnarray}\label{expansion_second_order} \left\{ \begin{array}{rcl} R_N&=&N(N+a) + xf'(x)-f(x) + \dfrac{a}{2N}x^2 f''(x)+o\left(\dfrac{1}{N}\right),\\ \\ S_N&=&2N+a+1 - \dfrac{1}{N}xf'(x)-\dfrac{a+1}{2N^2} x^2 f''(x)+o\left(\dfrac{1}{N^2}\right). \end{array} \right. \end{eqnarray} From the expansion of $S_N$ in Eq. (\ref{expansion_second_order}), we compute $\zeta_N$ in the double scaling limit (\ref{def_hard_edge}), using Eq.~(\ref{zeta_S}), up to the second non-trivial order for large $N$ \begin{eqnarray} \label{ansatz_zeta} \zeta_N&=&-(N + a) N + f(x) + \frac{a}{2N} x f'(x) + o\left(\frac{1}{N}\right) \;. \end{eqnarray} Finally, from the expression of $\zeta_N$ we obtain the large $N$ expansion of $F_N(t)$ in the hard edge limit (\ref{def_hard_edge}), using Eq. (\ref{lien_zeta_F}), which is given by \begin{eqnarray}\label{Dvpt_F0} x\partial_x\log\left(F_N\left(\frac{x}{N}\right) \right) = f(x) + \frac{a}{2N} x f'(x) + o\left(\frac{1}{N}\right). \end{eqnarray} Using the initial condition $F_N(0)=1$ and (\ref{f_dvpt0}), we can rewrite this equation as \begin{eqnarray}\label{Dvpt_F} F_N\left(\frac{x}{N}\right) = \exp{ \left( \int_0^x \frac{f(u)}{u} {\rm d}u + \frac{a}{2N} f(x) + o\left(\frac{1}{N}\right) \right) }. \end{eqnarray} Expanding Eq. (\ref{Dvpt_F}) up to first order in $1/N$, one obtains finally \begin{eqnarray}\label{Finfty2} F_N\left(\frac{x}{N}\right) = F_\infty(x) + \frac{a}{2N} x F'_\infty(x) + o\!\left(\frac{1}{N}\right) \;, \end{eqnarray} where $F_{\infty}(x)$ is given by \begin{eqnarray}\label{Finfty(x)} \underset{N \to \infty}{\lim}F_N\left(\frac{x}{N}\right) = F_\infty(x)\,,\hspace{0.7cm} F_\infty(x)=\exp\left( \int_0^x \frac{f(u)}{u} {\rm d}u\right) \;. \end{eqnarray} Hence we easily obtain the functional form of the $1/N$ correction as given in Eq.~(\ref{Finfty2}), fully consistent with the conjecture in (\ref{conjecture_finite_N}) made in \cite{Edelman14}. However, at this stage, we still need to find the equation satisfied by the function $f$, which enters in the definition of $F_{\infty}(x)$ in Eq. (\ref{Finfty(x)}).
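Before turning to this equation, let us note that the finite-$N$ prediction (\ref{Finfty2}) lends itself to a direct numerical check. As a purely illustrative sanity check (it is not part of the derivation), one can consider the case $a=0$, for which the survival function is known in closed form, $F_N(t)=\mathrm{e}^{-Nt}$, so that $F_\infty(x)=\mathrm{e}^{-x}$, $f(x)=-x$, and the $1/N$ correction in (\ref{Finfty2}) vanishes. A minimal Python sketch (the matrix size, sample count and evaluation points below are arbitrary choices) reads:
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)
N, trials = 50, 2000
lmin = np.empty(trials)
for k in range(trials):
    # complex Wishart W = G G^dagger with E|G_ij|^2 = 1, i.e. a = M - N = 0
    G = (rng.standard_normal((N, N))
         + 1j * rng.standard_normal((N, N))) / np.sqrt(2)
    lmin[k] = np.linalg.eigvalsh(G @ G.conj().T).min()

x = np.linspace(0.2, 3.0, 8)
emp = np.array([(N * lmin >= xi).mean() for xi in x])  # empirical F_N(x/N)
print(np.c_[x, emp, np.exp(-x)])   # exact: e^{-x}; no 1/N term when a = 0
\end{verbatim}
For $a>0$ the same comparison requires the function $f$ itself, i.e. the solution of the equation derived below; see also the comparison presented in \ref{appendix:Numeric}.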
To obtain this equation, we expand $\theta_N$ and $\omega_N$ in Eq.~(\ref{def_omega_txt}), which is easily done from the expansions of $R_N$, $S_N$ and $\zeta_N$ in (\ref{ansatz_R_S}) and (\ref{ansatz_zeta}), to yield \begin{eqnarray}\label{dvpt_omega_theta} \left\{ \begin{array}{rcl} \theta_N&=& \dfrac{1}{N}xf'(x)+\dfrac{a+1}{2N^2} x^2 f''(x)+o\left(\dfrac{1}{N^2}\right),\\ \\ \omega_N&=& - xf'(x) - \dfrac{a}{2N}(x^2 f''(x)+x f'(x)) + o\left(\dfrac{1}{N}\right). \end{array} \right. \end{eqnarray} Finally, by injecting these expansions (\ref{ansatz_zeta}) and (\ref{dvpt_omega_theta}) in (\ref{LF02_bonne_limite}) with $t=x/N$, we obtain, by canceling the leading term, of order ${\cal O}(N^{-2})$, in Eq. (\ref{LF02_bonne_limite}), that $f$ satisfies a Painlev\'e III equation \begin{eqnarray} (xf'')^2+4f'(1+f')(xf'-f)=(a f')^2, \label{PainleveIII} \end{eqnarray} with the small argument behavior in Eq. (\ref{f_dvpt0}). This result coincides with the one obtained, by a quite different method, by Tracy and Widom \cite{Tracy94} (note the correspondence $\sigma(s)=-f(s/4)$, where $\sigma(s)$ is the notation used in \cite{Tracy94}). Note that for integer values of $a$, $f(x)$ can be written explicitly in terms of Bessel functions \cite{Forrester07} [see Eq. (\ref{fdetBessel})]. These results in Eqs. (\ref{Finfty2}), (\ref{Finfty(x)}) and (\ref{PainleveIII}) yield the results announced in the introduction in Eqs. (\ref{eq:result_TW}), (\ref{PIII}) and (\ref{conjecture_finite_N}). Finally, in \ref{appendix:Numeric} we present a comparison between numerical simulations of Wishart matrices of size $N=50$ and the asymptotic formula in Eq. (\ref{Finfty2}) describing the first $1/N$ correction. \section{Large $N$ asymptotic limit at the soft edge: Painlev\'e II and first correction}\label{section:PII} We now turn to the analysis of the PDF of the smallest eigenvalue $\lambda_{\min}$ in the case where $a \sim {\cal O}(N)$, and we set $a = \alpha \, N$. In this case, the density of eigenvalues is supported on the single interval $[N x_-, N x_+]$ [see Eq. (\ref{density_MP})], with $x_{\pm} = (\sqrt{1+\alpha} \pm 1)^2$, with a soft edge at both extremities (see Fig. \ref{fig:Wishart_soft_vs_hard}). Therefore, one expects that $\lambda_{\min}$ will be close to $N x_-$, while its fluctuations, of order ${\cal O}(N^{1/3})$, are governed by the Tracy-Widom distribution for $\beta = 2$. In the soft edge limit, the large $N$ analysis of $F_N(t)$ in Eq. (\ref{def_FN}) is more conveniently done directly on the Painlev\'e V equation (\ref{Painleve5}) \cite{Baker98}. Following this route, one can indeed show \cite{Baker98} that, for large $N$ \begin{eqnarray}\label{baker_forrester} \lambda_{\min} = N x_- - \frac{N^{1/3}}{m}\chi + {o}(N^{1/3})\;, \end{eqnarray} where $m$ is given by \cite{Baker98} \begin{eqnarray}\label{def_m} m=\frac{(1+\alpha)^{1/6}}{(\sqrt{1+\alpha}-1)^{4/3}} \;, \end{eqnarray} and where ${\chi}$ is distributed according to the Tracy-Widom distribution ${\cal F}_2$, i.e. $\Pr[\chi \leq s] = {\cal F}_2(s)$ where ${\cal F}_2(s)$ is given by \cite{TW94a} \begin{eqnarray}\label{TW2} {\cal F}_2(s) = \exp{\left(-\int_s^\infty (x-s) q^2(x) \, {\rm d}x \right)} \;. \end{eqnarray} Here $q(x)$ is the so-called Hastings-McLeod solution of the Painlev\'e II equation \begin{eqnarray}\label{PII} q''(x) = x\,q(x) + 2 q^3(x) \;, {\rm with} \; q(x) \sim {\rm Ai}(x) \;, \; {\rm for}\; x \to \infty \;, \end{eqnarray} where ${\rm Ai}(x)$ is the Airy function. The result in Eq.
(\ref{baker_forrester}) can be equivalently written as \begin{eqnarray}\label{tilde_f0} F_N(t) = \tilde f_0\left(m \frac{N\, x_- -t}{N^{1/3}}\right) + {o}(1) \;, \end{eqnarray} which implies \begin{eqnarray}\label{HN_0} H_N(t) = t \partial_t \log F_N(t) &=& -(m x_-) \tilde h_0(x) N^{2/3} + o(N^{2/3}) \;, \; x = m \frac{N\, x_- -t}{N^{1/3}}, \end{eqnarray} with $\tilde h_0(x)=\tilde f_0'(x)/\tilde f_0(x)$, where $\tilde f_0 = {\cal F}_2$ and the tilde refers to the soft edge scaling limit. By injecting this form (\ref{HN_0}) into the Painlev\'e V equation satisfied by $H_N(t)$ (\ref{Painleve5}), one finds that $\tilde h_0$ satisfies the following equation \begin{eqnarray}\label{PII_bis} \left(\tilde h_0''(x)\right)^2+ 4 \tilde h'_0(x) \left[\left(\tilde h'_0(x)\right)^2 - x \tilde h'_0(x) + \tilde h_0(x) \right]=0 \;. \end{eqnarray} Using the Painlev\'e II equation (\ref{PII}), one can indeed check that $\tilde h_0(x) = \int_x^\infty q^2(u) {\rm d}u$ is a solution of this equation (\ref{PII_bis}). Note that to check this, it is useful to use the identity \begin{eqnarray}\label{identity} \tilde h_0(x) = \int_x^\infty q^2(u) {\rm d} u = (q'(x))^2 - (q(x))^4 - x (q(x))^2 \;. \end{eqnarray} What is the first correction to the limiting form in Eq. (\ref{tilde_f0})? Unfortunately, in the soft edge limit, it turns out that the Schlesinger equations (\ref{recurrence_dotS}) do not easily yield this first correction, while they were very helpful in the hard edge scaling limit. An alternative way to compute this first correction is to analyze directly the Painlev\'e V equation in (\ref{Painleve5}). By inspection of this equation (\ref{Painleve5}), we conjecture that the first correction to Eq. (\ref{tilde_f0}) is of the form \begin{eqnarray}\label{HN_kext} H_N(t) = -(m x_-) \left(\tilde h_0(x) N^{2/3} + \tilde h_1(x) N^{1/3}\right) + {o}(N^{1/3}) \;. \end{eqnarray} By inserting this expansion (\ref{HN_kext}) in Eq. (\ref{Painleve5}), we obtain that $\tilde h_1$ satisfies the following linear differential equation \begin{eqnarray}\label{eq_h1} 2 \tilde h_1 \tilde h'_0 +2 (\tilde h_0 + \tilde h'_0 (3 \tilde h'_0 -2 x)) \tilde h'_1 + \tilde h''_0 \tilde h''_1= 0 \;, \end{eqnarray} where $\tilde h_0(x)$ is given in Eq. (\ref{identity}). Of course, we know that $\tilde h_1(x) \to 0$ when $x \to +\infty$ [see Eq. (\ref{HN_kext})]. In addition, from Eq. (\ref{eq_h1}) one can show, using Eq.~(\ref{identity}) and the large argument behavior in (\ref{PII}), $q(x) \sim {\rm Ai}(x) \sim x^{-1/4} e^{-(2/3)x^{3/2}}/(2 \sqrt{\pi})$, that $\tilde h_1(x) \sim A \, x^{-1/2} e^{-(4/3) x^{3/2}}$. But, of course, the equation (\ref{eq_h1}) being linear, the amplitude $A$ cannot be determined from this analysis. One way to determine it would be to analyze the OP system (\ref{eq:def_OP}) in the limit when $t$ is far from the left edge, i.e., for $(N x_- -t)\gg N^{1/3}$ (corresponding to the left large deviation tail of $\lambda_{\min}$~\cite{Katzav10}), and then match this result with the typical regime, for $|N x_-- t| \sim {\cal O}(N^{1/3})$. This program was carried out in detail in a similar albeit different context involving discrete OPs in Ref. \cite{Schehr13}. Since we are interested in the first correction, this actually requires a very precise (and tedious) analysis of this regime $(N x_- -t)\gg N^{1/3}$, which goes beyond the scope of the present paper. Hence our result for the first correction $\tilde h_1(x)$ in Eq. (\ref{eq_h1}) determines this function only up to an overall constant.
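Although we will not need it in what follows, the soft-edge quantities introduced above are straightforward to evaluate numerically. The following minimal Python sketch (illustrative only: the integration window, grid and tolerances are ad hoc choices, and the backward shooting below loses accuracy deep in the left tail, the Hastings-McLeod solution being a separatrix) integrates (\ref{PII}) from a large $x$ with Airy initial data and checks the identity (\ref{identity}); it also evaluates ${\cal F}_2$ from (\ref{TW2}), using that the exponent there equals $-\int_s^\infty \tilde h_0(u)\,{\rm d}u$, as seen by differentiating with respect to $s$:
\begin{verbatim}
import numpy as np
from scipy.integrate import solve_ivp, cumulative_trapezoid
from scipy.special import airy

def rhs(x, y):                      # Painleve II: q'' = x q + 2 q^3
    q, qp = y
    return [qp, x * q + 2.0 * q ** 3]

x0 = 8.0
ai0, aip0, _, _ = airy(x0)          # Ai(x0), Ai'(x0): Hastings-McLeod data
xs = np.linspace(x0, -6.0, 3001)    # integrate backwards, stop at moderate -x
sol = solve_ivp(rhs, (x0, -6.0), [ai0, aip0],
                t_eval=xs, rtol=1e-10, atol=1e-12)
q, qp = sol.y

h0 = qp ** 2 - q ** 4 - xs * q ** 2            # right-hand side of the identity
xa, qa, ha = xs[::-1], q[::-1], h0[::-1]       # ascending order for quadrature
tail = cumulative_trapezoid(qa ** 2, xa, initial=0.0)
print(np.max(np.abs(ha - (tail[-1] - tail))))  # check of the identity: small

C = cumulative_trapezoid(ha, xa, initial=0.0)
F2 = np.exp(-(C[-1] - C))            # F_2(s) = exp(-int_s^infty h0(u) du)
print(F2[np.searchsorted(xa, 0.0)])  # F_2(0), roughly 0.97
\end{verbatim}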
We have not found any simple solution to this equation (\ref{eq_h1}), which could indicate that the corrections to scaling in this case are actually more complicated, as found recently in the case of real Wishart matrices \cite{Ma12}. \section{Conclusion}\label{section:Conclusion} To conclude, we have provided a direct computation of the cumulative distribution $F_N(t)$ of the smallest eigenvalue of complex Wishart random matrices (\ref{Pjoint}), for arbitrary parameter $a \geq 0$. This was done by studying a set of semi-classical orthogonal polynomials as defined in Eq. (\ref{eq:def_OP}), for which we derived (i) the Schlesinger (\ref{schlesinger_intro}) and (ii) the Laguerre-Freud (\ref{LF_intro}) equations. By combining these equations, we showed that $F_N(t)$ can be expressed in terms of a solution of a Painlev\'e V equation (\ref{Painleve5}), thus recovering the result of Tracy and Widom \cite{Tracy94b} using a quite different method. In the large $N$ limit, $F_N(t)$, properly shifted and scaled, converges to a limiting distribution $F_{\infty}(x)$ which can be expressed in terms of a solution of a Painlev\'e III equation (\ref{PainleveIII}) in the hard edge limit (corresponding to $a ={\cal O}(1)$) and of a Painlev\'e II equation (\ref{PII_bis}) in the soft edge limit (corresponding to $a = {\cal O}(N)$). Furthermore, we have computed explicitly the first correction to the limiting distribution when $N \to \infty$. In the hard edge case (\ref{conjecture_finite_N}), we confirmed a conjecture by Edelman, Guionnet and P\'ech\'e in Ref. \cite{Edelman14}. In this case, the first correction can be simply understood as a correction to the scale of the fluctuations of $\lambda_{\min}$, see Eq. (\ref{eq:correction}). On the other hand, in the soft edge limit, we found that this correction is a solution of a second-order linear differential equation with varying coefficients (\ref{eq_h1}). Solving this equation remains a challenging open problem, which could suggest that the first correction does not correspond to a simple shift or rescaling of the scaling variable, as found for the hard edge (\ref{eq:correction}). This could be reminiscent of the result found for real Wishart matrices \cite{Ma12} and certainly deserves further investigation. \section*{Acknowledgments} We would like to thank Y. Chen, P. J. Forrester and N. S. Witte for useful correspondence and discussions.
\section{Weak approximate tensorization of the relative entropy}\label{sec:strong-quasi-tensorization} \subsection{A technical lemma} In this section, we prove our main results concerning \textit{approximate tensorization of the relative entropy} (also known in the literature as \textit{quasi-factorization} \cite{cesi2001quasi}, \cite{[CLP18a]}, \cite{BardetCapelLuciaPerezGarciaRouze-HeatBath1DMLSI-2019}). In particular, we relate the weak and strong constants to properties of the subalgebras. Let us first define in this subsection the notion of weak approximate tensorization: \begin{definition} Let ${\mathcal{M}}\subset {\cal N}_1,\,{\cal N}_2\subset {\cal N}$ be finite-dimensional von Neumann algebras and $E^{{\mathcal{M}}},\,E_1 ,\, E_2$ associated conditional expectations onto ${\mathcal{M}}$, resp. ${\cal N}_1,\,{\cal N}_2$. These conditional expectations are said to satisfy a \textit{weak approximate tensorization} with constants $c \geq 1$ and $d\geq 0$ if, for any state $\rho\in{\cal D}({\cal N})$: \begin{align}\tag{$\operatorname{AT(c,d)}$}\label{at} D(\rho\|E^{\mathcal{M}}_*(\rho))\le c\,\big(D(\rho\|E_{1*}(\rho))+D(\rho\|E_{2*}(\rho))\big)+d\,. \end{align} The approximate tensorization is said to be \textit{strong} if $d=0$. \end{definition} \begin{remark} One can easily get similar inequalities for $k\ge 2$ algebras ${\mathcal{M}}\subset {\cal N}_1,\dots {\cal N}_k\subset {\cal N}$ by simply averaging over each inequality for two $k_1\ne k_2\in[k]$. Denoting by $c$ and $d$ the maximal constants obtained by considering the algebras ${\cal N}_{k_1}$ and ${\cal N}_{k_2}$ pairwise, we would thus obtain \begin{align}\label{at_multiple_k} D(\rho\|E^{\mathcal{M}}_*(\rho))\le \frac{2c}{k}\,\sum_{j=1}^k\,D(\rho\|E_{j*}(\rho))+d\,. \end{align} For the sake of clarity, we will restrict to the case $k=2$ in the rest of the article. \end{remark} In the next result, we derive a bound on the difference between $D(\rho\|E^{\mathcal{M}}_*(\rho))$ and the sum of the relative entropies $D(\rho\|E_{i*}(\rho))$, which is our key tool in finding constants $c$ and $d$ for which \ref{at} is satisfied. The result is inspired by the work of \cite{cesi2001quasi,[D02]} and makes use of the multivariate trace inequalities introduced in \cite{Sutter2017}: \begin{lemma}\label{propAT} Let ${\mathcal{M}}\subset {\cal N}_1 ,\,{\cal N}_2\subset {\cal N}$ be finite-dimensional von Neumann algebras and $E^{{\mathcal{M}}},E_1 ,\, E_2$ their corresponding conditional expectations. Then the following inequality holds for any $\rho\in{\cal D}({\cal N})$, writing $\rho_j:=E_{j*}(\rho)$ and $\rho_{\mathcal{M}}:=E^{\mathcal{M}}_{*}(\rho)$: \begin{align}\label{mainequation} D(\rho\|\rho_{\mathcal{M}})\le \,D(\rho\|\rho_1)+D(\rho\|\rho_2)+\,\ln \left\lbrace \int_{-\infty}^\infty\,\mathop{\rm Tr}\nolimits\left[\rho_{1}\rho_{{\mathcal{M}}}^{\frac{-1-it}{2}}\rho_{2}\rho_{{\mathcal{M}}}^{\frac{-1+it}{2}} \right]\,\beta_0(t)\,dt \right\rbrace \,, \end{align} with the probability distribution function \begin{equation*} \beta_0(t)= \frac{\pi}{2} (\cosh(\pi t)+ 1)^{-1}\,. \end{equation*} \end{lemma} \begin{proof} The first step of the proof consists in showing the following bound: \begin{equation}\label{eq:step-31} D(\rho\|\rho_{\mathcal{M}}) \leq D(\rho\|\rho_1)+D(\rho\|\rho_2)+ \ln\mathop{\rm Tr}\nolimits [M]\,, \end{equation} where $ M = \exp \left[ - \ln \rho_{\mathcal{M}} + \ln \rho_1 + \ln \rho_2 \right] $.
Indeed, given the conditional expectations of the statement of the lemma, it follows that: \begin{align*} \displaystyle D(\rho\|\rho_{\mathcal{M}})- D(\rho\|\rho_1)- D(\rho\|\rho_2) &=\mathop{\rm Tr}\nolimits \left[ {\rho} \left( - \ln {\rho} \underbrace{ - \ln \rho_{\mathcal{M}} + \ln \rho_1+\ln \rho_2}_{\ln M} \right) \right] \\ & = - D(\rho \| M). \end{align*} Moreover, since $\mathop{\rm Tr}\nolimits[M]\neq 1$ in general, from the non-negativity of the relative entropy of two states it follows that: \begin{equation*} D(\rho \| M) \geq -\ln \mathop{\rm Tr}\nolimits[M]. \end{equation*} In the next step, we bound the error term making use of \cite[Theorem 7]{Lieb1973} and \cite[Lemma 3.4]{Sutter2017}, concerning Lieb's extension of the Golden-Thompson inequality and Sutter, Berta and Tomamichel's rotated expression for Lieb's pseudo-inversion operator using multivariate trace inequalities, respectively. Applying Lieb's theorem to inequality (\ref{eq:step-31}), we have: \begin{align*} \mathop{\rm Tr}\nolimits [M] =& \mathop{\rm Tr}\nolimits \left[ \operatorname{exp}\left( - \ln {\rho_{\mathcal{M}}} + \ln {\rho_1} + \ln {\rho_2} \right) \right] \leq \mathop{\rm Tr}\nolimits \left[ {\rho_1}\mathcal T_{\rho_{\mathcal{M}}} (\rho_2)\right], \end{align*} where $\mathcal T_{\rho_{\mathcal{M}}} $ is given by: \begin{equation*} \mathcal T_{\rho_{\mathcal{M}}} (X) := \int_0^\infty (\rho_{\mathcal{M}} + t)^{-1} X (\rho_{\mathcal{M}} + t)^{-1} dt \, , \end{equation*} and because of multivariate trace inequalities \cite{Sutter2017}, \begin{equation*} \mathop{\rm Tr}\nolimits [M] \leq \int_{-\infty}^{+\infty} \,\mathop{\rm Tr}\nolimits \left[ \rho_1 \, \rho_{\mathcal{M}}^{\frac{-1-it}{2}} \rho_2 \, \rho_{\mathcal{M}}^{\frac{-1+it}{2}} \right]\,\beta_0 (t) \, dt\,, \end{equation*} with $\beta_0$ as in the statement of the lemma. \end{proof} \begin{remark} Note that, if a constant $d>0$ is such that \begin{equation*} \int_{-\infty}^\infty\,\mathop{\rm Tr}\nolimits\left[\rho_{1}\rho_{{\mathcal{M}}}^{\frac{-1-it}{2}}\rho_{2}\rho_{{\mathcal{M}}}^{\frac{-1+it}{2}} \right]\,\beta_0(t)\,dt \leq \mathrm{e}^d \end{equation*} for every $\rho \in {\cal D}({\cal N})$, then inequality \eqref{mainequation} constitutes a result of approximate tensorization AT($1,d$). \end{remark} \Cref{propAT} essentially differs from the results of \cite{[CLP18a]} in the left-hand side of the inequality, since a conditional relative entropy now appears here in place of the usual relative entropy appearing in the latter. This stronger result can be interpreted as a quantum generalization of the results in \cite{[D02],cesi2001quasi}. In these papers, the last term was further bounded by the relative entropy $D(\rho\|\rho_{\mathcal{M}})$ up to a small multiplicative error term, which led to an approximate tensorization AT($c,0$) with a constant $c$ typically close to (albeit slightly larger than) $1$. In particular, consider the case of Gibbs measures on a configuration space $\Omega_{\Lambda}:= S^{\Lambda}$, where $\Lambda\subset\mathbb{Z}^D$ is a finite region of $\mathbb{Z}^D$ and where $S$ corresponds to the local configuration space (e.g. $S=\{-1,+1\}$ for spin systems). Here ${\cal N}$ is the algebra of bounded measurable functions: ${\cal N}\equiv \mathbb{L}_\infty(\Omega_{\mathbb{Z}^D})$.
We are then interested in conditional expectations arising from conditioning the Gibbs measures onto overlapping finite subregions $A$ and $B$ of $\Lambda$, so that ${\cal N}_1\equiv \mathbb{L}_\infty(\Omega_{A^c})$, $\mathcal{N}_2\equiv \mathbb{L}_\infty(\Omega_{B^c})$ and ${\mathcal{M}}\equiv \mathbb{L}_\infty(\Omega_{(A\cup B)^c})$. In this case, the last term in (\ref{mainequation}) can be shown to be equal to $0$ at infinite temperature, where there is no interaction between the different sites, so that AT($1,0$) holds in this case. The finite temperature case can then be interpreted as an approximation of the latter, where the constant $c$ typically scales as $(1-\kappa\mathrm{e}^{-\xi d(A^c,B^c)})^{-1}$ for some positive universal constants $\kappa$ and $\xi$, under a condition of strong clustering of correlations in the invariant Gibbs measure on $\Lambda$. In particular, this condition is satisfied for 1D systems and at high temperature. In the quantum setting, it appears that the inequality AT($1,0$) does not hold in general even at infinite temperature, unless the Hamiltonian is classical. We defer a precise discussion of the quantum versions of the aforementioned conditions of clustering of correlations, and their use to obtain results of approximate tensorization, to Subsection \ref{sec:clustering}. To conclude the first part of this section and obtain a first explicit result of approximate tensorization, in the next corollary we directly bound the last term of (\ref{mainequation}) by a quantity that characterizes the conditional expectations involved. This result generalizes a former result of \cite{gao2017strong} for conditional expectations with respect to possibly non-tracial states. \begin{corollary}\label{corollaruweak} With the notations of \Cref{propAT}, define the constant $$d:=\sup_{\rho\in{\cal D}({\cal N}_{2})}\inf\big\{\ln(\lambda)|\,E_{1*}(\rho)\le \lambda\eta\,\text{ for some }\,\eta\in{\cal D}({\mathcal{M}}) \big\}\equiv \sup_{\rho\in{\cal D}({\cal N}_{2})}\inf_{\eta\in{\cal D}({\mathcal{M}})}\,D_{\max}(E_{1*}(\rho)\|\eta) \,.$$ Then the following weak approximate tensorization $\operatorname{AT}(1,d)$ holds: \begin{align*} D(\rho\|\rho_{\mathcal{M}})\le D(\rho\|\rho_1)+D(\rho\|\rho_2)+ d\,. \end{align*} \end{corollary} \begin{proof} We focus on the last term on the right-hand side of (\ref{mainequation}). First, note that: \begin{equation*} \mathop{\rm Tr}\nolimits \left[ \rho_{1}\,\rho_{\mathcal{M}}^{\frac{-1-it}{2}}\rho_{2}\,\rho_{\mathcal{M}}^{\frac{-1+it}{2}} \right] = \mathop{\rm Tr}\nolimits \left[ \rho\,E_1^*\left( \rho_{\mathcal{M}}^{\frac{-1-it}{2}}\rho_{2}\,\rho_{\mathcal{M}}^{\frac{-1+it}{2}} \right) \right] = \mathop{\rm Tr}\nolimits \left[ \rho\,\rho_{\mathcal{M}}^{\frac{-1-it}{2}}E_{1*}(\rho_{2})\,\rho_{\mathcal{M}}^{\frac{-1+it}{2}} \right]. \end{equation*} We have by definition of $d$ that there exists a state $\eta\in{\cal D}({\mathcal{M}})$ such that for any $t\in\mathbb{R}$: \begin{align*} \mathop{\rm Tr}\nolimits \left[ \rho\,\rho_{\mathcal{M}}^{\frac{-1-it}{2}}E_{1*}(\rho_{2})\,\rho_{\mathcal{M}}^{\frac{-1+it}{2}} \right] \le\,\mathrm{e}^{d} \mathop{\rm Tr}\nolimits \left[ \rho\,\rho_{\mathcal{M}}^{\frac{-1-it}{2}}\eta\,\rho_{\mathcal{M}}^{\frac{-1+it}{2}} \right]=\mathrm{e}^{d}\mathop{\rm Tr}\nolimits[\rho X_{\mathcal{M}}]\,, \end{align*} for some density $X_{\mathcal{M}}\in{\mathcal{M}}$ given by $\rho_{\mathcal{M}}^{\frac{-1-it}{2}}\eta\,\rho_{\mathcal{M}}^{\frac{-1+it}{2}}$.
Since ${\mathcal{M}}\subset {\cal N}$, $\mathop{\rm Tr}\nolimits[\rho X_{\mathcal{M}}]=\mathop{\rm Tr}\nolimits[\rho_{\mathcal{M}}\,X_{\mathcal{M}}]=\mathop{\rm Tr}\nolimits[\eta]=1$. The result follows. \end{proof} \begin{remark} In \cite{gao2019relative}, the authors showed that, for doubly stochastic conditional expectations (i.e. $E_{i*}=E_i$, $E^{\mathcal{M}}_*=E^{\mathcal{M}}$), the following identity holds: given the block decompositions of the algebras ${\cal N}_2$ and ${\mathcal{M}}$, \begin{align*} {\cal N}_2\equiv \bigoplus_{l\in I_{{\cal N}_2}}\,\mathbb{M}_{m_l}\otimes {\mathds{1}}_{t_l}\,\qquad\qquad {\mathcal{M}}\equiv \bigoplus_{k\in I_{{\mathcal{M}}}}\,\mathbb{M}_{n_k}\otimes {\mathds{1}}_{s_k}\,, \end{align*} one has \begin{align*} D({\cal N}_2\|{\mathcal{M}}):= \sup_{\rho\in{\cal D}({\cal N}_2)} \inf_{\eta\in{\cal D}({\mathcal{M}})}D_{\max}(\rho\| \eta)\equiv \max_{l\in I_{{\cal N}_2}}\ln \Big(\sum_{k\in I_{{\mathcal{M}}}} \min(a_{kl},\,n_k)\,s_k/t_l \Big)\,, \end{align*} where $a_{kl}$ denotes the number of copies of the block $\mathbb{M}_{n_k}$ contained in the block $\mathbb{M}_{m_l}$. In the context of lattice spin systems, this typically corresponds to the infinite temperature regime. \end{remark} \subsection{Approximate tensorization via noncommutative change of measure} In this subsection, we prove a result of approximate tensorization using a noncommutative change of measure argument. Given a state $\sigma$ that is invariant for the conditional expectations $E^{\mathcal{M}}, E_1$ and $E_2$, we define the doubly stochastic conditional expectations ${E}^{(0),{\mathcal{M}}}, E_1^{(0)}$ and $E_2^{(0)}$ onto the same fixed-point algebras ${\mathcal{M}}\subset {\cal N}_1,{\cal N}_2\subset {\cal N} $. Then, the following proposition is a direct consequence of a recent noncommutative change of measure argument in \cite{junge2019stability}, under the assumption that approximate tensorization of the relative entropy holds at infinite temperature, a condition that we will further discuss for spin systems in Subsection \ref{subsec:preliminaries}. \begin{proposition}\label{HSAT} As in \Cref{corollaruweak}, we define the constant \[d:=\sup_{\rho\in{\cal D}({\cal N}_{2})}\inf\big\{\ln(\lambda)|\,E_{1*}(\rho)\le \lambda\eta\,\text{ for some }\,\eta\in{\cal D}({\mathcal{M}}) \big\}\equiv \sup_{\rho\in{\cal D}({\cal N}_{2})}\inf_{\eta\in{\cal D}({\mathcal{M}})}\,D_{\max}(E_{1*}(\rho)\|\eta) \,.\] Let us assume that $\operatorname{AT}(1,d)$ holds at infinite temperature, i.e. for every $X \in {\cal B}({\cal H})$ \begin{equation}\label{eq:InfiniteTemperatureAT} D( X \|E^{(0),{\mathcal{M}}}_{*}(X)) \leq D(X\|E^{(0)}_{1*}(X)) + D(X \|E^{(0)}_{2*}(X)) +d \, . \end{equation} Then, the following result of $\operatorname{AT}(c,d')$ with $c=\frac{\lambda_{\max}(\sigma)}{\lambda_{\min}(\sigma)}$ and $d'= \lambda_{\max}(\sigma)\,d_{\cal H}\,d$ holds: \begin{align}\label{eq:NoncommChangMeasAT} D(\rho\|E^{\mathcal{M}}_{*}(\rho))\le \frac{\lambda_{\max}(\sigma)}{\lambda_{\min}(\sigma)}\,\big(D(\rho\|E_{1*}(\rho))+D(\rho\|E_{2*}(\rho))\big)+\lambda_{\max}(\sigma)\,d_{\cal H}\,d\,. \end{align} \end{proposition} \begin{proof} The proof of this result relies on the Holley-Stroock perturbative argument for the Lindblad relative entropy proved in \cite{junge2019stability}.
This entropic distance is defined for two positive semi-definite operators $X,Y\in{\cal B}({\cal H})$ such that $Y$ is full rank as \[D_{\operatorname{Lin}}(X\|Y):=\mathop{\rm Tr}\nolimits[X(\log X-\log Y)]-\mathop{\rm Tr}\nolimits[X]+\mathop{\rm Tr}\nolimits[Y]\,.\] Next, we adapt the proof of Proposition 4.2 in \cite{junge2019stability} in order to relate the relative entropies $D_{\operatorname{Lin}}(\rho\|E_{*}^{\mathcal{M}}(\rho))$ and $D_{\operatorname{Lin}}(\rho\|E_{*}^{(0),{\mathcal{M}}}(\rho))$. Using this proposition, we directly see that for any positive semi-definite operator $X$, we have \begin{align*} \frac{1}{\lambda_{\max}(\sigma)\,d_{\cal H}} \,D_{\operatorname{Lin}}(\Gamma_{\sigma}(X)\| E_{*}^{{\mathcal{M}}}(\Gamma_\sigma(X)) )\le D_{\operatorname{Lin}}(X\|E_{*}^{(0),{\mathcal{M}}}(X))\le \frac{1}{\lambda_{\min}(\sigma)d_{{\cal H}}}\,D_{\operatorname{Lin}}(\Gamma_{\sigma}(X)\| E_{*}^{{\mathcal{M}}}(\Gamma_\sigma(X)) )\,, \end{align*} where $\lambda_{\min}(\sigma)$, resp. $\lambda_{\max}(\sigma)$, denotes the smallest, resp. largest, eigenvalue of the state $\sigma$. Analogous inequalities hold for $E_1$ and $E_2$. Now, using \eqref{eq:InfiniteTemperatureAT}, for $\rho:=\Gamma_\sigma(X)$ we have: \begin{align*} D(\rho\|E_{*}^{\mathcal{M}}(\rho))&\le \lambda_{\max}(\sigma)\,d_{\cal H}\,D_{\operatorname{Lin}}(X\|E_{*}^{(0),{\mathcal{M}}}(X))\\ &\le \lambda_{\max}(\sigma)\,d_{\cal H}\,\big( D_{\operatorname{Lin}}(X\|E_{1*}^{(0)}(X))+D_{\operatorname{Lin}}(X\|E_{2*}^{(0)}(X)) +d\big)\\ &\le \frac{\lambda_{\max}(\sigma)}{\lambda_{\min}(\sigma)}\,\big(D(\rho\|E_{1*}^{}(\rho))+D(\rho\|E_{2*}^{}(\rho)) \big) +\lambda_{\max}(\sigma)\,d_{\cal H}\,d\,. \end{align*} \end{proof} \begin{remark} Similarly to what is done for classical spin systems in \cite{ledoux2001logSobolev}, the previous result can be rewritten in the following way. Consider the generalization of the relative entropy for $X=\Gamma_\sigma^{-1}(\rho)$ given by: \[\operatorname{Ent}_{1,{\mathcal{M}}}(X):=D(\rho\|E_*^{\mathcal{M}}(\rho))\,,\] with analogous expressions for ${\cal N}_1$ and ${\cal N}_2$ with their respective conditional expectations. Then, we can express this relative entropy as an infimum over $D_{\operatorname{Lin}}$. Indeed, Lemma 3.4 in \cite{junge2019stability} states that for all full-rank positive semi-definite $Y\in{\mathcal{M}}$, \begin{equation}\label{eq_proof_HS} D_{\operatorname{Lin}}(\rho\|\Gamma_{\sigma}(Y))=D_{\operatorname{Lin}}(\rho\|E_*^{\mathcal{M}}(\rho))+D_{\operatorname{Lin}}(E_*^{\mathcal{M}}(\rho)\|\Gamma_{\sigma}(Y))\,. \end{equation} It shows in particular that $D_{\operatorname{Lin}}(\rho\|\Gamma_{\sigma}(Y))\geq D_{\operatorname{Lin}}(\rho\|E_*^{\mathcal{M}}(\rho))=\operatorname{Ent}_{1,{\mathcal{M}}}(X)$, with equality for $Y=E^{\mathcal{M}}(X)$. Thus we obtain \begin{equation}\label{eq_var_entropy} \operatorname{Ent}_{1,{\mathcal{M}}}(X)=\underset{Y\in {\mathcal{M}}\,,Y>0}{\inf}\,D_{\operatorname{Lin}}(\rho\|\Gamma_{\sigma}(Y))\, \end{equation} and we can rewrite \eqref{eq:NoncommChangMeasAT} as \begin{equation*} \operatorname{Ent}_{1,{\mathcal{M}}}(X) \leq \frac{\lambda_{\max}(\sigma)}{\lambda_{\min}(\sigma)}\,\big( \operatorname{Ent}_{1,{\cal N}_1}(X)+\operatorname{Ent}_{1,{\cal N}_2}(X) \big) +\lambda_{\max}(\sigma)\,d_{\cal H}\,d\,.
\end{equation*} \end{remark} \subsection{Approximate tensorization via Pinching map} None of the previous results reduces to those of \cite{cesi2001quasi,[D02]} in the case of classical Gibbs measures over classical systems at finite temperature. In the following theorem, we take care of this issue by interpolating between these two extreme cases. Before stating the result, let us fix some notations. As before, we are interested in proving (weak) approximate tensorization results for the quadruple of algebras ${\mathcal{M}}\subset {\cal N}_1\,,\,{\cal N}_2\subset{\cal N}$. As a subalgebra of ${\cal B}({\cal H})$ for some Hilbert space ${\cal H}$, ${\mathcal{M}}$ bears the following block diagonal decomposition: given ${\cal H}=\bigoplus_{i\in I_{\mathcal{M}}}{\cal H}_i\otimes {\cal K}_i$: \begin{align}\label{eq_decomp} {\mathcal{M}}\equiv\bigoplus_{i\in I_{\mathcal{M}}}\,{\cal B}({\cal H}_i)\otimes {\mathds{1}}_{{\cal K}_i}\,,\qquad\text{ so that }\qquad\forall \rho\in{\cal D}({\cal N})\,,\, \rho_{\mathcal{M}}:=\sum_{i\in I_{\mathcal{M}}}\,\mathop{\rm Tr}\nolimits_{{\cal K}_i}[P_i\rho P_i]\,\otimes\tau_i\,, \end{align} where $P_i$ corresponds to the projection onto the $i$-th diagonal block in the decomposition of ${\mathcal{M}}$, and each $\tau_i$ is a full-rank state on ${\cal K}_i$. We further make the observation that, since the restrictions of the conditional expectations $E_1$, $E_2$ and $E^{\mathcal{M}}$ on ${\cal B}({\cal H}_i\otimes{\cal K}_i)$ only act non-trivially on the factor ${\cal B}({\cal K}_i)$, we can define the conditional expectations ${E}_j^{(i)}$ and $({E}^{{\mathcal{M}}})^{(i)}$ acting on ${\cal B}( {\cal K}_i)$ and such that \begin{equation}\label{eq_decom_cond} E_j|_{{\cal B}({\cal H}_i\otimes{\cal K}_i)}:={\rm{id}}_{{\cal B}({\cal H}_i)}\otimes {E}_j^{(i)}\,,\quad\text{resp.}\quad E^{\mathcal{M}}|_{{\cal B}({\cal H}_i\otimes{\cal K}_i)}:={\rm{id}}_{{\cal B}({\cal H}_i)}\otimes ({E}^{\mathcal{M}})^{(i)}\,. \end{equation} In order to get another form of approximate tensorization, we wish to compare the state $\rho$ with a classical-quantum state according to the decomposition given by ${\mathcal{M}}$. To this end, we introduce the Pinching map with respect to the ${\cal H}_i$ in the decomposition of ${\mathcal{M}}$. First define $\rho_{{\cal H}_i}\equiv \mathop{\rm Tr}\nolimits_{{\cal K}_i}[P_i\,\rho\,P_i]$. Then each $\rho_{{\cal H}_i}$ can be diagonalized individually: \[\rho_{{\cal H}_i}\equiv\sum_{\lambda^{(i)}\in\operatorname{Spec}(\rho_{{\cal H}_i})}\,\lambda^{(i)}\,\proj{\lambda^{(i)}}\,.\] The Pinching map we are interested in is then: \[\mathcal{P}_{\rho_{\mathcal{M}}}(X)\equiv \sum_{i\in I_{\mathcal{M}}}\,\sum_{\lambda^{(i)}\in\operatorname{Spec}(\rho_{{\cal H}_i})}\,\left(\proj{\lambda^{(i)}}\otimes {\mathds{1}}_{{\cal K}_i}\right)\,X\,\left(\proj{\lambda^{(i)}}\otimes {\mathds{1}}_{{\cal K}_i}\right)\,,\qquad X\in{\cal B}({\cal H})\,.\] Remark that we have for all $\rho\in{\cal D}({\cal N})$: \[\mathop{\rm Tr}\nolimits_{{\cal H}_i}[P_i\,\rho\,P_i]=\mathop{\rm Tr}\nolimits_{{\cal H}_i}[P_i\,\mathcal{P}_{\rho_{\mathcal{M}}}(\rho)\,P_i]\,.\] \begin{theorem}\label{theo_AT_pinching} Define \begin{align}\label{cond_L1_clustering} &c_1:=\max_{i\in I_{\mathcal{M}}}\|E_{1}^{(i)}\circ E_{2}^{(i)}-(E^{\mathcal{M}})^{(i)}:\,\mathbb{L}_1(\tau_i)\to\mathbb{L}_\infty\|\,.
\end{align} Then, the following inequality holds: \begin{align}\label{eqgeneral} D(\rho\|\rho_{\mathcal{M}})\le \frac{1}{(1-2{c_1})}\,\big(D(\rho\|\rho_1)+D(\rho\|\rho_2)+D_{\max}\big(E_{1*}\circ E_{2*}(\rho)\|E_{1*}\circ E_{2*}(\eta)\big)+\,c_1 D(\eta\|\mathcal{P}_{\rho_{\mathcal{M}}}(\rho))\big)\,, \end{align} for any $\eta\in{\cal D}({\cal N})$ such that $\eta=\mathcal{P}_{\rho_{\mathcal{M}}}(\eta)$ and $\mathop{\rm Tr}\nolimits_{{\cal K}_i}[P_i\,\eta\,P_i]=\rho_{{\cal H}_i}$. Alternatively, we can get \begin{align}\label{eqchanged} D(\rho\|\rho_{\mathcal{M}})\le \frac{1}{(1-2{c_1})}\,\big(D(\rho\|\rho_1)+D(\rho\|\rho_2)\big)+D\big(\rho\|\mathcal{P}_{\rho_{\mathcal{M}}}(\rho)\big)\,. \end{align} Consequently, \ref{at} holds with \begin{align} & c:=\frac{1}{(1-2{c_1})}\,, \nonumber\\ & d:= \frac{1}{(1-2{c_1})}\left(\underset{\rho\in{\cal D}({\cal N})}{\sup}\,\underset{\eta\in{\cal D}({\cal N})}{\inf}\,\Big(D_{\max}\big(E_{1*}\circ E_{2*}(\rho)\|E_{1*}\circ E_{2*}(\eta)\big)+\,c_1 D(\eta\|\mathcal{P}_{\rho_{\mathcal{M}}}(\rho))\Big)\right)\,, \label{eq_theo_AT_pinching} \end{align} where the infimum in the second line runs over $\eta$ such that $\eta=\mathcal{P}_{\rho_{\mathcal{M}}}(\eta)$ and $\mathop{\rm Tr}\nolimits_{{\cal K}_i}[P_i\,\eta\,P_i]=\rho_{{\cal H}_i}$. \end{theorem} \begin{remark} In particular, any state $\eta$ of the form $\eta:= \sum_{i\in I_{\mathcal{M}}}\,\rho_{{\cal H}_i}\otimes \tau_i'$, for an arbitrary family of states $\tau_i'$ on ${\cal K}_i$, satisfies the conditions stated below \Cref{eqgeneral}. \end{remark} \begin{proof} The proof starts similarly to that of Corollary \ref{corollaruweak}. We once again simply need to bound the integral on the right-hand side of (\ref{mainequation}). By considering $\eta$ as in the statement of the theorem and writing for the moment $\tilde{d}:=D_{\max}\big(E_{1*}\circ E_{2*}(\rho)\|E_{1*}\circ E_{2*}(\eta)\big)$, we obtain \begin{align*} \mathop{\rm Tr}\nolimits\Big[ \rho_{1}\,\rho_{\mathcal{M}}^{\frac{-1-it}{2}}\,\rho_{2}\,\rho_{\mathcal{M}}^{\frac{-1+it}{2}}\Big]= \mathop{\rm Tr}\nolimits\Big[ \rho\,\rho_{\mathcal{M}}^{\frac{-1-it}{2}}\,E_{1*}(\rho_{2})\,\rho_{\mathcal{M}}^{\frac{-1+it}{2}}\Big]\le \mathrm{e}^{\tilde{d}} \, \mathop{\rm Tr}\nolimits\Big[ \rho\,\rho_{\mathcal{M}}^{\frac{-1-it}{2}}\,E_{1*}\circ E_{2*}(\eta)\,\rho_{\mathcal{M}}^{\frac{-1+it}{2}}\Big]\,. \end{align*} To simplify the notation, let us write: $\eta_{12}:=E_{1*}\circ E_{2*}(\eta)$. Now, note that the following holds: \begin{equation*} \mathop{\rm Tr}\nolimits \left[ \left( \rho - \rho_{\mathcal{M}} \right) \rho_{\mathcal{M}}^{\frac{-1-it}{2}} \left( \eta_{12} - \rho_{\mathcal{M}} \right) \rho_{\mathcal{M}}^{\frac{-1+it}{2}} \right] = \mathop{\rm Tr}\nolimits \left[ \rho \, \rho_{\mathcal{M}}^{\frac{-1-it}{2}} \, \eta_{12} \, \rho_{\mathcal{M}}^{\frac{-1+it}{2}} \right] -1 -1 + 1, \end{equation*} since $ E^{\mathcal{M}}_*$, $E_{1*}$ and $E_{2*}$ are conditional expectations in the Schr\"{o}dinger picture and, thus, trace preserving.
Therefore, \begin{align*} & \ln \int_{-\infty}^{+\infty} \mathrm{e}^{\tilde{d}} \, \mathop{\rm Tr}\nolimits \left[ \rho \, \rho_{\mathcal{M}}^{\frac{-1-it}{2}} \, \eta_{12} \, \rho_{\mathcal{M}}^{\frac{-1+it}{2}} \right] \,\beta_0 (t) \,dt \, \\ &\qquad\qquad = \ln \int_{-\infty}^{+\infty} \mathrm{e}^{\tilde{d}} \, \left( \mathop{\rm Tr}\nolimits \left[ \left( \rho - \rho_{\mathcal{M}} \right) \rho_{\mathcal{M}}^{\frac{-1-it}{2}} \left( \eta_{12} - \rho_{\mathcal{M}} \right) \rho_{\mathcal{M}}^{\frac{-1+it}{2}} \right] + 1 \right)\,\beta_0 (t) \,dt \\ &\qquad\qquad \leq \tilde{d} + \int_{-\infty}^{+\infty} \mathop{\rm Tr}\nolimits \left[ \left( \rho - \rho_{\mathcal{M}} \right) \rho_{\mathcal{M}}^{\frac{-1-it}{2}} \left( \eta_{12} -\rho_{\mathcal{M}} \right) \rho_{\mathcal{M}}^{\frac{-1+it}{2}} \right] \, \beta_0 (t)\,dt\,, \end{align*} where we have used that $ \ln(x +1)\le x$ for positive real $x$. Defining $X:=\Gamma_{\rho_{\mathcal{M}}}^{-1}(\rho)$ and $Y_t:=\rho_{\mathcal{M}}^{\frac{-1-it}{2}}\,\eta\, \rho_{\mathcal{M}}^{\frac{-1+it}{2}}$, we note that \begin{equation} E_1 \circ E_2 [Y_t] = \rho_{\mathcal{M}}^{\frac{-1-it}{2}}\,\eta_{12}\, \rho_{\mathcal{M}}^{\frac{-1+it}{2}}, \end{equation} and we can rewrite the previous expression as \begin{align} & \int_{-\infty}^{+\infty} \mathop{\rm Tr}\nolimits \left[ \left(\rho - \rho_{\mathcal{M}} \right) \rho_{\mathcal{M}}^{\frac{-1-it}{2}} \left( \eta_{12} - \rho_{\mathcal{M}}\right) \rho_{\mathcal{M}}^{\frac{-1+it}{2}} \right] \, \beta_0 (t)\,dt\nonumber \\ &\qquad\qquad = \int_{-\infty}^{+\infty} \mathop{\rm Tr}\nolimits \left[ \left( X - E^{\mathcal{M}}[X] \right) \Delta_{\rho_{\mathcal{M}}}^{-it/2} \left( \eta_{12} - \rho_{\mathcal{M}} \right) \right] \, \beta_0 (t)\,dt \nonumber\\ &\qquad\qquad = \int_{-\infty}^{+\infty} \mathop{\rm Tr}\nolimits \left[ \left(X - E^{\mathcal{M}}[X] \right) \rho_{\mathcal{M}}^{\frac{1}{2}} \left( E_1 \circ E_2 [Y_t] -E^{\mathcal{M}} [Y_t] \right) \rho_{\mathcal{M}}^{\frac{1}{2}} \right] \, \beta_0 (t)\,dt\nonumber \\ &\qquad\qquad = \int_{- \infty}^{+ \infty} \left\langle X - E^{\mathcal{M}}[X] , \ E_1 \circ E_2 [Y_t] -E^{\mathcal{M}} [Y_t] \right\rangle_{\rho_{\mathcal{M}}}\, \beta_0 (t)\,dt\, , \end{align} thus obtaining the following inequality \begin{align*} \ln\, \int_{-\infty}^{\infty} \mathrm{e}^{\tilde{d}} \, \mathop{\rm Tr}\nolimits\Big[ \rho_{1}\,\rho_{\mathcal{M}}^{\frac{-1-it}{2}}\,\eta_{12}\,\rho_{\mathcal{M}}^{\frac{-1+it}{2}}\Big]\,\beta_0(t)\,dt\le \tilde{d}+\int_{-\infty}^\infty \,\langle X-E^{\mathcal{M}}[X]\,,E_1\circ E_2[Y_t]-E^{\mathcal{M}}[Y_t]\rangle_{\rho_{\mathcal{M}}}\,\beta_0(t)\,dt . \end{align*} Now, we focus on the integrand on the right-hand side of the above inequality. Denote for any $A\in{\cal B}({\cal H})$, \[A^{(\lambda,i)}:=\left(\proj{\lambda^{(i)}}\otimes {\mathds{1}}_{{\cal K}_i}\right)P_i\,A\,P_i\left(\proj{\lambda^{(i)}}\otimes {\mathds{1}}_{{\cal K}_i}\right)\,.\] We also write $A^{(\lambda,i)}=\proj{\lambda^{(i)}}\otimes A^{(\lambda,i)}$ by a slight abuse of notation.
Then \begin{align*} \langle X-E^{\mathcal{M}}[X]\,,&\, E_1\circ E_2[Y_t]-E^{\mathcal{M}}[Y_t]\rangle_{\rho_{\mathcal{M}}}\\ &= \sum_{i\in I_{\mathcal{M}}}\,\sum_{\lambda^{(i)}\in\operatorname{Spec}(\rho_{{\cal H}_i})}\,\lambda^{(i)}\,\langle X^{(\lambda,i)}-E^{\mathcal{M}}[X^{(\lambda,i)}],\,E_1\circ E_2[Y_t^{(\lambda,i)}]-E^{\mathcal{M}}[Y_t^{(\lambda,i)}]\rangle_{\tau_i}\,. \end{align*} Next, by H\"older's inequality, each summand on the right-hand side above is upper bounded by \begin{align*} & \|({\rm{id}}-(E^{\mathcal{M}})^{(i)})[X^{(\lambda,i)}] \|_{\mathbb{L}_1(\tau_i)}\,\|(E_1^{(i)}\circ E_2^{(i)}-(E^{\mathcal{M}})^{(i)}) [Y_t^{(\lambda,i)}] \|_\infty\nonumber\\ &\qquad\qquad \le c_1\, \|({\rm{id}}-(E^{\mathcal{M}})^{(i)})[X^{(\lambda,i)}] \|_{\mathbb{L}_1(\tau_i)}\,\| ({\rm{id}}-(E^{\mathcal{M}})^{(i)})[Y_t^{(\lambda,i)}]\|_{\mathbb{L}_1(\tau_i)}\nonumber\\ &\qquad\qquad = c_1\,\|\rho^{(\lambda,i)}-E^{\mathcal{M}}_{*}(\rho^{(\lambda,i)})\|_1\,\|\tau_i'-\tau_i\|_{1}\label{eqchange}\\ &\qquad\qquad \le \frac{c_1}2\,\left(\|\rho^{(\lambda,i)}-E^{\mathcal{M}}_{*}(\rho^{(\lambda,i)})\|_1^2+\|\tau_i'-\tau_i\|_{1}^2\right)\,,\nonumber \end{align*} where we use Young's inequality in the last line. Using Pinsker's inequality and summing over the indices $i$ and $\lambda^{(i)}$, we find that \begin{align*} \ln\int_{-\infty}^\infty \mathrm{e}^{\tilde{d}}\,\mathop{\rm Tr}\nolimits\left[ \rho\,\rho_{\mathcal{M}}^{\frac{-1-it}{2}}E_{1*}\circ E_{2*}(\eta)\,\rho_{\mathcal{M}}^{\frac{-1+it}{2}} \right] \beta_0(t)\,dt\le \tilde{d} +c_1\,D(\rho\|\rho_{\mathcal{M}})+ c_1\,D(\eta\|\mathcal{P}_{\rho_{\mathcal{M}}}(\rho))\,. \end{align*} \Cref{eqgeneral} follows after rearranging the terms. In order to obtain \Cref{eqchanged}, we exploit that $\rho_{\mathcal{M}}$ is a fixed point of $\mathcal{P}_{\rho_{\mathcal{M}}}$ and therefore \[D(\rho\|\rho_{\mathcal{M}})=D(\rho\|\mathcal{P}_{\rho_{\mathcal{M}}}(\rho))+D(\mathcal{P}_{\rho_{\mathcal{M}}}(\rho)\|\rho_{\mathcal{M}})\,.\] We can then apply \Cref{eqgeneral} to $\mathcal{P}_{\rho_{\mathcal{M}}}(\rho)$ and remark that the weak constant vanishes. The result follows after remarking that $\mathcal{P}_{\rho_{\mathcal{M}}}\circ E_*^{\mathcal{M}}=E_*^{\mathcal{M}}\circ\mathcal{P}_{\rho_{\mathcal{M}}}$ and applying the data-processing inequality to the map $\mathcal{P}_{\rho_{\mathcal{M}}}$. \end{proof} \begin{proposition}\label{propboundsd1d2} With the notations of \Cref{propAT} and \Cref{theo_AT_pinching}, \begin{equation*} \sup_{\rho\in{\cal D}({\cal N})}\,D_{\max}\big(E_{1*}\circ E_{2*}(\rho)\| E_{1*}\circ E_{2*}(\eta)\big)\leq d_1 + d_2 \end{equation*} where \begin{align*} &d_1:=\sup_{\rho\in{\cal D}({\cal N})}\,D_{\max}\big(E_{1*}\circ E_{2*}(\rho)\| E_{1*}\circ E_{2*}(\mathcal{P}_{\mathcal{M}}(\rho))\big)\,,\\ &d_2:=\,\max_{i\in I_{\mathcal{M}}}\sup_{\rho^{(i)}\in{\cal D}(P_i{\cal N} P_i)}\,I_{\max}\big( {\cal H}_i:{\cal K}_i \big)_{\rho^{(i)}}\,, \end{align*} where $\mathcal{P}_{\mathcal{M}}:=\sum_{i\in I_{\mathcal{M}}}P_i(\cdot)P_i$. Furthermore, given $i\in I_{\cal N}$, denote by $I^{(i)}_{\mathcal{M}}$ the set of indices corresponding to the minimal projectors in ${\mathcal{M}}$ contained in the $i$-th block of ${\cal N}$.
Moreover, for each of the blocks $i$ of ${\cal N}$, of corresponding minimal projector $P^{{\cal N}}_i$, decompose $P^{\cal N}_i{\mathcal{M}} P^{\cal N}_i$ as follows: letting $P_i^{\cal N}{\cal H}:= \bigoplus_{j\in I^{(i)}_{\mathcal{M}}} \,{\cal H}^{(i)}_{j}\otimes {\cal K}^{(i)}_j$, \begin{align*} P_i^{\cal N}{\mathcal{M}} P_i^{\cal N}:=\bigoplus_{j\in I^{(i)}_{\mathcal{M}}} {\cal B}({\cal H}^{(i)}_j)\otimes {\mathds{1}}_{{\cal K}_j^{(i)}}\,. \end{align*} Then, \begin{align*} &d_1\le \max_{i\in I_{\cal N}}\ln(|I^{(i)}_{\mathcal{M}}|)\,\\ &d_2\le 2\max_{i\in I_{\cal N}}\max_{j\in I^{(i)}_{\mathcal{M}}}\min\left\lbrace \ln (d_{{\cal H}^{(i)}_j}),\ln(d_{{\cal K}^{(i)}_j}) \right\rbrace \,. \end{align*} \end{proposition} \begin{proof} We first prove the bound $\sup_{\rho}\,D_{\max}\big(E_{1*}\circ E_{2*}(\rho)\| E_{1*}\circ E_{2*}(\eta)\big)\leq d_1+d_2$. For all $\rho\in{\cal D}({\cal N})$, we can use the triangle inequality for the max-relative entropy to obtain: \begin{align*} & D_{\max}\big(E_{1*}\circ E_{2*}(\rho)\|E_{1*}\circ E_{2*}(\eta)\big) \\ & ~~~~~~~~~~~~~~~~~~~~~~~~~ \leq D_{\max}\big(E_{1*}\circ E_{2*}(\rho)\|E_{1*}\circ E_{2*}(\mathcal{P}_{\mathcal{M}}(\rho))\big) +D_{\max}\big(E_{1*}\circ E_{2*}(\mathcal{P}_{\mathcal{M}}(\rho))\|E_{1*}\circ E_{2*}(\eta)\big)\\ & ~~~~~~~~~~~~~~~~~~~~~~~~~ \leq d_1 + D_{\max}\big(\mathcal{P}_{\mathcal{M}}(\rho)\|\eta\big)\,, \end{align*} where the second inequality follows from the data processing inequality for $D_{\max}$. Then \[D_{\max}\big(\mathcal{P}_{\mathcal{M}}(\rho)\|\eta\big)\leq\max_{i\in I_{\mathcal{M}}}D_{\max}\big(\mathcal{P}_{\mathcal{M}}(\rho)^{(i)}\|\eta^{(i)}\big)\,,\] where we write $A^{(i)}:=P_i\,A\,P_i$ for any $A\in{\cal B}({\cal H})$. This last $D_{\max}$ is exactly $I_{\max}\big( {\cal H}_i:{\cal K}_i \big)_{\rho^{(i)}}$ after minimizing over $\eta$. We are left with proving the two separate bounds on $d_1$ and $d_2$ respectively. The first bound is a simple consequence of the data processing inequality for $D_{\max}$ and the Pinching inequality. The second bound is a consequence of Lemma B.7 in \cite{berta2011quantum}. \end{proof} \begin{remark} In the case of a classical evolution over a classical system, taking $\eta=\mathcal{P}_{\rho_{\mathcal{M}}}(\rho)$ shows that $d=0$ in \Cref{eq_theo_AT_pinching}, and thus we get back the strong approximate tensorization of \cite{cesi2001quasi}. The estimation of the constant $c$ under a condition of clustering of correlations is discussed in the next section. \end{remark} \begin{example}[Pinching onto different bases]\label{example2} Take ${\cal H}={\mathbb{C}}^\ell$ and assume that the algebra ${\cal N}_1$ is the diagonal algebra in some orthonormal basis $|e^{(1)}_k\rangle$, whereas ${\cal N}_2$ is the diagonal algebra in the basis $|e^{(2)}_k\rangle$. Moreover, choose ${\mathcal{M}}$ to be the trivial algebra $\mathbb{C}{\mathds{1}}_\ell$. Hence for each $i\in\{1,2\}$, $E_i$ denotes the Pinching map onto the diagonal $\operatorname{span}\left\lbrace |e_k^{(i)}\rangle\langle e_k^{(i)}|\right\rbrace$ and $E^{\mathcal{M}}=\frac{{\mathds{1}}}{\ell}\mathop{\rm Tr}\nolimits[\cdot]$.
Then, for any $X\ge0$: \begin{align*} \|(E_1\circ E_2-E^{\mathcal{M}})(X)\|_\infty&=\Big\|\sum_{k,k'}|e^{(1)}_k\rangle\langle e^{(1)}_k|e^{(2)}_{k'}\rangle\langle e^{(2)}_{k'}| X| e^{(2)}_{k'}\rangle\langle e^{(2)}_{k'}|e^{(1)}_k\rangle\langle e^{(1)}_k|-\frac{1}{\ell}\,|e^{(1)}_k\rangle\langle e^{(1)}_k|\,\langle e^{(2)}_{k'}|X|e^{(2)}_{k'}\rangle\Big\|_\infty\\ &=\Big\|\sum_{k,k'}\,\big( |\langle e^{(1)}_k|e^{(2)}_{k'}\rangle |^2-\frac{1}{\ell} \big)|e^{(1)}_k\rangle\langle e^{(1)}_k|\,\langle e^{(2)}_{k'}| X| e^{(2)}_{k'}\rangle\Big\|_\infty\\ &\le\max_{k}\,\sum_{k'}\,\Big| |\langle e^{(1)}_k|e^{(2)}_{k'}\rangle |^2-\frac{1}{\ell} \Big| \,\langle e^{(2)}_{k'}| X| e^{(2)}_{k'}\rangle\\ &\le \varepsilon\,\frac{1}{\ell}\mathop{\rm Tr}\nolimits[X] \end{align*} where $\varepsilon:=\ell \,\max_{k,k'}\,\Big| |\langle e^{(1)}_k|e^{(2)}_{k'}\rangle |^2-\frac{1}{\ell} \Big|$. Hence \begin{align*} \|(E_1\circ E_2-E^{\mathcal{M}}):\mathbb{L}_1(\ell^{-1}{\mathds{1}})\to \mathbb{L}_\infty\|\le \varepsilon \,, \end{align*} so that by choosing $\eta=\rho=\mathcal{P}_{\rho_{\mathcal{M}}}(\rho)$ in Theorem \ref{theo_AT_pinching}, as long as $2\varepsilon<1$, for any $\rho\in {\cal D}(\mathbb{C}^\ell)$, AT($(1-2\varepsilon)^{-1},0$) holds: \begin{align}\label{our-bound} D(\rho\| \ell^{-1}{\mathds{1}})\le \frac{1}{1-2\varepsilon}\,(D(\rho\|E_{1*}(\rho))+D(\rho\|E_{2*}(\rho)))\,. \end{align} This result is related to Example 4.5 of \cite{nick2019scoopingpaper}. There, the author obtains an inequality that can be rewritten in the following form: \begin{align}\label{nick-bound} D(\rho\| \ell^{-1}{\mathds{1}})\le 4 \left( \ln_{1-\delta} \left( \frac{2}{3 \ell + 5} \right) + 1 \right) \,(D(\rho\|E_{1*}(\rho))+D(\rho\|E_{2*}(\rho)))\,, \end{align} where $\delta$ here is related to $\varepsilon$ in our example by: \begin{equation*} \delta \geq \frac{1}{\ell} (1- \varepsilon). \end{equation*} The approximate tensorization derived in (\ref{our-bound}) can be used to derive exponential convergence in relative entropy of the primitive quantum Markov semigroup $\mathrm{e}^{t{\cal L}}$, where \begin{align*} {\cal L}(X):=E_1(X)+E_2(X)-2X\,. \end{align*} Indeed, for any state $\rho\in{\cal D}({\cal H})$, denoting by $\rho_t$ the evolved state $\mathrm{e}^{t{\cal L}}(\rho)$ up to time $t$, the fact that $D(\rho_t\|\ell^{-1}{\mathds{1}})\le \mathrm{e}^{-\alpha t}D(\rho\|\ell^{-1}{\mathds{1}})$ holds for some $\alpha >0$ is equivalent to the so-called modified logarithmic Sobolev inequality, which can be written for all $\rho\in{\cal D}({\cal H})$ by \cite[Lemma 3.4]{junge2019stability} as \begin{align*} \alpha D(\rho\|\ell^{-1}{\mathds{1}})\le D(\rho\|E_{1*}(\rho))+D(E_{1*}(\rho)\|\rho)+D(\rho\|E_{2*}(\rho))+D(E_{2*}(\rho)\|\rho)\,. \end{align*} By positivity of the relative entropy, it suffices to prove the existence of a constant $\alpha>0$ such that \begin{align*} \alpha D(\rho\|\ell^{-1}{\mathds{1}})\le D(\rho\|E_{1*}(\rho))+D(\rho\|E_{2*}(\rho))\,. \end{align*} This last inequality is equivalent to (\ref{our-bound}) for $\alpha=1-2\varepsilon$. \end{example} \subsection{Application: tightened entropic uncertainty relations} In this subsection, we use the results obtained above to derive some applications concerning entropic uncertainty relations.
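Before doing so, let us briefly note that the two-basis bound (\ref{our-bound}) is easy to probe numerically. The following minimal Python sketch (purely illustrative: the dimension, the choice of a slightly rotated Fourier basis as second basis -- so that the two bases are close to, but not exactly, mutually unbiased -- and the seed are arbitrary) computes $\varepsilon$ and tests (\ref{our-bound}) on a random full-rank state:
\begin{verbatim}
import numpy as np
from scipy.linalg import expm

rng = np.random.default_rng(7)
l = 4
F = np.exp(2j * np.pi * np.outer(np.arange(l), np.arange(l)) / l) / np.sqrt(l)
H = rng.standard_normal((l, l)) + 1j * rng.standard_normal((l, l))
V = expm(0.05j * (H + H.conj().T) / 2) @ F  # columns: nearly unbiased basis
eps = l * np.max(np.abs(np.abs(V) ** 2 - 1.0 / l))
assert 2 * eps < 1

def relent(r, s):                  # D(r||s) = Tr[r (log r - log s)]
    lr, vr = np.linalg.eigh(r)
    ls, vs = np.linalg.eigh(s)
    logr = vr @ np.diag(np.log(lr)) @ vr.conj().T
    logs = vs @ np.diag(np.log(ls)) @ vs.conj().T
    return float(np.real(np.trace(r @ (logr - logs))))

def pinch(r, B):                   # pinching in the basis of the columns of B
    return B @ np.diag(np.diag(B.conj().T @ r @ B)) @ B.conj().T

A = rng.standard_normal((l, l)) + 1j * rng.standard_normal((l, l))
rho = A @ A.conj().T
rho /= np.trace(rho).real
lhs = relent(rho, np.eye(l) / l)
rhs = (relent(rho, pinch(rho, np.eye(l)))
       + relent(rho, pinch(rho, V))) / (1 - 2 * eps)
print(eps, lhs, rhs, bool(lhs <= rhs + 1e-9))
\end{verbatim}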
Let us first recall that, given a function $f\in\mathbb{L}_2(\mathbb{R})$ and its Fourier transform ${\mathcal{F}}[f]$ with $\|f\|_{\mathbb{L}_2(\mathbb{R})}=\|{\mathcal{F}}[f]\|_{\mathbb{L}_2(\mathbb{R})}=1$, Weyl proved in \cite{weyl1950theory} the following uncertainty relation: \begin{align}\label{uncertainty} V(|f|^2)\,V(|{\mathcal{F}}[f]|^2)\ge \frac{1}{16\pi^2}\,, \end{align} where, given a probability distribution function $g$, $V(g)$ denotes its variance. The uncertainty inequality means that $|f|^2$ and $|{\mathcal{F}}[f]|^2$ cannot both be concentrated arbitrarily close to their corresponding means. An entropic strengthening of (\ref{uncertainty}) was derived independently by Hirschman \cite{hirschman1957note} and Stam \cite{stam1959some}, and tightened later on by Beckner \cite{beckner1975inequalities}: \begin{align*} H(|f|^2)+H(|{\mathcal{F}}[f]|^2)\ge \ln\frac{\mathrm{e}}{2}\,, \end{align*} where $H(g):=-\int_{\mathbb{R}} \,g(x)\ln g(x)\,dx$ stands for the differential entropy functional. In the quantum mechanical setting, this inequality has the interpretation that the total amount of uncertainty, as quantified by the entropy, of non-commuting observables (e.g. the position and momentum of a particle) is uniformly lower bounded by a positive constant independently of the state of the system. For an extensive review of entropic uncertainty relations for classical and quantum systems, we refer to the recent survey \cite{coles2017entropic}. More generally, given two POVMs $\mathbf{X}:= \{X_x\}_{x}$ and $\mathbf{Y}:=\{Y_y\}_{y}$ on a quantum system $A$, and in the presence of side information $M$ that might help to better predict the outcomes of $\mathbf{X}$ and $\mathbf{Y}$, the following state-dependent tightened bound was found in \cite{frank2013extended} (see also \cite{berta2010uncertainty} for the special case of measurements in two orthonormal bases and \cite{maasen1988uncertainty} for the case without memory): for any bipartite state $\rho_{AM}\in{\cal D}({\cal H}_A\otimes {\cal H}_M)$, \begin{align}\label{uncert} S(X|M)_{(\Phi_{\mathbf{X}}\otimes {\rm{id}}_M)(\rho)}+S(Y|M)_{(\Phi_{\mathbf{Y}}\otimes {\rm{id}}_M)(\rho)}\ge- \ln c'+S(A|M)_\rho\,, \end{align} with $c'=\max_{x,y}\{\mathop{\rm Tr}\nolimits(X_x\,Y_y)\}$, where $\Phi_{\mathbf{Z}}$ denotes the quantum-classical channel corresponding to the measurement $\mathbf{Z}\in\{\mathbf{X},\mathbf{Y}\}$: \begin{align*} \Phi_{\mathbf{Z}}(\rho_A):=\sum_{z}\,\mathop{\rm Tr}\nolimits(\rho_A Z_z)\,|z\rangle\langle z|_Z\,. \end{align*} The above inequality has been recently extended to the setting where the POVMs are replaced by two arbitrary quantum channels in \cite{gao2018uncertainty}. In this section, we restrict ourselves to the setting of \cite{berta2010uncertainty}, so that the measurement channels reduce to the Pinching maps of \Cref{example2}. First of all, we notice that the relation (\ref{uncert}) easily follows from \Cref{corollaruweak}, as we show below. \begin{example}\label{ex:example3} Let ${\cal H}_{AM}={\cal H}_A \otimes {\cal H}_M$ be a bipartite system and, as in the case of Example \ref{example2}, assume that the algebra ${\cal N}_1$ is the diagonal algebra in some orthonormal basis $|e^{(\mathcal{X})}_x\rangle$ of ${\cal H}_A $, whereas ${\cal N}_2$ is the diagonal algebra in the basis $|e^{(\mathcal{Y})}_y\rangle$, also of ${\cal H}_A $. Moreover, choose ${\mathcal{M}}$ to be the algebra $\mathbb{C}{\mathds{1}}_A \otimes \mathcal{B}({\cal H}_M)$.
Hence for each alphabet $\mathcal{Z}\in\{\mathcal{X},\mathcal{Y}\}$, $E_\mathcal{Z}$ denotes the Pinching map onto the diagonal $\operatorname{span}\left\lbrace |e_z^{(\mathcal{Z})}\rangle\langle e_z^{(\mathcal{Z})}|\right\rbrace$, which we tensorize with the identity map on $M$, and $E^{\mathcal{M}} \otimes {\rm{id}}_M=\frac{1}{d_A}{\mathds{1}}_A \otimes\mathop{\rm Tr}\nolimits_A[\cdot] $. Then, for every $\rho_{AM} \in \mathcal{D}({\cal H}_{AM})$, \begin{align*} S(X|M)_{(E_{\mathcal{X}}\otimes {\rm{id}}_M)(\rho_{AM})} & = - D\left( (E_{\mathcal{X}}\otimes {\rm{id}}_M)(\rho_{AM}) \Big| \Big| \frac{{\mathds{1}}_A}{d_A} \otimes \rho_M \right) +\ln d_A \\ & = D(\rho_{AM} || (E_{\mathcal{X}}\otimes {\rm{id}}_M)(\rho_{AM})) - D(\rho_{AM} || (E^{\mathcal{M}}\otimes {\rm{id}}_M)(\rho_{AM})) +\ln d_A, \end{align*} where the last equality is derived from \cite[Lemma 3.4]{junge2019stability}. Hence, since \begin{equation*} D(\rho_{AM} || (E^{\mathcal{M}}\otimes {\rm{id}}_M)(\rho_{AM})) = - S(A| M)_{\rho_{AM}} + \ln d_A , \end{equation*} by virtue of Corollary \ref{corollaruweak} we have \begin{align*} S(X|M)_{(E_{\mathcal{X}}\otimes {\rm{id}}_M)(\rho_{AM})} &+ S(Y|M)_{(E_{\mathcal{Y}}\otimes {\rm{id}}_M)(\rho_{AM})} \\ & \geq D(\rho_{AM} || (E_{\mathcal{X}}\otimes {\rm{id}}_M)(\rho_{AM}))+ D(\rho_{AM} || (E_{\mathcal{Y}}\otimes {\rm{id}}_M)(\rho_{AM}))\\ & \; \; \; \; - 2 D(\rho_{AM} || (E^{\mathcal{M}}\otimes {\rm{id}}_M)(\rho_{AM})) + 2 \ln d_A\\ & \geq S(A|M)_{\rho_{AM}} - d + \ln d_A , \end{align*} where \begin{equation*} d:=\sup_{\rho\in{\cal D}({\cal N}_{2})}\inf\big\{\ln(\lambda)|\,E_{1*}(\rho)\le \lambda\eta\,\text{ for some }\,\eta\in{\cal D}({\mathcal{M}}) \big\} . \end{equation*} Now, taking into account the computations of Example \ref{example2}, notice that \begin{equation*} d= \ln \Big( d_A \, \underset{x, \, y}{\text{max}}\, |\langle e^{(\mathcal{X})}_x|e^{(\mathcal{Y})}_{y}\rangle |^2 \Big) , \end{equation*} thus obtaining expression (\ref{uncert}). \end{example} However, close to the completely mixed state, this inequality is not tight whenever $\mathbf{X}$ and $\mathbf{Y}$ are not mutually unbiased bases (i.e. whenever there exist $x\in{\cal X},y\in\mathcal{Y}$ such that $|\langle e^{(\mathcal{X})}_x|e^{(\mathcal{Y})}_y\rangle|^2\neq\frac{1}{d_A}$). Here, we derive the following strengthening of \Cref{uncert} when $d_M=1$ as a direct consequence of \Cref{theo_AT_pinching}: \begin{corollary}\label{cor:2pinching} Given a finite alphabet $\mathcal{Z}\in \{\mathcal{X},\mathcal{Y}\}$, let $E_{\mathcal{Z}}$ denote the Pinching channel onto the orthonormal basis $\{|e^{(\mathcal{Z})}_z\rangle\}_{z\in\mathcal{Z}}$ corresponding to the measurement $\mathbf{Z}$. Assume further that $c_1= d_A\max_{x,y}\big| |\langle e^{(\mathcal{X})}_x|e^{(\mathcal{Y})}_y\rangle|^2-\frac{1}{d_A} \big|<\frac{1}{2}$. Then the following strengthened entropic uncertainty relation holds for any state $\rho\in{\cal D}({\cal H}_{A})$: \begin{align} S(X)_{E_{\mathcal{X}}(\rho)}+S(Y)_{E_{\mathcal{Y}}(\rho)}\ge (1+2c_1)\,S(A)_\rho+(1-2c_1)\ln d_A\,. \end{align} \end{corollary} \begin{proof} Following the first lines of Example \ref{ex:example3} for $d_M=1$, we have \begin{equation*} S(X)_{E_{\mathcal{X}}(\rho)}+ S(Y)_{E_{\mathcal{Y}}(\rho)} = D(\rho || E_{\mathcal{X}}(\rho)) + D(\rho || E_{\mathcal{Y}}(\rho)) - 2 D(\rho || E^{\mathcal{M}}(\rho)) + 2 \ln d_A , \end{equation*} where $E^{\mathcal{M}}=\frac{{\mathds{1}}}{d_A}\mathop{\rm Tr}\nolimits[\cdot]$.
Then, by virtue of Theorem \ref{theo_AT_pinching}, and choosing $\eta=\rho=\mathcal{P}_{\rho_{\mathcal{M}}}(\rho)$, we have: \begin{equation*} S(X)_{E_{\mathcal{X}}(\rho)}+ S(Y)_{E_{\mathcal{Y}}(\rho)} \geq (-1 - 2c_1) \, D(\rho || E^{\mathcal{M}}(\rho)) + 2 \ln d_A . \end{equation*} To conclude, just notice that \begin{equation*} D(\rho || E^{\mathcal{M}}(\rho)) = - S(A)_{\rho} + \ln d_A \, . \end{equation*} \end{proof} Analogously, we can study the case of three different orthonormal bases (see \cite{berta2010uncertainty}). For that, let us recall that given von Neumann subalgebras $ {\cal N}_1, {\cal N}_2, {\cal N}_3 \subset {\cal N}$ and ${\mathcal{M}} \subset {\cal N}_1 \cap {\cal N}_2 \cap {\cal N}_3$, if we consider their associated conditional expectations $E_i$ with respect to a state $\sigma$, and for each pair $({\cal N}_i, {\cal N}_j)$ a result of AT($c_{ij}, d_{ij}$) holds, then for every $\rho \in {\cal D}({\cal N})$: \begin{equation} D(\rho || E_*^{\mathcal{M}} (\rho)) \leq \frac{2}{3} \, \underset{i, \, j \in \qty{1,2,3}}{\text{max}} \qty{c_{ij}} \left( \, D(\rho || E_{1*} (\rho) ) + D(\rho || E_{2*} (\rho) ) + D(\rho || E_{3*} (\rho) ) \, \right) + \frac{d_{12} + d_{13} + d_{23}}{3}. \end{equation} \begin{corollary}\label{cor:3pinching} Given a finite alphabet $I \in \{\mathcal{X},\mathcal{Y}, \mathcal{Z}\}$, and using the same notation as in Corollary \ref{cor:2pinching}, assume that $$c_1= d_A \underset{I \neq J \in \qty{\mathcal{X}, \mathcal{Y}, \mathcal{Z}}}{\text{max}} \left| |\langle e^{(I)}_i|e^{(J)}_j\rangle|^2-\frac{1}{d_A} \right|<\frac{1}{2} .$$ Then the following strengthened entropic uncertainty relation holds for any state $\rho\in{\cal D}({\cal H}_{A})$, \begin{align*} S(X)_{E_{\mathcal{X}}(\rho)}+S(Y)_{E_{\mathcal{Y}}(\rho)}+S(Z)_{E_{\mathcal{Z}}(\rho)}\ge (2+2c_1)\,S(A)_\rho+(1-2c_1)\ln d_A\,. \end{align*} \end{corollary} \subsection{Clustering of correlations}\label{sec:clustering} The constant $c_1:=\max_{i\in I_{\mathcal{M}}}\|E_{1}^{(i)}\circ E_{2}^{(i)}-(E^{\mathcal{M}})^{(i)}:\,\mathbb{L}_1(\tau_i)\to\mathbb{L}_\infty\|$ appearing in \Cref{theo_AT_pinching} provides a bound on the following covariance-type quantity: for any $i\in I_{\mathcal{M}}$ and any $X, Y\in \mathbb{L}_1(\tau_i)$, \begin{align} \operatorname{Cov}_{\mathbb{C}{\mathds{1}}_{{\cal K}_i},\tau_i}(E_1^{(i)}[X],E_2^{(i)}[Y])&:=\langle E_1^{(i)}[X]-(E^{\mathcal{M}})^{(i)}[X],\,E_2^{(i)}[Y]-(E^{\mathcal{M}})^{(i)}[Y]\rangle_{\tau_i}\nonumber\\ &\le \,c_1\,\|X\|_{\mathbb{L}_1(\tau_i)}\,\|Y\|_{\mathbb{L}_1(\tau_i)}\,.\label{condL1sclust} \end{align} We will call the above property \textit{conditional $\mathbb{L}_1$ clustering of correlations}, and denote it by $\operatorname{cond\mathbb{L}}_1(c_1)$. Conversely, one can show by duality of $\mathbb{L}_p$-norms that if $\operatorname{cond\mathbb{L}}_1(c_1')$ holds for some positive constant $c_1'$, then $c_1\le c_1'$: for all $i\in I_{\mathcal{M}}$ \begin{align*} \|E_{1}^{(i)}\circ E_{2}^{(i)}-(E^{\mathcal{M}})^{(i)}:\,\mathbb{L}_1(\tau_i)\to\mathbb{L}_\infty\|&=\sup_{\|X\|_{\mathbb{L}_1(\tau_i)}\le 1}\,\|E_{1}^{(i)}\circ E_{2}^{(i)}[X]-(E^{\mathcal{M}})^{(i)}[X]\|_\infty\\ &=\sup_{\|X\|_{\mathbb{L}_1(\tau_i)},\,\|Y\|_{\mathbb{L}_1(\tau_i)}\le 1}\,\langle Y,\,E_1^{(i)}\circ E_2^{(i)}[X]-(E^{\mathcal{M}})^{(i)}[X]\rangle_{\tau_i}\\ &=\sup_{\|X\|_{\mathbb{L}_1(\tau_i)},\,\|Y\|_{\mathbb{L}_1(\tau_i)}\le 1}\,\langle E_1^{(i)}[Y]-(E^{\mathcal{M}})^{(i)}[Y],\, E_2^{(i)}[X]-(E^{\mathcal{M}})^{(i)}[X]\rangle_{\tau_i}\\ &\le c_1'\,.
\end{align*} In \cite{Kastoryano2014}, the authors introduced a different notion of clustering of correlations in order to show the positivity of the spectral gap of Gibbs samplers\footnote{We formulate it in our general framework of finite-dimensional $*$-algebras.}. \begin{definition}\label{defL2strong} We say that ${\mathcal{M}}\subset {\cal N}_1,{\cal N}_2\subset {\cal N}$ satisfies \emph{strong $\mathbb{L}_2$ clustering of correlations} with respect to the state $\sigma\in{\cal D}({\mathcal{M}})$ with constant $c_{2}>0$ if for all $X,Y\in{\cal N}$, \begin{align}\label{L2_clust} \operatorname{Cov}_{{\mathcal{M}},\sigma}(E_1[X],E_2[Y])\le \,c_{2}\, \|X\|_{\mathbb{L}_2(\sigma)}\,\|Y\|_{\mathbb{L}_2(\sigma)}\,. \end{align} Equivalently, $\|E_1\circ E_2-E^{\mathcal{M}}:\,\mathbb{L}_2(\sigma)\to \mathbb{L}_2(\sigma)\|\le c_2$. \end{definition} \Cref{defL2strong} does not depend on the state $\sigma\in {\cal D}({\mathcal{M}})$ chosen. This is the content of the next theorem. \begin{theorem}\label{theo_L2clust} Let ${\mathcal{M}}\subset {\cal N}_1,{\cal N}_2\subset{\cal N}\subset {\cal B}({\cal H})$ be von Neumann subalgebras of the algebra ${\cal B}({\cal H})$ so that ${\cal N}_1 \cap {\cal N}_2 \neq \emptyset$. Then, for any two states $\sigma,\sigma'\in{\cal D}({\mathcal{M}})$: \begin{equation*} \sup_{X\in{\cal N}}\,\frac{\operatorname{Cov}_{{\mathcal{M}},\,\sigma}(E_1[X],E_2[X])}{\|X\|_{\mathbb{L}_2(\sigma)}^2} = \sup_{X\in{\cal N}}\,\frac{\operatorname{Cov}_{{\mathcal{M}},\,\sigma'}(E_1[X],E_2[X])}{\|X\|_{\mathbb{L}_2(\sigma')}^2} \end{equation*} \end{theorem} We first prove a technical lemma. \begin{lemma}\label{lemma:L2clustering-invariant} Given a conditional expectation $E:{\cal N}\to{\mathcal{M}}\subset {\cal N}\subset {\cal B}({\cal H})$ for which two different full-rank states $\rho$ and $\sigma$ are invariant, the following holds: \begin{align*} \Gamma_\rho^{1/2}\circ E\circ \Gamma_{\rho}^{-1/2}=\Gamma_\sigma^{1/2}\circ E\circ \Gamma_\sigma^{-1/2} \end{align*} \end{lemma} \begin{proof}[Proof of \Cref{lemma:L2clustering-invariant}] Since we are in finite dimension, the von Neumann algebra ${\mathcal{M}}$ takes the following form: \begin{align*} {\mathcal{M}}=\bigoplus_i\,{\cal B}({\cal H}_i)\otimes{\mathds{1}}_{{\cal K}_i}\,, \end{align*} for some decomposition ${\cal H}:=\bigoplus_i\,{\cal H}_i\otimes{\cal K}_i$ of ${\cal H}$. Therefore, since $\rho$ and $\sigma$ are invariant states of $E$, they can be decomposed as follows: \begin{align*} \rho=\bigoplus_{i}\, \rho_i\otimes\tau_i\,,~~~ \sigma=\bigoplus_{i}\, \sigma_i\otimes\tau_i\,, \end{align*} for given positive definite operators $\sigma_i$, $\rho_i$ and where $\tau_i$ is given by ${\mathds{1}}_{\mathcal{K}_i}/d_{\mathcal{K}_i}$. Hence, \begin{align*} \rho^{-1/4}\sigma^{1/4}=\bigoplus_i\,\rho_i^{-1/4}\sigma_i^{1/4}\otimes{\mathds{1}}_{{\cal K}_i}\in{\mathcal{M}}. \end{align*} Then, by the module property (ii) of \Cref{propositioncondexp}, the following string of identities holds for all $Y\in{\cal B}({\cal H})$: \begin{align*} \rho^{-1/4}\,\sigma^{1/4}\,E\big[\sigma^{-1/4}\rho^{1/4}\,Y\,\rho^{1/4}\sigma^{-1/4}\big]\,\sigma^{1/4}\rho^{-1/4}&=E\big[\rho^{-1/4}\,\sigma^{1/4}\sigma^{-1/4}\rho^{1/4}\,Y\,\rho^{1/4}\sigma^{-1/4}\sigma^{1/4}\rho^{-1/4}\big]\\ &=E[Y]\,. \end{align*} The result follows after choosing $Y=\rho^{-1/4}X\rho^{-1/4}$.
\end{proof} \begin{proof}[Proof of \Cref{theo_L2clust}] We begin by proving that the property of strong $\mathbb{L}_2$ clustering of correlations is independent of the invariant state, thanks to \Cref{lemma:L2clustering-invariant}. Indeed, if we choose $Y:= \Gamma_\sigma^{-1/2}(X)$ and call $X':= \Gamma_{\sigma'}^{-1/2}(Y)$, it is clear that \begin{equation*} \norm{X}_{\mathbb{L}_2(\sigma)}^2= \norm{Y}_2^2 \; \; \text{ and } \; \;\norm{Y}_2^2 = \norm{X'}_{\mathbb{L}_2(\sigma')}^2. \end{equation*} Therefore, we have the following chain of identities: \begin{align*} \sup_{X\in{\cal N}} \, \frac{\operatorname{Cov}_{{\mathcal{M}},\,\sigma}(E_1[X],E_2[X])}{\|X\|_{\mathbb{L}_2(\sigma)}^2} & = \sup_{X\in{\cal N}} \, \frac{\langle X,\,E_1\circ E_2[X]-E^{\mathcal{M}}[X] \rangle_{\sigma} }{\|X\|_{\mathbb{L}_2(\sigma)}^2}\\ &=\sup_{Y\in{\cal N}} \, \frac{\langle \Gamma_{\sigma}^{-1/2}(Y),\,(E_1\circ E_2-E^{\mathcal{M}})[\Gamma_{\sigma}^{-1/2}(Y)] \rangle_{\sigma} }{\|Y\|_{2}^2}\\ &=\sup_{Y\in{\cal N}}\, \frac{\langle Y,\,\Gamma_{\sigma}^{1/2}\circ(E_1\circ E_2-E^{\mathcal{M}})\circ\Gamma_{\sigma}^{-1/2}(Y) \rangle_{\operatorname{HS}} }{\|Y\|_{2}^2}\\ &=\sup_{Y\in{\cal N}}\, \frac{\langle Y,\,\Gamma_{\sigma'}^{1/2}\circ(E_1\circ E_2-E^{\mathcal{M}})\circ\Gamma_{\sigma'}^{-1/2}(Y) \rangle_{\operatorname{HS}} }{\|Y\|_{2}^2}\\ &=\sup_{Y\in{\cal N}} \, \frac{\langle \Gamma_{\sigma'}^{-1/2}(Y),\,(E_1\circ E_2-E^{\mathcal{M}})[\Gamma_{\sigma'}^{-1/2}(Y)] \rangle_{\sigma'} }{\|Y\|_{2}^2}\\ &=\sup_{X'\in{\cal N}} \, \frac{\operatorname{Cov}_{{\mathcal{M}},\,\sigma'}(E_1[X'],E_2[X'])}{\|X'\|_{\mathbb{L}_2(\sigma')}^2}\,, \end{align*} where we have used Lemma \ref{lemma:L2clustering-invariant} in the fourth line. \end{proof} It is easy to see that the above notion of strong $\mathbb{L}_2$ clustering of correlations implies that of a \textit{conditional $\mathbb{L}_2$ clustering}, denoted by $\operatorname{cond\mathbb{L}_2}(c_2)$, simply defined by replacing the $\mathbb{L}_1$ norms by $\mathbb{L}_2$ norms in \Cref{condL1sclust}, or equivalently by assuming that \[ \max_{i\in I_{\mathcal{M}}}\|E_1^{(i)}\circ E_2^{(i)}-(E^{\mathcal{M}})^{(i)}:\,\mathbb{L}_2(\tau_i)\to \mathbb{L}_2(\tau_i)\|\le c_2\,. \] One can ask whether the converse holds. We prove it under the technical assumption that the composition of conditional expectations $E_1\circ E_2$ cancels off-diagonal terms in the decomposition of ${\mathcal{M}}$: \begin{equation}\label{eq_d1=0} E_{1*}\circ E_{2*}=E_{1*}\circ E_{2*}\circ\mathcal{P}_{\mathcal{M}}\,. \end{equation} We postpone to the next section a discussion on when this holds. \begin{theorem}\label{theo_transference} Assume that \Cref{eq_d1=0} holds. Then: \begin{enumerate} \item $d_1=0$ in \Cref{propboundsd1d2} and \item strong $\mathbb{L}_2$ clustering is equivalent to conditional $\mathbb{L}_2$ clustering. \end{enumerate} \end{theorem} \begin{proof} Point 1 is straightforward, so we focus on point 2. As already mentioned, strong $\mathbb{L}_2$ clustering implies $\operatorname{cond\mathbb{L}_2}(c_2)$, so we only need to prove the converse implication. Now assume that $\operatorname{cond\mathbb{L}_2}(c_2)$ holds with a constant $c_2$ and take $X\in{\cal N}$. We write $T=E_1\circ E_2-E^{\mathcal{M}}$.
Remark that, according to the decomposition of ${\mathcal{M}}$ given in \Cref{eq_decomp} and exploiting \Cref{eq_d1=0}, $T$ acts on $X$ as: \begin{equation}\label{eq_decomp_T} T(X)=\sum_{i\in I_{\mathcal{M}}}\,\left(\mathbb I_{{\cal B}({\cal H}_i)}\otimes T^{(i)}\right)(P_i\,X\,P_i)\,, \end{equation} where $T^{(i)}$ acts on ${\cal B}({\cal K}_i)$ and where the $P_i$ are the orthogonal projections on ${\cal H}_i\otimes{\cal K}_i$. Consider now the Hilbert-Schmidt decomposition of $P_iXP_i$ with respect to $({\cal B}({\cal H}_i),\langle\cdot|\cdot\rangle_{\sigma_i})$ and $({\cal B}({\cal K}_i),\langle\cdot|\cdot\rangle_{\tau_i})$: \[P_i\,X\,P_i=\sum_\alpha\,f_\alpha^{(i)}\otimes g_\alpha^{(i)}\,.\] Thus we have \[\| P_i\,X\,P_i\|_{\mathbb L_2(\sigma_i\otimes\tau_i)}^2=\sum_\alpha\,\|f_\alpha^{(i)}\|_{\mathbb L_2(\sigma_i)}^2\,\|g_\alpha^{(i)}\|_{\mathbb L_2(\tau_i)}^2\,,\] and therefore \begin{align*} \|T(X)\|_{\mathbb L_2(\sigma)}^2 & = \sum_{i\in I_{\mathcal{M}}}\,\|\left(\mathbb I_{{\cal B}({\cal H}_i)}\otimes T^{(i)}\right)(P_i\,X\,P_i)\|_{\mathbb L_2(\sigma_i\otimes\tau_i)}^2 \\ & = \sum_{i\in I_{\mathcal{M}}}\,\|\sum_\alpha f_\alpha^{(i)}\otimes T^{(i)}(g_\alpha^{(i)})\|_{\mathbb L_2(\sigma_i\otimes\tau_i)}^2 \\ & = \sum_{i\in I_{\mathcal{M}}}\,\sum_\alpha \|f_\alpha^{(i)}\|_{\mathbb L_2(\sigma_i)}^2\,\|T^{(i)}(g_\alpha^{(i)})\|_{\mathbb L_2(\tau_i)}^2 \\ & \leq c_2^2\, \sum_{i\in I_{\mathcal{M}}}\,\sum_\alpha \|f_\alpha^{(i)}\|_{\mathbb L_2(\sigma_i)}^2\,\|g_\alpha^{(i)}\|_{\mathbb L_2(\tau_i)}^2 \,=\, c_2^2\,\|\sum_{i\in I_{\mathcal{M}}}\,P_i\,X\,P_i\|_{\mathbb L_2(\sigma)}^2 \\ & \leq c_2^2\, \|X\|_{\mathbb L_2(\sigma)}^2\,, \end{align*} where in the third line we use that $(f_\alpha^{(i)})_\alpha$ is an orthogonal family for every $i\in I_{\mathcal{M}}$, and in the last one that the Pinching map $X\mapsto\sum_i P_i\,X\,P_i$ is a contraction on $\mathbb{L}_2(\sigma)$. This shows that \[\|E_1\circ E_2-E^{\mathcal{M}}\,:\,\mathbb{L}_2(\sigma)\to \mathbb{L}_2(\sigma)\|\leq c_2\,,\] which is equivalent to strong $\mathbb{L}_2$ clustering. \end{proof} Similarly to \Cref{defL2strong}, one could define a notion of \textit{strong $\mathbb{L}_1$ clustering of correlation} with respect to a state $\sigma\in{\cal D}({\mathcal{M}})$: \begin{align*} \|E_1\circ E_2-E^{\mathcal{M}}:\,\mathbb{L}_1(\sigma)\to \mathbb{L}_\infty({\cal N})\|\equiv c_1(\sigma)\,<\infty. \end{align*} This would in particular imply $\operatorname{cond\mathbb{L}_1}(c_1(\sigma))$. With this notion, and from an argument very similar to that of the proof of \Cref{theo_AT_pinching}, we could show the following bound on the error term in \Cref{propAT}: \begin{align*} \ln \left\lbrace \int_{-\infty}^\infty\,\mathop{\rm Tr}\nolimits\left[\rho_{1}\rho_{{\mathcal{M}}}^{\frac{-1-it}{2}}\rho_{2}\rho_{{\mathcal{M}}}^{\frac{-1+it}{2}} \right]\,\beta_0(t)\,dt \right\rbrace&\le 2\,\|E_{1}\circ E_2-E^{{\mathcal{M}}}:\,\mathbb{L}_1(\rho_{\mathcal{M}})\to\mathbb{L}_\infty\|\,D(\rho\|\rho_{\mathcal{M}})\,\\ &\equiv 2 \,c_1(\rho_{\mathcal{M}})\,D(\rho\|\rho_{\mathcal{M}})\,. \end{align*} From this, one would conclude a strong approximate tensorization result if one could find a uniform bound on $c_1(\sigma)$ for any $\sigma\in{\cal D}({\mathcal{M}})$. However, and as opposed to the case of strong $\mathbb{L}_2$ clustering, the constant $c_1(\sigma)$ depends on the state $\sigma$, and can in particular diverge: this is the case whenever there exists $i\in I_{\mathcal{M}}$ such that $\dim({\cal H}_i)>1$, and for a state $\sigma:=|\psi\rangle\langle\psi|_{{\cal H}_i}\otimes \tau_i$ that is pure on ${\cal H}_i$. This justifies our choice of $\operatorname{cond\mathbb{L}}_1$ as the better notion of $\mathbb{L}_1$ clustering in the quantum setting.
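To make these notions concrete, the following minimal numerical sketch (an illustration only, assuming Python with NumPy and SciPy; dimension and seed are arbitrary) tests the conditional $\mathbb{L}_1$ clustering bound (\ref{condL1sclust}) for self-adjoint observables in the two-Pinching setting of \Cref{example2}, where ${\mathcal{M}}=\mathbb{C}{\mathds{1}}$, $\tau={\mathds{1}}/\ell$, and $c_1$ is replaced by its upper bound $\varepsilon$ computed there:
\begin{verbatim}
import numpy as np
from scipy.linalg import qr

rng = np.random.default_rng(2)
l = 4
B1 = np.eye(l)                          # first Pinching basis
B2, _ = qr(rng.standard_normal((l, l))
           + 1j*rng.standard_normal((l, l)))  # second, generic basis

def pinch(X, B):                        # E_i: Pinching onto the basis B
    return sum(np.outer(B[:, k], B[:, k].conj()) @ X
               @ np.outer(B[:, k], B[:, k].conj()) for k in range(l))

def EM(X):                              # E^M = (Tr[X]/l) * identity
    return np.trace(X)/l * np.eye(l)

def L1norm(X):                          # ||X||_{L1(tau)} with tau = id/l
    return np.sum(np.linalg.svd(X, compute_uv=False)) / l

c1 = l*max(abs(abs(np.vdot(B1[:, k], B2[:, kp]))**2 - 1/l)
           for k in range(l) for kp in range(l))

M1 = rng.standard_normal((l, l)) + 1j*rng.standard_normal((l, l))
M2 = rng.standard_normal((l, l)) + 1j*rng.standard_normal((l, l))
X, Y = M1 + M1.conj().T, M2 + M2.conj().T   # self-adjoint test observables
# Cov_{C1,tau}(E_1[X], E_2[Y]) computed with the tau-KMS inner product
cov = np.trace((pinch(X, B1) - EM(X)).conj().T
               @ (pinch(Y, B2) - EM(Y))) / l
assert abs(cov) <= c1 * L1norm(X) * L1norm(Y) + 1e-9
\end{verbatim}
Since $\varepsilon$ upper bounds the $\mathbb{L}_1(\tau)\to\mathbb{L}_\infty$ norm of $E_1\circ E_2-E^{\mathcal{M}}$, the tested inequality is a (slightly weaker) instance of (\ref{condL1sclust}).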
\begin{remark} As a consequence of the previous discussion, we see that the condition assumed in \cite{Kastoryano2014} of strong $\mathbb{L}_2$ clustering of correlation with respect to one invariant state, to prove positivity of the spectral gap for the Davies dynamics, would be analogous to assuming strong $\mathbb{L}_1$ clustering of correlation with respect to any invariant state. If we could reduce the problem mentioned above to finding a bound on $c_1(\sigma)$ for only one $\sigma$, we would also be able to prove positivity of the MLSI for the Davies dynamics from here, although from the discussion above we know that this is not possible in general. \end{remark} \section{Conditional expectations on fixed-points of Markovian evolution}\label{sec:condexp} In this section, we consider conditional expectations arising from Petz recovery maps and from Davies generators. The latter model the dynamics resulting from the weak coupling limit of a system in contact with a large heat-bath. The main result, \Cref{coro_equal_cond}, states that the corresponding conditional expectations coincide. \subsection{Conditional expectations generated by a Petz recovery map} Here, we further discuss the notion of conditional expectations that we will consider in the case of Gibbs states on spin systems. The discussion is largely inspired by some results in \cite{carlen2017recovery}. Let $\sigma$ be a faithful density matrix on the finite-dimensional algebra ${\cal N}$ and let ${\mathcal{M}}\subset{\cal N}$ be a subalgebra. We denote by $E_\tau$ the conditional expectation onto ${\mathcal{M}}$ with respect to the completely mixed state (i.e. $E_\tau$ is self-adjoint with respect to the Hilbert-Schmidt inner product). We also adopt the following notation: we write $\sigma_{\mathcal{M}}=E_\tau(\sigma)$ and \[\mathcal A_\sigma(X):=\sigma_{\mathcal{M}}^{-\frac12}\,E_\tau[\sigma^{\frac12}\,X\,\sigma^{\frac12}]\,\sigma_{\mathcal{M}}^{-\frac12}\,.\] Remark that $\mathcal A_\sigma$ is also the unique map such that for all $X\in{\cal N}$ and all $Y\in{\mathcal{M}}$: \[\mathop{\rm Tr}\nolimits[\sigma^{\frac12}\,X\,\sigma^{\frac12}\,Y]=\mathop{\rm Tr}\nolimits[\sigma_{\mathcal{M}}^{\frac12}\,\mathcal A_\sigma(X)\,\sigma_{\mathcal{M}}^{\frac12}\,Y]\,.\] The adjoint of $\mathcal A_\sigma$ is the Petz recovery map of $E_\tau$ with respect to $\sigma$, denoted by ${\cal R}_\sigma$: \begin{align*} \mathcal{R}_{\sigma}(\rho_{\mathcal{M}}):=\sigma^{\frac{1}{2}}\sigma_{\mathcal{M}}^{-\frac{1}{2}}\rho_{\mathcal{M}}\sigma_{\mathcal{M}}^{-\frac{1}{2}}\sigma^{\frac{1}{2}}\,, \end{align*} where $\rho_{\mathcal{M}}:=E_\tau(\rho)$. It is proved in \cite{carlen2017recovery} that $\mathcal A_\sigma$ is a conditional expectation if and only if $\sigma\,X\,\sigma^{-1}\in{\mathcal{M}}$ for all $X\in{\mathcal{M}}$. In the general case, we denote by \begin{equation}\label{eq_cond_Petz} E_\sigma:=\lim_{n\to\infty}\mathcal A_\sigma^n \end{equation} the projection on its fixed-point algebra for the $\sigma$-KMS inner product, which is a conditional expectation as we assumed $\sigma$ to be faithful. That is, $E_\sigma$ is the orthogonal projection for $\langle\cdot,\cdot\rangle_\sigma$ on the algebra: \[{\mathcal{F}}(\mathcal A_\sigma)=\{X\in{\cal N}\,;\,\mathcal A_\sigma(X)=X\}\,.\] \begin{example}\label{ex_bipartite} Our main example is the case of a bipartite system $AB$. In this case, ${\cal N}={\cal B}({\cal H}_{AB})$ and ${\mathcal{M}}={\mathds{1}}_{{\cal H}_A}\otimes{\cal B}({\cal H}_{B})$.
Let $\sigma=\sigma_{AB}$ be a faithful density matrix on $AB$. The conditional expectation $E_\tau$ associated with the partial trace over ${\cal H}_A$ is an example of a conditional expectation which is not compatible with $\sigma_{AB}$, in general. With this choice, we obtain: \begin{align*} & \sigma_{\mathcal{M}}=\sigma_B\,,\\ & \mathcal A_{\sigma_{AB}}(X)=\sigma_{B}^{-\frac12}\,\mathop{\rm Tr}\nolimits_A[\sigma_{AB}^{\frac12}\,X\,\sigma_{AB}^{\frac12}]\,\sigma_{B}^{-\frac12}\,,\qquad\forall X\in{\cal B}({\cal H}_{AB})\, , \\ & {\cal R}_{\sigma_{AB}}(\rho_B)=\sigma_{AB}^{\frac12}\sigma_{B}^{-\frac12}\,\rho_B\,\sigma_{B}^{-\frac12}\sigma_{AB}^{\frac12}\,,\qquad\forall \rho_{B}\in{\cal D}({\cal H}_{B})\,, \end{align*} where here we identify an operator $X_B$ with ${\mathds{1}}_A\otimes X_B$ for the sake of simplicity. An important remark is that, in general, $E_{\sigma_{AB}*}$ is not a recovery map. \end{example} We are now ready to state a first technical proposition, whose content is mostly contained in \cite{carlen2017recovery}. \begin{proposition}\label{prop_condexp} Let $\rho$ be a density matrix on ${\cal N}$. Then the following assertions are equivalent: \begin{enumerate} \item $D(\rho\|\sigma)=D(\rho_{\mathcal{M}}\|\sigma_{\mathcal{M}})$; \item $\rho={\cal R}_\sigma(\rho_{\mathcal{M}})$; \item $\rho=E_{\sigma*}(\rho)$; \item $D(\rho\|E_{\sigma*}(\rho))=0$; \item $D(\rho\|\sigma)=D(E_{\sigma*}(\rho)\|\sigma)$. \end{enumerate} \end{proposition} Remark that $(1)\Leftrightarrow(2)$ is Petz's condition for equality in the data processing inequality. The equivalence $(3)\Leftrightarrow(4)$ is obvious, and $(4)\Leftrightarrow(5)$ is a consequence of Lemma 3.4 in \cite{junge2019stability}: \begin{equation}\label{very_nice_lemma} D(\rho\|\sigma)-D(E_{\sigma*}(\rho)\|\sigma)=D(\rho\|E_{\sigma*}(\rho))\,. \end{equation} We shall now give a direct proof of $(2)\Leftrightarrow(3)$. \begin{proof}[Proof of \Cref{prop_condexp}] We only prove $(2)\Leftrightarrow(3)$. Note that for $X\in{\cal N}$, by definition $X=\mathcal A_\sigma(X)$ iff $X=E_\sigma(X)$. Then let $\rho$ be a density matrix on ${\cal N}$ and define $X=\sigma^{-\frac12}\,\rho\,\sigma^{-\frac12}$. We have: \begin{align*} \rho={\cal R}_\sigma(\rho_{\mathcal{M}}) & \Leftrightarrow X=\mathcal A_\sigma(X) \\ & \Leftrightarrow X=E_\sigma(X) \\ & \Leftrightarrow \rho= \sigma^{\frac12}\,X\,\sigma^{\frac12} = \sigma^{\frac12}\,E_\sigma(X)\,\sigma^{\frac12}=E_{\sigma*}(\sigma^{\frac12}\,X\,\sigma^{\frac12})=E_{\sigma*}(\rho)\,. \end{align*} where in the last line we use property (iii) of Proposition \ref{propositioncondexp}. \end{proof} It would be interesting to compare the two notions of ``conditional'' relative entropies $D(\rho\|\sigma)-D(\rho_{\mathcal{M}}\|\sigma_{\mathcal{M}})$ (introduced in \cite{[CLP18],[CLP18a],BardetCapelLuciaPerezGarciaRouze-HeatBath1DMLSI-2019}) and $D(\rho\|E_{\sigma*}(\rho))$. This is the content of the following proposition. \begin{proposition}\label{prop_compar_cond} For any state $\eta\in{\cal D}({\cal N})$ such that $E_{\sigma*}(\eta)=\eta$ and any state $\rho\in{\cal D}({\cal N})$, we have \begin{equation}\label{eq_prop_compar_cond1} D(\rho\|\sigma)-D(\rho_{\mathcal{M}}\|\sigma_{\mathcal{M}})=D(\rho\|\eta)-D(\rho_{\mathcal{M}}\|\eta_{\mathcal{M}})\,, \end{equation} i.e. the difference of relative entropies does not depend on the choice of the invariant state for $E_\sigma$. Consequently, \begin{equation}\label{eq_prop_compar_cond2} D(\rho\|\sigma)-D(\rho_{\mathcal{M}}\|\sigma_{\mathcal{M}})\leq D(\rho\|E_{\sigma*}(\rho))\,.
\end{equation} \end{proposition} \begin{proof} \Cref{eq_prop_compar_cond2} is a direct consequence of \Cref{eq_prop_compar_cond1} when applied to $\eta=E_{\sigma*}(\rho)$, so we focus on the first equation (remark that it can be seen as a counterpart of \Cref{very_nice_lemma} for the difference of relative entropies). To this end, we need the following state $\sigma_{\mathop{\rm Tr}\nolimits}$ defined in \cite{BarEID17} and heavily exploited in \cite{bardet2018hypercontractivity}: \[\sigma_{\mathop{\rm Tr}\nolimits}=E_{\sigma*}\Big(\frac{\mathds 1}{d_{\cal H}}\Big)\,.\] It has the property that for all $X\in{\mathcal{F}}(\mathcal A_\sigma)$, $[X,\sigma_{\mathop{\rm Tr}\nolimits}]=0$ (see Lemma 3.1 in \cite{BarEID17}). Then it is enough to prove that for all $\eta\in{\cal D}({\cal N})$ such that $E_{\sigma*}(\eta)=\eta$, we have: \[D(\rho\|\sigma_{\mathop{\rm Tr}\nolimits})-D(\rho_{\mathcal{M}}\|(\sigma_{\mathop{\rm Tr}\nolimits})_{\mathcal{M}})=D(\rho\|\eta)-D(\rho_{\mathcal{M}}\|\eta_{\mathcal{M}})\,.\] Now any such $\eta$ can be written $\eta=X\sigma_{\mathop{\rm Tr}\nolimits}$ with $X\in{\mathcal{F}}(\mathcal A_\sigma)$. Remark that by definition of ${\mathcal{F}}(\mathcal A_\sigma)$, $X\in{\mathcal{M}}$ so that $E_\tau(\eta)=X E_\tau(\sigma_{\mathop{\rm Tr}\nolimits})$. Using the commutation between $X$ and $\sigma_{\mathop{\rm Tr}\nolimits}$ and developing the RHS of the previous equation we get the result. \end{proof} \subsection{Davies semigroups}\label{subsec:clustering-of-correlations} Let ${\cal H}$ be the Hilbert space representing a quantum system and let $H$ be a self-adjoint operator on it, representing the Hamiltonian of the system. The corresponding Gibbs state at inverse temperature $\beta$ is defined as \begin{align} \sigma=\frac{\mathrm{e}^{-\beta H}}{\mathop{\rm Tr}\nolimits[\mathrm{e}^{-\beta H}]}\,. \end{align} Next, consider the Hamiltonian $H^{\operatorname{HB}}$ of the heat bath, as well as a set of system-bath interactions $\{ S_{\alpha}\otimes B_{\alpha} \}$, for some label $\alpha$. Here, we do not assume anything about the $S_\alpha$'s. The Hamiltonian of the universe composed of the system and its heat-bath is given by \begin{align} H^{\operatorname{tot}}=H+H^{\operatorname{HB}}+\sum_{\alpha}S_{\alpha}\otimes B_{\alpha}\,. \end{align} Assuming that the bath is in a Gibbs state, by a standard argument (e.g. weak coupling limit, see \cite{[SL78]}), the evolution on the system can be approximated by a quantum Markov semigroup whose generator is of the following form: \begin{align}\label{eq_lindblad} {\cal L}^{\operatorname{D},\beta}(X)=\sum_{\omega,\alpha}\,\chi^{\beta}_{\alpha}(\omega)\,\Big( S_{\alpha}^*(\omega)XS_{\alpha}(\omega)-\frac{1}{2}\,\big\{ S_{\alpha}^*(\omega)S_{\alpha}(\omega),X \big\} \Big)\,. \end{align} The Fourier coefficients of the two-point correlation functions of the environment $\chi_{\alpha}^\beta$ satisfy the following KMS condition: \begin{align}\label{eq_KMS} \chi_{\alpha}^\beta(-\omega)=\mathrm{e}^{-\beta\omega}\,\chi_{\alpha}^\beta(\omega)\,. \end{align} The operators $S_{\alpha}(\omega)$ are the Fourier coefficients of the system couplings $S_{\alpha}$, which means that they satisfy the following equation for any $t\in\mathbb{R}$: \begin{align}\label{eq!} \mathrm{e}^{-itH}\,S_{\alpha}\mathrm{e}^{it H}=\sum_\omega\mathrm{e}^{it\omega}S_{\alpha}(\omega)\,\qquad\Leftrightarrow \qquad S_\alpha(\omega)=\sum_{\varepsilon-\varepsilon'=\omega}P_\varepsilon\,S_\alpha\,P_{\varepsilon'}\,.
\end{align} where the sum runs over a finite number of frequencies and $P_\varepsilon$ denotes the spectral projection of $H$ associated with the eigenvalue $\varepsilon$. This implies in particular the following useful relation: \begin{align}\label{eq_eigenvector1} \Delta_{\sigma}(S_{\alpha}(\omega))=\mathrm{e}^{\beta\omega}\,S_{\alpha}(\omega)\,. \end{align} The above identity means that the operators $S_{\alpha}(\omega)$ form a basis of eigenvectors of $\Delta_\sigma$. Next, we define the conditional expectation onto the algebra ${\mathcal{F}}({\cal L})$ of fixed points of ${\cal L}$ with respect to the Gibbs state $\sigma=\sigma^\beta$ as follows \cite{Kastoryano2014}: \begin{align}\label{Davies_cond} E^{\operatorname{D},\beta}:=\lim_{t\to \infty}\mathrm{e}^{t{\cal L}^{\operatorname{D},\beta}}\,. \end{align} Our first result is a characterization of the fixed-point algebra in the Davies case. \begin{proposition}\label{prop_fixedpoint_Davies} One has \begin{equation}\label{eq_prop_fixedpoint_Davies} {\mathcal{F}}({\cal L}^{\operatorname{D},\beta})=\{\sigma^{it}\,S_\alpha\,\sigma^{-it}\,;\,t\in\mathbb{R},\,\forall \alpha\}'\,, \end{equation} where the notation $\{ \cdot \}'$ denotes the commutant of the set. \end{proposition} \begin{proof} We recall that ${\mathcal{F}}({\cal L}^{\operatorname{D},\beta})=\{S_\alpha(\omega)\}'$. Hence, since $\sigma^{it}S_\alpha \sigma^{-it}$ can be expressed as a linear combination of the $S_\alpha(\omega)$'s by \Cref{eq!}, it directly follows that \begin{align*} {\mathcal{F}}({\cal L}^{\operatorname{D},\beta})\subseteq \{\sigma^{it}\,S_\alpha\,\sigma^{-it}\,;\,t\in\mathbb{R}\}' \end{align*} To prove the opposite direction, we let $X\in \{\sigma^{it}\,S_\alpha\,\sigma^{-it}\,;\,t\in\mathbb{R}\}'$. This means in particular that, for all $t\in\mathbb{R}$, and all $\alpha$: \begin{align}\label{eq:sumomega} [X, \sigma^{it}S_{\alpha}\sigma^{-it}]=\sum_{\omega}\,\mathrm{e}^{it\omega}\,[X,S_{\alpha}(\omega)]=0\,. \end{align} Since the equation holds for all $t\in\mathbb{R}$, we can differentiate it $n$ times at $t=0$, for any $0\le n\le N-1$, where $N\equiv|\{\omega\}|$ denotes the number of distinct frequencies, to get: \begin{align*} \sum_{\omega}\,\omega^n\,[X,S_\alpha(\omega)]=0\,. \end{align*} Using an arbitrary labelling of the $N$ distinct frequencies $\omega_1,...,\omega_N$, the resulting $N$ linear equations can be rewritten as \begin{align*} \begin{pmatrix} 1 & 1& 1 & \dots & 1 \\ \omega_1& \omega_2 &\omega_3 & \dots & \omega_N \\ \omega_1^2&\omega_2^2 &\omega_3^2 & \dots & \omega_N^2 \\ \hdotsfor{5} \\ \omega_1^{N-1} & \omega_2^{N-1} &\omega_3^{N-1} & \dots &\omega_N^{N-1} \end{pmatrix}\,\begin{pmatrix} [X,S_\alpha(\omega_1)]\\ [X,S_\alpha(\omega_2)]\\ [X,S_\alpha(\omega_3)]\\ \hdotsfor{1}\\ [X,S_\alpha(\omega_N)] \end{pmatrix}=0 \end{align*} Since all the frequencies $\omega_i$ are distinct, their Vandermonde matrix is invertible. Hence, $[X,S_\alpha(\omega)]=0$ for all $\omega$, so that $X\in{\mathcal{F}}({\cal L}^{\operatorname{D},\beta})$. \end{proof} Combining this result with a result from \cite{carlen2017recovery}, we can finally show that the conditional expectations in the Davies and the Petz cases are the same. \begin{theorem}\label{coro_equal_cond} Define the algebra ${\mathcal{M}}=\{S_\alpha\}'$. Define $E^{\operatorname{D},\beta}$ as above and $E_{\sigma}$ as in \Cref{eq_cond_Petz} with respect to the inclusion ${\mathcal{M}}\subset{\cal B}({\cal H})$. Then both conditional expectations are equal. \end{theorem} \begin{proof} First, we remark that both conditional expectations are self-adjoint with respect to the $\sigma$-KMS inner product.
Therefore, by uniqueness of the conditional expectation, it is enough to prove that ${\mathcal{F}}({\cal L}^{\operatorname{D},\beta})={\mathcal{F}}(\mathcal A_{\sigma})$. The analysis of the algebra ${\mathcal{F}}(\mathcal A_{\sigma})$ was carried out in \cite{carlen2017recovery}\footnote{Compared to \cite{carlen2017recovery}, the role of $\rho$ and $\sigma$ is exchanged. The result nevertheless stays the same, as can be readily checked from their proof.}. In particular, they proved (Theorem 3.3) that ${\mathcal{F}}(\mathcal A_{\sigma})$ is the largest $*$-subalgebra of ${\mathcal{M}}$ left-invariant by the modular operator. From this characterization, it is easy to see that ${\mathcal{F}}({\cal L}^{\operatorname{D},\beta})\subseteq {\mathcal{F}}({\cal A}_\sigma)$: indeed ${\mathcal{F}}({\cal L}^{\operatorname{D},\beta})=\{S_\alpha(\omega)\}'\subseteq \{S_\alpha\}'\equiv {\mathcal{M}}$. Moreover, for any $X\in {\mathcal{F}}({\cal L}^{\operatorname{D},\beta})$ \begin{align*} [\Delta_\sigma(X),S_\alpha(\omega)]&=\sigma\,X\,\sigma^{-1}\,S_\alpha(\omega)-S_\alpha(\omega)\sigma X\sigma^{-1}\\ &=\mathrm{e}^{-\beta\omega }\,\big( \sigma XS_{\alpha}(\omega)\sigma^{-1}-\sigma\,S_\alpha(\omega)X\sigma^{-1} \big)\\ &=\mathrm{e}^{-\beta\omega }\,\sigma\,[X,S_\alpha(\omega)]\,\sigma^{-1}\\ &=0\,. \end{align*} It remains to show that any $*$-subalgebra $\mathcal{V}$ of ${\mathcal{M}}$ which is invariant under $\Delta_\sigma$ is contained in ${\mathcal{F}}({\cal L}^{\operatorname{D},\beta})$. This directly follows from (\ref{eq_prop_fixedpoint_Davies}): since for any $X\in\mathcal{V}$ and $t\in\mathbb{R}$, $\Delta_\sigma^{-it}(X)\in\mathcal{V}\subseteq{\mathcal{M}}$, we have that \begin{align*} [X,\sigma^{it}\,S_\alpha\,\sigma^{-it}]&=X\,\sigma^{it}\,S_\alpha\,\sigma^{-it}-\sigma^{it}\,S_\alpha\,\sigma^{-it}X\\ &=\sigma^{it}\Delta_{\sigma}^{-it}(X)S_\alpha\sigma^{-it}-\sigma^{it}S_\alpha \Delta_{\sigma}^{-it}(X)\sigma^{-it}\\ &=\sigma^{it}\,[\Delta_{\sigma}^{-it}(X),S_\alpha]\,\sigma^{-it}\\ &=0\, \end{align*} and the result follows. \end{proof} \section{Introduction} In the last few decades, entropy has been proven to be a fundamental object in various fields of mathematics and theoretical physics. Its quantum analogue, the quantum relative entropy, characterizes the optimal rate at which two different states of a system can be discriminated when an arbitrary number of copies of the system is available. Given two states $\rho,\sigma$ of a finite-dimensional von Neumann algebra ${\cal N}\subset {\cal B}({\cal H})$, it is given by \begin{align*} D(\rho\|\sigma):= \mathop{\rm Tr}\nolimits[\rho\,(\ln\rho-\ln\sigma)]\,, \end{align*} whenever $\mathop{\rm supp}\nolimits(\rho)\subset\mathop{\rm supp}\nolimits(\sigma)$, where $\mathop{\rm Tr}\nolimits$ denotes the unnormalized trace on ${\cal B}({\cal H})$. When $\sigma:={\mathds{1}}_{{\cal H}}/d_{\cal H}$ is the completely mixed state of ${\cal B}({\cal H})$, the relative entropy can be written in terms of the von Neumann entropy $S(\rho):=-\mathop{\rm Tr}\nolimits[\rho\ln\rho]$ of the state $\rho$: \begin{align*} D(\rho\|{\mathds{1}}_{\cal H}/d_{\cal H})=-S(\rho)+\ln(d_{\cal H})\,.
\end{align*} Probably the most fundamental property of entropy is the following \textit{strong subadditivity} inequality (SSA) \cite{lieb1973SSA}: given a tripartite system ${\cal H}_{ABC}:={\cal H}_A\otimes {\cal H}_B\otimes {\cal H}_C$, and a state $\rho\equiv \rho_{ABC}$ on ${\cal H}_{ABC}$, the following holds \begin{align}\tag{SSA} S(\rho_{ABC})+S(\rho_B)\le S(\rho_{AB})+S(\rho_{BC})\,, \end{align} where for any subsystem $D$ of $ABC$, $\rho_{D}:=\mathop{\rm Tr}\nolimits_{D^c}[\rho_{ABC}]$ denotes the marginal state on $D$. Restated in terms of the quantum relative entropy, SSA takes the following form: \begin{align}\label{SSA} D\left(\rho_{ABC}\Big\|\rho_B\otimes \frac{{\mathds{1}}_{AC}}{d_{{\cal H}_{AC}}}\right)\le D\left(\rho_{ABC}\Big\| \rho_{AB}\otimes \frac{{\mathds{1}}_{C}}{d_{{\cal H}_C}} \right)+D\left(\rho_{ABC}\Big\| \rho_{BC}\otimes \frac{{\mathds{1}}_{A}}{d_{{\cal H}_{A}}} \right)\,. \end{align} In the present paper, we consider the following more general framework: let ${\mathcal{M}}\subset {\cal N}_1,{\cal N}_2\subset{\cal N}$ be four von Neumann subalgebras of the algebra of linear operators acting on a finite-dimensional Hilbert space ${\cal H}$, and let $E^{\mathcal{M}},E_1,E_2$ be corresponding conditional expectations onto ${\mathcal{M}},{\cal N}_1,{\cal N}_2$, respectively. When the quadruple $({\mathcal{M}},{\cal N}_1,{\cal N}_2,{\cal N})$ forms a \textit{commuting square}, that is when $E_{1}\circ E_2=E_2\circ E_1=E^{\mathcal{M}}$, the following generalization of SSA holds: for any state $\rho$ on ${\cal N}$, \begin{align}\label{eqcoarsegrain} D(\rho\|E^{{\mathcal{M}}}_*(\rho))\le D(\rho\| E_{1*}(\rho))+D(\rho\|E_{2*}(\rho))\,, \end{align} where the maps $E^{\mathcal{M}}_*,E_{1*}$, $E_{2*}$ are the Hilbert-Schmidt duals of $E^{\mathcal{M}}, E_{1}, E_2$, also known as \textit{coarse-graining maps} \cite{petz2008}. One can easily recover the previous (SSA) inequality from (\ref{eqcoarsegrain}) by taking ${\cal N}\equiv{\cal B}({\cal H}_{ABC})$, and the coarse graining maps to be the partial traces onto the subalgebras ${\cal N}_1\equiv{\cal B}({\cal H}_{AB}) $, ${\cal N}_2\equiv {\cal B}({\cal H}_{BC})$ and ${\mathcal{M}}\equiv {\cal B}({{\cal H}_B})$. In the context of interacting lattice spin systems, conditional expectations arising e.g. from the large time limit of a dissipative evolution on subregions of the lattice generally do not satisfy the commuting square assumption. In this case, approximate versions of SSA were found in the classical setting (i.e. when all the algebras are commutative) and when ${\mathcal{M}}\equiv \mathbb{C}{\mathds{1}}_{{\cal H}}$ \cite{cesi2001quasi}. These inequalities, termed \textit{approximate tensorization of the relative entropy} (also known in the literature as \textit{quasi-factorization of the relative entropy}, \cite{cesi2001quasi}, \cite{[CLP18a]}, \cite{BardetCapelLuciaPerezGarciaRouze-HeatBath1DMLSI-2019}), take the following form \begin{align*} D(\rho\|\sigma)\le \frac{1}{1-2c}\,\big(D(\rho\|E_{1*}(\rho))+D(\rho\|E_{2*}(\rho))\big)\,, \end{align*} where $\sigma:= E^{{\mathcal{M}}}_*(\rho)$ and $c:=\|E_{1}\circ E_{2}-E^{\mathcal{M}}:\,\mathbb{L}_1(\sigma)\to \mathbb{L}_\infty({\cal N})\|$ is a constant that measures the distance from being a commuting square for the quadruple $({\mathcal{M}},{\cal N}_1,{\cal N}_2,{\cal N})$. Typically, $c=0$ at infinite temperature, and remains small for conditional expectations onto far apart regions and at high enough temperature.
Such an inequality was recently generalized to the quantum setting in \cite{[CLP18],[CLP18a],BardetCapelLuciaPerezGarciaRouze-HeatBath1DMLSI-2019}. In \cite{gao2017strong}, a different extension of SSA in the case of noncommuting squares was proposed, with an additive error term that measures the distance from being a commuting square. \paragraph{Main result:}In this paper, we take one step further and prove a \textit{weak approximate tensorization} for the relative entropy, which amounts to the existence of positive constants $c\geq 1$ and $d\ge 0$ such that (see \Cref{theo_AT_pinching} )\footnote{The definition of (strong) approximate tensorization recently arose in the paper \cite{nick2019scoopingpaper}, where it was coined as ``adjusted subadditivity of relative entropy''. As explained by the author himself, this definition was already present in an earlier draft of our present article, which we had shared with him (see also the recently published thesis \cite{thesisangela}). Furthermore, the techniques that we introduce here are different from his, and more in line with the classical literature on the subject.} \begin{align}\tag{$\operatorname{AT(c,d)}$} D(\rho\|E^{\mathcal{M}}_*(\rho))\le c\,\big(D(\rho\|E_{1*}(\rho))+D(\rho\|E_{2*}(\rho))\big)+d\,. \end{align} As opposed to the classical setting, conditional expectations arising from dissipative evolutions on quantum lattice spin systems do not satisfy the commuting square condition in general, even at infinite temperature. This difference is exclusively due to the non-commutativity of the underlying algebras, which explains the introduction of the \textit{weak} constant $d$. Here, we estimate both constants $c$ and $d$ in terms of the interactions appearing in the Hamiltonian of the system. Unlike previous work on the subject, our inequality exactly reduces to that of \cite{cesi2001quasi} for commuting algebras. As mentioned previously, our main application of these inequalities is in the context of mixing times of quantum lattice spin systems, although we expect these inequalities and their proof techniques to find other applications in quantum information theory. In \cite{cesi2001quasi}, Cesi used his inequality in order to show the exponential convergence in relative entropy of classical Glauber dynamics on lattice systems towards equilibrium, independently of the lattice size. In a forthcoming paper, we will make use of the approximate tensorization inequality to show similar convergences for dissipative quantum Gibbs samplers. \paragraph{Outline of the paper} In \Cref{sec2}, we review basic mathematical concepts used in this paper, and more particularly the notion of a noncommutative conditional expectation. We derive theoretical expressions for the \textit{strong} ($c$) and \textit{weak} ($d$) constants for general von Neumann algebras in \Cref{sec:strong-quasi-tensorization}, where our main result is stated in \Cref{theo_AT_pinching}, and subsequently apply them to obtain strengthenings of uncertainty relations. In \Cref{sec:condexp}, we review the conditional expectations arising from Petz recovery maps and from Davies generators, and show in \Cref{coro_equal_cond} that both conditional expectations coincide. Finally, in \Cref{sec:dabies}, we derive explicit bounds on the constants $c$ and $d$ for conditional expectations associated to Gibbs samplers on lattice spin systems in terms of the interactions of the corresponding Hamiltonian.
\section{Notations and definitions}\label{sec2} In this section, we fix the basic notation used in the paper, and introduce the necessary definitions. \subsection{Basic notations} Let $({\cal H},\langle .|.\rangle)$ be a finite dimensional Hilbert space of dimension $d_{\cal H}$. We denote by ${\cal B}({\cal H})$ the Banach space of bounded operators on ${\cal H}$, by ${\cal B}_{\operatorname{sa}}({\cal H})$ the subspace of self-adjoint operators on ${\cal H}$, i.e. ${\cal B}_{\operatorname{sa}}({\cal H})=\left\{X\in{\cal B}({\cal H});\ X=X^*\right\}$, where the adjoint of an operator $Y$ is written as $Y^*$, and by ${\cal B}_+({\cal H})$ the cone of positive semidefinite operators on ${\cal H}$. We will also use the same notations ${\cal N}_{\operatorname{sa}}$ and ${\cal N}_+$ in the case of a von Neumann subalgebra ${\cal N}$ of ${\cal B}({\cal H})$. The identity operator on ${\cal N}$ is denoted by ${\mathds{1}}_{\cal N}$, dropping the index ${\cal N}$ when it is unnecessary. In the case of ${\cal B}(\mathbb{C}^\ell)$, $\ell\in\mathbb{N}$, we will also use the notation ${\mathds{1}}$ for ${\mathds{1}}_{\mathbb{C}^\ell}$. Similarly, given a map $\Phi:{\cal B}({\cal H})\to{\cal B}({\cal H})$, we denote its dual with respect to the Hilbert-Schmidt inner product as $\Phi_*$. We also denote by ${\rm{id}}_{{\cal B}({\cal H})}$, or simply ${\rm{id}}$, resp. ${\rm{id}}_\ell$, the identity superoperator on ${\cal B}({\cal H})$, resp. ${\cal B}(\mathbb{C}^\ell)$. We denote by $\mathcal{D}({\cal H})$ the set of positive semidefinite, trace-one operators on ${\cal H}$, also called \emph{density operators}, by ${\cal D}_+({\cal H})$ the subset of full-rank density operators, and by ${\cal D}_{\leq}({\cal H})$ the set of subnormalized states. In the following, we will often identify a density matrix $\rho\in\mathcal{D}({\cal H})$ and the \emph{state} it defines, that is the positive linear functional ${\cal B}({\cal H})\ni X\mapsto\mathop{\rm Tr}\nolimits(\rho \,X)$. \subsection{Entropic quantities and $\mathbb{L}_p$ spaces} Throughout this paper, we will use various distance measures between states and between observables. We collect them here for the sake of clarity: given a state $\rho\in{\cal D}({\cal N})$, its \textit{von Neumann entropy} is defined by \begin{align*} S(\rho):=-\mathop{\rm Tr}\nolimits\big[ \rho\,\ln\rho \big]\,. \end{align*} Next, when $\rho\equiv \rho_{AB}\in {\cal D}({\cal H}_A\otimes {\cal H}_B)$ is the state of a bipartite quantum system, its \textit{conditional entropy} is defined by \begin{align*} S(A|B)_\rho:=S(\rho_{AB})-S(\rho_B)\,, \end{align*} where $\rho_B:=\mathop{\rm Tr}\nolimits_A(\rho)$ corresponds to the marginal of $\rho$ over the subsystem ${\cal H}_B$. More generally, given two positive semidefinite operators $\rho,\sigma\in {\cal B}_+({\cal H})$, the \textit{relative entropy} between $\rho$ and $\sigma$ is defined as follows \cite{Umegaki-RelativeEntropy-1962}: \begin{align*} D(\rho\|\sigma):=\left\{\begin{aligned} &\mathop{\rm Tr}\nolimits[\rho\,(\ln\rho-\ln\sigma)] &&\text{if }\mathop{\rm supp}\nolimits(\rho)\subset\mathop{\rm supp}\nolimits(\sigma)\\ &+\infty &&\text{otherwise} \end{aligned}\right.
\end{align*} Moreover, given (possibly subnormalized) positive semidefinite operators $\rho\ge 0$ and $\sigma>0$, their \textit{max-relative entropy} is defined as \cite{datta2009minmaxrelativeentropies}: \begin{align*} D_{\max}(\rho\|\sigma):= \inf\{\lambda|\,\rho\le \mathrm{e}^{\lambda}\sigma \} \equiv\ln\,(\|\sigma^{-\frac{1}{2}}\,\rho\,\sigma^{-\frac{1}{2}}\|_\infty)\,. \end{align*} From the max-relative entropy, we can define the \textit{max-information} of a (possibly subnormalized) bipartite state $\rho_{AB}\in{\cal B}_+({\cal H}_A\otimes {\cal H}_B)$ as follows \cite{berta2011quantum}: \begin{align*} I_{\max}(A:B)_{\rho}\equiv I_{\max}({\cal H}_A:{\cal H}_B)_{\rho} :=\inf_{\tau_{B}\in{\cal D}({\cal H}_B)}\,D_{\max}(\rho_{AB}\|\rho_A\otimes \tau_B)\,. \end{align*} Given a subalgebra ${\cal N}$ of ${\cal B}({\cal H})$ and $\sigma\in{\cal D}_+({\cal N})$, we define the modular maps $\Gamma_\sigma:{\cal N}\to {\cal B}({\cal H})$ and $\Delta_\sigma:{\cal N}\to{\cal N}$ as follows \begin{align*} \Gamma_\sigma(X):=\sigma^{1/2}\,X\,\sigma^{1/2}\,,\qquad\Delta_\sigma(X):=\sigma\,X\,\sigma^{-1}\,. \end{align*} Then for any $p\ge 1$ and $X\in{\cal N}$, its non-commutative weighted $\mathbb{L}_p(\sigma)$-norm is defined as \cite{Kosaki-noncommLp-1984}: \begin{align*} \|X\|_{\mathbb{L}_p(\sigma)}:=\mathop{\rm Tr}\nolimits\big[ |\Gamma_\sigma^{\frac{1}{p}}(X)|^p \big]^{\frac{1}{p}}\, \end{align*} and $\|X\|_{\mathbb{L}_{\infty}(\sigma)}=\|X\|_\infty$, the operator norm of $X$, which we will often denote by $\| X \|$ too, dropping the subscript. We call the space ${\cal B}({\cal H})$ endowed with the norm $\|.\|_{\mathbb{L}_p(\sigma)}$ the \textit{quantum $\mathbb{L}_p(\sigma)$ space}. In the case $p=2$, we have a Hilbert space, with corresponding $\sigma$-KMS scalar product \begin{align}\label{KMSinner} \langle X,\,Y\rangle_\sigma:=\mathop{\rm Tr}\nolimits[\sigma^{1/2}X^*\sigma^{1/2}Y]\,. \end{align} Weighted $\mathbb{L}_p$ norms enjoy the following useful properties: \begin{itemize} \item[-] H\"{o}lder's inequality: for any $p,\hat{p}\ge 1$ such that $p^{-1}+\hat{p}^{-1}=1$, and any $X,Y\in{\cal N}$: \begin{align*} |\langle X,\,Y\rangle_\sigma|\le \|X\|_{\mathbb{L}_p(\sigma)}\,\|Y\|_{\mathbb{L}_{\hat{p}}(\sigma)}\,. \end{align*} Here, $\hat{p}$ is the \textit{H\"{o}lder conjugate} of $p$. \item[-] Duality of norms: for any $p\ge 1$ of H\"{o}lder conjugate $\hat{p}$, and any $X\in{\cal N}$: \begin{align*} \|X\|_{\mathbb{L}_p(\sigma)}=\sup_{\|Y\|_{\mathbb{L}_{\hat{p}}(\sigma)}\le 1}\,\langle Y,\,X\rangle_\sigma\,. \end{align*} \item[-] For any completely positive, unital linear map $\Phi:{\cal N}\to {\cal N}$ such that $\Phi_*(\sigma)=\sigma$, any $p\ge 1$ and any $X\in{\cal N}$: \begin{align}\label{Phiinvariantnorm} \|\Phi(X)\|_{\mathbb{L}_p(\sigma)}\le \|X\|_{\mathbb{L}_p(\sigma)}\,. \end{align} \end{itemize} \subsection{Conditional expectations} Here we introduce the main object studied in this paper: \begin{definition}[Conditional expectations~\cite{OhyaPetz-Entropy-1993}] Let ${\mathcal{M}}\subset {\cal N}$ be a von Neumann subalgebra of ${\cal N}$.
Given a state $\sigma\in{\cal D}_+({\mathcal{M}})$, a linear map $E:{\cal N}\to{\mathcal{M}}$ is called a \textit{conditional expectation} with respect to $\sigma$ of ${\cal N}$ onto ${\mathcal{M}}$ if the following conditions are satisfied: \begin{itemize} \item[-] For all $X\in{\cal N}$, $\|E[X]\|\le \|X\|$; \item[-] For all $X\in {\mathcal{M}}$, $E[X]=X$; \item[-] For all $X\in{\cal N}$, $\mathop{\rm Tr}\nolimits[\sigma E[X]]=\mathop{\rm Tr}\nolimits[\sigma X]$. \end{itemize} \end{definition} A conditional expectation satisfies the following useful properties (see \cite{Aspects2003} for proofs and more details): \begin{proposition}\label{propositioncondexp} Conditional expectations satisfy the following properties: \begin{itemize} \item[(i)] The map $E$ is completely positive and unital. \item[(ii)] For any $X\in{\cal N}$ and any $Y,Z\in{\mathcal{M}}$, $E[YXZ]=Y E[X]Z$. \item[(iii)] $E$ is self-adjoint with respect to the scalar product $\langle .,\,.\rangle_\sigma$. In other words: \begin{align*} \Gamma_\sigma\circ E=E_*\circ\Gamma_\sigma \,, \end{align*} where $E_*$ denotes the adjoint of $E$ with respect to the Hilbert-Schmidt inner product. \item[(iv)] $E$ commutes with the modular automorphism group of $\sigma$: for any $s\in\mathbb{R}$, \begin{align} \Delta_\sigma^{is}\circ E=E\circ \Delta^{is}_\sigma\,. \end{align} \item[(v)] Uniqueness: given a von Neumann subalgebra ${\mathcal{M}}\subset {\cal N}$ and a faithful state $\sigma$, the existence of a conditional expectation $E$ is equivalent to the invariance of ${\mathcal{M}}$ under the modular automorphism group $(\Delta_\sigma^{is})_{s\in\mathbb{R}}$. In this case, $E$ is uniquely determined by $\sigma$. \end{itemize} \end{proposition} With a slight abuse of notation, given a finite-dimensional von Neumann subalgebra ${\cal N}=E[{\cal B}({\cal H})]$ of ${\cal B}({\cal H})$, we denote by ${\cal D}({\cal N}):= E_{*}({\cal D}({\cal H}))$ its corresponding set of states that are invariant by $E$, so that ${\cal D}({\cal H})\equiv {\cal D}({\cal B}({\cal H}))$. Similarly, the set of subnormalized states on the algebra ${\cal N}$ is defined as ${\cal D}_{\le}({\cal N}) $. We also introduce the concept of a conditional covariance: given a von Neumann subalgebra ${\mathcal{M}}\subset {\cal N}$, a conditional expectation $E^{\mathcal{M}}$ from ${\cal N}$ onto ${\mathcal{M}}$ and a quantum state $\sigma\in{\cal D}_+({\mathcal{M}})$, where ${\cal D}({\mathcal{M}})$ is defined with respect to $E^{\mathcal{M}}$, we define the \textit{conditional covariance} functional as follows: for any two $X,Y\in{\cal N}$, \begin{align}\label{condcov} \operatorname{Cov}_{{\mathcal{M}},\sigma}(X,Y):=\langle X-E^{\mathcal{M}}[X],\,Y-E^{\mathcal{M}}[Y]\rangle_\sigma\,. \end{align} \subsection{Quantum Markov semigroups} The basic model for the evolution of an open system in the Markovian regime is given by a quantum Markov semigroup (or QMS) $(\mathcal{P}_t)_{t\ge0}$ acting on ${\cal B}({\cal H})$. Such a semigroup is characterised by its generator, called the Lindbladian $\mathcal{L}$, which is defined on ${\cal B}({\cal H})$ by ${\cal L}(X)={\lim}_{t\to 0}\,\frac{1}{t}\,(\mathcal{P}_t(X)-X)$ for all $X\in{\cal B}({\cal H})$.
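As a toy illustration connecting these notions, take ${\cal L}=E-{\rm{id}}$ for a conditional expectation $E$ given by a Pinching map. Since $E$ is idempotent, $\mathcal{P}_t=\mathrm{e}^{t{\cal L}}=\mathrm{e}^{-t}\,{\rm{id}}+(1-\mathrm{e}^{-t})\,E$, so that $\mathcal{P}_t(X)\to E[X]$ as $t\to\infty$. The following minimal sketch (an illustration only, assuming Python with NumPy and SciPy; dimension, time and seed are arbitrary choices) checks this convergence at the level of vectorized superoperators:
\begin{verbatim}
import numpy as np
from scipy.linalg import expm

l = 3
rng = np.random.default_rng(0)
X = rng.standard_normal((l, l)) + 1j*rng.standard_normal((l, l))

def E(Y):                     # Pinching onto the diagonal subalgebra
    return np.diag(np.diag(Y))

# row-major vectorized superoperators of E and of L = E - id
Emat = np.zeros((l*l, l*l))
for k in range(l):
    e = np.zeros((l, l)); e[k, k] = 1.0
    Emat += np.outer(e.flatten(), e.flatten())
Lmat = Emat - np.eye(l*l)

t = 30.0
Xt = (expm(t*Lmat) @ X.flatten()).reshape(l, l)   # P_t(X)
assert np.allclose(Xt, E(X), atol=1e-10)          # P_t(X) -> E[X]
\end{verbatim}
Note that ${\cal L}=E-{\rm{id}}$, with $E$ completely positive and unital, is indeed of the GKLS form recalled next.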
Recall that by the GKLS Theorem \cite{Lind,[GKS76]}, ${\cal L}$ takes the following form: for all $X\in{\cal B}({\cal H})$, \begin{equation}\label{eqlindblad} {\cal L}(X)=i[H,X]+\frac{1}{2}\sum_{k=1}^l{\left[2\,L_k^*XL_k-\left(L_k^*L_k\,X+X\,L_k^*L_k\right)\right]} \end{equation} where $H\in{\cal B}_{\operatorname{sa}}({\cal H})$, the sum runs over a finite number of \textit{Lindblad operators} $L_k\in{\cal B}({\cal H})$, and $[\cdot,\cdot]$ denotes the commutator defined as $[X,Y]:=XY-YX$, $\forall X,Y\in{\cal B}({\cal H})$. The QMS is said to be \textit{faithful} if it admits a full-rank invariant state $\sigma$. When the state $\sigma$ is the unique invariant state, the semigroup is called \textit{primitive}. Further assuming the self-adjointness of the generator ${\cal L}$ with respect to the inner product (\ref{KMSinner}), there exists a conditional expectation $E\equiv E_{\mathcal{F}}$ onto the fixed-point subalgebra ${\mathcal{F}}({\cal L}):=\{X\in{\cal B}({\cal H}):\,{\cal L}(X)=0\}$ such that \begin{align*} \mathcal{P}_t(X)\underset{t\to\infty}{\to}E[X]\,. \end{align*} \section{Lattice spin systems with commuting Hamiltonians}\label{sec:dabies} In this section, we aim at estimating the strong and weak constants appearing in \Cref{theo_AT_pinching} in the context of lattice spin systems, and compare them with previous conditions in the classical and quantum literature. Given a finite lattice $\Lambda \subset\subset \mathbb{Z}^d$, we define the tensor product Hilbert space ${\cal H}:={\cal H}_\Lambda\equiv\bigotimes_{k\in\Lambda}{\cal H}_k$, where for each $k\in\Lambda$, ${\cal H}_k\simeq \mathbb{C}^\ell$, $\ell\in\mathbb{N}$. Then, let $\Phi:\Lambda\to {\cal B}_{\operatorname{sa}}({\cal H}_{\Lambda}) $ be an $r$-local potential, i.e. for any $j\in \Lambda$, $\Phi(j)$ is self-adjoint and supported on a ball of radius $r$ around site $j$. We assume further that $\| \Phi(j) \|\le K$ for some constant $K<\infty$. The potential $\Phi$ is said to be a \textit{commuting potential} if for any $i,j\in\Lambda$, $[\Phi(i),\Phi(j)]=0$. Given such a local, commuting potential, the Hamiltonian on a subregion $A\subseteq \Lambda$ is defined as \begin{align} H_A=\sum_{j\in A}\,\Phi(j)\,. \end{align} Next, the Gibbs state corresponding to the region $A$ at inverse temperature $\beta$ is defined as \begin{align} \sigma_A=\frac{\mathrm{e}^{-\beta H_A}}{\mathop{\rm Tr}\nolimits[\mathrm{e}^{-\beta H_A}]}\,. \end{align} Note that this is in general not equal to the marginal $\mathop{\rm Tr}\nolimits_{A^c}[\sigma_\Lambda]$. \subsection{Heat-bath generators on lattice spin systems} Let $\Lambda \subset\subset \mathbb{Z}^d$ be a finite lattice and $\Phi:\Lambda\to {\cal B}_{\operatorname{sa}}({\cal H}_{\Lambda}) $ an $r$-local commuting potential. Denote by $\sigma$ the associated Gibbs state. Then, the \textit{heat-bath generator} is defined as \begin{equation} {\cal L}_\Lambda^{H} (X) := \underset{k \in \Lambda}{\sum} \left( {\cal A}_{k, \sigma} (X) -X \right) \end{equation} for every $X \in {\cal B}({\cal H}_\Lambda)$, where we are writing ${\cal A}_{k, \sigma}(X)=(\mathop{\rm Tr}\nolimits_k[\sigma])^{-1/2}\mathop{\rm Tr}\nolimits_k[\sigma^{1/2} X \sigma^{1/2}] (\mathop{\rm Tr}\nolimits_k[\sigma])^{-1/2}$ (see Section \ref{sec:condexp}). Note that the dual map of the first term of each summand above is a Petz recovery map for the partial trace. Similarly, for any $A \subseteq \Lambda$, we define ${\cal L}_A^{H}$, the generator on $A$, in which the summation runs over the sites $k \in A$.
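For concreteness, the following minimal sketch (an illustration only, assuming Python with NumPy and SciPy; the two-site Hamiltonian, inverse temperature and seed are arbitrary choices) implements the map ${\cal A}_{k,\sigma}$ for a two-qubit Gibbs state and checks numerically that it is unital and that $\sigma$ is invariant for it, i.e. $\mathop{\rm Tr}\nolimits[\sigma\,{\cal A}_{k,\sigma}(X)]=\mathop{\rm Tr}\nolimits[\sigma X]$:
\begin{verbatim}
import numpy as np
from scipy.linalg import expm, sqrtm, inv

rng = np.random.default_rng(3)
Sx = np.array([[0., 1.], [1., 0.]])
H = np.kron(Sx, Sx)              # toy 2-site (trivially commuting) Hamiltonian
beta = 0.7
sigma = expm(-beta*H)
sigma /= np.trace(sigma)

def ptr1(X):                     # partial trace Tr_k over the first site
    return X.reshape(2, 2, 2, 2).trace(axis1=0, axis2=2)

s_half = sqrtm(sigma)
m = inv(sqrtm(ptr1(sigma)))      # (Tr_k[sigma])^(-1/2)

def A_k(X):                      # A_{k,sigma}, embedded back into B(H_Lambda)
    return np.kron(np.eye(2), m @ ptr1(s_half @ X @ s_half) @ m)

X = rng.standard_normal((4, 4)) + 1j*rng.standard_normal((4, 4))
assert np.allclose(A_k(np.eye(4)), np.eye(4))                    # unitality
assert np.isclose(np.trace(sigma @ A_k(X)), np.trace(sigma @ X)) # invariance
\end{verbatim}
Its dual ${\cal A}_{k,\sigma*}$ is then the composition of the partial trace with the corresponding Petz recovery map, as noted above.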
In \cite{BardetCapelLuciaPerezGarciaRouze-HeatBath1DMLSI-2019}, we addressed the problem of proving the positivity of the MLSI constant associated to the heat-bath dynamics for quantum spin chains. We proved positivity of this constant, under the assumption of certain clustering of correlations conditions on the Gibbs state, via quasi-factorization results for the relative entropy, i.e. results of strong approximate tensorization in which the term on the left-hand side contains no conditional expectation (and is thus not a `conditional relative entropy'). The new notion of approximate tensorization AT$(c,d)$ introduced in this paper allows us to take a complementary approach to that problem, since now we are considering the conditional expectation $E_\sigma$ associated to the heat-bath generator, as opposed to \cite{BardetCapelLuciaPerezGarciaRouze-HeatBath1DMLSI-2019}, where we focused on the dual of ${\cal A}_{\sigma}$, i.e. the Petz recovery map, which is not a conditional expectation. Hence, we can now prove new results on AT$(c,d)$ that may help us take a further step in the study of the positivity of the MLSI. Before that, we need to prove the following result concerning the kernel of the generator. \begin{proposition}\label{prop:ker_gen_heatbath} For every $A \subseteq \Lambda$, the following holds: \begin{equation} \ker({\cal L}_A^{H}) = \{X \, : \, E_{A,\sigma}(X) = X\} \, , \end{equation} where $E_{A,\sigma}(X) = \underset{n \rightarrow \infty}{\operatorname{lim}} \mathcal{A}^n_{A,\sigma}(X) $, for ${\cal A}_{A, \sigma}(X)=(\mathop{\rm Tr}\nolimits_A[\sigma])^{-1/2}\mathop{\rm Tr}\nolimits_A[\sigma^{1/2} X \sigma^{1/2}] (\mathop{\rm Tr}\nolimits_A[\sigma])^{-1/2}$. \end{proposition} \begin{proof} By virtue of \cite[Theorem 6]{BardetCapelLuciaPerezGarciaRouze-HeatBath1DMLSI-2019}, we know that for $X \in {\cal B}({\cal H})$, \begin{equation} X= \mathcal{A}_{A, \sigma} (X) \, \, \Leftrightarrow \, \, X= \mathcal{A}_{k, \sigma} (X) \, \; \forall k \in A \,. \end{equation} We conclude by the equivalence (2) $\Leftrightarrow$ (3) in Proposition \ref{prop_condexp}. \end{proof} \subsection{Davies generators on lattice spin systems} Consider the Hamiltonian $H_\Lambda:=H^{\Sigma}_\Lambda$ of the system on the lattice $\Lambda$. Introduce also the Hamiltonian $H^{\operatorname{HB}}$ of the heat bath, as well as a set of system-bath interactions $\{ S_{\alpha,k}\otimes B_{\alpha,k} \}$, where $\alpha$ labels all the operators $S_{\alpha,k}$ and $B_{\alpha,k}$ associated to the site $k\in\Lambda$. Here, we assume that the operators $S_{\alpha,k}$ form an orthonormal basis of self-adjoint operators in ${\cal B}_{\operatorname{sa}}({\cal H})$ with respect to the Hilbert-Schmidt inner product (think of the qubit Pauli matrices). The Hamiltonian of the universe composed of the system and its heat-bath is given by \begin{align} H=H_\Lambda+H^{\operatorname{HB}}+\sum_{\alpha,k\in\Lambda}S_{\alpha,k}\otimes B_{\alpha,k}\,. \end{align} Assuming that the bath is in a Gibbs state, by a standard argument (e.g.
weak coupling limit, see \cite{[SL78]}), the evolution on the system can be approximated by a quantum Markov semigroup whose generator is of the following form: \begin{align}\label{totalgenerator} {\cal L}^{\operatorname{D},\beta}_\Lambda(X)=i[H_\Lambda,X]+\sum_{k\in\Lambda}\,{\cal L}^{\operatorname{D},\beta}_k(X)\,, \end{align} where \begin{align}\label{lindblad} {\cal L}^{\operatorname{D},\beta}_k(X)=\sum_{\omega,\alpha}\,\chi^{\beta}_{\alpha,k}(\omega)\,\Big( S_{\alpha,k}^*(\omega)XS_{\alpha,k}(\omega)-\frac{1}{2}\,\big\{ S_{\alpha,k}^*(\omega)S_{\alpha,k}(\omega),X \big\} \Big)\,. \end{align} Similarly, define the generator ${\cal L}^{\operatorname{D},\beta}_A$ by restricting the sum in \Cref{totalgenerator} to the sublattice $A$: \begin{align}\label{localgenerator} {\cal L}^{\operatorname{D},\beta}_A(X)=i[H_A,X]+\sum_{k\in A}\,{\cal L}^{\operatorname{D},\beta}_k(X)\,. \end{align} Note that ${\cal L}^{\operatorname{D},\beta}_A$ acts non-trivially on $A_\partial:=\{k\in\Lambda:\,d(k,A)\le r\}$. Then, for any region $A\subset \Lambda$, we define the conditional expectation onto the algebra ${\cal N}_A$ of fixed points of ${\cal L}_A$ with respect to the Gibbs state $\sigma=\sigma_\Lambda$ as follows \cite{Kastoryano2014}: given an adequate decomposition ${\cal H}_\Lambda:=\bigoplus_{i\in I_{{\cal N}_A}}\,{\cal H}_i\otimes {\cal K}_i$ of the total Hilbert space ${\cal H}_\Lambda$ of the lattice spin system, \begin{align}\label{Daviescond} E^{\operatorname{D},\beta}_A[X]:=\lim_{t\to \infty}\mathrm{e}^{t{\cal L}^{\operatorname{D},\beta}_A}(X)\equiv \sum_{i\in I_{{\cal N}_A}}\, \mathop{\rm Tr}\nolimits_{{\cal K}^A_i}(P^{A}_i ({\mathds{1}}_{{\cal H}_i^A}\otimes \sigma_i^A)\,X P^{A}_i) \otimes {\mathds{1}}_{{\cal K}^A_i}\,, \end{align} for some fixed full-rank states $\sigma_i^A$ on ${\cal K}_i^A$. It was shown in Lemma 11 of \cite{Kastoryano2014} that the generator of the Davies semigroup corresponding to a local commuting potential is \textit{frustration-free}. This means that the state $\sigma$ is invariant with respect to any ${\cal L}_A^{\operatorname{D},\beta}$, $A\subseteq \Lambda$. Therefore, the conditional expectations $E^{\operatorname{D},\beta}_A$ are all defined with respect to $\sigma$. \subsection{Preliminary results}\label{subsec:preliminaries} In Theorem \ref{coro_equal_cond}, we have proven that the conditional expectations arising from the Petz recovery map and from the Davies semigroups coincide and, as a consequence, the two examples of dissipative dynamics for quantum spin systems mentioned in the previous two subsections have the same associated conditional expectation. Indeed, note that the fixed points of the conditional expectation arising from the Petz recovery map coincide with the kernel of the heat-bath generator by Proposition \ref{prop:ker_gen_heatbath}. Hence, for the rest of the paper, all the results presented concern this conditional expectation. In Proposition \ref{prop_classical_inftyT}, we will show that approximate tensorization AT($1,0$) holds at infinite temperature for classical Hamiltonians. However, it is not clear that this remains true for non-classical commuting Gibbs states (indeed, we strongly believe that it does not), which is the reason behind the introduction of our notion of weak approximate tensorization. Nevertheless, we can still prove two interesting results in general. \begin{proposition}\label{prop_cond_ABdisjoint} Let $A,B\subset\Lambda$ be two regions separated by a distance of at least $2r$, that is, such that $A_\partial\cap B_\partial=\emptyset$.
Then ${\cal N}_A$ and ${\cal N}_B$ form a commuting square, that is, \begin{equation}\label{eq_prop_cond_ABdisjoint1} E_A^{\beta}\circ E_B^{\beta}=E_B^{\beta}\circ E_A^{\beta} = E_{A\cup B}^{\beta}\,. \end{equation} Consequently, for all $\rho\in{\cal D}({\cal H}_\Lambda)$, \begin{equation}\label{eq_prop_cond_ABdisjoint2} D\big(\rho\|E_{A\cup B*}^{\beta}(\rho)\big)\leq D\big(\rho\|E_{A*}^{\beta}(\rho)\big)+D\big(\rho\|E_{B*}^{\beta}(\rho)\big)\,. \end{equation} \end{proposition} \begin{proof} Remark that, by definition, the map ${\cal A}_{A,\sigma}$ acts non-trivially only on $A_\partial$ and as the identity on $(A_\partial)^c$. Consequently, as $E_A=\lim_{n\to\infty} {\cal A}_{A,\sigma}^n$, this property carries over to the conditional expectation and we have $E_A=E_A\otimes 1\hspace{-0.27em}\mathrm{l}_{{\cal H}_{A_\partial^c}}$ by a slight abuse of notation. Similarly, $E_B=E_B\otimes 1\hspace{-0.27em}\mathrm{l}_{{\cal H}_{B_\partial^c}}$. This shows the result since $A_\partial\cap B_\partial=\emptyset$. \end{proof} The belief that, in contrast with classical Hamiltonians, general commuting Gibbs states do not satisfy strong AT at infinite temperature leads us to consider weak AT instead, as suggested by the result in \cite{gao2017strong}. As already mentioned in the general setting of \Cref{corollaruweak}, they obtain the following weak AT for $\beta=0$: \begin{equation}\label{eq_weakAT_beta0} D\big(\rho\|E_{A\cup B*}^{\beta=0}(\rho)\big)\leq D\big(\rho\|E_{A*}^{\beta=0}(\rho)\big)+D\big(\rho\|E_{B*}^{\beta=0}(\rho)\big)+d\,, \end{equation} where $d=\sup_{\rho}\,\inf_{\eta} D_{\max}(E_{A*}^{\beta=0}\circ E_{B*}^{\beta=0}(\rho)\|E_{A\cup B*}^{\beta=0}(\eta))$. One can then wonder whether the only difference from the classical case comes from this defect at infinite temperature. The next proposition goes in this direction. To state it we need the following notation. Given a subset $A\subset\Lambda$ and using that $\sigma\propto e^{-\beta\,H_\Lambda}$ is an invariant state, we decompose the total Hamiltonian according to the fixed-point algebra of $E_A^\beta$: \[H_\Lambda=\sum_{i\in I_{{\cal N}_A}}\,H_i^A\otimes 1\hspace{-0.27em}\mathrm{l}_{{\cal K}_i^A}+1\hspace{-0.27em}\mathrm{l}_{{\cal H}_i^A}\otimes H_i^{\Lambda\backslash A}\,.\] We then denote by $H^{A\partial}$ the part of the Hamiltonian acting only on $A$ and its boundary: \[H^{A\partial}=\sum_{i\in I_{{\cal N}_A}}\,H_i^A\otimes 1\hspace{-0.27em}\mathrm{l}_{{\cal K}_i^A}\,.\] \begin{proposition}\label{prop_AT_HS} Define $d$ as above. Then for all $\rho\in{\cal D}({\cal H}_\Lambda)$, \begin{equation}\label{eq_prop_cond_ABdisjoint} D\big(\rho\|E_{A\cup B*}^{\beta}(\rho)\big)\leq \,e^{4\beta\|H^{A\partial}\|}\,\Big(D\big(\rho\|E_{A*}^{\beta}(\rho)\big)+D\big(\rho\|E_{B*}^{\beta}(\rho)\big)\Big)+e^{2\beta\|H^{A\partial}\|}\,d\,. \end{equation} \end{proposition} \begin{proof} The proof of this result follows the same steps as \Cref{HSAT}, but replacing $\lambda_{\max}(\sigma)$ and $1/\lambda_{\min}(\sigma)$ by $e^{2\beta\|H^{A\partial}\|}$, thanks to Proposition 4.2 in \cite{junge2019stability}. \end{proof} \begin{remark} This argument is similar to that of Lemma 2.2 of \cite{caputo2015approximate}. For the same reason as in the classical setting therein, the strong constant can however be very large, since it depends on the norm $\|H^{A\partial}\|$, which generally grows linearly with $|A|$.
For this reason, we will instead consider applying \Cref{theo_AT_pinching} in the next section, which will provide us with better bounds under the assumption of strong clustering of correlations. \end{remark} \subsection{Clustering of correlations on lattice spin systems}\label{subsubsec:L1-clustering-correlations} We recall the expression entering the definition of the constant $c_1$ appearing in \Cref{theo_AT_pinching}: given ${\cal N}_{A\cup B}\equiv E_{A\cup B}[{\cal N}_\Lambda]:=\bigoplus_{i\in I_{{\cal N}_{A\cup B}}}\,{\cal B}({\cal H}_i)\otimes {\mathds{1}}_{{\cal K}_i} $, \begin{align}\label{C2cond} c(A,B):=\max_{i\in I_{{\cal N}_{A\cup B}}}\|E_{A}^{(i)}\circ E_{B}^{(i)}-E_{A\cup B}^{(i)}:\,\mathbb{L}_1(\tau_i)\to\mathbb{L}_\infty({\cal N}_\Lambda)\|\,. \end{align} In the case of the classical Glauber dynamics that will be introduced in \Cref{classicalglauber}, and when the algebra ${\cal N}$ corresponds to the linear span of the rank-one projections onto the classical product basis $|\eta\rangle$, $\eta\in\Omega_\Lambda$, the Hilbert spaces ${\cal H}_i$ appearing in the decomposition of the algebra ${\mathcal{M}}$ are all one-dimensional. Moreover, the indices $i\in I_{{\cal N}_{A\cup B}}$ correspond to the set of boundary conditions $\omega\in\Omega_{\partial{A\cup B}}$ and the states $\tau_\omega$ correspond to the conditional Gibbs measures $\mu^\omega_{A\cup B} $ (see Section \ref{classicalglauber} for the notation). Hence $c(A,B)$ reduces to \begin{align*} c(A,B)^{\operatorname{cl}}:=\max_{\omega\in\Omega_{ (A\cup B)^c}} \,\|E_A^\omega\circ E_B^\omega-E_{A\cup B}^\omega: \,\mathbb{L}_1(\mu^\omega_{A\cup B})\to\mathbb{L}_\infty(\mu^\omega_{A\cup B})\| \,. \end{align*} In \cite{cesi2001quasi}, such norms were estimated: under the condition of \textit{complete analyticity} of Dobrushin and Shlosman \cite{[DS85],Dobrushin1985,[DS87]}, which characterizes the absence of a phase transition and hence is satisfied for one-dimensional systems, and in all dimensions above a critical temperature, Cesi showed that for any two subsets $A,B\subset \Lambda$, and any two positive constants $\kappa,\xi$ such that $|\partial B\cap (A\cup B)|\,\kappa\mathrm{e}^{-\xi\,d(B\backslash A,A\backslash B)}\le 1$, the following holds: \begin{align*} c(A,B)\le \kappa\mathrm{e}^{-\xi\, d(B\backslash A,A\backslash B)+1}\,. \end{align*} By H\"{o}lder's inequality, this bound in particular implies the so-called \textit{(conditional) clustering of correlations} (see Section \ref{sec:clustering}): for all $\omega\in \Omega_{(A\cup B)^c}$ and any $f\in \mathbb{L}_\infty(\Omega_{A\cup B})$, \begin{align*} \mu^\omega_{A\cup B}\big(E_A^\omega[f],\,E_B^\omega[f] \big)\le \kappa\,\mathrm{e}^{-\xi \,d(B\backslash A,A\backslash B)}\,\|f\|^2_{\mathbb{L}_1(\mu^\omega_{A\cup B})}\,, \end{align*} where $\mu^\omega_{A\cup B}\big(E_A^\omega[f],\,E_B^\omega[f] \big):= \mu^\omega_{A\cup B}\big((E_A^\omega[f]-E_{A\cup B}^\omega[f])(\,E_B^\omega[f] -E_{A\cup B}^\omega[f] )\big)$ denotes the correlation function. Motivated by this classical setting, we introduce the notion of a conditional quantum clustering of correlations: \begin{definition}\label{def:expcondl1clust} Let $\Lambda \subset \subset \mathbb{Z}^d$ be a finite lattice and ${\cal N}_{\Lambda}\subset {\cal B}({\cal H}_\Lambda)$ a subalgebra of ${\cal B}({\cal H}_\Lambda)$. Let $\mathcal{E}^{\Lambda}:=\{E_A;\,A\subset \Lambda\}$ be a family of conditional expectations on ${\cal N}_{\Lambda}$.
We say that $\mathcal{E}^{\Lambda}$ satisfies a \textit{conditional $\mathbb{L}_1$-clustering of correlations} if there exist constants $\kappa,\xi>0$ such that, for any $A,B\subset \Lambda$ with $A\cap B\ne \emptyset$, \begin{align*} c(A,B)\le \kappa\,\mathrm{e}^{-\xi d(A\backslash B,B\backslash A)}\,. \end{align*} \end{definition} \paragraph{Block diagonal states in the energy basis} Recall that for a classical Hamiltonian (i.e. a Hamiltonian that is diagonalizable in the classical product basis $\{|\eta\rangle\}_{\eta\in \Omega_{\Lambda}}$) and for classical states, the inequality (\ref{eqgeneral}) reduces to that found in \cite{cesi2001quasi}, with $d=0$ and $\eta=\mathcal{P}_{\mathcal{M}}(\rho)$. More generally, one can find approximate tensorization for states that are block diagonal in the energy eigenbasis of the Hamiltonian $H$. This is the content of the next proposition. Once again, we introduce some notation before stating it. Given the spectral decomposition $H_\Lambda:=\sum_{\varepsilon\in{\rm sp}(H_\Lambda)}\,\varepsilon P_\varepsilon$ of the Hamiltonian $H_\Lambda$, let $${\cal N}_{{\rm sp}(H_\Lambda)}:=\{ X\in{\cal B}({\cal H}_\Lambda):\,P_\varepsilon X P_{\varepsilon'}=\delta_{\varepsilon,\varepsilon'}P_\varepsilon X P_\varepsilon ~\forall \varepsilon,\varepsilon'\in {\rm sp}(H_\Lambda)\}$$ be the algebra of operators $X$ that are block diagonal in the energy eigenbasis. \begin{proposition}\label{stabilizer} Given any sublattice $A\subset \Lambda$, $E_A[{\cal N}_{{\rm sp}(H_\Lambda)}]\subset {\cal N}_{{\rm sp}(H_\Lambda)}$. With a slight abuse of notation, we denote by $E_A^{(i)}:=E_A[P_i(\cdot)P_i]$ the restriction onto the block $i\in I_{{\cal N}_A}$ of $E_A\equiv E_A|_{{\cal N}_{{\rm sp}(H_\Lambda)}}:{\cal N}_{{\rm sp}(H_\Lambda)}\mapsto {\cal N}_A\cap {\cal N}_{{\rm sp}(H_\Lambda)}$. Moreover, the following approximate tensorization holds for any two sublattices $A,B\subset \Lambda$, $A\cap B\ne \emptyset$: \begin{align*} D(\rho\|E_{A\cup B*}(\rho))\le \frac{1}{1-2 c} \left( D(\rho\|E_{A*}(\rho))+D(\rho\|E_{B*}(\rho))+3\max_{\varepsilon\in{\rm sp}(H_\Lambda)}\,\ln(\dim(P_\varepsilon{\cal H}_\Lambda))+4\,c \right) \,, \end{align*} for $$c:=\max_{i\in I_{{\cal N}_{A\cup B}}}\,\|E_A^{(i)}\circ E_B^{(i)}-E_{A\cup B}^{(i)}:\,\mathbb{L}_1(\sigma^{A\cup B}_i)\to \mathbb{L}_\infty({\cal N}_{{\rm sp}(H_\Lambda)})\|\,,$$ where $\sigma_i^A$ was defined in (\ref{Daviescond}). \end{proposition} \begin{proof} First of all, we prove that the algebra ${\cal N}_{{\rm sp}(H_\Lambda)}$ is preserved under the action of the conditional expectations $E_A$ onto an arbitrary sublattice $A\subset \Lambda$. To show this, it is enough to prove that ${\cal L}^{\operatorname{D},\beta}_{A}({\cal N}_{{\rm sp}(H_\Lambda)})\subset {\cal N}_{{\rm sp}(H_\Lambda)}$. This is done using the expression (\ref{lindblad}) together with \Cref{eq!}: for any $\omega,\alpha$, we have \begin{align*} S_{\alpha,k}^*(\omega)P_\varepsilon XP_\varepsilon S_{\alpha,k}(\omega) =\sum_{\substack{\varepsilon_1-\varepsilon_1'=\omega\\\varepsilon_2-\varepsilon_2'=\omega}}\,P_{\varepsilon_1'}S_{\alpha,k}^*P_{\varepsilon_1}P_\varepsilon XP_\varepsilon P_{\varepsilon_2} S_{\alpha,k}P_{\varepsilon_2'}=\sum_{\varepsilon'=\varepsilon-\omega}\,P_{\varepsilon'}S_{\alpha,k}^* P_\varepsilon X P_\varepsilon S_{\alpha,k}P_{\varepsilon'}\in{\cal N}_{{\rm sp}(H_\Lambda)}\,.
\end{align*} Similarly \begin{align*} \big\{ S_{\alpha,k}^*(\omega)S_{\alpha,k}(\omega),X \big\}=\sum_{\substack{\varepsilon_1-\varepsilon_1'=\omega\\\varepsilon_2-\varepsilon_2'=\omega}}\big\{ P_{\varepsilon_1'}\,S_{\alpha,k}^*P_{\varepsilon_1}P_{\varepsilon_2}S_{\alpha,k}P_{\varepsilon_2'},\,P_\varepsilon X P_\varepsilon\big\}=\sum_{\varepsilon'=\omega+\varepsilon} \big\{ P_\varepsilon S_{\alpha,k}^*P_{\varepsilon'}S_{\alpha,k},P_\varepsilon XP_\varepsilon\big\}\in{\cal N}_{{\rm sp}(H_\Lambda)}\,. \end{align*} Thus, ${\cal L}^{\operatorname{D},\beta}_A({\cal N}_{{\rm sp}(H_\Lambda)})\subset {\cal N}_{{\rm sp}(H_\Lambda)}$, and hence $E_A[ {\cal N}_{{\rm sp}(H_\Lambda)}]\subset {\cal N}_{{\rm sp}(H_\Lambda)}$. Therefore, the restriction $E_A:=E_A|_{{\cal N}_{{\rm sp}(H_\Lambda)}}$ defines a conditional expectation onto the algebra ${\cal N}_{{\rm sp}(H_\Lambda)}\cap {\cal N}_A$. Moreover, from the bounds found in \Cref{propboundsd1d2}, we directly have the following crude estimates \begin{align*} &d_1\le \max_{\varepsilon\in{\rm sp}(H_\Lambda)}\,\ln(\operatorname{dim}(P_\varepsilon{\cal H}_\Lambda))\\ &d_2\le 2\max_{\varepsilon\in {\rm sp}(H_\Lambda)}\,\ln(\operatorname{dim}(P_\varepsilon{\cal H}_\Lambda))\,. \end{align*} We conclude from the joint use of \Cref{theo_AT_pinching} and \Cref{propboundsd1d2}. \end{proof} \begin{remark} The motivation behind the statement of \Cref{stabilizer} comes from the problem of thermal stability of quantum error correcting codes \cite{Temme2014,temme2015fast,KITAEV20032}. There, the initial state $\rho$ is typically assumed to be supported in the so-called code space, that is the ground space of some stabilizer Hamiltonian $H_\Lambda$. For translationally invariant error correcting codes, every eigenspace has dimension independent of the lattice size (e.g. the $2D$ toric code has four-dimensional eigenspaces). Therefore, upon showing conditional $\mathbb{L}_1$ clustering of correlations, we find that the weak constant in the approximate tensorization is independent of $|\Lambda|$. In that case, for any two fixed sublattices $A,B\subset\subset \mathbb{Z}^d$ we find the following exact tensorization of the normalized relative entropy: given any sequence $\rho_\Lambda\in {\cal N}_{\operatorname{gs}(H_\Lambda)}$ supported in the ground space of $H_\Lambda$: \begin{align*} \limsup_{\Lambda\nearrow \mathbb{Z}^d}\,\frac{D(\rho_\Lambda\|E_{A\cup B*}(\rho_\Lambda))}{|\Lambda|}\le \limsup_{\Lambda\nearrow \mathbb{Z}^d}\,\frac{D(\rho_\Lambda\|E_{A*}(\rho_\Lambda))}{|\Lambda|}+ \limsup_{\Lambda\nearrow \mathbb{Z}^d}\,\frac{D(\rho_\Lambda\|E_{ B*}(\rho_\Lambda))}{|\Lambda|}\,. \end{align*} Such bounds will prove crucial in showing fast thermalization of dissipative Gibbs samplers in a forthcoming paper. \end{remark} \subsection{Classical Hamiltonians over quantum systems}\label{classicalglauber} In this section, we investigate the case of a quantum lattice spin system undergoing a classical Glauber dynamics, whose framework was already studied in \cite{Cubitt2015}. These semigroups correspond to Davies generators whose Hamiltonian is classical, that is, diagonal in a product basis of ${\cal H}_\Lambda$. In order to make the connection with the classical Glauber dynamics over a classical system (i.e.
initially diagonal in the product basis), we introduce the generator more explicitly: consider a lattice spin system over $\Gamma=\mathbb{Z}^d$ with classical configuration space $S=\{+1,-1\}$, and, for each $\Lambda\subset \Gamma$, denote by $\Omega_\Lambda=S^\Lambda$ the space of configurations over $\Lambda$. Next, given a classical finite-range, translationally invariant potential $\{J_A\}_{A\in\Gamma}$ and a boundary condition $\tau\in\Omega_{\Lambda^c}$, define the Hamiltonian over $\Lambda$ as \begin{align*} H_\Lambda^\tau(\sigma)=-\sum_{A\cap\Lambda\ne \emptyset}\,J_A(\sigma\times\tau),~~~~~~\forall\sigma\in\Omega_\Lambda\,. \end{align*} The classical Gibbs state corresponding to such a Hamiltonian is then given by \begin{align*} \mu_\Lambda^\tau(\sigma)=(Z_{\Lambda}^\tau)^{-1}\,\exp\big( -H_{\Lambda}^\tau(\sigma)\big)\,. \end{align*} Next, define the Glauber dynamics for a potential $J$ as the Markov process on $\Omega_\Lambda$ with the generator \begin{align*} (L_\Lambda f)(\sigma)=\sum_{x\in\Lambda}\,c_{J}(x,\sigma)\nabla_xf(\sigma)\,, \end{align*} where $\nabla_xf(\sigma)=f(\sigma^x)-f(\sigma)$ and $\sigma^x$ is the configuration obtained by flipping the spin at position $x$. The numbers $c_J(x,\sigma)$ are called transition rates and must satisfy the following assumptions: \begin{itemize} \item[1.] There exist $c_m,c_M$ such that $0<c_m\le c_J(x,\sigma)\le c_M<\infty$ for all $x,\sigma$. \item[2.] $c_J(x,.)$ depends only on the spin values in $b_r(x)$. \item[3.] For all $k\in\Gamma$, $c_J(x,\sigma')=c_J(x+k,\sigma)$ if $\sigma'(y)=\sigma(y+k)$ for all $y$. \item[4.] Detailed balance: for all $x\in\Gamma$, and all $\sigma$ \begin{align*} \exp\left(-\sum_{A\ni x}J_A(\sigma)\right)c_J(x,\sigma)=c_J(x,\sigma^x)\exp\left( -\sum_{A\ni x}J_A(\sigma^x)\right)\,. \end{align*} \end{itemize} These assumptions are sufficient conditions for the corresponding Markov process to have the Gibbs states over $\Lambda$ as stationary measures. Next, we introduce the notion of a quantum embedding of the aforementioned classical Glauber dynamics. This is the Lindbladian whose Lindblad operators are given by \begin{align}\label{Lindbladops} L_{x,\eta}:=\sqrt{c_J(x,\eta)}\,|\eta^x\rangle\langle \eta|\otimes {\mathds{1}}\,,~~~\forall x\in\Lambda,\,\eta\in\Omega_{b_r(x)}\,. \end{align} It was shown in \cite{Cubitt2015} that such a dynamics is KMS-symmetric with respect to the state $\mu_\Lambda^\tau$ as embedded into the computational basis. Moreover, the set of fixed points in the Schr\"{o}dinger picture corresponds to the convex hull of the set of Gibbs states over $\Lambda$, $\{\mu_\Lambda^\tau|\tau\in\Omega_{\Lambda^c}\}$. In the Heisenberg picture, this implies that the fixed-point algebras ${\mathcal{F}}({\cal L}_A)$ are expressed as \begin{align}\label{fixedpoints} {\mathcal{F}}({\cal L}_A):=\bigoplus_{\omega\in\Omega_{\partial A}}\,|\omega\rangle\langle\omega|_{\partial A}\otimes {\mathds{1}}_{A}\otimes {\cal B}({\cal H}_{A_\partial^c})\,. \end{align} Equivalently, \begin{align}\label{classcond} E_{A*}(\rho)=\sum_{\omega\in\Omega_{\partial A}}\,|\omega\rangle\langle\omega|_{\partial A}\otimes \sigma^\omega_{A}\otimes \mathop{\rm Tr}\nolimits_A(\langle \omega|\rho|\omega\rangle)\,, \end{align} where $\sigma^\omega_A$ denotes the Gibbs state $\mu^\omega_A$ embedded into the computational basis.
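Before moving on, we illustrate the classical layer of this construction with a minimal Python sketch (a free-boundary three-spin nearest-neighbour Ising chain with heat-bath rates, one standard choice satisfying the detailed-balance assumption 4. above; all names are illustrative and not from the paper), which assembles the Glauber generator and checks reversibility and stationarity of the Gibbs measure:
\begin{verbatim}
# Illustrative sketch: Glauber generator on a 3-spin Ising chain with
# heat-bath rates. Free boundary conditions are assumed for simplicity.
import itertools
import numpy as np

n, beta = 3, 0.7
confs = list(itertools.product([-1, 1], repeat=n))
idx = {s: i for i, s in enumerate(confs)}

def energy(s):                       # nearest-neighbour Ising energy
    return -sum(s[j] * s[j + 1] for j in range(n - 1))

mu = np.array([np.exp(-beta * energy(s)) for s in confs])
mu /= mu.sum()                       # Gibbs measure

def flip(s, x):                      # the configuration sigma^x
    t = list(s); t[x] = -t[x]; return tuple(t)

def rate(x, s):
    # heat-bath rate c(x,s) = mu(s^x)/(mu(s)+mu(s^x)); one standard
    # choice satisfying the detailed-balance assumption
    a, b = mu[idx[s]], mu[idx[flip(s, x)]]
    return b / (a + b)

# (L f)(s) = sum_x c(x,s) (f(s^x) - f(s)), as a matrix acting on f
L = np.zeros((2 ** n, 2 ** n))
for s in confs:
    for x in range(n):
        L[idx[s], idx[flip(s, x)]] += rate(x, s)
        L[idx[s], idx[s]] -= rate(x, s)

# detailed balance mu(s) c(x,s) = mu(s^x) c(x,s^x) ...
for s in confs:
    for x in range(n):
        t = flip(s, x)
        assert np.isclose(mu[idx[s]] * rate(x, s),
                          mu[idx[t]] * rate(x, t))
# ... and hence stationarity of the Gibbs measure: mu^T L = 0
assert np.allclose(mu @ L, 0.0)
\end{verbatim}
The quantum embedding \eqref{Lindbladops} then simply promotes each jump $\eta\mapsto\eta^x$ of this classical chain to a Lindblad operator $\sqrt{c_J(x,\eta)}\,|\eta^x\rangle\langle\eta|$ in the computational basis.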
With this expression at hand, we can prove that the conditional expectations at infinite temperature form a commuting square for any subsets $A$ and $B$ of $\Lambda$. \begin{proposition}\label{prop_classical_inftyT} Let $A,B\subset\Lambda$. Then, at $\beta=0$, ${\cal N}_A$ and ${\cal N}_B$ form a commuting square, that is, \begin{equation}\label{eq_prop_classical_inftyT} E_A\circ E_B=E_B\circ E_A = E_{A\cup B}\, \end{equation} and consequently, for all $\rho\in{\cal D}({\cal H}_\Lambda)$, \begin{equation}\label{eq_prop_classical_inftyT2} D\big(\rho\|E_{A\cup B*}^{\beta=0}(\rho)\big)\leq D\big(\rho\|E_{A*}^{\beta=0}(\rho)\big)+D\big(\rho\|E_{B*}^{\beta=0}(\rho)\big)\,. \end{equation} At finite temperature $\beta>0$, we have \begin{equation}\label{eq_prop_classical_inftyT3} D_{\max}(E_{A*}\circ E_{B*}(\rho)\| E_{A\cup B*}(\rho))\leq\max_{\substack{\omega_{\partial AB}\\\omega_{\partial B\cap A} }}\,D_{\max}\big( \sum_{\omega_{\partial A\cap B}}P_{\omega_{\partial A\cap B}}\otimes \sigma_{A}^{\omega_{\partial A}}\otimes \sigma_{B\backslash A\partial}^{\omega_{\partial A\cup B},\omega_{\partial B}} \big\| \sigma_{AB}^{\omega_{\partial AB}} \big)\,, \end{equation} where $P_\omega:=|\omega\rangle\langle\omega|$ and $ \sigma_{B\backslash A\partial}^{\omega_{\partial A\cup B},{\omega_{\partial B}}}:=\mathop{\rm Tr}\nolimits_{A\cap B} \big(\langle \omega_{\partial A\cap B}| \sigma_{B}^{\omega_{\partial B}}|\omega_{\partial A\cap B}\rangle\big)$. In particular, \eqref{eq_prop_classical_inftyT3} provides a bound on the weak constants appearing in \Cref{corollaruweak} and \Cref{theo_AT_pinching}. \end{proposition} Remark that the first part of this last proposition (i.e. when $\beta=0$) does not depend on the relative positions of $A$ and $B$. Moreover, the upper bound in \Cref{eq_prop_classical_inftyT3} has the advantage of being independent of $\rho$. Furthermore, it is equal to $0$ when $\beta=0$, so we retrieve the exact tensorization of \Cref{prop_classical_inftyT} at infinite temperature. \begin{proof} Letting $\eta:=E_{A\cup B*}(\rho)$, it is enough to estimate, for any $\rho\in {\cal D}({\cal N})$, $D_{\max}(E_{A*}\circ E_{B*}(\rho)\|E_{A\cup B*}(\rho))$. In order to do so, we first provide an expression for $E_{A*}\circ E_{B*}(\rho)$ which will allow us to compare it more easily to $E_{A\cup B*}(\rho)$. Define \begin{align*} & \sigma_{B\backslash A\partial}^{\omega_{\partial A\cup B},{\omega_{\partial B}}}:=\mathop{\rm Tr}\nolimits_{A\cap B} \big(\langle \omega_{\partial A\cap B}| \sigma_{B}^{\omega_{\partial B}}|\omega_{\partial A\cap B}\rangle\big)\\ & \rho^{\omega_{\partial B\cap A},\omega_{\partial AB}}_{(A\cup B\partial)^c}:= \langle \omega_{\partial B\cap A}|\mathop{\rm Tr}\nolimits_{A\cup B\backslash \partial B}\big( \langle \omega_{\partial AB}|\rho |\omega_{\partial AB}\rangle \big)|\omega_{\partial B\cap A}\rangle\,. \end{align*} A lengthy yet easy calculation yields: \begin{align*} E_{A*}\circ E_{B*}(\rho)=\sum_{\substack{ \omega_{\partial AB}\in \Omega_{\partial AB} \\ \omega_{\partial A\cap B}\in \Omega_{\partial A\cap B}\\\omega_{\partial B\cap A} \in \Omega_{\partial B\cap A}}}\, P_{\omega_{\partial AB}}\otimes P_{\omega_{\partial A\cap B}} \otimes \sigma_A^{\omega_{\partial A}}\otimes \sigma_{B\backslash A\partial}^{\omega_{\partial A\cup B},{\omega_{\partial B}}} \otimes \rho^{\omega_{\partial B\cap A},\omega_{\partial AB}}_{(A\cup B\partial)^c}\,, \end{align*} where $P_\omega:=|\omega\rangle\langle\omega|$.
Similarly, \begin{align*} E_{A\cup B*}(\rho)=\sum_{\substack{\omega_{\partial AB}\in \Omega_{\partial AB}\\\omega_{\partial B\cap A} }}\,P_{\omega_{\partial AB}}\otimes \sigma^{\omega_{\partial AB}}_{AB}\otimes \rho^{\omega_{\partial B\cap A},\omega_{\partial AB}}_{(A\cup B\partial)^c} \,. \end{align*} From these two expressions, we see directly that $E_{A\cup B*}=E_{A*}\circ E_{B*}=E_{B*}\circ E_{A*}$ at infinite temperature. In the general situation, \begin{align*} &D_{\max}(E_{A*}\circ E_{B*}(\rho)\| E_{A\cup B*}(\rho))\\ &\qquad =\max_{\omega_{\partial AB}}\, D_{\max}\Big( \sum_{\substack{\omega_{\partial A\cap B}\\\omega_{\partial B\cap A}}}\,P_{\omega_{\partial A\cap B}}\otimes \sigma_A^{\omega_{\partial A}}\otimes \sigma_{B\backslash A\partial}^{\omega_{\partial A\cup B},\omega_{\partial B}} \otimes \rho^{\omega_{\partial B\cap A},\omega_{\partial AB}}_{(A\cup B\partial)^c} \Big\| \sum_{\omega_{\partial B\cap A}} \rho^{\omega_{\partial B\cap A},\omega_{\partial AB}}_{(A\cup B\partial)^c} \otimes \sigma_{AB}^{\omega_{\partial AB}} \Big)\\ &\qquad \le \max_{\substack{\omega_{\partial AB}\\\omega_{\partial B\cap A} }}\,D_{\max}\big( \sum_{\omega_{\partial A\cap B}}P_{\omega_{\partial A\cap B}}\otimes \sigma_{A}^{\omega_{\partial A}}\otimes \sigma_{B\backslash A\partial}^{\omega_{\partial A\cup B},\omega_{\partial B}} \big\| \sigma_{AB}^{\omega_{\partial AB}} \big)\,. \end{align*} \end{proof}
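As a closing practical remark, the max-relative entropy terms appearing above can be evaluated with the standard spectral formula $D_{\max}(\rho\|\sigma)=\log \|\sigma^{-1/2}\rho\,\sigma^{-1/2}\|_{\infty}$, valid whenever the support of $\rho$ is contained in that of $\sigma$. A minimal Python sketch (illustrative only, not from the paper):
\begin{verbatim}
# D_max(rho||sigma) = log || sigma^{-1/2} rho sigma^{-1/2} ||_inf,
# assuming supp(rho) is contained in supp(sigma). Illustrative sketch.
import numpy as np

def d_max(rho, sigma, eps=1e-12):
    w, V = np.linalg.eigh(sigma)
    inv_sqrt = (V / np.sqrt(np.clip(w, eps, None))) @ V.conj().T
    return float(np.log(np.linalg.eigvalsh(inv_sqrt @ rho @ inv_sqrt).max()))

# example: against the maximally mixed state tau = I/3,
# D_max(rho||tau) = log(3 * lambda_max(rho)) = log(1.5) here
rho = np.diag([0.5, 0.5, 0.0])
tau = np.eye(3) / 3.0
assert abs(d_max(rho, tau) - np.log(1.5)) < 1e-9
\end{verbatim}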
\section{Introduction} Let $1_y$ denote the characteristic function of the integers free of prime factors $\leq y$. The idea is to work with the identity \begin{equation} \label{eq: TrivId} \Lambda(n) = \log n - \sum_{\substack{\ell m=n \\ \ell,m>1 }} \Lambda(\ell) , \end{equation} summing it up over integers $n\leq x$ for which $1_y(n)=1$, where we might select $y:=\exp(\sqrt{\log x})$ or larger. In this case the second sum can be written as a sum of terms $1_y(\ell) \Lambda(\ell) \cdot 1_y(m)$ which have the bilinear structure that is used in ``Type II sums''. We will see the identity in action in two key results in analytic number theory: \section{The Bombieri-Vinogradov Theorem} \begin{theorem} [The Bombieri-Vinogradov Theorem] \label{thm: 46.3} If $x^{1/2}/(\log x)^B\leq Q\leq x^{1/2}$ then \begin{equation} \label{eq: 46.6} \sum_{q\leq Q} \max_{(a,q)=1} \ \left| \pi(x;q,a) - \frac{\pi(x)}{\phi(q)} \right| \ll Q x^{1/2} (\log\log x)^{1/2} . \end{equation} \end{theorem} This is a little stronger than the results in the literature (for example Davenport \cite{Da} has the $(\log\log x)^{1/2}$ replaced by $(\log x)^{5}$). The reason for this improvement is the simplicity of our identity, and some slight strengthening of the auxiliary results used in the proof. \begin{proof} Let $y=x^{1/\log\log x}$. We will instead prove the following result, in which the $\psi$ function replaces $\pi$, and deduce \eqref{eq: 46.6} by partial summation: \begin{equation} \label{eq: 46.5} \sum_{q\leq Q} \max_{(a,q)=1} \ \left| \psi(x;q,a) - \frac{\psi(x)}{\phi(q)} \right| \ll Q x^{1/2} (\log x) (\log\log x)^{1/2} . \end{equation} Using \eqref{eq: TrivId} for integers $n$ with $1_y(n)=1$, the quantity on the left-hand side of \eqref{eq: 46.5} is $\leq S_I+S_{II}+E$ where \[ S_I=\sum_{q\leq Q} \max_{(a,q)=1} \ \left| \sum_{\substack{ n\leq x \\ n\equiv a \pmod q }} 1_y(n) \log n - \frac {1}{\phi(q)} \sum_{\substack{ n\leq x \\ (n,q)=1 }} 1_y(n) \log n \right| \] which is $\ll xu^{-u+o(u)}\ll_A \frac{x}{(\log x)^A}$ by the small sieve, where $x/Q=y^u$; and $E$ is the contribution of the powers of primes $\leq y$, which contribute $\leq \pi(y)\log x$ to each sum and therefore $\leq Q \pi(y)\log x\ll_A \frac{x}{(\log x)^A}$ in total. Most interesting is \[ S_{II}=\sum_{q\leq Q} \ \max_{a:\ (a,q)=1} \ \left| \sum_{\substack{ n\equiv a \pmod q }} f(n) - \frac {1}{\phi(q)} \sum_{\substack{ (n,q)=1 }} f(n)\right| \] where $f(n)= \sum_{\substack{\ell m=n, \ell,m>y }} \Lambda(\ell)1_y(\ell) \cdot 1_y(m)$. Its bilinearity means that this is a Type II sum, and we can employ the following general result.\footnote{This can be obtained by taking the ideas in proving (5) of chapter 28 of \cite{Da}, along with the method of proof of Theorem 9.16 of \cite{FI}; in any case it is only a minor improvement on either of these results. For full details see chapter 51 of \cite{Gra}.} \begin{theorem} \label{thm: 46.1} For each integer $n\leq x$ we define \[ f(n):= \sum_{\substack{\ell m=n }} \alpha_\ell\beta_m \] where $\{ \alpha_\ell\}$ and $\{ \beta_m\}$ are sequences of complex numbers, for which \begin{itemize} \item The $\{ \alpha_\ell\}$ satisfy the Siegel-Walfisz criterion; \item The $\{ \alpha_\ell\}$ are only supported in the range $L_0 \leq \ell \leq x/y$ ; \item $\sum_{\ell \leq L} |\alpha_\ell |^2 \leq aL$ and $\sum_{m \leq M} |\beta_m|^2 \leq bM$ for all $L,M\leq x$.
\end{itemize} For any $B>0$ we have \begin{equation} \label{eq: 46.4} \sum_{q\leq Q} \ \max_{a:\ (a,q)=1} \ \left| \sum_{\substack{ n\equiv a \pmod q }} f(n) - \frac {1}{\phi(q)} \sum_{\substack{ (n,q)=1 }} f(n)\right| \ll (ab)^{1/2} Q x^{1/2} \log x, \end{equation} where $Q=x^{1/2}/(\log x)^B$, with $x/y\leq \frac{Q^2}{(\log x)^2}$ and $L_0\geq y,\exp( (\log x)^\epsilon)$. \end{theorem} We deduce that $S_{II} \ll Q x^{1/2} (\log x) (\log\log x)^{1/2}$ by Theorem \ref{thm: 46.1} since $a\ll \log x$ and $b\ll \frac 1{\log y}= \frac{\log\log x}{\log x}$. \end{proof} \section{A general bound for a sum over primes} \begin{proposition} \label{prop: 80.General} For any given function $F(\cdot)$ and $y\leq x$ we have \[ \left|\sum_{ \substack{ n\leq x \\ p(n)>y}}\Lambda(n)F(n)\right| \ll S_I \log x + (S_{II} \, x (\log x)^5)^{1/2} \] where $S_I$ is the Type I sum given by \[ S_I:= \max_{t\leq x} \left| \sum_{ \substack{ n\leq t \\ p(n)>y}} F(n) \right| \leq \sum_{\substack{ d\geq 1 \\ P(d)\leq y}} \left| \sum_{ m\leq t/d } F(d m) \right| , \] and $S_{II}$ is the Type II sum given by \[ S_{II}:= \max_{\substack{ y<L\leq x/y \\ y< m\leq 2x/L}} \sum_{m/2<n\leq 2m} \left| \sum_{\substack{ L<\ell\leq 2L \\ \ell \leq \frac xm ,\frac xn }} F(\ell m)\overline{F(\ell n)} \right| \] \end{proposition} This simplifies, and slightly improves on, chapter 24 of \cite{Da}, which is what is used there to bound exponential sums over primes. \begin{proof} We again use \eqref{eq: TrivId} so that \[ \sum_{ \substack{ n\leq x \\ p(n)>y}} \Lambda(n)F(n) = \sum_{ \substack{ n\leq x \\ p(n)>y}} F(n) \log n - \sum_{\substack{ \ell \\ p(\ell)>y}} \Lambda(\ell) \sum_{\substack{ m\leq x/\ell \\ p(m)>y}} F(\ell m), \] where $p(n)$ denotes the smallest prime factor of $n$. Now \[ \sum_{ \substack{ n\leq x \\ p(n)>y}} F(n) \log n =\sum_{ \substack{ n\leq x \\ p(n)>y}} F(n) \int_1^n \frac {dt}t = \int_1^x \sum_{ \substack{ t\leq n\leq x \\ p(n)>y}} F(n) \frac {dt}t \leq 2\log x \cdot \max_{t\leq x} \left| \sum_{ \substack{ n\leq t \\ p(n)>y}} F(n) \right|. \] Moreover for $P=\prod_{p\leq y}p$, \[ \sum_{ \substack{ n\leq t \\ p(n)>y}} F(n) = \sum_{ n\leq t } F(n) \sum_{d|P, d|n} \mu(d) = \sum_{d|P } \mu(d) \sum_{ m\leq t/d } F(d m) . \] For the second sum we first split the sums into dyadic intervals ($L<\ell\leq 2L, M<m\leq 2M$) and then apply the Cauchy--Schwarz inequality, so that the square of each subsum is \begin{align*} & \leq \sum_{\ell : p(\ell)>y} \Lambda(\ell)^2 \cdot \sum_\ell \left| \sum_{\substack{ m\leq x/\ell \\ p(m)>y}} F(\ell m) \right|^2 \ll L \log L \sum_{\substack{M<m,n\leq 2M \\ p(m),p(n)>y}} \sum_{\ell \leq \frac x{\max\{ m,n\} }} F(\ell m)\overline{F(\ell n)} \\ & \ll x\log x \cdot \max_{M<m\leq 2M} \sum_{\substack{m/2<n\leq 2m\\ p(n)>y}} \left| \sum_{\substack{ L<\ell\leq 2L \\ \ell \leq \frac xm ,\frac xn }} F(\ell m)\overline{F(\ell n)} \right| \end{align*} since $m,n\in (M,2M]$, and the result follows. \end{proof} \section{Genesis} The idea for using \eqref{eq: TrivId} germinated from reading the proof of the Bombieri-Vinogradov Theorem (Theorem 9.18) in \cite{FI}, in which they used Ramar\'e's identity: if $\sqrt{x}<n\leq x$ and $n$ is squarefree then \[ 1_{\mathbb P}(n) = 1 - \sum_{\substack{pm=n \\ p \text{ prime} \leq \sqrt{x}}} \frac 1{\omega_{\sqrt{x}}(m)} \] where $1_{\mathbb P}$ is the characteristic function for the primes, and $\omega_{z}(m)=1+\sum_{p|m,\ p\leq z} 1$. They also had to sum this over all integers free of prime factors $>y$. \bibliographystyle{plain}
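As a final illustration, the identity \eqref{eq: TrivId} underlying both arguments is just a rearrangement of the classical convolution identity $\sum_{d\mid n}\Lambda(d)=\log n$, and can be checked numerically in a few lines of Python (the code and its names are ours, purely for illustration):
\begin{verbatim}
# Numerical check (illustrative) of the identity
#   Lambda(n) = log n - sum_{lm = n, l,m > 1} Lambda(l),
# a rearrangement of sum_{d | n} Lambda(d) = log n.
import math

def mangoldt(n):
    # von Mangoldt function: log p if n = p^k for a prime p, else 0
    if n < 2:
        return 0.0
    for p in range(2, n + 1):
        if n % p == 0:
            while n % p == 0:
                n //= p
            return math.log(p) if n == 1 else 0.0

for n in range(2, 500):
    rhs = math.log(n) - sum(mangoldt(l) for l in range(2, n) if n % l == 0)
    assert abs(mangoldt(n) - rhs) < 1e-9
\end{verbatim}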
\section{Introduction} The galaxy luminosity function (hereafter, LF) represents the number density of galaxies of a given luminosity. It is a robust observable, extensively used in the past, to study the properties of galaxy populations \citep[e.g.][and references therein]{blanton2003}. The comparison between the behaviour of the LF and the halo mass function at faint magnitudes has been proposed as a crucial test to understand galaxy formation processes. Indeed, the cold dark matter theory predicts halo mass functions with a slope of $\sim-1.9$ \citep[e.g.][and references therein]{springel2008}, steeper than the one observed in deep LFs of nearby galaxy clusters, or in the field \citep[$\sim -1.1$ to $-1.5$;][]{trenthamt2002,depropris2003,blanton2005}. This is the so-called missing satellite problem, well documented in censuses of galaxies around the Milky Way and M31 \citep[e.g.][]{klypin1999}. Attempts to solve these discrepancies invoke several physical mechanisms that halt the star formation and darken or destroy dwarf galaxies, including high gas cooling times \citep{wr1978}, and suppression of low-mass galaxies due to a combination of feedback, photoionization and/or dynamical processes \citep{benson2002,benson2003,brooks2013}. Claims of the environmental dependence of the LF are numerous \citep[e.g.][]{tully2002,trentham2002,trentham2005,infante2003}, but the exact shape and significance of this dependence are still a matter of debate. Many previous studies suggest that the most striking differences between low-density and high-density environments concern the faint-end slope of the LF, with cluster galaxies showing a higher abundance of low-luminosity galaxies than the field \citep[e.g.][and references therein]{blanton2005, popesso2006}. This has important implications, as it suggests that, whatever the mechanisms preventing the formation of galaxies within low-mass satellites are, they depend on the host halo mass. \cite{tully2002} suggest that perhaps reionization inhibited the collapse of gas in late-forming dwarfs in low-density environments, whereas a larger fraction of low-mass haloes formed before the photoionization of the intergalactic medium in overdensities that ultimately become clusters. Interestingly, this mechanism could be at the heart of the discrepancies existing in the literature about the faint end of the LF in clusters. While some authors observe a marked upturn at faint magnitudes \citep[][]{yagi2002,popesso2006,barkhouse2007}, others find a more regular behaviour \citep[e.g.][]{sanchez2005,andreon2006}. When observed, this upturn is usually due to early-type dwarfs located in the outer regions of clusters, suggesting that cluster-related mechanisms may be responsible for the paucity of dwarfs observed in the denser inner regions \citep{pfeffer2013}. Moreover, the slopes derived in the upturn region are rather steep, so much so as to be fully consistent with that of the halo mass function. Unfortunately, there is no evidence for the existence of such a dramatic steepening in the few cluster LFs where spectroscopic or visual membership has been determined \citep[][]{hilker2003,rines2003,mieske2007,misgeld2008,misgeld2009}. More extensive spectroscopic cluster surveys are needed to derive robust results on the faint-end slope of the LF, and in this way put constraints on the role played by the environment on the formation of baryonic structures within low-mass dark matter haloes.
We have undertaken a project in order to obtain spectroscopy of galaxies in nearby clusters down to the dwarf regime ($M > M^*+6$). This dataset will allow us to infer accurate cluster membership and analyze several properties of the dwarf galaxy population in nearby clusters, minimizing the contribution of background sources. In the present work, we present a study of the spectroscopic LF of Abell 85 (A85), a nearby ($z = 0.055$) and massive cluster \citep[$M_{200} = 2.5 \times 10^{14} \; M_{\odot}$ and $R_{200} = 1.02 \, h^{-1} \,$Mpc;][]{rines2006}. This cluster is not completely virialized, since several substructures and infalling groups have been identified \citep{durret1999,aguerri2010,cava2010}. A85 is an ideal target for a deep study of the LF, as spectroscopy within the virial radius is almost complete down to m$_r \sim$ 18, resulting in 273 confirmed members \citep[][]{aguerri2007}. The new dataset presented here reaches three mag fainter, and almost doubles the number of cluster members. Throughout this work we have used the cosmological parameters $H_0 = 75 \; \mathrm{km} \, \mathrm{s}^{-1} \mathrm{Mpc}^{-1}$, $\Omega _m = 0.3$ and $\Omega _{\Lambda} = 0.7$. \section{The observational data on A85} \subsection{Deep VLT/VIMOS Spectroscopy} Our parent photometric catalogue contains all galaxies brighter than $m_r = 22$ mag\footnote{The apparent magnitudes used are the dereddened SDSS-DR6 $r$-band magnitudes.} from the SDSS-DR6 \citep[][]{adelman2008}, and within $R_{200}$\footnote{The cluster center is assumed to be at the brightest cluster galaxy (BCG, $\alpha$(J2000): $00^h \, 41^m \, 50.448^s$ $\delta$(J2000): $-9^{\circ} \, 18' \, 11.45''$). This is a sensible assumption because the peak of X-ray emission lies at only 7 kpc from the BCG \citep{popesso2004}.}. Figure \ref{cmd} shows the colour-magnitude diagram of A85 for the galaxies included in this catalogue. The target galaxies for our spectroscopic observations were selected among those with no existing redshift in the literature and bluer than $g-r = 1.0$ (see Fig. \ref{cmd}). This is the colour of a 12 Gyr old stellar population with [Fe/H] = +0.25 supersolar metallicity \citep{worthey1994}, typical of very luminous early-type galaxies. As a result, this colour selection should minimize the contamination by background sources, while matching at the same time the colour distribution of galaxies in the nearby Universe \citep[e.g.][]{hogg2004}. The observations were carried out using the multi-object-spectroscopy (MOS) mode of VLT/VIMOS, in combination with the LR-blue+OS-blue grisms and filters (Program 083.A-0962(B), PI R. S\'anchez-Janssen). To maximize the number of targets, and avoid the gaps between the instrument CCDs, we designed 25 masks with large overlaps covering an area of 3.0 $\times$ 2.6 Mpc$^2$ around the central galaxy in A85 -- i.e. extending out to more than 1\,R$_{200}$. This observational strategy allowed us to obtain 2861 low-resolution spectra (R=180) of galaxies down to $m_r= 22$ mag. We exposed for 1000 s to reach a signal-to-noise ratio ($S/N$) in the range $6 - 10$ down to the limiting magnitude. The data were reduced using GASGANO and the provided pipeline \citep{izzo2004}. The spectra were wavelength calibrated using the HeNe lamp, which yielded a wavelength accuracy of $\sim 0.5$ $\rm \AA$ pixel$^{-1}$ in the full spectral range ($3700 - 6700 \, \rm \AA$).
\subsection{Redshift Determination and Cluster Membership} The recessional velocities of the observed galaxies were determined using the \textit{rvsao.xcsao} \textit{IRAF} task \citep{kurtz1992}. This task cross-correlates a template spectrum \citep[in this work][]{kennicutt1992} with the observed galaxy spectrum. This technique allowed us to determine the recessional velocity for 2070 spectra. The remaining spectra had too low $S/N$ to estimate reliable redshifts. \textit{xcsao} formal errors are smaller than true intrinsic errors \citep[e.g.][]{bardelli1994}, and reliable errors can only be estimated by observing galaxies more than once. Our observational strategy allowed us to obtain 676 repeated spectra, which result in a one-sigma velocity uncertainty of $\sim 500$ km s$^{-1}$. The redshifts from the literature (SDSS-DR6 and NED catalogues), together with our new data, result in a total of 1593 galaxy velocities in the A85 direction within R$_{200}$ and $14 < m_r < 22$. \begin{figure} \centering \includegraphics[width=1\linewidth]{cmd} \caption{Lower panel: colour-magnitude diagram of the galaxies in the direction of A85. Grey points are the target galaxies. Red and blue symbols show red and blue cluster members, respectively. The solid line represents the red sequence of the cluster. Upper panel: spectroscopic completeness ($C$, diamonds) and cluster member function ($f_m$, black triangles) as a function of $r$-band magnitude. The dashed vertical line represents our limiting magnitude for the spectroscopic LF.} \label{cmd} \end{figure} The caustic method \citep{diaferio1997,diaferio1999,serra2011} estimates the escape velocity and the mass profile of galaxy clusters in both their virial and infall regions, where the assumption of dynamical equilibrium does not necessarily hold. A by-product of the caustic technique is the identification of the members of the cluster, with an interloper contamination of only 2\% within R$_{200}$, on average \citep{serra2013}. The application of the caustic technique to our spectroscopic catalogue resulted in a sample of $434$ cluster members within $R_{200}$, 284 of which are new data. We define the completeness of our data as $C = N_z / N_{phot}$, with $N_z$ being the number of measured redshifts and $N_{phot}$ the number of photometric targets. Figure \ref{cmd} shows that $C$ is higher than 90 $\%$ for galaxies with M$_r < -19$ and it decreases to around $40 \, \%$ at $M_r \sim -16$. We also defined the member fraction as $f_m = N_m / N_z$, where $N_m$ is the number of members. The member fraction also strongly depends on luminosity. Thus, $f_m$ is higher than 80 $\%$ down to M$_r < -19$ and then rapidly decreases down to $\sim$ 20 $\%$ at M$_r = -16$ (see Fig. \ref{cmd}). \section{The spectroscopic LF of A85} The A85 LF is computed using all cluster members with m$_r \leq 21$ mag and $\langle \mu_{e,r} \rangle \leq 24$ mag arcsec$^{-2}$. These limits correspond to the values where the galaxy counts stop increasing, and thus determine our completeness limits. We note that our uniform spectroscopic selection function (cf. Sect. 2.1) does not introduce any bias in magnitude, $\langle \mu_{e,r} \rangle $, or colour. Figure \ref{LF} shows the $r$-band spectroscopic LF of A85. It is computed as $\phi(M_r) = N_{phot}(M_r) \times f_m(M_r) / (0.5 \times A)$, where $A$ is the observed area and 0.5 is the magnitude bin.
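In schematic form, this estimator (together with the Schechter parameterization used for the fits below) could be implemented as follows; the bin edges, input arrays and area value are placeholders rather than the actual pipeline values:
\begin{verbatim}
# Schematic LF estimator phi(M) = N_phot(M) * f_m(M) / (0.5 * A), plus the
# Schechter form used in the fits. Bins, inputs and area are placeholders.
import numpy as np

def luminosity_function(M_phot, M_spec, is_member, area, dm=0.5):
    """M_phot: absolute magnitudes of all photometric targets;
    M_spec, is_member: magnitudes and membership flags of the galaxies
    with measured redshifts. Returns bin centres and phi."""
    bins = np.arange(-22.5, -15.5, dm)
    n_phot, _ = np.histogram(M_phot, bins)
    n_z, _ = np.histogram(M_spec, bins)
    n_m, _ = np.histogram(np.asarray(M_spec)[np.asarray(is_member)], bins)
    f_m = np.where(n_z > 0, n_m / np.maximum(n_z, 1), 0.0)
    phi = n_phot * f_m / (dm * area)
    return 0.5 * (bins[1:] + bins[:-1]), phi

def schechter(M, phi_star, M_star, alpha):
    # Schechter function expressed in absolute magnitudes
    x = 10.0 ** (0.4 * (M_star - M))
    return 0.4 * np.log(10.0) * phi_star * x ** (alpha + 1.0) * np.exp(-x)
\end{verbatim}
The two-step double-Schechter procedure described next would then first fit \texttt{schechter} over the bright range $-22.5<M_r<-19$, and afterwards fit the sum of two such components to the full range with the bright-end parameters held fixed.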
The Pearson test shows that the observed LF cannot be modelled by a single Schechter function at the 99$\%$ confidence level, due to the presence of a statistically significant upturn at $M_r > -18$ (see Fig. \ref{LF}). The observed LF is better parameterized using two Schechter functions. Because of the degeneracy in fitting a double Schechter profile, we followed a two-step process \citep[see][]{barkhouse2007}. First, we fitted the bright part ($b: -22.5<$ $M_{r}<-19.0$) of the LF, obtaining $M^*_b$ and $\alpha_b$. Then, these parameters were fixed when a double Schechter fit was performed to the total LF. Table \ref{tabella_data} shows the M$_r^*$ and $\alpha$ values of the faint ($f: -19.0<$ $M_{r}<-16.0$) and bright parts of the best-fit LF. \begin{figure} \includegraphics[width=1\linewidth]{lf} \caption{Black points are the observed spectroscopic LF of A85. The solid blue line shows the fit of the LF by a double Schechter function. The bright ($b$) and faint ($f$) components of the fit are represented by dashed and dotted lines, respectively. The 68$\%$ and 99$\%$ c.l. for the fitted parameters are shown in the insets.} \label{LF} \end{figure} In order to better understand the nature of the galaxies responsible for the upturn at the faint end of the LF, we classified galaxies as blue or red according to their $(g-r)$ colours. Thus, blue galaxies are those with $(g-r) < (g-r)_{RS} - 3 \sigma_{RS}$ and red ones are the remaining cluster members, where $(g-r)_{RS}$ is the colour of the red sequence of A85, and $\sigma_{RS}$ represents its dispersion\footnote{The red sequence of the cluster and its dispersion were measured in the magnitude range $-22.5 < $ M$_r < -19.0$.} (see Fig. \ref{cmd}). Figure \ref{LF_rb} shows the spectroscopic LF of the blue and red populations of A85. Naturally, red galaxies completely dominate in number. The red LF departs from the single Schechter shape, showing the characteristic flattening at intermediate luminosities followed by a (mild) upturn at the faint end. The blue LF, however, is well fitted by a single Schechter function. This is in qualitative agreement with previous work \citep[e.g.][P06 hereafter]{popesso2006}. Nevertheless, it is remarkable that the faint-end slopes of both the red and blue populations are virtually identical (see Table \ref{tabella_data}). \begin{table} \begin{center} \caption{Schechter Function Parameters.\label{tabella_data}} \begin{tabular}{ccc} \hline\hline mag interval &$M_{r}^* $ [mag] &$\alpha $\\ \hline -22.5 $<$ M$_{r}$ $<$ -19.0 &$-20.85\; ^{+0.14}_{-0.14}$ &$-0.79\; ^{+0.08}_{-0.09}$ \\ -19.0 $<$ M$_{r}$ $<$ -16.0 &$-18.36\; ^{+0.41}_{-0.40}$ &$-1.58\; ^{+0.19}_{-0.15}$ \\ \hline & red &\\ \hline -22.5 $<$ M$_{r}$ $<$ -19.0 &$-20.71\; ^{+0.13}_{-0.15}$ &$-0.63\; ^{+0.09}_{-0.08}$ \\ -19.0 $<$ M$_{r}$ $<$ -16.0 &$-17.90\; ^{+0.35}_{-0.19}$ &$-1.46\; ^{+0.18}_{-0.17}$ \\ \hline & blue &\\ \hline -22.5 $<$ M$_{r}$ $<$ -16.0 &$-21.29 \; ^{+0.36}_{-0.44}$ &$-1.43 \; ^{+0.06}_{-0.05}$ \\ \hline \end{tabular} \end{center} \end{table} \begin{figure} \includegraphics[width=1\linewidth]{lf_b_r} \caption{The spectroscopic LF of A85 (black points); blue and red diamonds show the LFs of the blue and red galaxies of A85. The full lines correspond to the Schechter function fits. The histograms are the LF of field red and blue galaxies (Blanton et al.
2005).} \label{LF_rb} \end{figure} \section{Discussion} In Fig.~\ref{LF_rb} we additionally show the LFs of the blue and red field populations from \cite{blanton2005}\footnote{Throughout this work, all the literature LFs are normalized so that they have the same number of counts as our A85 LF in the magnitude range $-22 < M_{r} < -19$.}. Their field LF is best described by a double Schechter function with faint-end slope $-1.5$. Figure \ref{LF_cfr} shows the comparison between our LF and other cluster and field LFs from the literature. We compare with the LFs of the Virgo and Abell\,2199 clusters \citep{rines2008} because they are the two deepest spectroscopic LFs of nearby clusters covering a significant fraction of their respective virial radii (out to $\sim 0.7\,R_{200}$, very similar to our coverage). In addition, we include the LF from P06 because it is a photometric LF obtained by stacking a large sample of 69 clusters, and it exhibits the most statistically meaningful example of an upturn. Finally, we compare with the field LF from \cite{blanton2005}. We note that all LFs in Fig.~\ref{LF_cfr} have been derived using SDSS photometry. The very steep upturn present in the stacked photometric LF ($\alpha \sim -2.2$) is not observed in any of the nearby clusters. While A85 shows a slightly steeper slope than Virgo and A2199, it is not nearly as steep as the photometric LF. The discrepancy between the LF of A85 and the composite cluster LF from P06 can be traced back to the different methodologies applied to derive them. First, we introduced a stringent colour cut in the selection of our spectroscopic targets (see Sect.2.1). Second, our spectroscopic LF is based on accurate cluster membership using the galaxy recessional velocity, while photometric LFs necessarily rely on a statistical background subtraction. The open circles in Fig. \ref{LF_cfr} show the photometric LF of A85, derived from the number counts of candidate cluster galaxies with $(g-r) < 1.0$, and after performing a statistical background subtraction using galaxies (with the same colour cut) from 50 SDSS random fields of the same area as our A85 survey. It is clear that the photometric LF of A85 closely matches the spectroscopic LF, except for a steeper faint-end slope, which moves towards the P06 value. We quantify this difference by fitting a power-law function to these LFs in the $ -18 \leq M_{r} \leq -16$ magnitude range. We prefer this simple approach to minimize the degeneracies present when fitting a double Schechter function (see Sect.3). We derive power-law faint-end slopes $\alpha_{f} = -1.5, -1.8,$ and $-2.1$ for the spectroscopic LF of A85, the photometric LF of A85, and the stacked photometric LF from P06, respectively. We note that cosmic variance of the cluster LF, or any mass dependence, cannot explain the discrepancy between the composite cluster LF and that of A85. P06 show that 90 per cent of their nearby cluster sample have individual LFs consistent with the composite one shown in Fig.~\ref{LF_cfr}. Moreover, the photometric LF for A85 itself from P06 is totally consistent with having a very steep faint end (see their Figure 5). This all suggests that the composite photometric LF from P06 suffers from contamination at the faint end, resulting in much steeper slopes than what is found using spectroscopic data. \begin{figure} \includegraphics[width=1\linewidth]{lf_cfr} \caption{Comparison between our LF and others present in the literature: the stacked photometric LF of 69 clusters from Popesso et al.
(2006), the spectroscopic LFs of A2199 and Virgo from Rines \& Geller (2008), and the field galaxy LF from Blanton et al. (2005). The photometric LF of A85 is also shown with open circles.} \label{LF_cfr} \end{figure} The field spectroscopic LF presented by \cite{blanton2005} shows an upturn and a faint-end slope ($\alpha=-1.5$) similar to that of A85. Contrary to some claims from photometrically derived cluster LFs, Fig. \ref{LF_cfr} shows that the faint-end slopes in A85 and in the field are consistent with each other -- i.e. clusters of these masses do not seem to contain a significant excess of dwarf galaxies with respect to the field. This, in turn, suggests that the environment may not play a major role in determining the abundance of low-mass galaxies \citep[cf.][]{tully2002}, but only acts to modify their star formation activity. While the overall abundance of dwarfs in the field and in clusters like A85 is the same, blue dwarfs dominate in the former environments, and red dwarfs in the latter. Environmental processes involving the loss of the galaxy gas reservoirs \citep[e.g.][]{quilis2000, bekki2002}, followed by the halt of star formation and the reddening of their stellar populations, are the obvious mechanisms invoked to explain the colour transformation of cluster dwarfs. It is however not yet clear whether the efficiency of these processes depends on the halo mass or not. \cite{lisker2013} propose that quiescent cluster dwarfs originate as the result of early \citep[see also][]{sanchez2012} \emph{and} prolonged environmental influence in \emph{both} group- and cluster-size haloes. Along these lines, \cite{wetzel2013} find that group preprocessing is responsible for up to half of the quenched satellite population in massive clusters. On the other hand, P06 suggest that the excess of red dwarfs in clusters is a threshold process that occurs within the cluster virial radius \citep[see also][]{sanchez2008}, but their exact abundance is nevertheless a function of clustercentric radius: the upturn becomes steeper as the distance from the centre increases. Our dataset calls for a study of the LF as a function of radial distance from the cluster centre, and an investigation of the properties of A85 dwarfs in substructures and infalling groups. This will be the subject of future papers in this series. \section{Conclusions} We obtained 2861 low-resolution spectra for galaxies down to $m_{r}=22$ mag within the virial radius of the nearby ($z=0.055$) galaxy cluster A85. This unique dataset allowed us to identify 438 galaxy cluster members, and build the spectroscopic LF of A85 down to $M^* + 6$. The resulting LF is best modelled by a double Schechter function due to the presence of a statistically significant upturn at the faint end. The amplitude of this upturn ($\alpha = -1.58^{+0.19}_{-0.15}$), however, is much smaller than that of most photometric cluster LFs previously reported in the literature. The faint-end slope of the LF in A85 is consistent, within the uncertainties, with that of the field. We investigate the nature of the galaxy population dominating the faint end of the LF of A85 by dividing the galaxies according to their $(g-r)$ colour. The red population dominates at low luminosities, and is mainly responsible for the upturn. This is different from the field LF: while a relatively steep upturn is also present in the red population, blue galaxies are the prevalent population.
The fact that the slopes of the spectroscopic LFs in the field and in a cluster as massive as A85 are similar suggests that the cluster environment does not play a major role in determining the abundance of low-mass galaxies, but it does influence the star formation history of dwarfs. \textbf{\textit{Acknowledgements.}} IA, AD and ALS acknowledge partial support from the INFN grant InDark and from the grant Progetti di Ateneo TO Call 2012 0011 `Marco Polo' of the University of Torino.
\section{Numerical and asymptotic solutions of equation for pressure} Eq.(11) of our main paper has the following form: \begin{gather} -\mu U=\frac{\partial }{\partial r}\left( H \langle k \rangle \frac{\partial p}{\partial r}\right) +\frac{H \langle k\rangle}{r}\frac{\partial p}{\partial r}+\frac{H \langle k\rangle}{r^{2}}\frac{\partial ^{2}p}{\partial \varphi ^{2}} \label{a1} \\ +\cos 2\varphi \left[ \frac{\partial }{\partial r}\left(H \Delta k \frac{\partial p}{\partial r}\right) -\frac{H\Delta k }{r}\frac{\partial p}{\partial r}-\frac{H\Delta k }{r^{2}}\frac{\partial ^{2}p}{\partial \varphi ^{2}}\right] \notag \\ -\frac{\sin 2\varphi }{r}\left[ 2H \Delta k \frac{\partial ^{2}p}{\partial r\partial \varphi }+\left( \frac{d\left(H \Delta k \right) }{dr}-\frac{2H\Delta k }{r}\right) \frac{\partial p}{\partial \varphi }\right] , \notag \end{gather} and its solution can be presented in terms of a cosine series: \begin{equation} \label{Gener_sol_p} p=p_{0}\left( r\right) +p_{1}\left( r\right) \cos 2\varphi +p_{2}\left( r\right) \cos 4\varphi +... \end{equation} Here we derive equations for the functions $p_{n}\left( r\right)$ in the series Eq.(\ref{Gener_sol_p}) and describe their numerical solution. When $\Delta k$ is small, we construct an asymptotic solution. To solve Eq.(\ref{a1}) we substitute (\ref{Gener_sol_p}) into (\ref{a1}) and collect terms proportional to $\cos 2n\varphi $. To eliminate singularities, we then use a logarithmic substitution for the variable $r$ and introduce the following dimensionless quantities: \begin{eqnarray*} \xi &=&\ln \left( \frac{r}{\sqrt{hR}}\right) ,\quad \eta =\frac{H}{h}=1+\frac{\exp \left( 2\xi \right) }{2}, \\ P_{n} &=&\frac{p_{n}h^{2}}{3\mu UR}, \\ G\left( \eta \right) &=&-\frac{\eta ^{3} \langle k \rangle}{\langle k \rangle\left( 0\right) },\quad D\left( \eta \right) =-\frac{\eta ^{3}\ \Delta k}{\Delta k\left( 0\right) } \end{eqnarray*} This leads to a system of ordinary differential equations \begin{equation} \mathrm{L}_{0}P_{0}+\varepsilon \mathrm{L}_{0}^{+}P_{1}=\frac{4 k }{\langle k \rangle\left( 0\right) }\exp \left( 2\xi \right) , \label{a5} \end{equation} \begin{equation} \mathrm{L}_{n}P_{n}+\varepsilon \mathrm{L}_{n}^{+}P_{n+1}+\varepsilon \mathrm{L}_{n}^{-}P_{n-1}=0\quad \text{for}\quad n>0, \label{a6} \end{equation} which is expressed in a more compact form by using the differential operators \begin{equation*} \mathrm{L}_{n}=\frac{d}{d\xi }\left( G\frac{d}{d\xi }\right) -4n^{2}G, \end{equation*} \begin{eqnarray*} \mathrm{L}_{n}^{+} &=&\frac{d}{d\xi }\left( \frac{D}{2}\frac{d}{d\xi }\right) +\left( 2n+1\right) D\frac{d}{d\xi } \\ &&+\left( n+1\right) \frac{dD}{d\xi }+2n\left( n+1\right) D, \end{eqnarray*} \begin{eqnarray*} \mathrm{L}_{n}^{-} &=&\frac{d}{d\xi }\left( \frac{D}{2}\frac{d}{d\xi }\right) -\left( 2n-1\right) D\frac{d}{d\xi } \\ &&-\left( n-1\right) \frac{dD}{d\xi }+2n\left( n-1\right) D. \end{eqnarray*} To solve a finite-difference version of the ODE system (\ref{a5}), (\ref{a6}) we resolve the truncated system with respect to $d^{2}P_{n}/d\xi ^{2}$, $n=1,...,\ N$, by Gaussian elimination, and then integrate the obtained system numerically by using the fourth-order Runge-Kutta method. This system has boundary conditions $P_{n}\rightarrow 0$ at both small and large $r$, i.e. at $\xi \rightarrow -\infty $ and $\xi \rightarrow +\infty$.
Therefore, the dimensionless gap and the permeabilities take the form \begin{equation*} \eta \rightarrow 1,\quad dG/d\xi =dD/d\xi \rightarrow 0\quad \text{as}\quad \xi \rightarrow -\infty , \end{equation*} \begin{equation*} \eta \simeq \frac{\exp \left( 2\xi \right) }{2},\quad G,\ D\simeq \eta ^{3}\simeq \exp \left( 6\xi \right) \quad \text{as}\quad \xi \rightarrow +\infty . \end{equation*} Asymptotic linearly independent solutions of (\ref{a5}), (\ref{a6}) can then be found in the form $P_{ni}=a_{ni}^{0}\exp \left( \lambda _{i}^{0}\xi \right) ,$ $i=1,...,\ 2N,$ as $\xi \rightarrow -\infty $ and $P_{ni}=a_{ni}^{\infty }\exp \left( \lambda _{i}^{\infty }\xi \right) $ as $\xi \rightarrow +\infty .$ The eigenvalues, $\lambda _{i}^{0}$ and $\lambda _{i}^{\infty },$ and eigenvectors, $\mathbf{a}_{i}^{0}$ and $\mathbf{a}_{i}^{\infty },$ were found using the IMSL routine DEVCRG. The system admits both exponentially growing and decaying solutions $P_{ni}$, whereas the boundary conditions require the decaying ones. Thus, we choose solutions with $\lambda _{i}^{0}>0$ and $\lambda _{i}^{\infty }<0$. To resolve all linearly independent solutions properly at $\left\vert \xi \right\vert \gg 1$, the orthonormalization method~\cite{godunov1961nsb,conte1966nsl} is applied. The boundaries of the integration domain are set at $r_{\min }=0.01$ and $r_{\max }=20$; the number of harmonics taken into account is $N=4.$ The calculations show that the expansion converges rapidly with $N,$ since the ratio $\left\vert P_{n+1}/P_{n}\right\vert $ is usually small for all $n$, as illustrated in Fig.~\ref{pn}. \begin{figure}[tbp] \includegraphics[width=7 cm]{pn.eps}\newline \caption{ Functions $P_{n}\left( r\right) =p_{n}h^{2}/3\protect\mu UR$ in series (\protect\ref{Gener_sol_p}) for $h\ll \min \{ b_2,L \},$ $b_{1}/h=0.5$, $\protect\phi _{2}=0.8$. The solid line corresponds to the analytical solution for $\tilde{P}=\tilde{p} h^{2}/3\protect\mu UR$, circles to a numerical evaluation of $P_0$, the dashed line to $10^3P_1$, and the dash-dotted line to $10^5P_2$.} \label{pn} \end{figure} The solution of Eq.~(\ref{a1}) is simple if the ratio $k_{\parallel }/k_{\perp }$ (and hence $\Delta k/ \langle k \rangle$) is constant, i.e., when the permeabilities depend on $r$ similarly, $k_{\parallel }^{\ast },\ k_{\perp }^{\ast }\sim g\left( r\right) .$ This is the case in a thin channel with $b_{1}=0$, when all the permeabilities are proportional to $H^{2}$~\cite{feuillebois.f:2009}. Then the solution includes only the axisymmetric term $p_{0}$. It can be verified directly that the solution \begin{equation} \frac{d\tilde p}{dr}=-\frac{\mu Ur}{2 H \langle k \rangle}=-\frac{12\mu Ur}{H^{3}\left(k_\parallel^{\ast }(H)+k_\perp^{\ast }(H)\right)}, \label{derivP} \end{equation} \begin{equation} \frac{\tilde p(H)}{12 \mu UR}=\int\limits_{H}^{\infty }{\frac{ dH^{\prime }}{H^{\prime }{}^{3} \left(k_\parallel^{\ast }+k_\perp^{\ast }\right)}} \label{Pressure} \end{equation} satisfies Eq.~(\ref{a1}), since the boundary conditions are homogeneous, $\partial \widetilde{p}/\partial \varphi =0$, and the terms \begin{equation*} \frac{\partial }{\partial r}\left(H \Delta k \frac{\partial \widetilde{p}}{\partial r}\right) =\frac{\Delta k}{2 \langle k \rangle}\mu U,\quad \frac{H\Delta k }{r}\frac{\partial \widetilde{p}}{\partial r}=\frac{\Delta k}{2 \langle k \rangle}\mu U, \end{equation*} cancel out.
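In this constant-ratio case, Eq.~(\ref{Pressure}) reduces to a one-dimensional quadrature that is straightforward to evaluate numerically. The following minimal sketch (our own illustration, not part of the original computation) does this in Python; the quadratic permeability model \texttt{k\_par = k\_perp = H**2} is a hypothetical placeholder standing in for the thin-channel case mentioned above.
\begin{verbatim}
import numpy as np
from scipy.integrate import quad

def k_par(H):    # hypothetical dimensionless permeability model
    return H ** 2

def k_perp(H):
    return H ** 2

def p_tilde(H):
    """Dimensionless axisymmetric pressure p~(H)/(12 mu U R), Eq. (Pressure)."""
    val, _ = quad(lambda Hp: 1.0 / (Hp ** 3 * (k_par(Hp) + k_perp(Hp))),
                  H, np.inf)
    return val

for H in (1.0, 2.0, 5.0):
    print(H, p_tilde(H), 1.0 / (8.0 * H ** 4))  # quadrature vs exact value
\end{verbatim}
For this model the integral has the closed form $1/(8H^{4})$, so the printed pairs agree to machine precision, which provides a quick consistency check on the quadrature.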
The analytical expressions for $\widetilde{P}=\widetilde{p}h^{2}(3\mu UR)^{-1}$ and the corresponding resistance force in the case $h\ll \min \{ b_{2},L \}$ are obtained in the Appendix of the main paper. The first three harmonics in series (\ref{Gener_sol_p}), plotted as functions of $r/\sqrt{hR}$, are presented in Fig.~\ref{pn}. It shows that the axisymmetric part of the pressure distribution is very close to the approximate solution $\widetilde{P}$, while the non-axisymmetric part is several orders of magnitude smaller. The permeability difference is typically small, $|\Delta k|\ll |\langle k\rangle|$, for example, for large distances between surfaces, $H\gg L.$ The asymptotic solution of Eqs.~(\ref{a5}), (\ref{a6}) can be found in this important case as a power series expansion $P_{n}=\sum\nolimits_{j=0}^{\infty}\varepsilon ^{j}P_{n}^{(j)}$ with respect to the small parameter $\varepsilon $. We do not construct the full asymptotic solution, but show that the isotropic part $P_{0}$, which alone contributes to the drag force, differs from the approximate solution $\widetilde{P}$ given by Eq.~(\ref{Pressure}) by $O\left( \varepsilon ^{2}\right) .$ Substituting the expansions into the system (\ref{a5}), (\ref{a6}) and collecting the terms proportional to $\varepsilon ^{j}$, one obtains equations governing the first three terms of the expansion: \begin{equation} \mathrm{L}_{0}P_{0}^{(0)}=\frac{4 k}{\langle k\rangle\left( 0\right) }\exp \left( 2\xi \right) , \label{a7} \end{equation} \begin{equation*} P_{n}^{(0)}=0\quad \text{for}\quad n>0, \end{equation*} \begin{equation*} \mathrm{L}_{1}P_{1}^{(1)}=-\mathrm{L}_{1}^{-}P_{0}^{(0)}, \end{equation*} \begin{equation*} P_{n}^{(1)}=0\quad \text{for}\quad n\neq 1, \end{equation*} \begin{equation*} \mathrm{L}_{0}P_{0}^{(2)}=-\mathrm{L}_{0}^{+}P_{1}^{(1)}, \end{equation*} \begin{equation*} \mathrm{L}_{2}P_{2}^{(2)}=-\mathrm{L}_{2}^{-}P_{1}^{(1)}, \end{equation*} \begin{equation*} P_{1}^{(2)}=0,\ P_{n}^{(2)}=0\quad \text{for}\quad n>2. \end{equation*} Equation (\ref{a7}) is the dimensionless form of Eq.~(\ref{derivP}). For further terms, one can deduce that \begin{equation*} P_{n}=\varepsilon ^{n}P_{n}^{(n)}+\varepsilon ^{n+2}P_{n}^{(n+2)}+\varepsilon ^{n+4}P_{n}^{(n+4)}+... \end{equation*} Thus the first-order correction to the pressure field is $\varepsilon P_{1}^{(1)},$ while the correction to the isotropic part, $\varepsilon ^{2}P_{0}^{(2)},$ is much smaller. \iffalse In the most practical situations the permeability ratio $\left\vert \Delta k/\langle k\rangle\right\vert $ is small and/or weakly dependent on $r$. As a result of both factors, numerical solution coincides well with approximate solution $\widetilde{P}_{0}$ (see Fig.~\ref{pn}). \fi \bibliographystyle{rsc}
\section{Introduction} Westerlund (1961) discovered that the 41-day Cepheid RS Pup was surrounded by a remarkable nebulosity. This is in the shape of rudimentary rings but with much distorted structure and condensations. Havlen (1972) showed that portions of the nebulosity varied in the period of the Cepheid but with various phase lags. A very beautiful set of measurements of phase lags at various points in the nebula has recently been obtained by Kervella et al. (2008) (hereafter Kervella et al.). In general, the expected phase lag at a point $i$ may be written: \begin{equation} (N_{i} +\Delta \phi_{i}) = 5.7755\times 10^{-3}\, \frac{D\,\theta_{i} (1+ \sin \alpha_{i})}{P\cos \alpha_{i}} \end{equation} Here $\Delta \phi_{i}$ is the fractional phase lag, $N_{i}$ the whole number of pulsation periods elapsed, $D$ is the distance to RS Pup in parsecs, $\theta_{i}$ is the angular distance of $i$ from the star in arcsec, $P$ is the pulsation period in days and $\alpha_{i}$ is the angle between the line joining the star to $i$ and the plane of the sky (positive if $i$ is further away than the star, negative if it is nearer). The measured quantities are $\Delta \phi_{i}$ and $\theta_{i}$. $P$ is assumed known and here it is taken as 41.4389 days (Kervella et al.). In an attempt to determine $D$, Kervella et al. assume $\alpha_{i} = \rm {constant} =0$. That is, they assume that all the features measured by them lie in the plane of the sky, and the values of $N_{i}$ are then chosen to obtain the best fit to this model. The justification for this assumption is that if the nebulosity consisted of a series of thin, uniform, spherical shells centred on the star, then the deviation of all measured points from the plane of the sky would be small. However an examination of the structure of the nebulosity (for instance from the figures in Kervella et al.) shows that it is far from corresponding to this idealized model. There is much distortion and density variation in the rudimentary rings. Kervella et al. place special emphasis on the ten condensations or blobs shown in their Fig.~7. The existence of such blobs is not consistent with the idealized model and leaves open the question of whether they or other features are actually in, or near, the plane of the sky. In view of these uncertainties it cannot be claimed that a definitive distance to RS Pup can be found based on the ``in-the-plane'' assumption. In the next section this assumption is dropped and it is shown that a simple and astrophysically interesting model for the nebulosity is found if a distance for RS Pup is adopted from a period-luminosity relation. \section{An equatorial disc model} van Leeuwen et al. (2007) established a reddening-free period-luminosity relation in $V,I$ based on HST (Benedict et al. 2007) and revised Hipparcos parallaxes. This together with the data in table A1 of that paper leads to a predicted distance of 1728pc for RS Pup \footnote{The distance, $1830^{+109}_{-94}$pc, derived from the pulsational parallax by Fouqu\'{e} et al. (2007), is not significantly different from this.}. Adopting this distance it is possible to use eq.~1 to study the three dimensional structure of the nebulosity. In principle the values of $N_{i}$ can be arbitrarily assigned. However they should obviously be chosen to account for apparent continuities in the structure and to conform to some simple, physically reasonable model. It was quickly found by trial and error that there is a consistent set of values of $N_{i}$ in which the points measured by Kervella et al.
are further away than the star on the south side and nearer on the north, i.e. an inclined disc model is indicated. This is indeed the simplest model, if the uniform spherical shell model is rejected. In such a model the values of $N_{i}$ have to be chosen such that $(N_{i} +\Delta \phi_{i})/\theta_{i}$ values are as near constant as possible in a given direction from the star and vary smoothly with direction. The details are given in Table 1. This contains data on the 31 points observed by Kervella et al. and I am greatly indebted to them for supplying details of their observations which were not given in the original paper. The Table lists:\\ 1. Position number, $i$.\\ 2. Angular distance from the star, $\theta_{i}$, in arcsec.\\ 3. Azimuth of the point relative to the star, $\beta_{i}$, measured from north through east, in degrees.\\ 4. $\Delta \phi_{i}$ and its standard error.\\ 5. The value of $N_{i}$ ($=N_{i}^{K}$) adopted by Kervella et al. to fit their model assumptions.\\ 6. The distance, $d_{y}^{K}$, behind (positive) or in front (negative) of the plane of the sky through the star. This is found by using eq.~1 together with adopted values of $N_{i}$ and $D$ to derive $\alpha_{i}$ in each case. Then, \begin{equation} d_{y} = 4.848\times 10^{-2}\, \theta_{i} D \tan\alpha_{i} \end{equation} where $d_{y}$ is in units of $10^{-4}$pc. For $d_{y}^{K}$ the value of $D$ estimated by Kervella et al. (1992 pc) was combined with their $N_{i}^{K}$ values.\\ 7. The value of $N_{i}$ adopted in the present paper.\\ 8. The distance, $d_{y}$, behind or in front of the plane of the sky through the star, assumed to be at its PL distance (1728 pc), adopting the revised values of $N_{i}$. The units are again $10^{-4}$pc.\\ 9. The perpendicular distance $d_{x}$ of the point from the intersection of the disc with the plane of the sky, projected onto the plane of the sky, in the same units as $d_{y}$. This is given by: \begin{equation} d_{x} = 83.77\,\theta_{i} \sin(\beta_{i} -\gamma) \end{equation} where $\gamma$ is the angle (azimuth) at which the plane of the disc cuts the plane of the sky. In this test of the model this is taken as close to the line from the star to point 9 (i.e. $\gamma = 80^{\circ}$).\\ \begin{table*} \centering \caption{The phase lag observations of Kervella et al.
with derived linear positions} \begin{tabular}{rrrrrrrrr} \hline $i$ & $\theta_{i}$ & $\beta_{i}$ & $\Delta \phi_{i} \pm s.e.\;\;\;\;$ & $N_{i}^{K}$ & $d_{y}^{K}$ & $N_{i}$ & $d_{y}$ & $d_{x}$\\ \hline 1 & 21.10 & 139 & $0.983\pm 0.020$ & 5 & 43 & 5 & 290 & 1515\\ 2 & 21.16 & 170 & $0.809\pm 0.012$ & 5 & --24 & 5 & 233 & 1773\\ 3 & 21.65 & 196 & $0.989\pm 0.013$ & 5 & --7 & 5 & 252 & 1630\\ 4 & 16.58 & 186 & $0.576\pm 0.009$ & 4 & --10 & 4 & 190 & 1335\\ 5 & 16.03 & 167 & $0.504\pm 0.083$ & 4 & 19 & 4 & 210 & 1341\\ 6 & 12.89 & 213 & $0.102\pm 0.023$ & 3 & --179 & 3 & --1 & 790\\ 7 & 10.96 & 249 & $0.098\pm 0.023$ & 3 & 19 & 3 & 148 & 175\\ 8 & 29.28 & 312 & $0.207\pm 0.036$ & 8 & 25 & 6 & --313 & --1933\\ 9 & 19.42 & 80 & $0.697\pm 0.016$ & 5 & 103 & 4 & 7 & 0\\ 10 & 17.28 & 149 & $0.602\pm 0.014$ & 4 & --70 & 4 & 146 & 1351\\ 11 & 11.03 & 252 & $0.108\pm 0.004$ & 3 & 16 & 3 & 146 & 129\\ 12 & 11.84 & 201 & $0.692\pm 0.010$ & 3 & 133 & 3 & 259 & 850\\ 13 & 16.26 & 166 & $0.462\pm 0.001$ & 4 & --17 & 4 & 179 & 1359\\ 14 & 16.45 & 67 & $0.526\pm 0.001$ & 4 & --14 & 3 & --161 & --310\\ 15 & 16.98 & 148 & $0.572\pm 0.033$ & 4 & --49 & 4 & 159 & 1319\\ 16 & 17.04 & 187 & $0.589\pm 0.006$ & 4 & --49 & 4 & 160 & 1365\\ 17 & 17.58 & 86 & $0.394\pm 0.009$ & 4 & --178 & 4 & 55 & 154\\ 18 & 18.84 & 247 & $0.543\pm 0.005$ & 5 & 108 & 4 & 2 & 355\\ 19 & 20.59 & 170 & $0.770\pm 0.015$ & 5 & 20 & 5 & 263 & 1725\\ 20 & 20.99 & 140 & $0.965\pm 0.021$ & 5 & 48 & 5 & 293 & 1523\\ 21 & 24.43 & 255 & $0.925\pm 0.002$ & 6 & 50 & 5 & 15 & 178\\ 22 & 24.75 & 84 & $0.182\pm 0.025$ & 6 & --252 & 6 & 76 & 145\\ 23 & 26.76 & 200 & $0.460\pm 0.003$ & 7 & 12 & 7 & 329 & 1941\\ 24 & 28.35 & 273 & $0.081\pm 0.004$ & 7 & --289 & 7 & 87 & --534\\ 25 & 28.74 & 156 & $0.717\pm 0.010$ & 8 & 247 & 7 & 263 & 2336\\ 26 & 29.61 & 312 & $0.138\pm 0.001$ & 8 & --27 & 6 & --373 & --1955\\ 27 & 30.06 & 70 & $0.158\pm 0.015$ & 8 & --65 & 7 & --28 & --437\\ 28 & 33.25 & 214 & $0.570\pm 0.055$ & 9 & 117 & 8 & 190 & 2004\\ 29 & 34.81 & 83 & $0.454\pm 0.006$ & 9 & --72 & 8 & 25 & 153\\ 30 & 37.76 & 275 & $0.956\pm 0.004$ & 10 & 163 & 8 & --48 & --819\\ 31 & 39.14 & 283 & $0.938\pm 0.008$ & 10 & 26 & 8 & --174 & --1281\\ \hline \end{tabular} \end{table*} Fig. 1 shows a plot of $d_{y}$ against $d_{x}$. This indicates a clear, apparently linear relation between the two quantities. That is, the points lie in a tilted plane, presumably an equatorial disc. The line shown is a least squares fit through the origin and is given by: \begin{equation} d_{y} = 0.143(\pm 0.010)d_{x} \end{equation} The tilt of the disc to the plane of the sky is $\tan^{-1} 0.143 = 8^{\circ}.1\pm0^{\circ}.6$. The rms scatter about the line in $d_{y}$ is only $73\times 10^{-4}$pc, much smaller than the diameter of the disc, which is $\sim 6800\times 10^{-4}$pc out to the limits of the phase-lag survey. This rms scatter may be compared to the rms scatter of $d_{y}^{K}$, which is $111\times 10^{-4}$pc. No attempt has been made to optimize the disc model by, for instance, varying $\gamma$ to find a better fit. This might reduce the rms scatter slightly. However this is already small compared to the diameter of the part of the disc surveyed, indicating a relatively thin disc. A realistic disc will have some significant depth perpendicular to its axis and, indeed, Kervella et al. note that their observations at some positions suggest smoothing attributed to a non-zero depth in the line of sight.
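The geometry of eqs.~1--3 is easy to check numerically. The rough sketch below (our own illustration, not the author's code) inverts eq.~1 for $\alpha_{i}$ via the identity $(1+\sin\alpha)/\cos\alpha=\tan(\pi/4+\alpha/2)$ and reproduces columns 8 and 9 of Table 1 for the first two points; running the same fit through the origin over all 31 points recovers a slope close to the 0.143 of eq.~4.
\begin{verbatim}
import numpy as np

D, P, gamma = 1728.0, 41.4389, np.radians(80.0)  # pc, days, disc azimuth

def alpha_i(N, dphi, theta):
    # Invert eq. 1 using (1 + sin a)/cos a = tan(pi/4 + a/2)
    C = (N + dphi) * P / (5.7755e-3 * D * theta)
    return 2.0 * (np.arctan(C) - np.pi / 4.0)

def d_y(N, dphi, theta):   # eq. 2, in units of 1e-4 pc
    return 4.848e-2 * theta * D * np.tan(alpha_i(N, dphi, theta))

def d_x(theta, beta):      # eq. 3 (the constant 83.77 assumes D = 1728 pc)
    return 83.77 * theta * np.sin(np.radians(beta) - gamma)

rows = [(21.10, 139.0, 0.983, 5),   # point 1 -> d_y ~ 290, d_x ~ 1515
        (21.16, 170.0, 0.809, 5)]   # point 2 -> d_y ~ 233, d_x ~ 1773
dy = np.array([d_y(N, f, t) for t, b, f, N in rows])
dx = np.array([d_x(t, b) for t, b, f, N in rows])
print(np.round(dy), np.round(dx))
# Slope of a least-squares fit through the origin; with all 31 table
# rows this comes out close to the 0.143 of eq. 4:
print(np.sum(dx * dy) / np.sum(dx * dx))
\end{verbatim}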
An inclined disc model broadly similar to the one just discussed could probably be derived for other distances, if there were good evidence for these. However it should be noted that to take the distance as a free parameter in an attempt to reduce the rms scatter in the model is to assume that the disc must conform as closely as possible to an idealized model which is perfectly flat and of negligible thickness. There is no a priori justification for such an assumption. \begin{figure} \includegraphics[width=8.5cm]{RSnewfig4.ps} \caption{A plot of the distances, $d_{y}$, from the plane of the sky through the star against $d_{x}$, the perpendicular distance, in the plane of the sky, to a line of azimuth $\gamma =80^{\circ}$. The units are $10^{-4}$pc. Note the expanded $d_{y}$ scale.} \end{figure} An equatorial disc model for RS Pup is particularly interesting from the astrophysical point of view. Whilst it has seemed possible, for instance, that Cepheids might have ejected shells in a previous evolutionary phase, it has been puzzling that only for RS Pup is such a prominent structure found. The interpretation of the nebulosity as a disc at a small angle to the plane of the sky opens up the possibility of a deeper understanding of this phenomenon and its rarity. Obvious possibilities are loss of mass in the equatorial plane by unusually rapid rotation and/or binary interaction at an earlier evolutionary stage. \section{Conclusion} The structure seen in the RS Pup nebulosity calls into question the assumption that phase-lag observations all refer to points close to the plane of the sky, and hence the distance estimates based on this assumption. An inclined disc model at a distance predicted by a period-luminosity relation gives a good fit to the data and opens new possibilities for understanding the system, including a possible interacting binary model. \section*{Acknowledgments} I am grateful to Dr Pierre Kervella for a very helpful exchange of correspondence and to him and his colleagues for sending me details of their beautiful work not given in their paper. I am also grateful to Dr Kurt van der Heyden for his help and to the referee for suggestions.
\section{Introduction} Hamilton's principle is one of the basic principles of Physics. Anthony \cite{Anthony} states: ``In theoretical physics, a theory is often considered to be complete if its variational principle in the sense of Hamilton is known''. When a Hamilton's principle is known, all the information concerning the processes of a particular system is contained in its Lagrangian. Once known, Hamilton's principle can be used in many different ways. For example, it can serve as the basis for obtaining first integrals via N\"{o}ther's theory, or to generate approximate solutions to the relevant system of equations by the use of the Ritz procedure (cf.\ \cite{VujanovicAtanackovic}). In this paper we investigate necessary conditions for solutions of fractional variational problems (Euler-Lagrange equations). Such investigations were initiated in \cite{Riewe96, Riewe97}, and continued in \cite{Agrawal02} - \cite{AKP-fracEL} and \cite{BalTru08}; see also \cite{Rekh} for the importance of introducing fractional derivatives into the Lagrangian density of Hamilton's principle. In general, we refer to \cite{Dacorogna} - \cite{Jumarie07}, \cite{MillRoss, OldSpa, Podlubny}, \cite{SamkoKM} - \cite{WestBolognaGrigolini}, for different aspects of the calculus of variations and fractional calculus, motivations and applications. When a fractional variational problem is studied, a natural question arises: how should one choose $\alpha$, the order of the fractional derivative, in order to achieve the minimal value of the functional under consideration? More generally, one can address the same question for any problem involving fractional operators. Usually, in applications, a good choice of $\alpha$ is obtained by experiments, numerical methods or computational simulations. However, experimental results give different values for $\alpha$ within a certain interval. In this paper we propose a method which a priori gives the values of $\alpha$ that optimize the variational problem under consideration, following the fundamental minimization principle of Hamilton's action. In fact, we address the question of finding stationary points for Hamilton's action integral with a fractional Lagrangian in a more general setting. Namely, we allow the stationarity of the action integral with respect to a set of admissible functions \textit{and} with respect to the order of fractional derivatives appearing in the Lagrangian. To the best of our knowledge, the problem in which both $y$ and $\alpha$ are varied has not been analyzed so far. It leads to stationarity conditions as a basis for a generalized Hamilton's principle for the action integral \begin{equation} \label{eq:fvp-alpha} I[y,\alpha]: = \int_{0}^{b}L( t,y( t),{}_{0}D_{t}^{\alpha }y,\alpha)\, dt,\quad y\in \cU,\alpha\in A:=[0,\alpha_0],\alpha_0\leq 1, \end{equation} where $\cU$ is a set of admissible functions: Find \begin{equation}\label{eq:min} \min_{(y,\alpha)\in \cU\times A} I[y, \alpha] \qquad \mbox{or} \end{equation} \begin{equation}\label{eq:minmin} \min_{\alpha\in A}(\min_{y\in\cU} I[y, \alpha]) \qquad \mbox{or} \end{equation} \begin{equation}\label{eq:minimin} \min_{y\in\cU}( \min_{\alpha\in A} I[y, \alpha]). \end{equation} In this paper we analyze stationarity conditions for (\ref{eq:min}) and (\ref{eq:minmin}). Stationarity conditions with respect to $\alpha$ in (\ref{eq:minimin}) are more difficult to obtain, and, contrary to (\ref{eq:minmin}), this case is less natural in applications.
Note that in (\ref{eq:min}), (\ref{eq:minmin}) and (\ref{eq:minimin}) one can look for maxima instead of minima. So, our general problem is the determination of stationary points. So far, the parameter $\alpha$, the order of the fractional derivative, has been determined experimentally (cf.\ e.g. \cite{Rogers83}). The present approach offers a rational way of choosing the precise $\alpha$. The paper is organized as follows. At the end of this Introduction, we recall definitions and properties of fractional derivatives. In Section \ref{sec:Hamilton} we present a framework in which we shall study variational problems (\ref{eq:min}) and (\ref{eq:minmin}). Then in Section \ref{SecOptCond} we derive stationarity conditions for (\ref{eq:min}). Section \ref{SecEqProbl} is devoted to additional assumptions which provide equivalence of problems (\ref{eq:min}) and (\ref{eq:minmin}). Results obtained in the previous sections are illustrated by several examples in Section \ref{SecEx}. Moreover, the examples of this section give further motivation for our investigation. In the last remark of Section \ref{SecEx} we propose a new formulation of a fractional variational problem. Throughout this paper we shall use the following notation. The mapping $(t,\alpha)\mapsto {}_0D_t^\alpha(y)$, which defines the left Riemann-Liouville fractional derivative of order $\alpha$, will be denoted by ${}_0D_t^\alpha y$, or by ${}_0D_t^\alpha y(t)$. Recall, $$ {}_{0}D_{t}^{\alpha}y:=\frac{1}{\Gamma(1-\alpha)}\frac{d}{dt} \int_{0}^{t}\frac{y(\tau)}{(t-\tau)^{\alpha}} \, d\tau, \quad t\in[0,b],\, 0\leq\alpha <1, $$ where $\Gamma$ is the Euler gamma function, and it exists whenever \begin{equation} \label{eq:acfd} [0,b]\ni t\mapsto \int_0^t \frac{y(\tau)}{(t-\tau)^{\alpha}}\, d\tau \end{equation} is an absolutely continuous function. Recall that the space of absolutely continuous functions is denoted by $AC([0,b])$ and is supplied with the norm $||f||=\sup_{x\in[0,b]} |f(x)|$ (clearly, it is not a Banach space). For example, (\ref{eq:acfd}) is absolutely continuous if $y\in AC([0,b])$. However, there are some cases when, with less regularity in $y$, we still have a well-defined operator of fractional differentiation (cf.\ \cite{SamkoKM}). For instance, ${}_{0}D_{t}^{\alpha}y$ exists for functions with integrable singularities (a continuous and locally integrable function $f$ in $(0,b]$ has an integrable singularity at $\tau=0$ of order $r<1$ if $\lim_{\tau\to 0}\tau^r f(\tau)\not= 0$). In particular we can take $y(t)=t^{-\mu}$, $t\in(0,b]$ (for any $b>0$), $0<\mu <1$. Then we obtain the so-called Euler formula (cf.\ \cite[(2.26)]{SamkoKM}) $$ {}_0D_t^\alpha t^{-\mu} =\frac{\Gamma(1-\mu)}{\Gamma(1-\mu-\alpha)} \frac{1}{t^{\mu+\alpha}}, \quad t\in(0,b]. $$ The right Riemann-Liouville fractional derivative of order $\alpha$ is defined as $$ {}_{t}D_{b}^{\alpha}y:=\frac{1}{\Gamma(1-\alpha)} \left(-\frac{d}{dt}\right) \int_{t}^{b}\frac{y(\tau)}{(\tau-t)^{\alpha}} \, d\tau, \quad t\in[0,b],\, 0\leq\alpha <1. $$ The conditions for its existence are similar to those for the left fractional derivative. In the sequel we shall consider cases involving both fractional derivatives and work with integrable functions for which these derivatives (or one of them) exist. In such cases the notation ${}_{0}D_{t}^{\alpha}y$, resp.\ ${}_{t}D_{b}^{\alpha}y$, $t\in[0,b]$, means that $y$ and ${}_{0}D_{t}^{\alpha}y$, resp.\ ${}_{t}D_{b}^{\alpha}y$, are considered as integrable functions which can take values $+\infty$ or $-\infty$ at some points.
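For numerical experiments with the problems above, the left Riemann-Liouville derivative can be approximated by the Gr\"{u}nwald-Letnikov sum. The minimal sketch below (our own illustration, not part of the paper; the grid size $n$ is an arbitrary choice) checks the approximation against the closed-form derivative of a power function, a relative of the Euler formula above.
\begin{verbatim}
import math

def rl_derivative(y, t, alpha, n=4000):
    # Gruenwald-Letnikov sum h^(-alpha) * sum_j w_j * y(t - j*h), which
    # converges to 0Dt^alpha y for sufficiently regular y as h -> 0.
    h = t / n
    w, acc = 1.0, y(t)                # j = 0 term, w_0 = 1
    for j in range(1, n + 1):
        w *= (j - 1 - alpha) / j      # w_j = (-1)^j * binom(alpha, j)
        acc += w * y(t - j * h)
    return acc / h ** alpha

# Check against 0Dt^a t^nu = Gamma(nu+1)/Gamma(nu+1-a) * t^(nu-a):
alpha, nu, t = 0.5, 2.0, 1.0
exact = math.gamma(nu + 1) / math.gamma(nu + 1 - alpha) * t ** (nu - alpha)
print(rl_derivative(lambda s: s ** nu, t, alpha), exact)  # both ~ 1.5046
\end{verbatim}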
Recall (\cite{Nakhushev}) that ${}_0D_t^\alpha y\to y'$ and ${}_tD_b^\alpha y\to -y'$ in $\cC([0,b])$, as $\alpha\to 1^{-}$, whenever $y\in \cC^1([0,b])$. Also, we shall make use of Caputo fractional derivatives. The left, resp.\ right, Caputo fractional derivative of order $\alpha\in[0,1)$ is defined as $$ {}_{0}^{C}D_{t}^{\alpha}y: = \frac{1}{\Gamma(1-\alpha)} \int_{0}^{t} \frac{y'(\tau)}{(t-\tau)^{\alpha}}\, d\tau \quad \mbox{resp.} \quad {}_{t}^{C}D_{b}^{\alpha}y: = \frac{1}{\Gamma(1-\alpha)} \int_{t}^{b} \frac{-y'(\tau)}{(\tau-t)^{\alpha}}\, d\tau. $$ One can show that for $y\in AC([0,b])$ and $t\in [0,b]$, $$ {}_0D_t^\alpha y= {}_{0}^{C}D_{t}^{\alpha}y + \frac{1}{\Gamma (1-\alpha)} \frac{y(0)}{t^{\alpha}}, \quad {}_tD_b^\alpha y= {}_{t}^{C}D_{b}^{\alpha}y + \frac{1}{\Gamma (1-\alpha)} \frac{y(b)}{(b-t)^\alpha}, $$ (cf.\ e.g. \cite{KilSriTru}). Therefore, ${}_{0}D_{t}^{\alpha}y={}_{0}^{C}D_{t}^{\alpha}y$, resp.\ ${}_{t}D_{b}^{\alpha}y={}_{t}^{C}D_{b}^{\alpha}y$, whenever $y(0)=0$, resp.\ $y(b)=0$. \section{Formulation of the problem} \label{sec:Hamilton} We investigate stationary points of (\ref{eq:fvp-alpha}) for $\alpha\in[0,\alpha_0]$ and all admissible functions $y$, whose properties will be specified in the sequel. We shall distinguish two cases: $\alpha_0$ strictly less than $1$ and $\alpha_0=1$. In the case $\alpha_0<1$, set $$ \mathcal{U}_l:=\{y\in L^1([0,b])\,|\, {}_0D_t^\alpha y\in L^1([0,b])\}. $$ Obviously, $AC([0,b])$ is a subset of $\cU_l$. In the case $\alpha_0=1$ we assume that $y\in\mathcal{U}_l$ and that, in addition, ${}_0D_t^1 y$ exists and equals $y'$, which is an integrable function. Let us note that one can consider $\mathcal{U}_l$ defined with $L^p([0,b])$ (or its subspaces) instead of $L^1([0,b])$ (see Remark \ref{rem:lin ops}). In general, we shall use the notation \beq \label{admisset} \mathcal{U}:=\{y\in \cU_l\,|\, y \mbox{ satisfies specified boundary conditions}\}. \eeq We shall sometimes write $\mathcal{U}$ also for $\mathcal{U}_l$ (then the set of specified boundary conditions is empty). In the sequel, the Lagrangian $L(t,y(t),{}_{0}D_{t}^{\alpha}y,\alpha)$ (the Lagrangian density, in Physics) satisfies: \beq \label{eq:conditions on L} \left.\ba{c} L\in\cC^1([0,b]\times \R\times \R\times [0,1])\\ \mbox{and}\\ t\mapsto\pd_3 L(t,y(t),{}_0D_t^\alpha y,\alpha) \in \cU_r, \mbox{ for every } y\in \cU_l \ea\right\} \eeq where $\mathcal{U}_r:=\{y\in L^1([0,b])\,|\, {}_tD_b^\alpha y\in L^1([0,b])\}$. Recall, our generalization of Hamilton's principle is realized through the determination of $(y^{\ast},\alpha^{\ast})\in\mathcal{U}\times A$ such that \begin{equation} \label{eq:fvp-alpha-min} \underset{(y,\alpha) \in \mathcal{U}\times A}{\min} \int_{0}^{b} L(t,y(t),{}_{0}D_{t}^{\alpha}y,\alpha)\, dt = \int_{0}^{b} L( t,y^{\ast}(t),{}_{0}D_{t}^{\alpha^{\ast}}y^{\ast}, \alpha^{\ast})\,dt. \end{equation} There are two special cases of (\ref{eq:fvp-alpha-min}). The first one is obtained when $A=\{1\}$. Then, since $\left.{}_0D_t^\alpha y\right\vert_{\alpha=1} =y'(t)$ for $y\in\cC^1([0,b])$, the solution $y^{\ast}$ of (\ref{eq:fvp-alpha-min}) satisfies the classical Euler-Lagrange equation $$ \frac{d}{dt}\frac{\partial L}{\partial y'}- \frac{\partial L}{\partial y} =0.
$$ If $A$ has a single element $A=\{\alpha\}$, $0< \alpha < 1$, then $\underset{(y,\alpha)\in\mathcal{U}\times \{\alpha\}}{\min} I[y,\alpha]$ leads to the fractional Euler-Lagrange equation (cf.\ \cite{Agrawal02, AKP-fracEL}) $$ {}_{t}D_{b}^{\alpha}\frac{\partial L}{\partial {}_{0}D_{t}^{\alpha}y} +\frac{\partial L}{\partial y}=0. $$ We proceed with finding stationary points related to (\ref{eq:fvp-alpha}). \section{Optimality conditions} \label{SecOptCond} A necessary condition for the existence of solutions to variational problem (\ref{eq:fvp-alpha-min}) is given in the following proposition. \bprop \label{prop:ELeqs} Let $L$ satisfy (\ref{eq:conditions on L}). Then a necessary condition for functional (\ref{eq:fvp-alpha}) to have an extremal point at $(y^{\ast},\alpha^{\ast})\in\cU\times A$ is that $(y^{\ast},\alpha^{\ast})$ is a solution of the system of equations \begin{eqnarray} \frac{\partial L}{\partial y}+{}_{t}D_{b}^{\alpha} \frac{\partial L}{\partial{}_{0}D_{t}^{\alpha}y} &=& 0, \label{eq:ELeqs-y} \\ \int_{0}^{b} \left(\frac{\partial L}{\partial{}_{0}D_{t}^{\alpha }y}G( y,\alpha) +\frac{\partial L}{\partial \alpha }\right)\, dt &=& 0, \label{eq:ELeqs-alpha} \end{eqnarray} where $$ G(y,\alpha)=\frac{\partial{}_{0}D_{t}^{\alpha}y}{\partial\alpha} =\frac{d}{dt}(f_{1}\ast_t y)(t,\alpha), \, f_{1}(t,\alpha)=\frac{1}{t^{\alpha}\Gamma(1-\alpha)} [\psi(1-\alpha)-\ln t], \, t>0, $$ with the Euler $\psi$-function $\psi(z)=\frac{d}{dz}\ln\Gamma(z)$, and $(f_{1}\ast_t y)(t,\alpha) =\int_{0}^{t}f_{1}(\tau,\alpha)y(t-\tau)\, d\tau$. \eprop \pr Let $(y^{\ast},\alpha^{\ast})$ be an element of $\mathcal{U}\times A$ for which $I[y,\alpha]$ has an extremal value. Let $y(t) =y^{\ast}(t) +\varepsilon_{1}f(t)$, $\alpha =\alpha^{\ast}+\varepsilon_{2}$, $\varepsilon_1,\varepsilon_2\in\R$, with $f\in\cU_l$, where the boundary conditions on $f$ are specified so that the varied path $y^{\ast}+\varepsilon_{1}f$ is an element of $\mathcal{U}$. Then $I[y,\alpha]=I[y^{\ast}+\varepsilon_{1}f, \alpha^{\ast}+\varepsilon_{2}]=:I(\varepsilon_{1}, \varepsilon_{2})$. A necessary condition for an extremal value of $I[y,\alpha]$ is $$ \left.\frac{\partial I(\varepsilon_{1},\varepsilon_{2})}{\partial\varepsilon_{1}} \right\vert_{\varepsilon_{1}=0,\varepsilon_{2}=0}=0, \qquad \left.\frac{\partial I(\varepsilon_{1},\varepsilon_{2})}{\partial\varepsilon_{2}} \right\vert_{\varepsilon_{1}=0,\varepsilon_{2}=0}=0. $$ Therefore we obtain \begin{eqnarray} \int_{0}^{b}\left(\frac{\partial L}{\partial y}f(t) +\frac{\partial L}{\partial{}_{0}D_{t}^{\alpha}y}\, {}_{0}D_{t}^{\alpha }f\right)\, dt &=&0, \label{eq:11-1} \\ \int_{0}^{b}\left(\frac{\partial L}{\partial {}_{0}D_{t}^{\alpha}y}\, \frac{\partial {}_{0}D_{t}^{\alpha}y}{\partial\alpha} +\frac{\partial L}{\partial \alpha }\right)\, dt &=&0. \label{eq:11-2} \end{eqnarray} Applying the fractional integration by parts formula (cf.\ \cite{KilSriTru}): \begin{equation} \label{eq:frac int by parts} \int_{0}^{b} g(t)\cdot {}_{0}D_{t}^{\alpha }f(t)\, dt = \int_{0}^{b}\, f(t)\cdot {}_{t}D_{b}^{\alpha }g(t)\, dt, \end{equation} to (\ref{eq:11-1}), the latter is transformed into $$ \int_{0}^{b} \left(\frac{\partial L}{\partial y} +{}_{t}D_{b}^{\alpha }\frac{\partial L}{\partial{}_{0}D_{t}^{\alpha}y} \right)f(t)\, dt=0. $$ From this equation, using the fundamental lemma of the calculus of variations (see \cite[p.\ 115]{Dacorogna}), we conclude that condition (\ref{eq:ELeqs-y}) holds for the optimal values $y^{\ast}$ and $\alpha^{\ast}$.
In (\ref{eq:ELeqs-alpha}) the term $\frac{\partial {}_{0}D_{t}^{\alpha}y}{\partial\alpha}$ is transformed by the use of the expression \begin{eqnarray} \frac{\partial {}_{0}D_{t}^{\alpha}y}{\partial\alpha} &=&\psi(1-\alpha)\, {}_{0}D_{t}^{\alpha}y - \frac{1}{\Gamma(1-\alpha)}\frac{d}{dt} \int_{0}^{t} \frac{\ln(t-\tau) y(\tau)}{(t-\tau)^{\alpha}}\, d\tau \notag \\ &=&\frac{d}{dt}(f_{1}\ast_t y) (t,\alpha) \notag \\ &=& G(y,\alpha), \quad (y,\alpha)\in\cU\times A \label{eq:14} \end{eqnarray} (cf.\ \cite[p.\ 592]{AOP07}). We obtain (\ref{eq:ELeqs-alpha}) by substituting (\ref{eq:14}) into (\ref{eq:11-2}). \ep \brem In general, in solving equations (\ref{eq:ELeqs-y}) and (\ref{eq:ELeqs-alpha}), the most delicate task is the calculation of the expression $\frac{\partial{}_{0}D_{t}^{\alpha}y}{\partial \alpha}$. Although its general form (\ref{eq:14}) has been derived in \cite[p.\ 592]{AOP07}, various difficulties can still appear. We illustrate this by examples in Section \ref{SecEx}. However, the simplified form of $\frac{\partial{}_{0}D_{t}^{\alpha}y}{\partial \alpha}$, in some special cases, is important. Already in \cite{AOP07} it has been shown that for $y\in AC([0,b])$ \begin{eqnarray} \left.\frac{\partial {}_{0}D_{t}^{\alpha}y}{\partial\alpha} \right\vert_{\alpha =0^{+}} &=& -(\gamma +\ln t) y(0) -\int_{0}^{t}(\gamma +\ln \tau)\, y^{(1)}( t-\tau)\,d\tau \notag \\ &=& -(\gamma +\ln t) y(t) + \int_{0}^{t}\frac{y(t)-y(t-\tau)}{\tau}\, d\tau, \label{16bbb} \end{eqnarray} where $\gamma =0.5772156...$ is the Euler constant. (Another form of $\left.\frac{\partial {}_{0}D_{t}^{\alpha}y}{\partial\alpha} \right\vert_{\alpha =0^{+}}$ is also given in \cite[p.\ 111]{WestBolognaGrigolini}.) Let us obtain a simplified form of $\frac{\partial {}_{0}D_{t}^{\alpha}y}{\partial\alpha}$ at $\alpha=1^{-}$. In order to do that we use the method proposed in \cite{TarasovZaslavsky06}. We recall the expansion of $(t-\tau)^{\eps}/\Gamma(1+\eps)$ with respect to $\eps$, at $\eps=0$, with $\tau<t$ (cf.\ \cite[p.\ 401]{TarasovZaslavsky06}), which will be used in the sequel: \begin{equation} \label{eq:expansion} \frac{(t-\tau)^{\eps}}{\Gamma(1+\eps)} = \frac{e^{\eps\ln(t-\tau)}}{\Gamma(1+\eps)} = 1+\eps(\gamma+\ln(t-\tau))+ o(\eps). \end{equation} Assume now that $y\in C^{2}([0,b])$. Then, as in \cite{Podlubny}, for $t\in[0,b]$, \begin{eqnarray*} {}_{0}D_{t}^{\alpha }y &=& \frac{1}{\Gamma(1-\alpha)} \frac{d}{dt} \int_{0}^{t}\frac{y(\tau)}{(t-\tau)^{\alpha }}\, d\tau \\ &=& \frac{y(0)}{\Gamma(1-\alpha) t^{\alpha}} +\frac{1}{\Gamma(1-\alpha)} \int_{0}^{t}\frac{y^{(1)}(\tau)}{(t-\tau)^{\alpha}}\, d\tau \\ &=& \frac{y(0)}{\Gamma(1-\alpha) t^{\alpha }}+ \frac{y^{(1)}(0)}{\Gamma(2-\alpha)t^{\alpha -1}} +\frac{1}{\Gamma(2-\alpha)}\int_{0}^{t} (t-\tau)^{1-\alpha}y^{(2)}(\tau)\, d\tau. \end{eqnarray*} With $\alpha=1-\varepsilon$ we obtain \begin{equation} \label{eq:16ab} {}_{0}D_{t}^{1-\varepsilon }y = \frac{y(0)}{\Gamma(\varepsilon) t^{1-\varepsilon}} +\frac{y^{(1)}(0) t^{\varepsilon}}{\Gamma(1+\varepsilon)} +\frac{1}{\Gamma(1+\varepsilon)} \int_{0}^{t}(t-\tau)^{\varepsilon}y^{(2)}(\tau)\, d\tau. \end{equation} From (\ref{eq:16ab}) and (\ref{eq:expansion}) it follows that \begin{eqnarray} \left.\frac{\partial {}_{0}D_{t}^{\alpha }y}{\partial \alpha} \right\vert_{\alpha =1^{-}} &\!\!\!\!\!\!\!\!\!=\!\!\!\!& \left.
-\frac{\partial {}_{0}D_{t}^{1-\varepsilon}y}{\partial \varepsilon} \right\vert_{\varepsilon =0^{+}} \notag \\ &\!\!\!\!\!\!\!\!\!=\!\!\!\!& -\frac{y(0)}{t}-y^{(1)}(0) \ln t- \gamma y^{(1)}(t) - \int_{0}^{t}y^{(2)}(\tau) \ln(t-\tau)\, d\tau. \label{eq:16bbbb} \end{eqnarray} Assuming $y(0)=0$ in (\ref{eq:16bbbb}), we recover the results presented in \cite{TarasovZaslavsky06} and \cite{TofighiGolestani} for the Caputo fractional derivative. Since ${}_{0}D_{t}^{\alpha}y={}_{0}^{C}D_{t}^{\alpha}y$ when $y(0)=0$, it follows that with $y(0)=0$ (\ref{eq:16bbbb}) becomes $$ \left.\frac{\partial {}_{0}D_{t}^{\alpha}y}{\partial\alpha} \right\vert_{\alpha =1^{-}} = \left.\frac{\partial\, {}_{0}^{C}D_{t}^{\alpha}y}{\partial\alpha} \right\vert_{\alpha =1^{-}} = -y^{(1)}(0) \ln t-\gamma y^{(1)}(t) -\int_{0}^{t} y^{(2)}(\tau) \ln (t-\tau)\, d\tau. $$ \erem \brem \label{rem:lin ops} Functional (\ref{eq:fvp-alpha}) is a special case of the class of functionals with Lagrangians depending on linear operators, see \cite[p.\ 51]{Filippov}. Indeed, suppose that the Lagrangian $L$ depends on $t$, $y$ and $\cL y$, where $\cL:M\to L^p([0,b])$, $p\in[1,+\infty)$, is a linear operator acting on a set of admissible functions $M$, which is linear, open and dense in $L^p([0,b])$ (i.e. $\cL$ belongs to $Lin(M,L^p([0,b]))$, the space of continuous linear operators with the uniform norm). Suppose that $L$ is continuously differentiable with respect to $t$ and $y$ and twice continuously differentiable with respect to $\cL y$. Moreover, assume that the function $t\mapsto L(t,y(t), \cL y(t))$, $t\in [0,b]$, is continuous for all $y\in M$. Then the Euler-Lagrange equation reads $$ \frac{\pd L}{\pd y} + \cL^\ast \frac{\pd L}{\pd (\cL y)}=0, $$ where $\cL^\ast$ denotes the adjoint operator of $\cL$. In the case when $\cL$ is the left Riemann-Liouville operator ${}_0D_t^\alpha$, with the adjoint ${}_tD_{b}^\alpha$, the latter equation coincides with (\ref{eq:ELeqs-y}). If instead of $\cL$ one considers a family $\{\cL_\alpha, \alpha\in A\}$, where $A=[0,1]$ (or some other interval), and the mapping $A\to Lin(M,L^p([0,b]))$, $\alpha\mapsto \cL_\alpha$, is differentiable, then a more general problem of finding stationary points with respect to $y$ and $\alpha$ can be formulated. In that case, one can derive a second stationarity condition similar to (\ref{eq:ELeqs-alpha}): $$ \int_0^b \left( \frac{\d L}{\d\cL}\frac{\d \cL}{\d\al} + \frac{\d L}{\d\al}\right)dt=0. $$ \erem \section{Equivalent problems} \label{SecEqProbl} In this section we shall give conditions which ensure that problems (\ref{eq:min}) and (\ref{eq:minmin}) coincide. \bprop \label{EqvivPobl} Let the Lagrangian $L$ satisfy (\ref{eq:conditions on L}). Assume that for every $\alpha\in [0,1]$ there is a unique $y^\ast(t,\alpha)\in\cU$, a solution to (\ref{eq:ELeqs-y}), and that the mapping $\alpha\mapsto y^\ast(t,\alpha)$ is differentiable as a mapping from $[0,1]$ to $\cU$. Then the problem $\min_{(y,\alpha)\in\cU\times A}I[y,\alpha]$ is equivalent to the problem $\min_{\alpha\in A}(\min_{y\in\cU}I[y,\alpha])$. \eprop \pr As we have shown in Proposition \ref{prop:ELeqs}, any solution to the problem $\min_{(y,\alpha)\in\cU\times A}I[y,\alpha]$ satisfies system (\ref{eq:ELeqs-y})-(\ref{eq:ELeqs-alpha}). It can be solved as follows. We first solve (\ref{eq:ELeqs-y}) with the corresponding boundary conditions to obtain $y^{\ast}=y^{\ast}(t,\alpha)$. According to the assumption, the solution $y^\ast$ is unique.
Then we insert $y^\ast$ in (\ref{eq:ELeqs-alpha}) to obtain $\alpha^{\ast}$. In this case, the functional $I[y,\alpha]$ becomes a function of $\alpha$ alone, $\alpha\mapsto I[y^\ast(t,\alpha),\alpha]=I[\alpha]$, and therefore (\ref{eq:ELeqs-alpha}) expresses the vanishing of the total derivative of $I[\alpha]$, since \begin{eqnarray} \frac{dI[\alpha]}{d\alpha} &=& \left.\frac{dI[\alpha+\varepsilon]}{d\varepsilon}\right\vert_{\varepsilon=0} \nonumber\\ &=& \int_0^b\left[ \frac{\pd L}{\pd y}\frac{\pd y}{\pd \alpha} + \frac{\pd L}{\pd {}_{0}D_{t}^{\alpha}y} \left( {}_{0}D_{t}^{\alpha}\left(\frac{\pd y}{\pd\alpha}\right) + \left(\frac{\pd}{\pd\alpha}{}_{0}D_{t}^\alpha\right)y \right) +\frac{\pd L}{\pd\alpha} \right]\, dt \nonumber\\ &=& \int_0^b\left[ \frac{\pd y}{\pd \alpha} \left( \frac{\pd L}{\pd y} + {}_{t}D_{b}^{\alpha} \frac{\partial L}{\partial{}_{0}D_{t}^{\alpha}y}\right) + \frac{\pd L}{\pd {}_{0}D_{t}^{\alpha}y} \left(\frac{\pd}{\pd\alpha}{}_{0}D_{t}^\alpha\right)y +\frac{\pd L}{\pd\alpha} \right]\, dt \nonumber\\ &=& \int_0^b \left( \frac{\pd L}{\pd {}_{0}D_{t}^{\alpha}y} \left(\frac{\pd}{\pd\alpha}{}_{0}D_{t}^\alpha\right)y +\frac{\pd L}{\pd\alpha} \right)\, dt, \label{eq:oio} \end{eqnarray} where we used the fractional integration by parts formula (\ref{eq:frac int by parts}) in the third equality, and equation (\ref{eq:ELeqs-y}) in the last one. This proves the claim. \ep The following simple assertion is of particular interest: \bprop Let $L$ satisfy (\ref{eq:conditions on L}). Assume that for every $\alpha\in[0,1]$ there exists a unique $y_{\alpha}\in\cU$, a solution to the fractional variational problem (\ref{eq:fvp-alpha-min}), and that $I[y_{\alpha},\alpha]$ is the corresponding minimal value of the functional $I$. Assume additionally that $$ \frac{dI}{d\alpha}(y,\alpha)|_{y=y_\alpha}>0,\quad \forall y_\alpha\in\cU. $$ Then the minimal, resp.\ maximal value of the functional $I[y,\alpha]$ is attained at $\alpha=0$, resp.\ at $\alpha=1$. \eprop \pr Under the above assumptions we have that $$ I[y_{0},0]\leq I[y_{\alpha},0]\leq I[y_{\alpha},\alpha]\leq I[y_1,\alpha] \leq I[y_1,1], \quad \forall\alpha\in[0,1], $$ which proves the claim. \ep \brem The same argument can be applied to the case when $dI/d\alpha<0$, for any fixed $y_\alpha\in\cU$, i.e. when $I$ is a decreasing function of $\alpha$, for any fixed $y_\alpha\in\cU$. In that case the minimal, resp.\ maximal value of $I$ is at $\alpha=1$, resp.\ $\alpha=0$. \erem \section{Examples}\label{SecEx} \subsection{Examples with Lagrangians linear in $y$} \bex Consider the action integral for the inertial motion (no force acting) of a material point of the form \begin{equation} \label{eq:16aa} I[y,\alpha]: = \int_{0}^{1} ({}_{0}D_{t}^{\alpha}y)^{2}\,dt, \quad (y,\alpha)\in\cU\times A, \end{equation} where $\cU:=\{y\in \cU_l\,|\, y(0)=0,\, y(1) =1\}$ and $A=[0,1]$. Obviously, the minimal value of $I[y,\alpha]$ is zero, and it is attained whenever ${}_0D_t^\alpha y=0$. Solutions to equation ${}_0D_t^\alpha y=0$ are of the form $y(t)=C\cdot t^{1-\alpha}$, $t\in[0,1]$, $C\in\R$ (cf.\ \cite{SamkoKM}). All solutions satisfy the Euler-Lagrange equation $$ {}_{t}D_{1}^{\alpha}({}_{0}D_{t}^{\alpha}y)=0. $$ Stationarity condition (\ref{eq:ELeqs-alpha}) reads: $$ \int_{0}^{1} {}_{0}D_{t}^{\alpha}y \left(\psi(1-\alpha) {}_{0}D_{t}^{\alpha}y -\frac{1}{\Gamma(1-\alpha)} \frac{d}{dt} \int_{0}^{t} \frac{\ln (t-\tau)y(\tau)}{(t-\tau)^{\alpha}}\,d\tau\right)\, dt=0 $$ and is automatically satisfied.
Note that $C\cdot t^{1-\alpha}\in \cU_l$, for all $C\in\R$, but only $t^{1-\alpha}\in \cU$. Hence, we conclude that $(y^\ast,\alpha^\ast)=(t^{1-\alpha},\alpha)$, $\alpha\in[0,1]$, are solutions to the variational problem $I[y,\alpha]\to\min$, for $I$ defined by (\ref{eq:16aa}). \eex \brem If $L(t,y(t),{}_{0}D_{t}^{\alpha}y,\alpha)= ({}_{0}D_{t}^{\alpha}y)^{2}+(\alpha-\alpha_0)^2$, for a fixed $\alpha_0\in(0,1)$, then the problem $\int_0^1 L(t,y(t),{}_{0}D_{t}^{\alpha}y,\alpha)\, dt \to\min$ has a unique minimizer $(y^\ast,\alpha^\ast)= (t^{1-\alpha_0},\alpha_0)$. \erem \bex Let the Lagrangian $L$ be of the form $$ L(t,y(t),{}_0D_t^\alpha y,\alpha):= ({}_0D_t^\alpha y)^2- c\cdot y, \quad c\in\R, $$ and let $\cU=\{y\in \cU_l\,|\, y(0)=0\}$, $A=[0,1]$, for the variational problem: $$ \min_{(y,\alpha)\in\cU\times A}I[y,\alpha]= \min_{(y,\alpha)\in\cU\times A} \int_0^1 (({}_0D_t^\alpha y)^2- c\cdot y(t))\, dt. $$ Equations (\ref{eq:ELeqs-y}) and (\ref{eq:ELeqs-alpha}) become: \bea {}_tD_1^\alpha {}_0D_t^\al y & = & c, \label{eq:ELyEx2}\\ \int_0^1 {}_0D_t^\alpha y\cdot \frac{\pd {}_0D_t^\alpha y}{\pd\alpha}\,dt & = & 0.\nonumber \eea Equation (\ref{eq:ELyEx2}) can be solved as follows. First, one introduces a substitution $z(t)={}_0D_t^\alpha y$, $t\in[0,1]$, and solves ${}_tD_1^\alpha z = c$: $$ z(t)=c \cdot \frac{(1-t)^\al}{\Gamma(1+\al)}, \quad t\in[0,1],\alpha\in A. $$ Therefore, \beq \label{eq:fde} {}_0D_t^\al y (t)= c \cdot \frac{(1-t)^\alpha}{\Gamma(1+\al)}, \quad t\in[0,1],\alpha\in A. \eeq Recall, $$ {}_0I_t^\alpha y := \frac{1}{\Gamma(\alpha)} \int_0^t (t-\tau)^{\alpha-1} y(\tau)\, d\tau, \quad t\in[0,1],\alpha\in A $$ and apply it to both sides of (\ref{eq:fde}). Using \cite[Th.\ 2.4]{SamkoKM}, i.e. ${}_0I_t^\alpha ({}_0D_t^\alpha y)= y(t)$, one obtains \beas y(t,\al) &=& \frac{c}{\Gamma(\al)\Gamma(1+\al)} \int_0^t (t-\tau)^{\al-1} (1-\tau)^{\al}\,d\tau\\ &=& \frac{c}{\Gamma (1+\al)}\sum_{p=0}^{\infty} \frac{\Gamma(p-\al)\Gamma(1+p)}{\Gamma(-\al)p\,!\Gamma (1+p+\al)}t^{p+\al}, \quad t\in[0,1],\alpha\in A. \eeas This solution is unique and belongs to $\cU$. Since $\al\mapsto y(t,\al)$ is differentiable, Proposition \ref{EqvivPobl} applies. We substitute the obtained $y(t,\al)$ into $I[y,\al]$, which yields \begin{eqnarray*} I[\al] &=& \int_0^1 \left(\left(c \cdot \frac{(1-t)^\al}{\Gamma(1+\al)}\right)^2- \frac{c^2}{\Gamma(\al)\Gamma(1+\al)}\int_0^t(t-\tau)^{\al-1} (1-\tau)^{\al}d\tau\right)\,dt \\ &=& \int_0^1 \left(\left(c \cdot \frac{(1-t)^\al}{\Gamma(1+\al)}\right)^2 - \frac{c^2}{\Gamma (1+\al)}\sum_{p=0}^{\infty} \frac{\Gamma(p-\al)\Gamma(1+p)}{\Gamma(-\al)p\,!\Gamma (1+p+\al)}t^{p+\al}\right)\,dt. \end{eqnarray*} Simple numerical calculations show that $I[\alpha]$ is an increasing function and attains its extremal values at the boundaries. \brem Equation (\ref{eq:ELyEx2}) represents a fractional generalization of the equation of motion for a material point (with unit mass) under the action of a constant force equal to $c$. Our result shows that an optimal value of Hamilton's action is attained for $\al=1$, that is, for integer-order dynamics. We note that different generalizations of the classical equation of motion can be found in \cite{Kwok}, where the problem ${}_0D_t^\al y = c$, $1<\alpha\leq 2$, was analyzed.
\erem \eex \subsection{Examples with Lagrangians linear in ${}_0D_t^\alpha y$} \bex Let \beq \label{eq:ex3-L} L(t,y(t),{}_0D_t^\alpha y,\alpha):= \Gamma(1-\alpha){}_0D_t^\alpha y - \frac{1}{2} c y^2, \quad c>0, c\not=1, \eeq and consider the problem of finding stationary points for the functional (\ref{eq:fvp-alpha}), where $\cU:=\{y\in\cU_l\,|\,y(0)=\frac{1}{c}\}$ and $\alpha_0<\frac12$. Note that $L$ satisfies the so-called primary constraint in Dirac's classification of systems with constraints (cf.\ \cite{HenneauxTeitelboim}). In the setting of fractional derivatives such Lagrangians have recently been treated in \cite{Bal04} and \cite{MuslihBaleanu}. Equations (\ref{eq:ELeqs-y}) and (\ref{eq:ELeqs-alpha}) become \beq \label{eq:ex3-ELy} \Gamma(1-\alpha){}_tD_1^\alpha 1 - c y=0 \eeq and \beq \label{eq:ex3-ELalph} \int_0^1 \left(\Gamma(1-\alpha)\frac{\pd{}_0D_t^\alpha y}{\pd\alpha} + \frac{\pd \Gamma(1-\alpha)}{\pd\alpha}\right)\, dt=0. \eeq Equation (\ref{eq:ex3-ELy}) has a unique solution $y^\ast=\frac{1}{c(1-t)^\alpha}$, $t\in[0,1]$, $\alpha\in[0,\alpha_0]$. This implies \beas I[y^\ast,\alpha] &=& \int_0^1 \left[\frac{d}{dt} \int_0^t \frac{d\tau}{c(1-\tau)^\alpha(t-\tau)^\alpha} -\frac{1}{2c(1-t)^{2\alpha}}\right] \, dt \\ &=& \left. \int_0^t \frac{d\tau}{c(1-\tau)^\alpha(t-\tau)^\alpha} \right\vert_{0}^{1} - \int_0^1 \frac{1}{2c(1-t)^{2\alpha}} \, dt \\ &=& \int_0^1 \frac{d\tau}{c(1-\tau)^{2\alpha}} - \int_0^1 \frac{1}{2c(1-t)^{2\alpha}} \, dt \\ &=& \frac{1}{2c} \int_0^1 \frac{1}{(1-t)^{2\alpha}} \, dt. \eeas Since $\alpha_0<1/2$ we have that $I[y^\ast,\alpha]$ exists and is an increasing function with respect to $\alpha$. Hence, $I[y^\ast,\alpha]$ attains its minimal value at $\alpha=0$, and it equals $1/(2c)$. We also have that the maximal value of $I[y^\ast,\alpha]$ is attained at $\alpha_0$. \eex \bex Let $\cU:=\{y\in \cU_l\,|\, y(0)=0\}$, $c\neq 0$, and let $L$ be of the form \begin{equation}\label{eq:exL} L(t,y(t),{}_0D_t^\alpha y,\alpha): = c\cdot{}_0D_t^\alpha y + f(y(t)),\quad t\in[0,1], \end{equation} where the properties of $f$ will be specified below. In this example we are dealing with integrable functions which can take values $+\infty$ or $-\infty$ at some points. We are going to analyze stationary points of $$ I[y,\alpha]= \int_0^1 (c\cdot{}_0D_t^\alpha y(t) + f(y(t)))\, dt. $$ Equations (\ref{eq:ELeqs-y}) and (\ref{eq:ELeqs-alpha}) become \bea {}_tD_1^\alpha c + \frac{\pd f}{\pd y} & = & 0 \label{eq:ELyEx}\\ c \cdot \int_0^1 \frac{\pd {}_0D_t^\alpha y}{\pd\alpha}\,dt & = & 0. \label{eq:ELalEx} \eea Since ${}_tD_1^\alpha c = \frac{c}{\Gamma (1-\alpha) (1-t)^{\alpha}}$, $t\in[0,1]$, we see that in order to solve (\ref{eq:ELyEx})-(\ref{eq:ELalEx}) we have to assume that $f\in\cC^1(\R)$, and that $f'$ is invertible, so that $t\mapsto (f')^{-1} (\frac{c}{\Gamma (1-\alpha) (1-t)^{\alpha}})\in\cU_l$. Then equation (\ref{eq:ELyEx}) is solvable with respect to $y$: \begin{equation}\label{eq:Ex_y_od_c} y_c(t,\alpha) = \left(\frac{\d f}{\d y}\right)^{-1} \left( \frac{c}{\Gamma (1-\alpha) (1-t)^{\alpha}} \right), \quad t\in [0,1]. \end{equation} Since $c\neq 0$, (\ref{eq:ELalEx}) implies $\int_0^1 \frac{\pd {}_0D_t^\alpha y}{\pd\alpha}\,dt=0$.
Thus \bea 0 & = & \int_0^1 \frac{\pd {}_0D_t^\alpha y}{\pd\alpha}\,dt = \int_0^1 G(y, \alpha)(t)\,dt = \int_0^1 \frac{d}{dt}(f_1\ast_t y)(t,\alpha)\,dt \nonumber \\ & = & f_1\ast_t y(t,\alpha)|_{t=1} - f_1\ast_t y(t,\alpha)|_{t=0} = f_1\ast_t y(t,\alpha)|_{t=1},\label{integral} \eea where we have used that $f_1 \in L^1([0,1])$ and that $y\in \cU$. Substitution of (\ref{eq:Ex_y_od_c}) into (\ref{integral}) gives $(f_1\ast y_c(t,\alpha))(t)|_{t=1}=0$ or \begin{equation} \label{eq:exc} \int_0^1 \frac{\psi(1-\alpha)-\ln (1-\tau)}{\Gamma (1-\alpha)(1-\tau)^{\alpha}} \left(\frac{\d f}{\d y}\right)^{-1}\left(\frac{c}{\Gamma (1-\alpha)(1-\tau)^{\alpha}}\right) \,d\tau=0. \end{equation} Solving this equation is obviously difficult. Hence, we consider some special cases. a)\ $f(y(t)):= d \cdot \frac{y(t)^2}{2}$, $t\in[0,1]$, $d\in\R$. Then the Lagrangian is $$ L(t,y(t),{}_0D_t^\alpha y,\alpha) = c\cdot{}_0D_t^\alpha y(t) + d \cdot \frac{y(t)^2}{2}, $$ and $$ y_c(t,\alpha)=-\frac{1}{d}\frac{c}{\Gamma(1-\al)(1-t)^\al}, \quad t\in[0,1]. $$ Also, (\ref{eq:exc}) becomes $$ \int_0^1 \frac{\psi(1-\alpha)-\ln (1-\tau)} {\Gamma (1-\alpha)^2(1-\tau)^{2\alpha}}\,d\tau =0. $$ By a simple numerical calculation one shows that this equation has no solution for $\al\in (0,1)$. Hence, in this case there does not exist any point $(y,\alpha)$ which is an extremal of the functional $I[y,\alpha]$. b)\ $f(y(t)): = \ln y(t)$, $t\in[0,1]$. Then (\ref{eq:ELyEx}) becomes $$ \frac{c}{\Gamma(1-\al)(1-t)^\al}=\frac{1}{y}, $$ and therefore $$ y=\frac{\Gamma(1-\al)}{c}(1-t)^\al\in AC([0,1]). $$ In this particular case we take the set of admissible functions to be $\cU:=\{y\in\cU_l\,|\,y(1)=0\}$. Using (\ref{integral}), equation (\ref{eq:ELalEx}) reads $$ \int_0^1 \left(\psi (1-\al)-\ln (1-\tau)\right) \,d\tau = 0, $$ which, after integration, yields $\psi (\al-1)=1$. A unique solution of this equation in $(0,1)$ is $\alpha=0.604...$. Therefore, a unique stationary point of $$ I[y,\alpha]=\int_0^1 (c\cdot{}_0D_t^\alpha y + \ln y(t))\, dt $$ is the point $(y,\alpha)= \left(\frac{\Gamma(0.396)}{c}(1-t)^\al,\ 0.604\right)$. \eex \brem So far, we have considered variational problems defined via functionals of type (\ref{eq:fvp-alpha}). In fact, we have allowed fractional derivatives of functions to appear in Lagrangians. The natural generalization of such problems consists of replacing the Lebesgue integral in (\ref{eq:fvp-alpha}) by the Riemann-Liouville fractional integral. More precisely, for $\beta>0$ set \beas I_\beta[y,\alpha] :&=& {}_0I_b^\beta L(t,y(t),{}_{0}D_{t}^{\alpha}y,\alpha) \\ &=& \frac{1}{\Gamma (\beta)} \int_{0}^{b} (b-t)^{\beta -1}L(t,y( t),{}_{0}D_{t}^{\alpha}y,\alpha)\, dt. \eeas Then the fractional variational problem consists of finding extremal values of the functional $I_\beta[y,\alpha]$. In the above construction we have used the left Riemann-Liouville fractional integral of order $\beta$ (which, in general, differs from the order of fractional differentiation $\alpha$), evaluated at $t=b$. The choice $\beta=1$ turns us back to problem (\ref{eq:fvp-alpha}). The study of such fractional variational problems is reduced to the case we have already considered in the following way. It suffices to redefine the Lagrangian as $$ L_1(t,y(t),{}_{0}D_{t}^{\alpha}y,\alpha,\beta):= \frac{1}{\Gamma (\beta)} (b-t)^{\beta -1} L(t,y( t),{}_{0}D_{t}^{\alpha}y,\alpha).
$$ Then we have to consider the functional \beq \label{eq:fracfrac-vp} I_\beta[y,\alpha]=\int_{0}^{b} L_1(t,y(t),{}_{0}D_{t}^{\alpha}y,\alpha,\beta)\, dt. \eeq In the case $\beta>1$, $L_1$ is of the same regularity as $L$, so the straightforward application of the results derived in the previous sections to the Lagrangian $L_1$ leads to the optimality conditions for the variational problem defined via the functional (\ref{eq:fracfrac-vp}). However, when $0<\beta<1$, continuity, as well as differentiability, of $L_1$ with respect to $t$ may be violated (depending on the explicit form of $L$), and hence it may not be possible to use the theory developed so far. \erem \section{Conclusion} We formulated Hamilton's principle so that the order of the derivative in the Lagrangian is also subject to variation. The stationarity conditions are derived in (\ref{eq:ELeqs-y}) and (\ref{eq:ELeqs-alpha}). We introduced additional assumptions which yield equivalent problems that are simpler to solve. Several examples are given in order to illustrate the theory presented in the paper. We concluded our work with a consideration of Hamilton's principle defined in terms of Riemann-Liouville fractional integrals. \subsection*{Acknowledgement} This work is supported by Projects $144016$ and $144019$ of the Serbian Ministry of Science and START-project Y-237 of the Austrian Science Fund.
\section{Introduction} \label{sec:Introduction} The influence of Coulomb interactions on electron properties of graphene is a subject of much interest~\cite{CastroNeto2009tep, Kotov2012eei}. Prominent examples of interaction effects are the enhancement of the quasiparticle velocity $v$ above the typically quoted value $v = 1.0 \times 10^8\,\text{cm}/\text{s}$ at low momenta~\cite{Elias2011dcr, Li2008dcd, Siegel2011mbi} (as predicted by the early theoretical work\cite{Gonzalez1994nfl, Gonzalez1999mfl}), and the emergence of additional dispersion branches\cite{Bostwick2010oop, Walter2011esa} believed to be plasmarons --- bound states of electrons and plasmons. The strength of Coulomb interactions in graphene is determined by the dimensionless parameter ($\hbar \equiv 1$ in this paper) \begin{equation} \alpha = \frac{e^2}{\kappa v} = \frac{2.2}{\kappa}\,, \label{eqn:alpha} \end{equation} which can be controlled by varying the effective dielectric constant $\kappa$ of the graphene environment. Interactions modify not only the quasiparticle properties but also the response functions of graphene, e.g., the irreducible (or proper) polarization $P(q,\omega)$. This quantity is defined to be the coefficient of proportionality between the change in electron density and the self-consistently determined screened scalar potential. The observables directly related to $P$ are the dielectric function $\epsilon$ and the longitudinal conductivity $\sigma$: \begin{align} \epsilon(q, \omega) &= 1 - \frac{2\pi e^2}{q} \, P(q, \omega)\,, \label{eqn:epsilon}\\ \sigma(q, \omega) &= e^2\, \frac{i \omega}{q^2}\, P(q, \omega)\,. \label{eqn:sigma} \end{align} Our work is motivated in part by two recent experiments\cite{Reed2010tef, Wang2012mdq} that suggested that the standard random phase approximation\cite{Mahan1990mpp} (RPA) significantly underestimates the static dielectric function $\epsilon(q, 0)$ of graphene. The RPA amounts to replacing the exact polarization function $P$ by the noninteracting value $P_0$. For neutral graphene at zero temperature, which is the case studied here, the function $P_0$ is given by\cite{Gonzalez1994nfl} \begin{equation} P_0(q,\omega) = -\frac{q^2}{4 \sqrt{v^2 q^2 - (\omega + i0)^2}} \label{eqn:P_0} \end{equation} at small enough $q$ and $\omega$ where the Dirac approximation is valid. Equations~\eqref{eqn:epsilon} and \eqref{eqn:P_0} entail~\cite{Ando2002dca} \begin{equation} \epsilon_{\text{RPA}}(q, 0) = 1 + \frac{\pi}{2}\, \alpha\,. \label{eqn:epsilon_RPA} \end{equation} For $\kappa = 1$ this formula gives $\epsilon_{\text{RPA}} \approx 4.6$. On the other hand, a much larger value $\epsilon = 15.4^{+39.6}_{-6.4}$ was inferred from inelastic x-ray scattering on bulk graphite\cite{Reed2010tef}. Similarly, for graphene on a boron nitride substrate ($\kappa \approx 2.5$, $\alpha \approx 0.9$) $\epsilon_{\text{RPA}} \approx 2.4$, whereas the study of the charge profile near Coulomb impurities by means of scanning tunneling microscopy suggests\cite{Wang2012mdq} $\epsilon = 3.0 \pm 0.1$. \begin{figure}[t] \begin{center} \includegraphics[width=2.6in]{pol1cor_Fig1} \end{center} \caption{First-order diagrams for the polarization function include the self-energy part (a) and the vertex part (b).
Solid lines represent the electron Green's function, dashed lines the Coulomb interaction, and dotted lines the external momentum and frequency.} \label{fig:Diagrams} \end{figure} The first-order interaction correction $P_1 = \mathcal{O}(\alpha)$ beyond the RPA is represented by the diagrams depicted in Fig.~\ref{fig:Diagrams}. We show that by including this correction, \begin{equation} \label{eqn:pol} P(q,\omega) \to P_0(q,\omega) + P_1(q,\omega)\,, \end{equation} one can significantly reduce the discrepancy between theory and experiment. Our result for $\epsilon(q, 0)$ is\footnote{Similar results have been obtained by F.~Guinea and also by A.~Principi and M.~Polini (private communications).} \begin{equation} \epsilon(q, 0) = 1 + \frac{\pi}{2}\, \alpha_q + 0.778 \alpha_q^2\,, \quad \alpha_q \ll 1\,. \label{eqn:epsilon_dc} \end{equation} (The difference between $\alpha_q$ and $\alpha$ will be clarified in Sec.~\ref{sec:results}.) This formula yields $\epsilon \approx 3.0$ at $\alpha_q = 0.9$, in excellent agreement with Ref.~\onlinecite{Wang2012mdq}. In turn, if we use Eq.~\eqref{eqn:epsilon_dc} for $\alpha_q = 2.2$, we get $\epsilon \approx 8.2$, which is close to the lower estimate of the static dielectric constant in Ref.~\onlinecite{Reed2010tef}, although the use of a perturbative formula is questionable at such large interaction strengths. Another topic of interest is the dynamic response of graphene. For example, in the ``optical'' limit $\omega\gg v q$, Eqs.~\eqref{eqn:sigma} and \eqref{eqn:P_0} imply that noninteracting Dirac fermions have a frequency-independent conductivity\cite{Ludwig1994iqh} \begin{equation} \sigma_0 (0,\omega)= \frac{e^2}{4}\,. \label{eqn:sigma_0} \end{equation} Accordingly, deviations of the infrared optical conductivity of graphene from the universal value $\sigma_0$ may signal interaction effects. Experimentally, no such deviations have been observed\cite{Mak2008mot, Nair2008fsc, Li2008dcd} while the theoretical calculation of the interaction corrections has been a subject of debate. Our analysis favors the result \begin{equation} \frac{\sigma(0, \omega)}{\sigma_0} - 1 \simeq \frac{19 - 6\pi}{12}\, \alpha\,, \quad \alpha \ll 1\,, \label{eqn:sigma_I} \end{equation} derived in Refs.~\onlinecite{Mishchenko2008mci, Sheehy2009oto}. We explain why a different coefficient was obtained in Ref.~\onlinecite{Juricic2010coi}. The numerical smallness of $(19 - 6\pi) / 12 \approx 0.01$ may be the reason why the interaction corrections have not been observed so far. To get a more complete understanding of the interaction corrections we also compute $P_1(q, \omega)$ at arbitrary $\omega / v q$ ratios. This enables us to see the evolution of $P_1(q, \omega)$ in the full momentum-frequency parameter space, from the small value in the optical limit to the divergence at the spectral boundary $\omega = v q$ to a sizable effect in the static limit. Besides theoretical interest, motivation for this calculation comes from a rapidly burgeoning effort in probing regimes of intermediate $\omega/v q$ by state-of-the-art experimental techniques, such as the near-field optics\cite{Fei2011ino, Fei2012gto, Chen2012oni} and electron energy loss spectroscopies\cite{Liu2010pps, Koch2010spp, Tegenkamp2011peh, Shin2011ooi}. Such regimes are also pertinent to the electromagnetic response of graphene nanoribbons\cite{Ju2011gpf}. The remainder of the paper is organized as follows. In Sec.~\ref{sec:results} we summarize our main results.
Derivation of these results is outlined in Sec.~\ref{sec:derivation}. In Sec.~\ref{sec:discussion} we compare our findings with previous work. Appendix~\ref{sec:optical} is devoted to a critical analysis of the controversy regarding the optical limit. \section{Main results} \label{sec:results} In order to present our results we first need to explain the notation $\alpha_q$ in Eq.~\eqref{eqn:epsilon_dc} above. Crudely, $v$ and $\alpha$ represent the bare velocity and the bare coupling constant of the theory, whereas $v_q$ and $\alpha_q$ denote their renormalized values. More precisely, $v_q$ is defined to be the phase velocity of quasiparticles with momentum $q$: \begin{equation} \label{eqn:vel_I} v_q \equiv v + \frac{\Sigma_q}{q}\,, \end{equation} where $\Sigma_q$ is the on-shell self-energy. At the level of first-order perturbation theory one finds\cite{Gonzalez1994nfl, Gonzalez1999mfl} \begin{equation} \Sigma_q = \frac{e^2 q}{4\kappa}\, \ln \frac{\Lambda}{q}\,, \label{eqn:Sigma_q} \end{equation} where $\Lambda$ is the high-momentum cutoff. Therefore, $v_q$ and $\alpha_q$ are given by \begin{align} v_q &= v + \frac{e^2}{4\kappa}\, \ln \frac{\Lambda}{q}\,, \label{eqn:vel_II}\\ \alpha_q &= \frac{e^2}{\kappa v_q} = \left(\frac{1}{\alpha} + \frac{1}{4}\, \ln \frac{\Lambda}{q}\right)^{-1}\,. \label{eqn:alpha_q} \end{align} According to the renormalization group approach, such expressions are not just first-order approximations. They are asymptotically exact at low enough $q$ where $\alpha_q \ll 1$. However, $\alpha$ should then be understood as the running coupling constant evaluated at the cutoff scale $\Lambda$. The choice of $\alpha$ is largely arbitrary because a change in $\alpha$ in Eq.~\eqref{eqn:alpha_q} can be absorbed into $\Lambda$. Nevertheless, $\alpha_q$, which is determined by an observable quantity, the phase velocity $v_q$, is unambiguous. Relations between various observable quantities are expressible in terms of the renormalized parameters only. For example, phase velocities at two different momenta $q$ and $k$ are linked by the relation \begin{equation} \frac{v_q}{v_k} = 1 + \frac{\alpha_k}{4}\, \ln\frac{k}{q}\,, \end{equation} which is free from the arbitrary parameters $\alpha$ and $\Lambda$. Similarly, the polarization function can and (if one desires higher accuracy) should be expressed in terms of $\alpha_q$ and $v_q$. This point will be stressed again in Sec.~\ref{sec:discussion}. Let us now present our findings. The first-order correction to the polarization function is written as \begin{equation} P_1(q,\omega) = \frac{\alpha_q q}{v_q}\, p_1(x)\,, \label{eqn:p_1} \end{equation} where $x$ is the dimensionless ratio \begin{equation} x = \frac{\omega}{v_q q} \label{eqn:x} \end{equation} and $p_1(x)$ is a complex dimensionless function, whose real and imaginary parts are displayed in Figs.~\ref{fig:p1}(a) and \ref{fig:p1}(b). In general, $p_1(x)$ has to be evaluated numerically. However, analytical results are available in several limits, as discussed later in this section. \begin{figure*} \begin{center} \includegraphics[width=7.0in]{pol1cor_Fig2.pdf} \end{center} \caption{(Color online) Panels (a) and (b) depict the function $p_1(x)$, where $x = \omega / v_q q$. The red dashed line, the blue dotted line, and the black solid line are, respectively, the self-energy term, the vertex term, and their sum, which is $p_1(x)$.
Although not shown in the figure, very close to $x = 1$ the divergence of the self-energy term overwhelms that of the vertex term, which causes the sign change of $\mathrm{Re}\, p_1(x)$ at $x = 0.998$ and $\mathrm{Im}\, p_1(x)$ at $x = 1.005$. Panels (c) and (d) illustrate $x p_1(x)$, the quantity which (if multiplied by $4i\alpha_q$) gives the interaction correction to the universal conductivity, $\sigma / \sigma_0 - 1$. The meaning of the three curves is the same as in panels (a) and (b). Panels (e) and (f) depict the real and imaginary parts of the dielectric function for $\alpha_q = 0.3$. The green solid line is the RPA result and the black curves (solid and dashed) include the interaction correction. The dashed part of the curve is within $\alpha_q$ from the absorption threshold. First-order perturbation theory is expected to be unreliable in that region. Unphysical behavior near the threshold is exemplified by the negative sign of $\text{Im}\,\epsilon$ in the inset of panel (f). In panel (f) the second-order interaction correction (black curve) becomes larger than the RPA value (green curve) at $\omega/v_q q \approx 9.1$ (not shown). \label{fig:p1} } \end{figure*} The full polarization function to order $\mathcal{O}(\alpha_q)$ is \begin{equation} P(q,\omega)=\frac{q}{v_q} \left[-\frac{1}{4}\frac{1}{\sqrt{1 - (x+i0)^2}} + \alpha_q p_1(x)\right]\,. \label{eqn:P_from_p_1} \end{equation} A finite imaginary part of $P$ appears when $x$ exceeds unity; we refer to $x = 1$ as the absorption threshold. Once $p_1(x)$ is known, one can use Eqs.~\eqref{eqn:epsilon} and \eqref{eqn:P_from_p_1} to get the dielectric function $\epsilon(q, \omega)$. For example, in the static limit we obtain Eq.~\eqref{eqn:epsilon_dc}. The real and imaginary parts of $\epsilon(q, \omega)$ as a function of $x$ are illustrated by Figs.~\ref{fig:p1}(e) and~\ref{fig:p1}(f) for the case of a suitably small $\alpha_q = 0.3$. The RPA predictions are also plotted in Figs.~\ref{fig:p1}(e) and~\ref{fig:p1}(f) for comparison. As one can see, the RPA underestimates $\mathrm{Re}\, \epsilon(q, \omega)$ at $x < 1$ and overestimates it at $x > 1$. Near the absorption threshold, at $|1 - x| < \alpha_q$, the first-order results are deemed unreliable and are shown by the dashed line. In that region higher-order corrections become as important as the first-order ones. An example of the inapplicability of first-order perturbation theory near $x = 1$ is the negative sign of $\mathrm{Im}\, \epsilon(q, \omega)$ at sufficiently large $\alpha_q$; see the inset of Fig.~\ref{fig:p1}(f). Let us next discuss the analytical results for the function $P_1(q, \omega)$. This function can be written as \begin{equation} P_1(q,\omega) = 2 P_a(q,\omega) + P_b(q,\omega)\,, \end{equation} where $P_a$ and $P_b$ are the self-energy and vertex terms represented by the corresponding diagrams in Fig.~\ref{fig:Diagrams}. (The factor of $2$ comes from the symmetry of the self-energy diagram.) Below we describe these terms separately. For the self-energy contribution we have the result \begin{equation} 2 P_a(q,\omega)=\frac{\alpha q}{8 \pi v}\, \left[\frac{\pi}{2}\frac{1}{(1-y^2)^{3/2}}\ln \frac{\Lambda}{q}+I_a(y)\right]\,, \label{eqn:Pa2} \end{equation} where $y = \omega / (v q) + i 0$. The expression for the function $I_a(y)$, which is rather cumbersome, is given by Eq.~\eqref{eqn:I_a} (Sec.~\ref{sec:derivation}). Equation~\eqref{eqn:Pa2} is written in terms of the ``bare'' parameters. As explained above, we should rewrite it in terms of the renormalized ones.
To do so we combine $2 P_a(q, \omega)$ with the zeroth-order polarization function $P_0(q, \omega)$ [Eq.~\eqref{eqn:P_0}] and get, to order $\mathcal{O}(\alpha_q)$, \begin{equation} \begin{split} P_0(q,\omega) + 2 P_a(q,\omega) &= -\frac{1}{4}\, \frac{q}{v_q}\, \frac{1}{\sqrt{1 - (x + i0)^2}}\\ &+ \frac{1}{8 \pi}\, \frac{\alpha_q q}{v_q}\, I_a(x + i0)\,. \label{eqn:P02Pa} \end{split} \end{equation} The desired renormalized $P_0(q,\omega)$ and $2 P_a(q,\omega)$ are, respectively, the first and second lines of Eq.~\eqref{eqn:P02Pa}. The limiting forms of the renormalized $2 P_a$ are as follows. In the static limit we find \begin{gather} 2 P_a(q,0) = -\frac{C_a}{4}\, \frac{\alpha_q q}{v_q}\,, \label{eqn:Pa_dc}\\ C_a = \frac{1}{8} - \frac{\ln 2}{2} + \frac{1}{\pi}\left(G - \frac{1}{6}\right) \approx 0.017\,, \label{eqn:C_a} \end{gather} where $G = 0.916\ldots$ is the Catalan constant. In the optical limit, we obtain \begin{equation} 2 P_a(q,\omega) \simeq \frac{1}{16}\, \frac{\alpha_q q^2}{i \omega}\,, \quad \omega \gg v_q q\,. \label{eqn:P02Pa_optical} \end{equation} We expect that the accuracy of this expression is improved if in place of $\alpha_q$ one uses $\alpha_k$, where $k$ is such that $\omega = v_k k$. Near the absorption threshold, $|x - 1| \ll 1$, we get \begin{equation} 2 P_a \simeq \frac{1}{16}\, \frac{\alpha_q q}{v_q}\, \left[ \frac{2\ln 2 - \frac{5}{6}}{(1 - x^2)^{3/2}} -\frac{1}{3}\, \frac{1}{(1 - x^2)^{1/2}} \right]\,. \label{eqn:Pa_theshold} \end{equation} Note that the coefficient in front of the dominant $(1 - x^2)^{-3/2}$ singularity depends on the renormalization procedure. Our result for the vertex term is \begin{equation} P_b(q, \omega) = -\frac{1}{4}\, \frac{\alpha_q q}{v_q}\, I_b(x + i 0)\,, \label{eqn:P_b_II} \end{equation} where the function $I_b(y)$ is given by Eqs.~\eqref{eqn:Ib}--\eqref{eqn:ImIb} of Sec.~\ref{sec:derivation} and is computed by numerical quadrature. The limiting forms of $P_b$ are as follows. In the static limit we find \begin{equation} P_b(q,0)=-\frac{q}{4 v_q} C_b \alpha_q\,, \end{equation} where $C_b\approx0.48$. In the optical limit, $\omega\gg v_q q$, we have \begin{equation} P_b(q,\omega) \simeq \frac{C'_b}{4}\, \frac{\alpha_q q^2}{i\omega}\,, \quad C'_b \approx -0.237\,, \label{eqn:P_b_optical} \end{equation} which is consistent with\cite{Mishchenko2008mci} $C'_b = (8 - 3 \pi) / 6$. Near the absorption threshold $|x - 1|\ll 1$, we find the analytical expression \begin{equation} P_b(q,\omega) \simeq \frac{1}{6 \pi}\, \frac{\alpha_q q}{v_q}\, \frac{1}{x - 1}\, \left[\ln \left(\frac{8}{1 - x}\right) - 3\right], \label{eqn:P_b_threshold} \end{equation} which agrees with the result of Ref.~\onlinecite{Gangadharaiah2008crf} and further provides the subleading divergent term (determined by the numerical factor inside the logarithm). \section{Derivation} \label{sec:derivation} In this section we discuss how our results for the self-energy and vertex corrections have been derived.
Within the Matsubara formalism~\cite{Mahan1990mpp}, the diagrams we compute are expressed by the integrals \begin{widetext} \begin{align} P_a(q, i\Omega) &= -\frac{N}{\beta} \sum_{\nu} \int\frac{d^2{\bf k}}{(2\pi)^2} {\rm tr} [{\hat G}({\bf k},i\nu) {\hat G}({\bf k}+{\bf q},i\nu + i\Omega) {\hat \Sigma}({\bf k}+{\bf q}) {\hat G}({\bf k}+{\bf q},i\nu + i\Omega)]\,, \label{eqn:Pa}\\ P_b(q, i\Omega) &= -\frac{N}{\beta^2} \sum_{\nu,\nu'} \int\frac{d^2 {\bf k}}{(2\pi)^2} \frac{d^2 {\bf k}'}{(2\pi)^2} V({\bf k}-{\bf k}') {\rm tr} [\hat{G}({\bf k},i\nu) \hat{G}({\bf k}+{\bf q},i\nu + i\Omega) \hat{G}({\bf k}' + {\bf q}, i\nu' + i\Omega) \hat{G}({\bf k}',i\nu')]\,, \label{eqn:Pb} \end{align} where $N = 4$ is the spin-valley degeneracy and the sums are performed over fermionic Matsubara frequencies $\nu = \pi (2 n + 1) / \beta$. The Green's function $\hat{G}$, the self-energy matrix ${\hat \Sigma}$, and the Coulomb interaction kernel $V$ are given by \begin{equation} \hat{G}({\bf k}, i\nu) = (i\nu - v\,\mathbf{k} \cdot \bm{\sigma})^{-1}\,, \quad {\hat \Sigma}({\bf k}) = \frac{e^2}{4 \kappa} \mathbf{k}\cdot \bm{\sigma} \ln\frac{\Lambda}{|\mathbf{k}|}\,, \quad V(\mathbf{k})= \frac{2 \pi e^2}{\kappa |\mathbf{k}|}\,, \label{eqn:V} \end{equation} where $\bm{\sigma} = \{\sigma_x, \sigma_y\}$ is the vector of the Pauli matrices. To obtain results at real frequencies $\omega$ the analytical continuation $i\Omega \to \omega + i0$ has to be done at the end of the calculation, as usual. In the zero temperature limit $\beta \to \infty$, the integration over $\nu$ and the analytic continuation lead to \begin{align} 2 P_a(q,\omega) &= 2 N\int\frac{d^2 \mathbf{k}}{(2\pi)^2}\, \Sigma_k(1-\hat{n}_{\bf k} \cdot\hat{n}_{{\bf k}+{\bf q}}) \frac{E_{\mathbf{k},\mathbf{q}}^2 + \omega^2} {[E_{\mathbf{k},\mathbf{q}}^2 - (\omega + i0)^2]^2}\,, \quad E_{\mathbf{k},\mathbf{q}} \equiv v(|\mathbf{k}|+|\mathbf{k} + \mathbf{q}|)\,, \quad \hat{n}_{\mathbf{k}} \equiv \frac{\mathbf{k}}{|\mathbf{k}|}\,, \label{eqn:Pa0}\\ P_b(q,\omega) &= -\frac{N}{2}\int\frac{d^2 \mathbf{k}}{(2\pi)^2} \frac{d^2 \mathbf{k}'}{(2\pi)^2} \frac{V(\mathbf{k} - \mathbf{k}')}{[E_{\mathbf{k},\mathbf{q}}^2 - (\omega + i0)^2] [E_{\mathbf{k}',\mathbf{q}}^2 - (\omega + i0)^2]} \Bigl\{ \omega^2 (\hat{n}_{\mathbf{k}} - \hat{n}_{\mathbf{k} + \mathbf{q}}) \cdot(\hat{n}_{\mathbf{k}'} - \hat{n}_{\mathbf{k}' + \mathbf{q}}) \notag\\ &+ E_{\mathbf{k},\mathbf{q}}E_{\mathbf{k}',\mathbf{q}} [(1 - \hat{n}_{\mathbf{k}} \cdot \hat{n}_{\mathbf{k} + \mathbf{q}}) (1 - \hat{n}_{\mathbf{k}'} \cdot \hat{n}_{\mathbf{k}' + \mathbf{q}}) + (\hat{n}_{\mathbf{k}} \times \hat{n}_{\mathbf{k} + \mathbf{q}}) \cdot (\hat{n}_{\mathbf{k}'} \times \hat{n}_{\mathbf{k}' + \mathbf{q}})] \Bigr\}\,. \label{eqn:Pb1} \end{align} These integrals can be simplified by transformation from the Cartesian $(k_x, k_y)$ to the elliptic coordinate system $(\mu, \nu)$, where $0 \leq \mu < \infty$ and $0 \leq \nu < 2\pi$. In this system the coordinate grid is made of ellipses and hyperbolas with the foci at ${\bf k}=0$ and ${\bf k}=-{\bf q}$. The transformation formulas are $k_x + i k_y = (q / 2) [\cosh(\mu + i\nu) - 1]$ and $d^2 k = (q / 2)^2 (\cosh^2\mu - \cos^2\nu)\, d\mu\, d\nu$.
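As a quick sanity check of this change of variables (our own illustration, not part of the original derivation), the Jacobian can be verified symbolically with a few lines of \texttt{sympy}:

\begin{verbatim}
import sympy as sp

mu, nu, q = sp.symbols('mu nu q', real=True, positive=True)
# k_x + i k_y = (q/2)[cosh(mu + i nu) - 1], i.e.,
# k_x = (q/2)(cosh(mu) cos(nu) - 1),  k_y = (q/2) sinh(mu) sin(nu)
kx = (q / 2) * (sp.cosh(mu) * sp.cos(nu) - 1)
ky = (q / 2) * sp.sinh(mu) * sp.sin(nu)
jac = sp.Matrix([[kx.diff(mu), kx.diff(nu)],
                 [ky.diff(mu), ky.diff(nu)]]).det()
# Area element should be (q/2)^2 (cosh^2 mu - cos^2 nu) d(mu) d(nu)
print(sp.simplify(jac - (q / 2)**2
                  * (sp.cosh(mu)**2 - sp.cos(nu)**2)))  # expected: 0
\end{verbatim}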
The integral for $2 P_a$ becomes \begin{equation}\label{eqn:Pa_elliptic} 2 P_a(q,\omega)=\frac{N\alpha q}{8\pi^2 v}\int d\mu d\nu \left[\ln \left(\frac{2\Lambda}{q}\right) - \ln (\cosh \mu - \cos \nu) \right] (\cosh \mu-\cos \nu) \frac{4\cosh^2 \mu + (2\omega / v q)^2}{[4\cosh^2 \mu - (2\omega / v q + i0)^2]^2}\,, \end{equation} which can be evaluated analytically in terms of the dilogarithm function $\mathrm{Li}_2(z)$. For $\omega > 0$ we get Eq.~\eqref{eqn:Pa2} with \begin{equation} \begin{split} I_a(x) &= \frac{1}{3}\frac{1+2x^2}{1-x^2} - \frac{x}{6}\frac{5-2 x^2}{1-x^2} \ln \left(\frac{1-x}{1+x}\right) -\frac{\pi}{12}\frac{3-12 \ln 2 +6x^2-4x^4}{(1-x^2)^{3/2}}\\ &-\frac{i}{(1-x^2)^{3/2}} \left[ \frac{\pi^2}{4}-\mathrm{Li}_2\left(x + i\sqrt{1-x^2}\,\right) + \mathrm{Li}_2\left(-x-i\sqrt{1-x^2}\,\right) + \frac{i\pi}{2}\,\ln\left(x + i\sqrt{1-x^2}\,\right)\right]\,. \end{split} \label{eqn:I_a} \end{equation} The vertex correction to the polarizability is represented by the integral in Eq.~\eqref{eqn:Pb1}. Although not amenable to analytic evaluation, this integral can still be simplified by employing the elliptic coordinates: \begin{equation} \begin{split} P_b(q,\omega) &=-\frac{Nq\alpha}{32 \pi^3 v}\, \int\frac{d\mu d\mu' d\nu d\nu'}{\sqrt{\cosh(\mu+\mu')-\cos(\nu+\nu')} \sqrt{\cosh(\mu-\mu')-\cos(\nu-\nu')}}\, \frac{\cosh\mu\cosh\mu'\sin\nu\sin\nu'}{(\cosh^2 \mu - y^2)(\cosh^2 \mu' - y^2)}\\ &\times [(\sin\nu\sin\nu'+\sinh\mu\sinh\mu')+x^2(\cosh\mu\cosh\mu'+\tanh\mu\tanh\mu'\cos\nu\cos\nu')]\,, \label{eqn:P_b_IV} \end{split} \end{equation} where $y = x + i0$. Integrating over $\nu$ and $\nu'$, one is led to Eq.~\eqref{eqn:P_b_II} with $I_b$ given by the two-dimensional integral \begin{gather} I_b(y) = \frac{N}{\pi^3}\int_0^{\infty}\int_0^{\infty} \frac{da db F(a, b)[\cosh a + (1 + 2 y^2) \cosh b]} {[1 - 2 y^2 + \cosh(a + b)] [1 - 2 y^2 + \cosh(a - b)]}\,, \label{eqn:Ib}\\ F(a, b) = \frac{\cosh(a / 2)}{\cosh(b / 2)} \left\{ [K_b \cosh b - E_b (1 + \cosh b) ] E_a - \frac{1}{3}\, [-E_a \cosh a + K_a(\cosh a - 1)] K_b \right\}\,, \label{eqn:F}\\ K_\tau = K\left(\operatorname{sech}^2 \frac{\tau}{2}\right)\,, \quad E_\tau = E\left(\operatorname{sech}^2 \frac{\tau}{2}\right)\,, \label{eqn:KE} \end{gather} \end{widetext} where $K(z)$ and $E(z)$ denote the complete elliptic integrals of the first and the second kind. For $x < 1$, the calculation of $P_b$ using these formulas is easily done numerically and the result is real. For $x > 1$, the standard numerical quadrature routines fail because the integration path in Eq.~\eqref{eqn:Ib} passes near the zeros of the denominator. For such $x$, one can derive the following alternative formulas for $P_b$. Denoting $x = \cosh(\rho/2)$, where $\rho > 0$ is real, we see that the poles of the integrand are at $a_{1, 2} = \rho \mp b$. The result for $I_b$ is complex, with the principal value integral (denoted by $\mathcal{P}$) giving the real part. The $a$ integral can be computed as the integral over the contour depicted by the dashed line in Fig.~\ref{fig:contour}, which can be deformed into the union of contours $C_1$ and $C_2$. The real part of the integrand vanishes on $C_2$, and thus only $C_1$ contributes to $\mathrm{Re}\, I_b$. \begin{figure} \begin{center} \includegraphics[width=2.5in]{pol1cor_Fig3} \end{center} \caption{Integration contours in the complex $a$ plane to evaluate $\operatorname{Re} I_b(x)$ for $x>1$ in Eq.~\eqref{eqn:Ib}. The dots indicate the poles of the integrand.
\label{fig:contour} } \end{figure} In turn, the imaginary part of $I_b$ can be obtained using the Sokhotski--Plemelj identity \begin{equation} \frac{1}{z \pm i 0} = \mathcal{P}\,\frac{1}{z} \mp i\pi \delta(z)\, , \label{eqn:Plemelji} \end{equation} followed by simple algebraic manipulations. In the end, we obtain the alternative formulas for the real and imaginary parts of $I_b$ suitable for $x > 1$: \begin{widetext} \begin{align} \mathrm{Re}\, I_b(x + i0) &= \int_0^\infty db\int_0^\pi \frac{du}{\cosh \rho -\cos u} \mathrm{Im}\, \left[\frac{F_s(b + iu, b)}{\cosh(2b + iu)-\cosh\rho}\right]\,, \label{eqn:reIb}\\ \mathrm{Im}\, I_b(x + i0) &= \frac{1}{\pi^2 \sinh \rho \sinh(\rho/2)} \int_0^\infty \frac{d\mu}{\sinh \mu} \left[\frac{F_s(\mu + \rho,\rho)}{\cosh(\mu + \rho/2)} - \frac{F_s(\mu - \rho,\rho)}{\cosh(\mu - \rho/2)}\right]\,, \label{eqn:ImIb}\\ F_s(a, b) &= F(a, b) [\cosh a +(\cosh \rho + 2) \cosh b]+ (a \leftrightarrow b)\,, \quad \rho = 2 \ln \left(x + \sqrt{x^2 - 1}\,\right)\,. \end{align} \end{widetext} Straightforward numerical evaluation and asymptotic analysis of these expressions lead to the results presented in Sec.~\ref{sec:results} above. Since all the above calculations have been done within the Dirac approximation, it is instructive to discuss how they can be generalized to a more realistic lattice model. We will consider in particular the self-energy term (the vertex correction can be analyzed similarly). We will show that the difference between the correction calculated from the Dirac model, Eq.~\eqref{eqn:Pa0}, and that computed from the lattice model is vanishingly small in the limit of interest, $\{\omega \ll v / |\bm{a}|, q \ll 1/ |\bm{a}|\}$, where $\bm{a}$ is the vector connecting a pair of nearest lattice sites. Let the kinetic energy matrix on the lattice be $\hat{H} = \mathcal{H}_{\mathbf{k}}\cdot \bm{\sigma}$, where $\mathcal{H}_{\mathbf{k}}=(\mathcal{H}^x_{\mathbf{k}},\mathcal{H}^y_{\mathbf{k}})$, and let the on-site density distributions be described by a form-factor function $\mathcal{F}(\mathbf{q})$, which is close to unity at $q \ll 1 / b$ and rapidly decays to zero at $q \gg 1 / b$, where $b \sim a$ is the characteristic size of the site orbitals. In this notation, the self-energy on the lattice $\hat{\Sigma}_{\mathbf{q}}=\mathbf{\Sigma}_{\mathbf{q}} \cdot \bm{\sigma}$, with $\mathbf{\Sigma}_{\mathbf{q}}=(\Sigma^x_{\mathbf{q}},\Sigma^y_{\mathbf{q}})$, is given by the equation \begin{widetext} \begin{equation} \Sigma^x_\mathbf{q} + i \Sigma^y_\mathbf{q} = \int\frac{d^2 \mathbf{k}}{(2\pi)^2} \mathcal{V}({\bf q}-{\bf k}) e^{i(\mathbf{q} - \mathbf{k}) \cdot \bm{a}} (\hat{\mathcal{H}}^x_{\mathbf{k}} + i \hat{\mathcal{H}}^y_{\mathbf{k}}) \,, \quad \mathcal{V}(\mathbf{q}) =\frac{2 \pi e^2}{\kappa |\mathbf{q}|} \mathcal{F}(\mathbf{q})\,,\quad \hat{\mathcal{H}}_{\mathbf{k}} \equiv \frac{\mathcal{H}_\mathbf{k}}{|\mathcal{H}_\mathbf{k}|} \,. \label{eqn:Sigma_lat} \end{equation} Note that the form-factor $\mathcal{F}(\mathbf{q})$ regularizes the \emph{short-range} behavior of the interaction potential and serves as the cutoff on the momentum transfer introduced in Refs.~\onlinecite{Mishchenko2008mci, Sheehy2009oto}. Note also that the integration in Eq.~\eqref{eqn:Sigma_lat} is over the entire momentum space to account for Umklapp processes.
From Eq.~\eqref{eqn:Sigma_lat} we see that the self-energy in the lattice model and the Dirac model [Eq.~\eqref{eqn:Sigma_q}] have the same functional form if the deviation $\delta \mathbf{k}$ of $\mathbf{k}$ from a corner $\mathbf{K}$ of the Brillouin zone (BZ) is small: \begin{equation} \mathbf{\Sigma}_{\mathbf{K}+\delta\mathbf{k}} = \frac{e^2}{4\kappa}\, \delta\mathbf{k} \ln \left|\frac{\Lambda_{\text{lat}}}{\delta \mathbf{k}}\right| \left[1 + \mathcal{O}\left(\left|\frac{\delta \mathbf{k}}{\Lambda_{\text{lat}}} \right|\right)\right]\,, \quad \Lambda_{\text{lat}} \sim \frac{1}{a}\,. \end{equation} The lattice analog of Eq.~\eqref{eqn:Pa0} is \begin{equation} 2 P^{\mathrm{lat}}_a(q,\omega) = N_s \int_{\mathrm{BZ}}\frac{d^2 \mathbf{k}}{(2\pi)^2} \mathbf{\Sigma}_{\mathbf{k}}\cdot(\hat{\mathcal{H}}_{\mathbf{k}} - \hat{\mathcal{H}}_{\mathbf{k} + \mathbf{q}}) \frac{\mathcal{E}_{\mathbf{k},\mathbf{q}}^2 + \omega^2} {[\mathcal{E}_{\mathbf{k},\mathbf{q}}^2 - (\omega + i0)^2]^2}\,, \quad \mathcal{E}_{\mathbf{k},\mathbf{q}} \equiv |\mathcal{H}_\mathbf{k}|+|\mathcal{H}_{\mathbf{k} + \mathbf{q}}|\,, \label{eqn:P_a_lat} \end{equation} \end{widetext} where the integral is now taken over the BZ and $N_s = 2$ accounts for the spin degeneracy only. It is easy to see that this integral is convergent and that the difference between Eq.~\eqref{eqn:Pa0} and Eq.~\eqref{eqn:P_a_lat} is of the order of $(q / \Lambda_{\text{lat}}) \ln (q / \Lambda_{\text{lat}})$. This difference is vanishingly small in the limit of interest, as we stated above. \section{Discussion} \label{sec:discussion} In this section we compare our work with previous literature and discuss some effects beyond the first-order perturbation theory. The interaction correction to the static dielectric function $\epsilon(q, 0)$ has been considered in Ref.~\onlinecite{Kotov2008eei}. Our Eq.~\eqref{eqn:epsilon_dc} for this quantity differs from the result obtained therein by the numerical coefficient of the quadratic term, $0.778$ versus $0.53$. This discrepancy arises for two reasons. The first is an apparent error in the numerical evaluation of the vertex diagram in Ref.~\onlinecite{Kotov2008eei}. The second is the different treatment of the self-energy contribution. In Ref.~\onlinecite{Kotov2008eei}, this contribution is assumed to be completely absorbed into the velocity renormalization. In our renormalization scheme, only the first term in Eq.~\eqref{eqn:Pa2} is absorbed while the second leaves a finite remainder, Eq.~\eqref{eqn:Pa_dc}. This difference is not merely a matter of convention. It has to do with the principal distinction between relations among observable quantities and relations involving nonobservable ones. While the former do not depend on the renormalization scheme, the latter do. In our case, the expansion of the static dielectric function $\epsilon(q, 0)$ in powers of $\alpha_q$ (defined by the quasiparticle phase velocity $v_q$) is unique and given by Eq.~\eqref{eqn:epsilon_dc}. On the other hand, the expansion of $\epsilon(q, 0)$ in powers of the ``bare'' coupling $\alpha$ has the form \begin{equation} \epsilon(q, 0) \simeq 1 + \frac{\pi}{2}\, \alpha + \left(0.778 - \frac{\pi}{8}\, \ln \frac{\Lambda}{q}\right) \alpha^2\,, \label{eqn:epsilon_dc_II} \end{equation} in which the coefficient of $\alpha^2$ depends on the nonuniversal cutoff parameter $\Lambda$. Clearly, this coefficient can be discussed only after the renormalization scheme is precisely defined, as we have done here.
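To make the scheme dependence concrete, the following minimal numerical sketch (ours, for illustration only) compares the two expansions of $\epsilon(q, 0)$; the choice $\ln(\Lambda/q) = 2$ is an arbitrary assumption:

\begin{verbatim}
import numpy as np

def alpha_renorm(alpha, log_cutoff):
    # Running coupling, Eq. (eqn:alpha_q):
    # 1/alpha_q = 1/alpha + ln(Lambda/q)/4
    return 1.0 / (1.0 / alpha + 0.25 * log_cutoff)

def eps_renormalized(a_q):
    # Expansion in the renormalized coupling, Eq. (eqn:epsilon_dc)
    return 1.0 + (np.pi / 2) * a_q + 0.778 * a_q**2

def eps_bare(alpha, log_cutoff):
    # Expansion in the bare coupling, Eq. (eqn:epsilon_dc_II)
    return (1.0 + (np.pi / 2) * alpha
            + (0.778 - (np.pi / 8) * log_cutoff) * alpha**2)

alpha, log_cutoff = 0.3, 2.0   # weak coupling; ln(Lambda/q) = 2 assumed
print(eps_renormalized(alpha_renorm(alpha, log_cutoff)))  # ~1.463
print(eps_bare(alpha, log_cutoff))                        # ~1.471
\end{verbatim}

As expected, the two expansions agree up to terms of order $\alpha^3$, while the coefficient of $\alpha^2$ in the bare expansion shifts with the arbitrary cutoff.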
Next, the optical limit of the polarization function has been vigorously debated~\cite{Mishchenko2008mci, Sheehy2009oto, Katsnelson2008opo, Juricic2010coi, Giuliani2012erg} in the context of the interaction correction to the universal conductivity of graphene. Our method of calculation and the final result, Eq.~\eqref{eqn:sigma_I}, are the same as in Ref.~\onlinecite{Mishchenko2008mci}. The interaction correction to the optical conductivity of \emph{doped} graphene has been studied in Ref.~\onlinecite{Abedinpour2011dwp}. In the limit $\omega \gg \mu$, where $\mu$ is the chemical potential measured with respect to the Dirac point, one expects to recover the behavior characteristic of neutral graphene. In this limit the results of Ref.~\onlinecite{Abedinpour2011dwp} agree with those of Ref.~\onlinecite{Mishchenko2008mci} and therefore with ours. The origin of the discrepancy with Ref.~\onlinecite{Juricic2010coi} is discussed in Appendix~\ref{sec:optical}. Finally, the behavior of $P(q, \omega)$ near the absorption threshold $|\omega/{v_q q} - 1|\ll 1$ has been studied by the authors of Ref.~\onlinecite{Gangadharaiah2008crf} (henceforth GFM). In this special region the perturbative expansion of the dielectric function diverges, see Fig.~\ref{fig:threshold}. The divergence of the self-energy term, $\sim (1 - x^2)^{-3 / 2}$ [Eq.~\eqref{eqn:Pa_theshold}], is stronger than that of the vertex term, $\sim (1 - x)^{-1} \ln (1 - x)$ [Eq.~\eqref{eqn:P_b_threshold}]. However, GFM argued that the divergence of the self-energy term can be trivially absorbed into the velocity renormalization, thus leaving only the divergence of the vertex term. Summing the ladder series for the vertex corrections, they obtained a nonperturbative expression for $\epsilon(q, \omega)$. This expression is also plotted in Fig.~\ref{fig:threshold} assuming that the renormalized velocity of GFM coincides with $v_q$. \begin{figure} \begin{center} \includegraphics[width=2.8in]{pol1cor_Fig4} \end{center} \caption{(Color online) The real part of the dielectric function near the absorption threshold for $\alpha_q = 0.3$. The black solid line is our first-order theory, the red dash-dotted line is the ladder sum from Ref.~\onlinecite{Gangadharaiah2008crf}. \label{fig:threshold} } \end{figure} An important qualitative prediction of the GFM theory is the vanishing of $\mathrm{Re}\, \epsilon$ at a certain $x < 1$, see Fig.~\ref{fig:threshold}, which signals the presence of a new collective mode --- ``excitonic plasmon.'' While we find this prediction very interesting, we wish to express some reservations about the validity of the GFM approach. We believe that the nonperturbative treatment should begin with the resummation of the self-energy rather than the vertex term because the former is more divergent. Such a resummation would make the quasiparticle velocity $v_q$ momentum-dependent, cf.~Eq.~\eqref{eqn:vel_I}. In other words, the linear Dirac spectrum would be replaced by a spectrum with a finite curvature. As in the case of the static response, it is not possible to faithfully represent the effect of such a curvature by simply replacing $v$ with another constant number. It is easy to see that the finite curvature of the spectrum modifies the behavior of the dielectric function over the range of frequencies $\sim \alpha q$, which is much wider than the interval $\alpha^2 q$ where the higher-order terms considered by GFM are important, see also Fig.~\ref{fig:threshold}.
Our preliminary analysis suggests that this significantly modifies the analytical structure of the ladder sum compared to what was obtained by GFM. This intriguing problem warrants further study. Our work is supported by Grant NSF PHY11-25915, by UCOP (M.M.F.), by the KITP Graduate Fellows Program (I.S.), by the Welch Foundation Grant No. TBF1473 (I.S.), and by the NRI SWAN program (I.S.). We are grateful to the KITP at UCSB, where this work was carried out, for its hospitality. We are thankful to I.~Herbut, P.~Koroteev, V.~Kotov, V.~Mastropietro, E.~Mishchenko, M.~Polini, O.~Vafek, and B.~Uchoa for discussions and comments on the manuscript. We thank M.~Vozmediano for bringing Ref.~\onlinecite{Teber} to our attention.
\section{Approach} \label{sec:approach} We elaborate our approach to query-focused video summarization in this section. Denote by ${\cal V} = \{{\cal V}_t\}_{t=1}^T$ a video that is partitioned into $T$ segments, and by $q$ the query about the video. In our experiments, every segment ${\cal V}_t$ consists of $10$ video shots, each of which is 5 seconds long and is the unit used in Section~\ref{subsec:GS} to collect the concept annotations. \begin{figure*}[t] \includegraphics[width=\textwidth]{./Framework} \caption{\small{Our query-focused video summarizer: Memory network (right) parameterized sequential determinantal point process (left).}} \label{fig:framework} \vspace{-10pt} \end{figure*} \subsection{Query Conditioned Sequential DPP} The sequential determinantal point process (DPP)~\cite{gong2014diverse} is among the state-of-the-art models for generic video summarization. We condition it on the query $q$ as our overarching video summarization model, \begin{align} &P(Y_1=\bm{y}_1, Y_2 = \bm{y}_2, \cdots, Y_T=\bm{y}_T | \mathcal{V}, q) \\ = &P(Y_1=\bm{y}_1 | \mathcal{V}_1, q)\prod_{t=2}^T P(Y_t = \bm{y}_t | \mathcal{V}_t, \bm{y}_{t-1}, q) \end{align} where the $t$-th DPP variable $Y_t$ selects subsets from the $t$-th segment ${\cal V}_t$, i.e., $\bm{y}_t\subseteq{\cal V}_t$, and the distribution $P(Y_t = \bm{y}_t | \mathcal{V}_t, \bm{y}_{t-1}, q)$ is specified by a conditional DPP~\cite{kulesza2012determinantal}, \begin{align} P(Y_t = \bm{y}_t | \mathcal{V}_t, \bm{y}_{t-1}, q) = \frac{\det [\bm{L}(q)]_{\bm{y}_t\cup\bm{y}_{t-1}}}{\det\big(\bm{L}(q) + \bm{I}_t\big)}. \end{align} The numerator on the right-hand side is the principal minor of the (L-ensemble) kernel matrix $\bm{L}(q)$ indexed by the subsets $\bm{y}_t\cup\bm{y}_{t-1}$. The denominator is the determinant of the sum of the kernel matrix and a modified identity matrix whose diagonal entries indexed by $\bm{y}_{t-1}$ are 0's. Readers are referred to the excellent tutorial~\cite{kulesza2012determinantal} on DPPs for more details. Note that the DPP kernel $\bm{L}(q)$ is parameterized by the query $q$. We have to carefully devise its parameterization in order to take into account the following properties. In query-focused video summarization, a user selects a shot for the summary for two possible reasons. One is that the shot is closely related to the query and thus appealing to the user. The other is the contextual importance of the shot; e.g., the user would probably choose a shot to represent a prominent event in the video even if the event is not quite relevant to the query. To this end, we use a memory network to model the two types of importance (query-related and contextual) of a video shot simultaneously.
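Before detailing the kernel parameterization, here is a minimal NumPy sketch of the conditional DPP probability above; the kernel $\bm{L}(q)$ is assumed to be precomputed over the ground set ${\cal V}_t \cup \bm{y}_{t-1}$, and the variable names are ours:

\begin{verbatim}
import numpy as np

def cond_dpp_prob(L, y_t, y_prev):
    """P(Y_t = y_t | y_{t-1}) for a conditional DPP.

    L      : kernel over the current segment together with y_prev
    y_t    : indices of the newly selected shots
    y_prev : indices of the shots selected at step t - 1
    """
    idx = list(y_t) + list(y_prev)
    numer = np.linalg.det(L[np.ix_(idx, idx)])
    # I_t: identity whose diagonal entries indexed by y_prev are 0's
    I_t = np.eye(L.shape[0])
    I_t[y_prev, y_prev] = 0.0
    return numer / np.linalg.det(L + I_t)
\end{verbatim}

Zeroing the diagonal entries indexed by $\bm{y}_{t-1}$ is what conditions the process on the previously selected shots.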
\subsection{Memory Network to Parameterize DPP Kernels} The memory network~\cite{sukhbaatar2015end} offers a neural network architecture to naturally attend a question to ``facts'' (cf.\ the rightmost panel of Figure~\ref{fig:framework}).
In our work, we shall measure the relevance between the query $q$ and a video shot and incorporate such information into the DPP kernel $\bm{L}(q)$. Therefore, it is straightforward to substitute the question in the memory network with our query, but the ``facts'' are less obvious. As discussed in Section~\ref{subsec:Dict}, there could be various scenarios for a query and a shot. All the query concepts may appear in the shot but possibly in different frames; one or two concepts of the query may not be present in the shot; it is also possible that none of the concepts are relevant to any frame in the shot. In other words, the memory network is supposed to screen all the video frames in order to determine the shot's relevance to the query. Hence, we uniformly sample 8 frames from each shot as the ``facts''. The video frames are represented using the same features as~\cite{sharghi2016query} (cf.\ $\bm{f}_1,\cdots,\bm{f}_K$ on the rightmost panel of Figure~\ref{fig:framework}). The memory network takes as input the video frames $\{\bm{f}_k\}$ of a shot and a query $q$. The frames are transformed into memory vectors $\{\bm{m}_k\}$ through an embedding matrix $A$. Similarly, the query ${q}$, represented by a binary indicator vector, is mapped to the internal state $\bm{u}$ using an embedding matrix $C$. The attention scheme is implemented simply by a dot product followed by a softmax function, \begin{equation} p_k = \text{Softmax}(\bm{u}^T\bm{m}_k), \end{equation} where $p_k$ encodes how much attention the query $q$ pays to the frame $\bm{f}_k$. Equipped with the attention scores $\{p_k\}$, we assemble another embedding $\{\bm{c}_k\}$ of the frames, obtained by the mapping matrix $B$ in Figure~\ref{fig:framework}, into the video shot representation $\bm{o}$: \begin{equation} \bm{o} = \sum_{k}p_k\bm{c}_k, \end{equation} which is conditioned on the query $q$ and encodes the strength of the shot's relevance to the query. As a result, we expect that the DPP kernel, parameterized as \begin{equation} [\bm{L}(q)]_{ij} = \bm{o}_i^T {D}^T {D} \bm{o}_j \label{eWeightCombine} \end{equation} is flexible enough to model the importance of the shots to be selected into the video summary. Here $i$ and $j$ index two shots, and ${D}$ is another embedding matrix. Note that the contextual importance of a shot can be inferred from its similarities to the other shots through the kernel matrix, while the query-related importance is mainly captured by the attention scheme in the memory network. \subsection{Learning and Inference} We learn the overall video summarizer, including the sequential DPP and the memory network, by maximizing the log-likelihood of the user summaries in the training set. We use stochastic gradient descent with mini-batching to optimize the embedding matrices $\{A, B, C, D\}$. The learning rates and numbers of epochs are chosen using the validation set. At the test stage, we sequentially visit the video segments ${\cal V}_1, \cdots, {\cal V}_T$ and select shots from them using the learned summarization model. It is notable that our approach requires fewer user annotations than SH-DPP~\cite{sharghi2016query}. It learns directly from the user summaries and implicitly attends the queries to the video shots. However, SH-DPP requires very costly annotations about the relevance between video shots and queries. Our new dataset does supply such supervision, so we shall include SH-DPP as a baseline method in our experiments.
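To make the full forward pass concrete, the following NumPy sketch chains the attention scheme and the kernel construction described above; the matrix shapes and the exact softmax normalization are our assumptions rather than specifics of the paper:

\begin{verbatim}
import numpy as np

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def shot_representation(frames, query, A, B, C):
    # frames: (K, d_f) frame features; query: (d_q,) binary indicator
    m = frames @ A.T     # memory vectors m_k
    c = frames @ B.T     # output embeddings c_k
    u = C @ query        # internal query state u
    p = softmax(m @ u)   # attention p_k = Softmax(u^T m_k)
    return p @ c         # o = sum_k p_k c_k

def dpp_kernel(shot_reps, D):
    # [L(q)]_ij = o_i^T D^T D o_j -- a Gram matrix, hence PSD
    O = shot_reps @ D.T  # shot_reps: one row per shot
    return O @ O.T
\end{verbatim}

Writing $\bm{L}(q)$ as a Gram matrix guarantees that it is positive semidefinite, as a DPP kernel must be.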
\section{Conclusion} \label{sec:conc} \vspace{-5pt} In this work, our central theme is to study the \textit{subjectiveness} in video summarization. We have analyzed the key challenges caused by the subjectiveness and proposed some solutions. In particular, we compiled a dataset that is densely annotated with a comprehensive set of concepts and designed a novel evaluation metric that benefits from the collected annotations. We also devised a new approach to generating personalized summaries by taking user queries into account. We employed memory networks and determinantal point processes in our summarizer, so that our model leverages their attention schemes and diversity modeling capabilities, respectively. Extensive experiments verify the effectiveness of our approach and reveal some nice behaviors of our evaluation metric. \vspace{-10pt} \paragraph{Acknowledgements.} This work is supported by NSF IIS \#1566511, a gift from Adobe Systems, and a GPU from NVIDIA. We thank Fei Sha, the anonymous reviewers and area chairs, especially R2, for their insightful suggestions. \section{Dataset} \label{sec:dataset} In this section, we provide the details of compiling a comprehensive dataset for video summarization. We opt to build upon the existing UT Egocentric (UTE) dataset~\cite{lee2012discovering} mainly for two reasons: 1) the videos are consumer grade, captured in uncontrolled everyday scenarios, and 2) each video is 3--5 hours long and contains a diverse set of events, making video summarization a naturally desirable yet challenging task.
In what follows, we first explain how we define a dictionary of concepts and determine the best queries over all possibilities for query-focused video summarization. Then we describe the procedure of gathering user summaries for the queries. We also show informative statistics about the collected dataset. \begin{figure*} \centering \includegraphics[width=\linewidth]{./shot_tags} \vspace{-18pt} \caption{\small{All annotators agree with each other on the prominent concepts in the video shot, while they miss different subtle concepts. }} \label{fig:tags} \vspace{-2pt} \end{figure*} \subsection{Concept Dictionary and Queries} \label{subsec:Dict} We plan to have annotators transform the semantic information in each video shot into a binary semantic vector (cf.\ Figures~\ref{fig:captionvstags} and \ref{fig:tags}), with 1's indicating the presence of the corresponding concepts and 0's the absence. Such annotations serve as the foundation for an efficient and automatic evaluation method for video summarization described in Section~\ref{subsec:tag}. The key is thus to have a dictionary that covers a wide range and multiple levels of concepts, in order to have the right basis to encode the semantic information. In~\cite{sharghi2016query}, we have constructed a lexicon of concepts by overlapping nouns in the video shot captions~\cite{yeung2014videoset} with those in SentiBank~\cite{borth2013sentibank}. Those nouns serve as a great starting point for us since they are mostly entry-level~\cite{ordonez2013large} words. We prune out the concepts that are weakly related to visual content (e.g., \textsc{``Area''}, which could be interpreted in various ways and is applicable to most situations). Additionally, we merge redundant concepts such as \textsc{``Children''} and \textsc{``Kids''}. We also add some new concepts in order to construct an expressive and comprehensive dictionary. Two strategies are employed to find the new concept candidates. First, after watching the videos, we manually add the concepts that appear with significant frequency, e.g., \textsc{``Computer''}. Second, we use the publicly available statistics on YouTube and Vine search terms to add the terms that are frequently searched by users, e.g., \textsc{``Pet/Animal''}. The final lexicon is a concise and diverse set of 48 concepts (cf.\ Figure~\ref{fig:stats}) that are deemed to be comprehensive for the UTE videos of daily life. \begin{figure*}[t] \centering \includegraphics[width=\linewidth]{./Summary_Comparison} \vspace{-18pt} \caption{\small{Two summaries generated by the same user for the queries $\{\textsc{Hat},\textsc{Phone}\}$ and $\{\textsc{Food},\textsc{Drink}\}$, respectively.
The shots in the two summaries beside the green bars exactly match each other, while the orange bars show the query-specific shots.}} \label{fig:sumcompare} \vspace{-10pt} \end{figure*} We construct the queries from two or three concepts, as opposed to singletons, to acquire query-focused user summaries. Imagine a use case of video search engines: the queries entered by users are often more than one word. For each video, we formulate 46 queries. They cover the following four distinct scenarios: i) all the concepts in the query appear in the same video shots together (15 such queries); ii) all concepts appear in the video but never jointly in a single shot (15 queries); iii) only one of the concepts constituting the query appears in some shots of the video (15 queries); and iv) none of the concepts in the query are present in the video (1 query). We describe in the Suppl.\ Materials how we obtain the 46 queries to cover the four scenarios. Such queries and their user-annotated summaries challenge an intelligent video summarizer in different aspects and to different extents. \subsection{Collecting User Annotations} \label{subsec:GS} We plan to build a video summarization dataset that offers 1) efficient and automatic evaluation metrics and 2) user summaries in response to different queries about the videos. For the former, we collect user annotations about the presence/absence of concepts in each video shot. This is quite a daunting task given the lengths of the videos and the size of our concept dictionary. We use Amazon Mechanical Turk (MTurk) (\url{http://www.mturk.com/}) for economy and efficiency considerations. For the latter, we hire three student volunteers to have better quality control over the labeled video summaries. We uniformly partition the videos into 5-second-long shots. \vspace{-5pt} \subsubsection{Shot Tagging: Visual Content to Semantic Vector} \label{subsec:tag} \vspace{-5pt} We ask MTurkers to tag each video shot with all the concepts that are present in it. To save the workers from watching the full shots, we uniformly extract five frames from each shot. A concept is assumed relevant to the shot as long as it is found in any of the five frames. Figure~\ref{fig:tags} illustrates the tagging results for the same shot by three different workers. While all the workers captured the prominent concepts like \textsc{Sky}, \textsc{Lady}, \textsc{Street}, \textsc{Tree}, and \textsc{Car}, they missed different subtle ones. The union of all their annotations, however, provides a more comprehensive semantic description of the video shot than that of any individual annotator. Hence, we ask three workers to annotate each shot and take their union to obtain the final semantic vector for the shot. On average, we have acquired $4.13$, $3.95$, $3.18$, and $3.62$ concepts per shot for the four UTE videos, respectively. In sharp contrast, the concepts automatically derived~\cite{sharghi2016query} from the shot captions~\cite{yeung2014videoset} are far from enough; on average, there are only $0.29$, $0.58$, $0.23$, and $0.26$ concepts respectively associated with each shot of the four videos. \vspace{-13pt} \paragraph{Evaluating video summaries.} Thanks to the dense concept annotations per video shot, we can conveniently contrast a system-generated video summary with user summaries according to the semantic information they entail. We first define a similarity function between any two video shots as the intersection-over-union (IOU) of their corresponding concepts.
For instance, if one shot is tagged by \{\textsc{Car}, \textsc{Street}\} and another by \{\textsc{Street}, \textsc{Tree}, \textsc{Sign}\}, then the IOU similarity between them is ${1}/{4} = 0.25$. To match two summaries, it is convenient to cast the problem as maximum weight matching on a bipartite graph, where the shots of the two summaries lie on opposite sides of the graph. The number of matched pairs then enables us to compute precision, recall, and F1 score. Although this procedure has been used in previous work~\cite{khosla2013large,de2011vsumm}, there the edge weights are calculated from low-level visual features, which by no means match the semantic information humans obtain from the videos. In sharp contrast, we use the IOU similarities defined directly over the user-annotated semantic vectors as the edge weights. \vspace{-5pt} \subsubsection{Acquiring User Summaries} \label{subsec:user_sums} In addition to the dense per-video-shot concept tagging, we also ask annotators to label query-focused video summaries for the 46 queries described in Section~\ref{subsec:Dict}. To ensure consistency in the summaries and better quality control over the summarization process, we switch from MTurk to three student volunteers in our university. We meet and train the volunteers in person. They each summarize all four videos by taking queries into account --- an annotator receives 4 (videos) $\times$ 46 (queries) summarization tasks in total. We thus obtain three user summaries for each query-video pair. However, we acknowledge that it is infeasible to have the annotators summarize all the query-video pairs from scratch --- the UTE videos are each 3--5 hours long. To overcome this issue, we represent each video as a set of static key frames. First, we uniformly extract five key frames to represent each shot in the same way as in Section~\ref{subsec:tag}. Second, we pool all the shots corresponding to the three textual summaries~\cite{yeung2014videoset} as the initial candidate set. Third, for each query, we further include all the shots that are relevant to it into the set. A shot is relevant to the query if the intersection of the concepts associated with it and the query is nonempty. As a result, we have a set of candidate shots for each query that covers the main story in the video as well as the shots of relevance to the query. The annotators summarize the video by removing redundant shots from the set. There are $2500$ to $3600$ shots in the candidate sets, and the summaries labeled by the participants contain only $71$ shots on average.
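For clarity, here is a minimal sketch of the IOU-based matching metric described above; counting a matched pair only when its IOU is nonzero is our assumption, as the exact threshold is not spelled out in the text:

\begin{verbatim}
import numpy as np
from scipy.optimize import linear_sum_assignment

def iou(tags_a, tags_b):
    a, b = set(tags_a), set(tags_b)
    return len(a & b) / len(a | b) if a | b else 0.0

def summary_f1(system, user):
    # system, user: lists of per-shot concept sets
    W = np.array([[iou(s, u) for u in user] for s in system])
    rows, cols = linear_sum_assignment(-W)   # maximum weight matching
    n_match = int((W[rows, cols] > 0).sum()) # matched pairs (assumed)
    prec, rec = n_match / len(system), n_match / len(user)
    return 2 * prec * rec / (prec + rec) if prec + rec > 0 else 0.0

# e.g., iou({'Car', 'Street'}, {'Street', 'Tree', 'Sign'}) == 0.25
\end{verbatim}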
\vspace{-12pt} \begin{table}[t]\centering \small \caption{\small{Inter-user agreement evaluated by F1 score (\%) (U1, U2, and U3: the three student volunteers, O: the oracle summary).}} \label{tab:interuser} \vspace{-10pt} \begin{tabular}{cccccc}\toprule U1-U2 & U1-U3 & U2-U3 & U1-O & U2-O & U3-O \\ \midrule 55.27 & 55.85 & 62.67 & 64.97 & 79.75 & 80.07\\ \bottomrule \end{tabular} \vspace{-15pt} \end{table} \paragraph{Oracle summaries.} Supervised video summarization methods~\cite{gong2014diverse,gygli2015video,sharghi2016query,zhang2016summary,zhang2016video} often learn from one summary per video, or per query-video pair in query-focused summarization, while we have three user-generated summaries per query. We aggregate them into one summary, called the oracle summary, per query-video pair by a greedy algorithm. The algorithm starts from the shots common to the three user summaries. It then greedily adds one shot at a time, choosing the shot that gives rise to the largest marginal gain in the evaluated F1 score. We leave the details to the Suppl.\ Materials. The oracle summaries achieve better agreement with the users than the inter-user consensus (cf.\ Table~\ref{tab:interuser}). \vspace{-12pt} \begin{table} \small \centering \caption{\small{The average lengths and standard deviations of the summaries for different queries. }} \vspace{-10pt} \label{table:sumstats} \begin{tabular}{ccccc} \toprule & User 1 & User 2 & User 3 & Oracle \\ \cmidrule{2-5} Vid1 & 143.7$\pm$32.5 & 80.2$\pm$47.1 & 62.6$\pm$15.7 & 82.5$\pm$33.9 \\ Vid2 & 103.0$\pm$45.0 & 49.9$\pm$25.2 & 64.4$\pm$11.7 & 64.1$\pm$11.7 \\ Vid3 & 97.3$\pm$38.9 & 50.1$\pm$9.6 & 58.4$\pm$9.3 & 59.2$\pm$9.6 \\ Vid4 & 79.9$\pm$30.3 & 34.4$\pm$7.3 & 28.9$\pm$8.7 & 35.6$\pm$8.5 \\ \bottomrule \end{tabular} \vspace{-15pt} \end{table} \paragraph{Summaries of the same video differ due to queries.} Figure~\ref{fig:sumcompare} shows two summaries labeled by the same user for two distinct queries, $\{\textsc{Hat},\textsc{Phone}\}$ and $\{\textsc{Food},\textsc{Drink}\}$. Note that the summaries both track the main events happening in the video while they differ in the query-specific parts. Besides, Table~\ref{table:sumstats} reports the means and standard deviations of the lengths of the summaries per video per user. We can see that the queries highly influence the resulting summaries; the large standard deviations are attributable to the queries. \vspace{-22pt} \paragraph{Budgeted summary.} For all the summaries thus far, we do not impose any constraints on the total number of shots to be included in the summaries. After we receive the annotations, however, we let the same participants further reduce the lengths of their summaries to 20 and 10 shots, respectively. We call them \emph{budgeted} summaries and leave them for future research. \section{Experimental Setup} \label{subsec:eval} In this section, we propose a new evaluation metric and contrast it against other existing metrics. \section{Summary Examples} \label{Examples} In addition to the text, we have enclosed two system summaries to provide qualitative results.
The first video summary corresponds to the query $q$=\textsc{\{Food,Drink\}} (scenario i) and consists of 93 shots (each shot is 5 seconds long), making it less than 8 minutes long while the original video is $\sim 4$ hours. The second, for the query $q$=\textsc{\{Chocolate,Street\}} (scenario ii), is a summary of length $\sim$5 minutes (56 shots) generated by our model for a 3-hour-long video. \subsection{A Nice Behavior of Our Evaluation Metric} \vspace{-5pt} Our evaluation method for video summarization is mainly motivated by Yeung et al.~\cite{yeung2014videoset}. In particular, we share the same opinion that the evaluation should focus on the semantic information which humans can perceive, rather than on low-level visual features or temporal overlaps. However, the captions used in \cite{yeung2014videoset} are diverse, making the ROUGE-SU4 evaluation unstable and poorly correlated with human judgments~\cite{chen2015microsoft}, and they often miss subtle details (cf.\ Figure~\ref{fig:captionvstags} for some examples). We rectify these issues by instead collecting dense concept annotations. Figure~\ref{fig:captionvstags} exhibits a few video shots where the concepts we collected provide a better coverage of the semantics in the shots than the captions. Moreover, thanks to the concept annotations, we can conveniently define an evaluation metric based on the IOU similarity function between any two shots (cf.\ Section~\ref{subsec:tag}). Our evaluation metric has some nice behaviors. If we randomly remove some video shots from the user summaries and compare the corrupted summaries with the original ones, an accuracy-like metric should give rise to linearly decreasing values. This is indeed what happens to our recall, as shown in Figure~\ref{fig:Del}. In contrast, the ROUGE-SU4 recall, taking as input the shot captions, exhibits some nonlinearity. More results on randomly replacing some shots in the user summaries are included in the Suppl.\ Materials. \vspace{-5pt} \section{Experimental Results} \label{sec:expset} \begin{table*}[t] \centering \small \caption{\small{Comparison results for query-focused video summarization (\%).}} \label{table:results} \vspace{-10pt} \begin{tabular}{@{}rrrrcrrrcrrr@{}}\toprule & \multicolumn{3}{c}{SeqDPP~\cite{gong2014diverse}} & \phantom{abc}& \multicolumn{3}{c}{SH-DPP~\cite{sharghi2016query}} & \phantom{abc} & \multicolumn{3}{c}{\textbf{Ours}}\\ \cmidrule{2-4} \cmidrule{6-8} \cmidrule{10-12} & Precision & Recall & F1 && Precision & Recall & F1 && Precision & Recall & F1\\ \midrule Vid1 & \textbf{53.43} & 29.81 & 36.59 && 50.56 & 29.64 & 35.67 && 49.86 & \textbf{53.38} & \textbf{48.68}\\ Vid2 & \textbf{44.05}& 46.65 & \textbf{43.67} && 42.13& 46.81& 42.72&& 33.71& \textbf{62.09}& 41.66\\ Vid3 & 49.25& 17.44 & 25.26 && 51.92& 29.24& 36.51&& \textbf{55.16} & \textbf{62.40} & \textbf{56.47} \\ Vid4 & 11.14 & \textbf{63.49} & 18.15 && 11.51& 62.88& 18.62&& \textbf{21.39}& 63.12& \textbf{29.96}\\ \midrule Avg.
& 39.47 & 39.35 & 30.92 && 39.03 & 42.14 & 33.38 && \textbf{40.03} & \textbf{60.25} & \textbf{44.19} \\ \bottomrule \end{tabular} \vspace{-2pt} \end{table*} We report the experimental setup and results in this section. \vspace{-10pt} \paragraph{Features.} We extract the same type of features as used in the existing SH-DPP method~\cite{sharghi2016query} in order to have fair comparisons. First, we employ 70 concept detectors from SentiBank~\cite{borth2013sentibank} and use the detection scores as the features of each key frame (8 key frames per 5-second-long shot). However, it is worth mentioning that our approach is not limited to using concept detection scores and, more importantly, unlike SH-DPP, does not rely on the per-shot annotations about the relevance to the query --- the per-shot user-labeled semantic vectors serve evaluation purposes only. Additionally, we extract a six-dimensional contextual feature vector per shot as the mean-correlations of low-level features (including color histogram, GIST~\cite{oliva2001modeling}, LBP~\cite{ojala2002multiresolution}, Bag-of-Words, as well as an attribute feature~\cite{yu2013designing}) in a temporal window whose size varies from 5 to 15 shots. The six-dimensional contextual features are appended to the key frame features in our experiments. \vspace{-10pt} \paragraph{Data split.} We run four rounds of experiments, each leaving one video out for testing and one for validation, while keeping the remaining two for training. Since our video summarizer and the baselines are sequential models, the small number (i.e., two) of training videos is not an issue, as the videos are extremely long, providing ample variation and supervision at the training stage. \subsection{Comparison Results}
\begin{table*}[t]\centering \small \caption{\small{Comparison results for generic video summarization, i.e., when no video shots are relevant to the query.}} \label{table:glob} \vspace{-10pt} \begin{tabular}{@{}rrrrcrrrcrrr@{}}\toprule & \multicolumn{3}{c}{SubMod~\cite{gygli2015video}} & \phantom{abc}& \multicolumn{3}{c}{Quasi~\cite{zhao2014quasi}} & \phantom{abc} & \multicolumn{3}{c}{\textbf{Ours}}\\ \cmidrule{2-4} \cmidrule{6-8} \cmidrule{10-12} & Precision & Recall & F1 && Precision & Recall & F1 && Precision & Recall & F1\\ \midrule Vid1 & 47.86 & 51.28 & 49.51 && 57.37 & 49.36 & 53.06 && \textbf{65.88} & 59.75 & \textbf{62.66}\\ Vid2 & 56.53 & 46.50 & 51.03 && 46.75& 63.34& \textbf{53.80} && 35.07 & \textbf{67.31} & 46.11\\ Vid3 & 62.46 & 66.72 & 64.52 && 53.93 & 46.44 & 49.91&& \textbf{65.95} & 53.12 & \textbf{58.85} \\ Vid4 & \textbf{34.49} & 37.25 & \textbf{35.82} && 13.00 & \textbf{77.88} & 22.31 && 22.29 & 67.74 & 33.5\\ \midrule Avg. & \textbf{50.34} & 50.44 & 50.22 && 42.76 & 59.25 & 44.77 && 47.3 & \textbf{61.98} & \textbf{50.29}\\ \bottomrule \end{tabular} \end{table*} \section{Introduction} \label{sec:intro} \begin{figure*}[t] \centering \includegraphics[width=\linewidth]{./Caption_vs_tags} \vspace{-18pt} \caption{\small{Comparing the semantic information captured by the captions in~\cite{yeung2014videoset} and by the concept tags we collected.}} \label{fig:captionvstags} \vspace{-10pt} \end{figure*} Recent years have witnessed a resurgence of interest in video summarization, probably due to the overwhelming volumes of video showing up in our daily life. Indeed, both consumers and professionals have access to ubiquitous video acquisition devices nowadays. While video data is a great asset for information extraction and knowledge discovery, due to its size and variability it is extremely hard for users to monitor the videos or find the occurrences of interest in them. Intelligent video summarization algorithms allow us to quickly browse a lengthy video by capturing the essence and removing redundant information. Early video summarization methods were built mainly upon basic visual qualities (e.g., low-level appearance and motion features)~\cite{goldman2006schematic,gygli2015video,laganiere2008video,liu2002optimization,rav2006making,wolf1996key,zhao2014quasi}, while more recently abstract and higher-level cues have been leveraged in summarization frameworks~\cite{gong2014diverse,khosla2013large,kim2014joint,kwon2012unified,sharghi2016query,xiong2014detecting,yaohighlight,zhang2016summary}. However, one of the main obstacles to research on video summarization is user subjectivity --- users have various preferences over the summaries they would like to watch. This subjectivity causes at least two problems. First, no single video summarizer fits all users unless it interacts with and adapts to the users. Second, it is very challenging to evaluate the performance of a video summarizer. In an attempt to solve the first problem, we have studied a new video summarization mechanism, query-focused video summarization~\cite{sharghi2016query}, which introduces user preferences in the form of text queries about the video into the summarization process.
While this may be a promising direction for \textbf{\em personalizing} video summarizers, the experimental study in \cite{sharghi2016query} was conducted on datasets originally collected for conventional generic video summarization~\cite{lee2012discovering,yeung2014videoset}. It remains unclear whether real users would generate distinct summaries for different queries, and if so, how much the query-focused summaries differ from each other. In this paper, we explore query-focused video summarization more thoroughly and build a new dataset particularly designed for it. While collecting the user annotations, we face the challenge of defining a good evaluation metric for contrasting system-generated summaries with user-labeled ones --- the second problem mentioned above, stemming from user subjectivity about video summaries. We contend that the pursuit of new algorithms for video summarization has actually left one of the basic problems underexplored, i.e., how to benchmark different video summarizers. User studies~\cite{lee2015predicting,lu2013story} are too time-consuming for comparing different approaches and their variations at large scale. In the prior art on automating the evaluation procedure, on one end, a system-generated summary has to consist of exactly the same key units (frames or shots) as the user summaries in order to be counted as a good one~\cite{chu2015video,song2015tvsum,xu2015gaze}. On the other end, pixels and low-level features are used to compare the system and user summaries~\cite{gong2014diverse,khosla2013large,kim2014joint,zhang2016summary,zhao2014quasi}, whereas it is unclear what features and distance metrics match users' criteria. Some works strive to find a balance between the two extremes, e.g., using the temporal overlap between two summaries to define the evaluation metrics~\cite{gygli2014creating,gygli2015video,potapov2014category,zhang2016video}. However, all such metrics are derived from either the temporal or the visual representations of the videos, without explicitly encoding how humans perceive the information --- after all, the system-generated summaries are meant to deliver to the users information similar to that in the summaries directly labeled by the users. In terms of defining a better measure that closely tracks what humans can perceive from video summaries, we share the same opinion as Yeung et al.~\cite{yeung2014videoset}: it is key to evaluate how well a system summary is able to retain the semantic information, as opposed to the visual quantities, of the user-supplied video summaries. Arguably, the semantic information is best expressed by the concepts that represent the fundamental characteristics of what we see in the video at multiple grains, with the focus on different areas, and from a variety of perspectives (e.g., objects, places, people, actions, and their finer-grained entities, etc.). Therefore, as our first contribution, we collect dense per-video-shot concept annotations for our dataset. In other words, we represent the semantic information in each video shot by a binary semantic vector, in which the 1's indicate the presence of the corresponding concepts in the shot. We suggest a new evaluation metric for query-focused (and generic) video summarization based on these semantic vector representations of the video shots\footnote{Both the dataset and the code of the new evaluation metric are publicly available at \url{http://www.aidean-sharghi.com/cvpr2017}.}.
In addition, we propose a memory-network~\cite{sukhbaatar2015end} parameterized sequential determinantal point process~\cite{gong2014diverse} for tackling query-focused video summarization. Unlike the hierarchical model in~\cite{sharghi2016query}, our approach relies neither on costly user supervision about which queried concept appears in which video shot nor on any pre-trained concept detectors. Instead, we use the memory network to implicitly attend the user query about the video to different frames within each shot. Extensive experiments verify the effectiveness of our approach. The rest of the paper is organized as follows. We discuss some related works in Section \ref{sec:related}. Section \ref{sec:dataset} elaborates on the process of compiling the dataset and acquiring annotations, as well as a new evaluation metric for video summarization. In Section \ref{sec:approach} we describe our novel query-focused summarization model, followed by the detailed experimental setup and quantitative results in Section \ref{sec:expset}. \begin{figure*} \centering \includegraphics[width=\linewidth]{./stats} \vspace{-25pt} \caption{\small{The frequencies of concepts showing up in the video shots, counted for each video separately.}} \label{fig:stats} \vspace{-10pt} \end{figure*} \section{Generating Oracle Summaries} \label{Oracle} Supervised video summarization methods often learn from one summary per video or, in the case of query-focused summarization, per query-video pair. For evaluation purposes, on the other hand, it is better to contrast a system summary against multiple references and report the average. Thus, we collected 3 user summaries per query-video pair for evaluation. However, in order to train the model, we obtain \textit{oracle} summaries that have maximum overall agreement with all three reference summaries (per query-video pair). The algorithm~\cite{kulesza2012determinantal} starts with the set of common shots ($y^0$) in all three reference summaries. Next, at each iteration, it greedily includes the shot that returns the largest marginal gain $G(i)$, \begin{align} G(i) = \sum_u \text{F1-score}(y^0\cup \{i\},y_u) - \sum_u \text{F1-score}(y^0,y_u), \end{align} where $u$ iterates over the user summaries (in our case, $u \in \{1,2,3\}$) and the F1-score is obtained from our proposed evaluation metric (a code sketch of this greedy aggregation is given below). Table 1 in the main text shows the agreement between the obtained oracle summaries and the user summaries; the oracle summary has very high agreement with all the user summaries. \section{Constructing Queries} \label{Queries} In this section, we thoroughly describe the process of generating the queries from the dense concept annotations. While users often input free text to query videos through search engines, we simulate the real scenarios and construct the queries using the dense concept annotations we have collected (cf.\ Section 3.2 in the main text) to ease benchmarking different algorithms. By processing the dense user annotation data, we extract various statistics that enable us to construct queries covering a wide range of cases. Initially, a concept is assumed present in the video if it appears in at least $T$ shots. This filters the noise present in annotations acquired from AMT workers and makes sure the concepts really appear together (to steer clear of pairs that are tagged together as a result of noise or bias).
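As referenced above, the following is a minimal sketch of the greedy oracle-summary aggregation. The stopping rule (stop when no shot yields a positive marginal gain) and the function names are our assumptions, not the released implementation; \texttt{f1\_score} is assumed to wrap the IOU-based evaluation metric, applied to sets of shot ids.
\begin{verbatim}
def oracle_summary(user_sums, candidates, f1_score):
    # user_sums:  list of user summaries, each a set of shot ids
    # candidates: set of all candidate shot ids
    # f1_score(pred, ref): F1 from the IOU-based evaluation metric
    y = set.intersection(*user_sums)      # y^0: common shots
    while True:
        best_shot, best_gain = None, 0.0
        for i in candidates - y:
            gain = sum(f1_score(y | {i}, u) for u in user_sums) \
                 - sum(f1_score(y, u) for u in user_sums)
            if gain > best_gain:
                best_shot, best_gain = i, gain
        if best_shot is None:             # no positive marginal gain left
            return y
        y.add(best_shot)
\end{verbatim}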
As described in the main text, when a user enters a query $q$ (for instance, on a video search engine), which usually consists of more than one word, we have four distinct scenarios: i) all the concepts in the query appear in the same video shots together, ii) all the concepts appear in the video, but never jointly in a single shot, iii) only one of the concepts constituting the query appears in some shots of the video, and iv) none of the concepts in the query is present in the video (1 such query). A robust video summarizer must be able to maintain good performance under any of the scenarios. Therefore, by including enough samples of all the scenarios, we build a comprehensive and diverse dataset. For scenario i, we create a list of concept pairs that appear together in the same shots and sort it in descending order of co-occurrence counts. There are two approaches to selecting concept pairs from this list: 1) to employ a random selection process where the probability of selecting a pair from the list is proportional to the number of times the pair appears together in the video (this gives a higher chance to the concepts that tend to happen together in the video while not completely crossing out the concepts that are not dominant in the video), and 2) to pick a few top concept pairs. We opt for the random selection process to better generalize the dataset and remove bias. For scenario ii, we are interested in concept pairs that are present in the video but not in the same shots, e.g., concept pairs such as \textsc{Car} and \textsc{Room} that are unlikely to appear in the same shots of the video. To this end, for each pair we compute a score proportional to the harmonic mean of the two frequencies: \begin{equation} score(f_{c_1},f_{c_2}) = \frac{f_{c_1} \times f_{c_2}}{f_{c_1} + f_{c_2}} \end{equation} where $f_{c_1}$ and $f_{c_2}$ are the frequencies of concepts $c_1$ and $c_2$, respectively. This formulation has two features that make it useful in this regard: 1) its value is always smaller than the smaller of the two frequencies, and 2) for a fixed sum, it is maximized when both inputs are large and identical; e.g., for $f_{c_1}=30$ and $f_{c_2}=60$ the score is $1800/90=20$, below $\min(30,60)=30$. By computing this score for all the pairs in the list and sorting the list in descending order, the concept pairs that have high frequencies for both concepts constituting the query are ranked higher. At this point, we employ the same random selection process to choose pairs from this list. \begin{figure*}[t] \centering \includegraphics[width=\textwidth]{./Caption_vs_tags2} \caption{\small{Comparing the semantic information in our dense tags vs.\ the captions provided by~\cite{yeung2014videoset}. The figure illustrates that a caption targets limited information about the scene, while the dense annotations better explain the characteristics of the scene.}} \label{fig:captionvstags2} \end{figure*} \begin{figure*}[t] \centering \includegraphics[width=\textwidth]{./Summary_comparison2} \caption{\small{This figure compares two user summaries (generated by the same user) for two different queries. Both summaries contain shared segments, which are presumably important in the context of the video, while they disagree on the query-relevant segments.}} \label{fig:sumcom} \end{figure*} \begin{figure*}[t] \centering \includegraphics[width=\textwidth]{ROUGEvsIOU_Chng} \caption{\small{Studying the effect of randomly \textit{replacing} some video shots in the user summary on the performance.
The evaluation by ROUGE-SU4~\cite{lin2004rouge} is included for reference.}} \label{fig:ROUGEvsIOU} \end{figure*} For the third scenario, we concentrate on pairs in which only one of the concepts constituting the query is present in the video; e.g., if there is no \textsc{Car} in the entire video while there exist shots with \textsc{Computer} appearing in them, the pair \textsc{Car} and \textsc{Computer} is a candidate for this scenario. To make sure that the constructed dataset is comprehensive and benefits from the versatile dictionary, we first exclude the concepts that were used in the first two scenarios; we put the rest in a list and use their frequencies to randomly sample from them. For the last scenario, where neither of the concepts in a pair may be present in the video, we simply use the concepts that never appear in the video. For scenarios i, ii, and iii, we select 15 queries each. For scenario iv, we choose only one query; summarizing based on any such query, consisting of concepts that are not present in the entire video, must result in about the same summary. In other terms, when a user asks the model to summarize the video based on a query consisting of non-present concepts, the summarizer must return only the \textit{contextually} important segments of the video, which is essentially what a conventional generic video summarization approach (as opposed to query-dependent approaches) generates. Figure~\ref{fig:sumcom} shows that queries play a major role in the summaries that users generate. For a particular video, the same user has selected summaries that have both common (green margin) and uncommon (orange margin) segments. \section{Related Work} \label{sec:related} We discuss some related works in this section. This work extends our previous efforts~\cite{sharghi2016query} on \emph{personalizing} video summarizers. Both works explore query-focused video summarization, but we study this problem more thoroughly in this paper through a new dataset with dense per-video-shot tagging of concepts. Our memory network based video summarizer also requires less supervision for training than the hierarchical model in~\cite{sharghi2016query}. Unlike our user-annotated semantic vectors for the video shots, Yeung et al.\ asked annotators to caption each video shot using a sentence~\cite{yeung2014videoset}. A single sentence targets only limited information in a video shot and misses many details. Figure~\ref{fig:captionvstags} contrasts the concept annotations in our dataset with the captions for a few video shots. The concept annotations clearly provide a more comprehensive coverage of the semantic information in the shots. Memory networks~\cite{bahdanau2014neural,sukhbaatar2015end,weston2015towards,weston2014memory,xiong2016dynamic} are versatile in modeling the attention scheme in neural networks. They are widely used to address question answering and visual question answering~\cite{antol2015vqa}. The query focusing in our summarization task is analogous to attending questions to the ``facts'' in the previous works, but the facts in our context are temporal video sequences. Moreover, we lay a sequential determinantal point process~\cite{gong2014diverse} on top of the memory network in order to promote diversity in the summaries. A determinantal point process (DPP)~\cite{kulesza2012determinantal} defines a distribution over the power set of a ground set that encourages diversity among the items of the selected subsets.
There has been growing interest in DPPs in machine learning and computer vision~\cite{affandi2014learning,agarwal2014notes,batmanghelich2014diversifying,chao2015large,gartrell2016low,gillenwater2014expectation,DBLP:conf/icml/KuleszaT11,kulesza2011learning,kwok2012priors,li2016fast,mariet2015fixed,mariet2016kronecker,snoek2013determinantal}. Our model in this paper extends DPPs' modeling capabilities through the memory network. \section{Evaluation Metric Behavior} \label{IOUvsROUGE} As described in Section 5.2 of the main text, we studied the effect of randomly \textbf{removing} some video shots from the user summary, observing that the recall of our proposed metric drops linearly. When the same experiment is repeated and evaluated with ROUGE-SU4 on the captions provided by~\cite{yeung2014videoset}, the recall shows a non-linear drop, arguably because the captions capture only limited information about the scene (cf.\ Figure~\ref{fig:captionvstags2}). As a side experiment, Figure~\ref{fig:ROUGEvsIOU} illustrates the effect of randomly \textbf{replacing} some video shots in the user summary, i.e., the effect of noise on the performance. Here we swap some shots with others that might be very similar or quite different to the original shots. For reference, we include the ROUGE-SU4 metric in these experiments as well.
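The removal experiment above can be reproduced, in spirit, with a few lines of code. The sketch below is our illustration, not the released code: it corrupts a user summary by dropping a fraction of its shots and reports the recall of the IOU-based metric, which should decay roughly linearly in the fraction removed; \texttt{match\_summaries} is the hypothetical evaluation routine sketched earlier.
\begin{verbatim}
import random

def removal_curve(user_summary, fractions, match_summaries):
    # user_summary: list of concept-tag sets, one per shot
    # fractions:    e.g. [0.1, 0.2, ..., 0.9]
    curve = []
    for frac in fractions:
        keep = max(1, int(round((1 - frac) * len(user_summary))))
        corrupted = random.sample(user_summary, keep)
        _, recall, _ = match_summaries(corrupted, user_summary)
        curve.append((frac, recall))
    return curve
\end{verbatim}
Since every kept shot matches itself with IOU 1, the recall of the corrupted summary is close to $1-\mathrm{frac}$, i.e., the curve is linear, which is the behavior reported above for our metric.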
\section[]{Introduction} In the last few years a new generation of models for the chemical evolution of the Galaxy has started to appear, in which the dynamics of the system is also taken into account (e.g. Sommer-Larsen \& Yoshii 1990, Chamcham \& Tayler 1994, Hensler, this volume, and references therein). This new class of models can provide a complete scenario for the evolution of the Milky Way but is still in a rather preliminary phase. The aim of this presentation is then to review the current state of {\it standard} models for the chemical evolution of the galactic disk, with particular emphasis on the effect of gas accretion on the element abundances and gradients. These models are quite successful in accounting for the large-scale, long-term phenomena taking place in the Galaxy, and reproduce its major observed features, such as the age-metallicity relation and the G-dwarf distribution in the solar neighbourhood, the elemental and isotopic abundances and ratios, and the present star formation rate, gas and total mass densities. To avoid misleading conclusions, it is however necessary to test the models by comparing their predictions not only with the observational constraints derived for the solar neighbourhood but also with the data relative to other galactic regions: first of all because the solar neighbourhood is not representative of the whole disk, and secondly because the distribution with galactocentric distance of several quantities provides important information on the history of the Milky Way. The current rate of star formation (SFR) represents an excellent example of the importance of modeling the whole disk. One of the most popular approximations for the SFR is a linear proportionality with the gas density; some authors consider it not only a simple and intuitive law, but also a realistic one, because it can reproduce several properties observed in solar neighbourhood objects. Since the observed radial distribution of the gas in the disk is rather flat (see the shaded area in the bottom panel of Fig.2), this approximation inevitably implies a flat radial distribution of the present SFR. However, Lacey \& Fall (1985) have shown that the current SFR in the disk, derived from a large sample of young objects (pulsars, O-stars, etc.), is actually a steeply decreasing function of the galactocentric distance. Thus, the radial distribution predicted by models assuming a SFR linearly proportional to the gas density is totally inconsistent with the observed one. On the contrary, models assuming the SFR proportional to both the gas and the total mass densities (e.g. Tosi 1982 and 1988, Matteucci \& Fran\c cois 1989, hereinafter MF) are in agreement with the observed trend, thanks to the steep decrease of the total mass density with galactic radius. \section[]{Gas infall and abundance gradients} The idea of a long-lasting infall of metal-poor gas onto the galactic disk was first suggested by Larson (1972) and Lynden-Bell (1975) to solve several inconsistencies of the first simple models: a too rapid gas consumption, which prevented reproducing the amount of gas currently observed in the disk; the unlikelihood of a complete collapse of the whole protogalactic halo in a few 10$^8$ yr; and the existence of very few metal-poor, long-lived stars in the solar neighbourhood, compared to the relatively large predicted percentage (the so-called G-dwarf problem).
Since then, gas accretion has turned out to be necessary to explain most of the characteristics of our disk, and all the chemical evolution models (with or without dynamics) in better agreement with the largest set of observational constraints assume a significant amount of gas infall throughout the disk lifetime. \subsection[]{Interstellar medium abundances} One of the most evident effects of gas infall on galactic evolution concerns the absolute value and the radial distribution of the element abundances. If no gas accretion is assumed after the disk formation, all chemical evolution models with reasonable SFR and initial mass function (IMF) predict too large present abundances and/or inconsistent radial distributions. This problem is apparent in the top panel of Fig.1, where the distribution of the current oxygen abundances predicted by models with exponentially decreasing SFR and Tinsley's (1980) IMF is compared with that derived by Peimbert (1979) and Shaver \mbox{\it et al.} (1983) from HII region observations. The models divide the disk into concentric rings, 1 kpc wide, and assume the sun at 8 kpc. Gas motion can be allowed between consecutive rings, whereas stars are assumed to die in the same region where they were born. If the galactocentric distances of the HII regions are properly rescaled assuming the sun at R=8 kpc, the observational oxygen gradient is $\Delta$log(O/H)/$\Delta$R = -0.103 dex kpc$^{-1}$. The long-dashed line corresponds to a model with no infall after the disk formation 13 Gyr ago: the predicted distribution is too flat and lies above the observed range of oxygen abundances. The solid line, instead, fits the data very well and corresponds to a model with an SFR e-folding time of 15 Gyr and constant infall of primordial gas with density rate F = 4 $\times$ 10$^{-3}$ M$_{\odot}$kpc$^{-2}$yr$^{-1}$ all over the disk. This uniform density rate implies a larger mass of metal-free gas infalling in the outer rings (which cover a larger area) than in the inner ones, and favours the development of negative metallicity gradients as steep as observed. This model reproduces the most important features of the Milky Way and from now on will be referred to as the {\it reference} model. It must be emphasized, however, that other combinations of the SFR and infall parameters may lead to a similarly good agreement with the data, as shown in Fig.1 by the short-dashed line, corresponding to a model assuming a shorter e-folding time for the SFR (5 Gyr) and a lower infall density rate (F = 2 $\times$ 10$^{-3}$ M$_{\odot}$kpc$^{-2}$yr$^{-1}$). What is generally found is that the models in better agreement with the observational constraints assume SFR e-folding times in the range 5-15 Gyr and e-folding times for the gas accretion rate longer than for the SFR. This requirement is not unrealistic, if we consider that according to Sofue's (1994) models the Magellanic Stream has been regularly supplying the Milky Way with gas for the last 10 Gyr. \begin{figure} \vspace{9truecm} \caption{Present distribution of the oxygen abundance with galactocentric distance, as derived from HII region observations (dots) and theoretical models. The average observational uncertainty is shown in the bottom left corner. Top panel: models with primordial infall (see text for details). Bottom panel: models with metal-enriched infall.} \end{figure} If the mass of infalling gas is assumed to increase inwards rather than outwards, the model predictions are much less satisfactory.
The dotted line corresponds to a model with infall rate proportional to the total mass of each ring, which is then increasing toward the center. As a consequence, there is a larger dilution of the inner interstellar medium (ISM), resulting in a flat abundance distribution inconsistent with that derived from HII regions. The above results refer to infalling gas with primordial chemical composition, which would be available at best in the intergalactic medium or in the early halo. If the gas originates from regions already polluted by stellar nucleosynthesis, such as the current halo or the Magellanic Stream, it most probably has a non-negligible metal content. The intermediate solid line in the bottom panel of Fig.1 shows that if the metallicity of the infalling gas is 0.5 Z$_{\odot}$ the predictions of the {\it reference} model are at the upper edge of the observed distribution. The top solid line shows that if the infall metallicity is solar the resulting oxygen abundance is definitely outside the observational range. From a large variety of models, Tosi (1988b) has found that to allow for a good agreement with the data the infall metallicity should not exceed 0.3 Z$_{\odot}$. The same result has been obtained by MF with different models and different assumptions on the relative abundances of the infalling elements. This limit is perfectly consistent with the metal content attributable to both the galactic halo and the Magellanic Clouds. Depending on the model parameters, the present infall rate for the whole disk ranges between 0.3 and 1.8 M$_{\odot}$yr$^{-1}$. The lower limit of this range is in agreement with the amount inferred from Very High Velocity Clouds which, with the Magellanic Stream, are the most reliable observational evidence for this phenomenon. If a fraction of High Velocity Clouds could be considered of non-disk origin as well (see Danly, this volume), the amount of infalling gas observationally detected would cover the whole theoretical range. \begin{figure} \vspace{9truecm} \caption{Radial distribution at three different epochs of quantities predicted by the {\it reference} model. Top panel: SF and infall rates; bottom panel: gas and total mass. The shaded area corresponds to the observed gas mass range as published by Lacey \& Fall (1985).} \end{figure} The metallicity gradients predicted by chemical evolution models depend on the ratio between the SFR and the interstellar and infall gas predicted at each epoch and at each galactocentric distance. The top panel of Fig.2 shows the radial distribution of the SFR and infall rate resulting from the {\it reference} model at three different epochs: the dotted line corresponds to the epoch of disk formation (assumed to be 13 Gyr ago), the dashed line to the epoch of sun formation (8.5 Gyr later), and the solid line to the present. Since the infall rate is assumed to be constant, only one infall line appears in the figure. The bottom panel of Fig.2 displays the radial distributions of the gas and total mass at the same three epochs. The disk is supposed to evolve from an initial configuration of pure gas with radially decreasing mass (dotted line). The initial SFR is radially decreasing as well, so that in the inner regions there is more astration and therefore a larger stellar production of metals. However, the amount of ISM gas which must be polluted by these metals is much larger in the inner regions, and therefore the efficiency of the ISM enrichment is quite modest.
Thus, at early epochs the predicted abundance gradients are either flat or even positive, depending on the model parameters (see also Moll\'a \mbox{\it et al.} 1990). After several Gyr the situation changes significantly, because the larger astration in the inner regions leads to a higher gas consumption which is not totally compensated by infall, since the accreted gas mass is assumed to increase outwards. Thus, at a certain time (see for instance the dashed line corresponding to the situation 4.5 Gyr ago) the larger SFR in the inner regions corresponds to a higher efficiency in the metal enrichment of the medium, and negative abundance gradients start to develop. Since then, and as long as the star formation activity remains a decreasing function of the galactocentric distance, the slope of the gradients keeps steepening, because the gas radial distribution becomes increasingly flat, the infall dilution is more efficient outside, and the metal enrichment more efficient inside. As shown in Fig.2, the disk of the Galaxy is currently in this phase, with a very flat radial distribution of the gas mass, a radially decreasing SFR, and a SFR/infall rate ratio progressively smaller for increasing R and equal to 1 at 9-10 kpc (see also Wilson \& Matteucci 1992). A steepening with time of the abundance gradients is predicted by most of the models (with or without dynamics) which are able to reproduce the observational features of the Galaxy (e.g. MF, Chamcham \& Tayler 1994, Koppen 1994), despite the rather different model characteristics (see, however, Ferrini \mbox{\it et al.} 1994 for different predictions). It is then important to verify whether this predicted trend is indeed consistent with the available observational constraints. Since the gradient derived from data on HII regions is representative of the situation in the current ISM, older objects must be examined as well to test the model predictions. \begin{figure} \vspace{9truecm} \caption{Top panel: radial distribution of the oxygen abundances 3 Gyr ago derived from the {\it reference} model (solid line) and from observations of PNeII (dots). The data are from PP and the dotted line represents their best fit. Bottom panel: same, but for the He abundance.} \end{figure} Planetary Nebulae, especially those of Peimbert's (Peimbert \& Torres-Peimbert 1983) type II (PNeII), are also very good indicators of the ISM metallicity. PNeII have stellar progenitors with lifetimes in the range 1-5 Gyr and therefore represent the ISM conditions around 3 Gyr ago. Two recent and extensive studies (Pasquali \& Perinotto 1993, hereinafter PP, and Maciel \& Koppen 1994) show that the abundance gradients derived for several elements in PNeII are systematically flatter than the corresponding gradients derived from HII regions. For instance, the oxygen gradient derived by PP is $\Delta$log(O/H)/$\Delta$R = -0.03$\pm$0.01 dex kpc$^{-1}$ and that derived by Maciel \& Koppen is -0.07$\pm$0.01. The latter authors have also found hints of increasing slopes of the gradients with decreasing age of the PNe (i.e. from type III to type I), in agreement with the model predictions. Fig.3 shows the helium (bottom) and oxygen (top) abundances as derived by PP from PNeII and the corresponding predictions of the {\it reference} model. The agreement between the model solid line and the empirical best fit to the data (dotted line) is excellent. This confirms that in the last few billion years the slope of the abundance gradients in the ISM has actually steepened.
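To make the bookkeeping behind such ring models concrete, the following is a toy multi-ring sketch in the spirit of the {\it reference} model described above, written under the instantaneous recycling approximation. All numbers (SFR efficiency, yield, return fraction, infall rate, initial exponential gas profile) are illustrative assumptions only, not the calibrated parameters of the actual models; moreover, the sketch omits ingredients of the full models (gas flows between rings, time-dependent infall), so whether its toy gradient steepens or flattens with time is sensitive to these choices.
\begin{verbatim}
import numpy as np

# Illustrative parameters only -- not the models' calibrated values.
R      = np.arange(2.0, 17.0)           # ring centres [kpc], 1 kpc wide
sig_g  = 370.0 * np.exp(-R / 4.0)       # initial gas density [Msun/pc^2]
sig_t  = sig_g.copy()                   # the disk starts as pure gas
Z      = np.zeros_like(R)               # primordial initial composition
F      = 1.0                            # uniform infall [Msun/pc^2/Gyr]
nu     = 0.0055                         # SFR efficiency: psi = nu*sig_g*sig_t
yld, r = 0.01, 0.3                      # metal yield and return fraction
dt     = 0.01                           # Euler time step [Gyr]

for step in range(int(13.0 / dt)):      # disk age 13 Gyr
    psi   = nu * sig_g * sig_t          # SFR ~ gas x total surface density
    dgas  = (-(1.0 - r) * psi + F) * dt # net astration + primordial infall
    dmet  = (1.0 - r) * psi * (yld - Z) * dt   # instantaneous recycling
    Z     = (Z * sig_g + dmet) / (sig_g + dgas)
    sig_g += dgas
    sig_t += F * dt
    if step in (450, 850, 1299):        # ~4.5, ~8.5 and 13 Gyr
        slope = np.polyfit(R, np.log10(np.clip(Z, 1e-12, None)), 1)[0]
        print("t = %5.2f Gyr   d log(Z)/dR = %+.3f dex/kpc"
              % ((step + 1) * dt, slope))
\end{verbatim}
With these numbers, the inner rings quickly approach the equilibrium metallicity set by the yield, while the outer, infall-diluted rings lag behind, so a negative gradient is present; the precise slope and its time evolution depend entirely on the assumed parameters.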
No other gaseous indicators are available to check whether the gradients were increasingly flatter at earlier epochs. Stars and stellar clusters of whatever age are instead visible over a fairly large range of distances and can therefore indicate what the earlier scenario was. \subsection[]{Stellar abundances} As far as single stars are concerned, the situation is unfortunately rather confused. Lewis \& Freeman (1989) found no significant metallicity gradient in a sample of 600 old K-giants, but more recently Edvardsson \mbox{\it et al.} (1993) have argued that the radial metallicity distribution derived from a sample of 189 F and G-dwarfs is similar to that derived from HII regions. The major result of this accurate and extensive work is the scatter in the derived abundances, which turns out to be much larger than the observational uncertainties and should then be considered an intrinsic feature of the analysed stellar population. Edvardsson \mbox{\it et al.} therefore avoided formally deriving the slopes of the abundance gradients, but one can obtain them from their table 14, where the analysed stars are divided into different groups according to their age and galactocentric distance and the average [Fe/H] of each group is given. Despite the poorness of the sample in the older and more distant bins, and the corresponding weakness of the statistics, it is interesting to point out that the resulting formal slopes get flatter for increasing age (i.e. toward earlier epochs) and that the oldest bin even shows a positive gradient (derived, however, from only two points!), thus giving some further support to the predictions of the chemical evolution models. \begin{figure} \vspace{6truecm} \caption{Age-metallicity distribution for $\alpha$ elements as derived from the {\it reference} model and from Edvardsson \mbox{\it et al.} (1993) data. Filled circles and dashed curve refer to stars in rings with 4$\leq$R$\leq$7 kpc, open circles and solid curve to stars with 7$\leq$R$\leq$9 kpc, crosses and dotted line to stars with 9$\leq$R$\leq$11 kpc. Age is in Gyr.} \end{figure} Another interesting feature of the Edvardsson \mbox{\it et al.} data is the different distribution of metallicity with age for stars at different galactic locations. As pointed out by Pagel (1994) and shown in Fig.4, if one divides their stars into three groups according to their mean galactocentric distances (inner objects, solar ring objects, and outer ones), one finds that the outer stars show a much flatter age-metallicity distribution, with average abundances in the last ten billion years (i.e. over most of the disk lifetime) systematically lower than those of the other objects. As already mentioned above, the data show a large intrinsic scatter in the derived metallicities of stars of any age, and this scatter cannot be directly reproduced by standard chemical evolution models like the {\it reference} one, which assume both the SFR and the gas accretion to be in a sort of steady state. However, the age-metallicity relations predicted for the three ranges of galactocentric distances are consistent with each of the corresponding average empirical distributions. Notice that the relations predicted for the solar (solid line) and the outer (dotted line) rings flatten off at recent epochs, whereas the relation predicted for the inner ring (dashed line) keeps increasing up to the present time, as already shown by MF.
Besides, Fran\c cois \& Matteucci (1993) have argued that even the spread of the observed age-metallicity distribution can be accounted for by {\it standard} models, once the different birthplaces of the sample stars are considered. The major problem of single-star analyses is the uncertainty in the derived R, age and metallicity of objects beyond a limited distance from the sun, as confirmed by Edvardsson \mbox{\it et al.}'s survey. From this point of view, open clusters are in principle safer indicators. There are, however, several problems affecting the determination of the cluster parameters as well, such as the non-homogeneity of most age estimates, the uncertainty on the clusters' original birthplaces, the cluster disruption due to disk friction which can alter the original distributions, etc. (see, however, Carraro \& Chiosi 1994). Several years ago, young open clusters were suggested to indicate steeper abundance gradients than old open clusters (Mayor 1976, Panagia \& Tosi 1981). However, more recent and extensive studies (Friel \& Janes 1993, Thogersen \mbox{\it et al.} 1993) do not seem to support this hypothesis. The metallicity gradient derived by Janes and collaborators is $\Delta$[Fe/H]/$\Delta$R = -0.09$\pm$0.02 dex kpc$^{-1}$ for the whole sample of clusters of any age, and does not seem to depend on the cluster age. Bearing in mind that for field stars in the disk oxygen has been empirically found to follow the relation [O/Fe]$\simeq-$0.3[Fe/H] (e.g. Edvardsson \mbox{\it et al.} 1993), and assuming that this relation applies to open clusters as well, we have [O/H]$\simeq$0.7[Fe/H], so that the iron gradient corresponds to an oxygen gradient $\Delta$log(O/H)/$\Delta$R $\simeq$ 0.7$\times$(-0.09) $\simeq$ -0.06 dex kpc$^{-1}$, indeed flatter than that derived from HII regions and more similar to that of objects a few Gyr old (such as the PNeII discussed above). \begin{figure} \vspace{5.5truecm} \caption{Radial distribution of open cluster metallicities as derived from the Friel \& Janes (1993) and Thogersen \mbox{\it et al.} (1993) samples. The clusters have been divided into age bins and the linear best fit for each bin is shown (dotted line for the oldest bin, long-dashed for the 4-5 Gyr bin, short-dashed for the 2-4 Gyr bin, and solid line for the youngest one).} \end{figure} The most striking feature of Friel \& Janes' sample is that at each galactic radius the oldest clusters are also the most metal-rich (see Fig.5). It is true that only four published clusters fall in the oldest age bin ($\geq$8 Gyr) and the corresponding statistics are therefore too poor; however, two additional clusters of roughly the same age have been found (Friel 1994, private communication) with less extreme metal abundances, but still higher than average. It is of crucial importance to verify this result with a larger sample of old clusters and with more accurate and homogeneous methods to derive their ages, chemical abundances and original galactic locations. It might well be, in fact, that this anomaly is fictitious, resulting from the uncertainty in the metallicity and/or, more probably, the age determination. However, if confirmed, this phenomenon would have remarkable implications for our understanding of the Galaxy's evolution, because it is opposite to any intuitive age-metallicity relation derivable from {\it steady-state} scenarios, where old stars are inevitably more metal-poor than young objects; it may result from short, intense phenomena not considered in our models.
Another characteristic of old open clusters is that all of them are located beyond 7-7.5 kpc from the galactic center, contrary to younger clusters, which are likely to be equally distributed everywhere in the disk (Janes \& Phelps 1994). On the one hand, the external location of the older clusters in the observed sample might not reflect an odd distribution of all the clusters formed several Gyr ago, and might instead be the result of a more efficient disruption in the inner than in the outer regions. On the other hand, it may correspond to a non-homogeneous star formation activity, perhaps related to external phenomena like the first impact of the Magellanic Stream on the disk (Sofue 1994). The latter scenario might also provide an explanation for the large metallicity of the oldest clusters, in terms of a transitory metal enhancement of the ISM due to the larger SFR triggered by the sudden event and later smeared out during the following {\it steady-state} evolution. \section[]{Summary} In conclusion, the comparison between the abundance distributions in the galactic disk predicted by {\it standard} chemical evolution models and those derived from observational data can be summarized as follows:\\ a) Abundance distributions derived from observations of gaseous objects of various ages (HII regions and PNe) are very well reproduced by {\it steady-state} models with slowly decreasing SFR and large, long-lasting infall of metal-poor gas.\\ b) Average abundance distributions derived from stars are also reproduced. These {\it standard} models, however, do not reproduce the observed spread in the metallicity distribution of field stars and the anomalously high [Fe/H] of the oldest open clusters. We must bear in mind, however, that old stars can have quite eccentric orbits and can therefore have formed in galactic regions different from those where they are observed now. According to Fran\c cois \& Matteucci this might explain most of the abundance spread in Edvardsson \mbox{\it et al.}'s sample. On the other hand, Carraro \& Chiosi (1994) have suggested that the same argument cannot apply to the case of old open clusters, for which other explanations are thus needed, unless all the inconsistencies with the model predictions can be attributed to observational errors. A possible reason for the different agreement found for gaseous and stellar objects is that the gas mixes rapidly compared to the timescales of galactic evolution, and therefore forgets local perturbations that occurred in the past and follows the {\it steady-state} scenario. Stars, instead, keep memory of the local perturbations occurring at, or just before, their birth and therefore deviate more from that scenario, showing large intrinsic scatter and anomalous behaviours. To interpret their observed features in detail, more sophisticated models taking into account also the small-scale, short-term phenomena are therefore required. \vskip 1truecm I wish to thank Francesca Matteucci for always being ready to discuss and compare the results and the different approaches of our models: a praiseworthy attitude rather unusual among theoreticians.
\section{Introduction} Graphs considered in this paper are finite, undirected and simple. For a graph $G$, let $V(G)$ be the vertex set of $G$ and $E(G)$ be the edge set of $G$. Let $v(G)$ denote the order of $G$, i.e., $|V(G)|=v(G)$. For $u\in V(G)$, $N(u)=\{v: uv\in E(G)\}$, $N[u]=\{u\}\cup N(u)$ and $d(u)=|N(u)|$. Denote the chromatic number of $G$ by $\chi(G)$, and let $s(G)$ denote the chromatic surplus of $G$, which is the number of vertices in a minimum color class over all proper $\chi(G)$-colorings of $V(G)$. Let $U$ be a subset of $V(G)$. Denote by $G-U$ the graph obtained from $G$ by deleting $U$ and all edges incident to $U$. Denote a complete graph on $n$ vertices by $K_n$, and a path on $n$ vertices by $P_n$. The union of $k$ disjoint copies of a graph $F$ is denoted by $kF$, and the disjoint union of $G$ and $H$ is denoted by $G\cup H$. The join graph of $G$ and $H$, denoted by $G\vee H$, is the graph obtained from the vertex-disjoint union $G\cup H$ by joining each vertex of $G$ to each vertex of $H$. Let $G$ and $H$ be graphs without isolated vertices. The Ramsey number $R(G,H)$ is the minimum integer $N$ such that any coloring of the edges of $K_N$ in red or blue yields a red $G$ or a blue $H$. Determining $R(G,H)$ in general is a very challenging problem; there are several excellent surveys on Ramsey numbers. In this paper, we consider a problem related to Ramsey goodness. In \cite{B}, Burr gave the following lower bound. \begin{theo}\label{Burr} {\rm(Burr \cite{B})} For a connected graph $G$ and a graph $H$ with $v(G)\geq s(H)$, $$R(G,H) \geq (v(G)-1)(\chi(H)-1)+s(H).$$ \end{theo} Burr defined $G$ to be $H$-good if the equality $$R(G,H) = (v(G)-1)(\chi(H)-1)+s(H)$$ holds under the conditions of Theorem \ref{Burr}. In the 1970s, before the definition of Ramsey goodness was given, a well-known result of Chv\'atal \cite{C} showed that any tree is $K_m$-good for every $m\geq2$, and an earlier result of Chv\'atal and Harary \cite{CH} showed that any tree is $2K_2$-good. By applying Chv\'atal's theorem, Stahl \cite{St} determined the Ramsey number of a forest versus $K_m$. In 1983, Burr and Erd\H{o}s \cite{BE} proved that for any fixed $k$ and $m$, there exists $n_0$ such that the family of connected graphs with bandwidth at most $k$ and at least $n_0$ vertices is $K_m$-good, where the bandwidth of a graph $G$ is the smallest number $k$ such that there is an ordering $v_1,\dots,v_n$ of $V(G)$ in which each edge $v_iv_j$ satisfies $|i-j|\leq k$. This result was recently extended by Allen, Brightwell and Skokan \cite{ABS}, who showed that for each fixed graph $H$ and each $k$, there exists $n_0$ such that the family of connected graphs with bandwidth at most $k$ and at least $n_0$ vertices is $H$-good. These two results imply that a path (with bandwidth $1$) or a cycle (with bandwidth $2$) with a sufficiently large number of vertices has good goodness properties. Results without the assumption of a sufficiently large number of vertices are also interesting. In \cite{SAM}, Sudarsana, Adiwijaya and Musdalifah showed that $P_n$ is $2K_m$-good for $n\geq3$ and $m\geq2$, and conjectured that any tree $T_n$ with $n$ vertices is $2K_m$-good. Recently, Pokrovskiy and Sudakov \cite{PS} proved that for a fixed graph $H$, the path on $n$ vertices with $n\geq4v(H)$ is $H$-good.
Balla, Pokrovskiy and Sudakov \cite{BPS} showed that for all $\Delta$ and $k$, there exists a constant $C_{\Delta,k}$ such that for any tree $T$ with maximum degree at most $\Delta$ and any $H$ with $\chi(H)=k$ satisfying $v(T)\geq C_{\Delta,k}v(H)\log^4 v(H)$, $T$ is $H$-good. In \cite{LLD}, Lin, Li and Dong proved that if $T_n$ is $G$-good and $s(G)=1$, then $T_n$ is $K_1\vee G$-good. For other results concerning Ramsey goodness of graphs, we refer the reader to the survey papers by Conlon, Fox and Sudakov \cite{CFS}, and Radziszowski \cite{R}. In this paper, we consider the Ramsey number of a forest versus a disjoint union of complete graphs. In Section $2$, we show the following result, which is a continuation of Chv\'atal's classical result. It also confirms the aforementioned conjecture of Sudarsana, Adiwijaya and Musdalifah \cite{SAM}. \begin{theo}\label{Tn} Let $n\geq 3$ and $m\geq2$ be integers. Let $T_n$ be a tree with order $n$, then $$R(T_n,2K_m)=(n-1)(m-1)+2.$$ \end{theo} The proof of Theorem \ref{Tn} will be given in Section $2$. In Section $3$, we show a result which yields that $T_n$ is $K_m\cup K_l$-good, where $n\geq 3$ and $m>l\geq2$ are integers. In Section $4$, we introduce a construction which yields a general lower bound on $R(G,H)$ for arbitrary graphs $G$ and $H$. On this foundation, we extend the Ramsey goodness of connected graphs to disconnected graphs and explore the relation between the Ramsey number of a disconnected graph $F$ versus a graph $H$ and the Ramsey numbers of its components versus $H$. We extend an upper bound given by Gould and Jacobson \cite{GJ}, and show that if each component of a graph $F$ is $H$-good, then $F$ is $H$-good. Furthermore, we will apply the Ramsey goodness results of trees to obtain the Ramsey number of a forest versus a disjoint union of complete graphs. Next, we outline the idea of the proof of Theorem \ref{Tn}. \subsection{Two Operations on a Tree and Outline of the Proof of Theorem \ref{Tn}} A key observation in the proof of Theorem \ref{Tn} is that any tree with $n$ vertices can be obtained from $P_n$ by performing a series of two operations. Furthermore, we show that these two operations preserve the ``$2K_m$-goodness'' property. Let us describe these two operations precisely below. \ \noindent{\bf Stretching a tree $T$ at a leaf $a$:} {\em Let $T$ be a tree with $n\geq 3$ vertices and $a$ be a leaf of $T$. Let $T'$ be obtained from $T$ by deleting a leaf $b$ (other than $a$) of $T$ and adding a new vertex joined to $a$. We say that $T'$ is obtained by Stretching $T$ at the leaf $a$.} \ \noindent{\bf Expanding a tree $T$ at a vertex $u$:} {\em Let $2\leq d\leq n-2$ and let $T$ be a tree with $n\geq4$ vertices. Let $u\in V(T)$ with $N(u)=\{z_0,z_1,\dots,z_{d-1}\}$ such that $z_i$ is a leaf of $T$ for each $i\in[d-1]$ and $z_0$ is not a leaf. Let $T'$ be obtained from $T$ by deleting a leaf $b\in T- N[u]$ and adding a new vertex joined to $u$. We say that $T'$ is obtained by Expanding $T$ at the vertex $u$.} \begin{defi}\label{defi} We say that a property $\mathcal{P}$ is Stretching-preserving (or Expanding-preserving) if, whenever a tree $T$ satisfies $\mathcal{P}$, the tree $T'$ obtained by Stretching (or Expanding) $T$ at a leaf (or a vertex) also satisfies $\mathcal{P}$. \end{defi} In Section $2.2$, we will prove the following key observation. \begin{prop}\label{remark} Given a tree $T$ on $n$ vertices, we can obtain any tree on $n$ vertices from $T$ by applying Stretching and Expanding multiple times.
\end{prop} It is easy to see that Proposition \ref{remark} implies the following corollary. \begin{coro}\label{111} If a tree $T$ satisfies a property $\mathcal{P}$, and $\mathcal{P}$ is Stretching-preserving and Expanding-preserving, then every tree satisfies $\mathcal{P}$. \end{coro} In Section $2$, we show that the property ``$2K_m$-goodness'' is Stretching-preserving and Expanding-preserving, and prove Theorem \ref{Tn}. \section{Tree is $2K_m$-good} \subsection{Proof of Theorem \ref{Tn}} In this subsection we prove Theorem \ref{Tn}. We first prove the crucial lemmas stating that the property ``$2K_m$-goodness'' is Stretching-preserving and Expanding-preserving under some conditions. \begin{lemma}\label{1} Let $n\geq 3$ and $m\geq2$ be integers. Assume that $R(T,2K_{m-1})\leq (n-1)(m-2)+2$ holds for any tree $T$ with order $n$. Let $T_n^*$ be a tree with $n$ vertices and $T_n^{**}$ be obtained by Stretching $T_n^{*}$ at a leaf $a$ (see Figures \ref{T3} and \ref{T4}). If $R(T_n^*,2K_m)\leq (n-1)(m-1)+2$, then $R(T_n^{**},2K_m)\leq (n-1)(m-1)+2$. \end{lemma} \begin{proof} If $n=3$, a tree with $3$ vertices must be $P_3$ and the result holds. If $T_n^*=P_n$ is a path, then $T_n^{**}=P_n$ and the result holds. So we may assume that $n\geq4$ and $T_n^*$ is not a path. Let $u$ be the vertex adjacent to $a$ in $T_n^*$. Since $T_n^*$ is not a path and $n\geq4$, $T_n^*$ has at least two leaves $b$ and $c$ in $V(T_n^*)\setminus\{a,u\}$. Let $vc$ be an edge (as in Figure \ref{T3}). We remark that it is possible that $v=u$. \begin{figure}[H] \centering \begin{minipage}{5cm} \includegraphics[width=0.9\textwidth, height=0.7\textwidth]{T3} \caption{$T_n^*$} \label{T3} \end{minipage} \begin{minipage}{5cm} \includegraphics[width=0.87\textwidth, height=0.7\textwidth]{T4} \caption{$T_n^{**}$} \label{T4} \end{minipage} \end{figure} \noindent Let $N=(n-1)(m-1)+2$ and let the edges of $K_N$ be colored red or blue. We will show that $K_N$ contains a red $T_n^{**}$ or a blue $2K_m$. Since $R(T_n^*,2K_m)\leq (n-1)(m-1)+2$, $K_N$ contains a red $T_n^{*}$ or a blue $2K_m$. So we only need to consider the case that $K_N$ contains a red $T_n^{*}-\{b\}\subseteq T_n^{*}$, whose order is $n-1$. Since $N-(n-1)=(n-1)(m-2)+2$ and $R(T,2K_{m-1})\leq (n-1)(m-2)+2$ holds for any tree $T$ with order $n$, $K_N-(T_n^{*}-\{b\})$ contains a red $T_n^{**}$ or a blue $2K_{m-1}$. So we only need to consider the case that $K_N-(T_n^{*}-\{b\})$ contains a blue $2K_{m-1}$, denoted by $A$ and $B$ (see Figure \ref{T5}). We note that the edges between $a$ and $A$, and between $a$ and $B$, are all blue; otherwise there is a red copy of $T_n^{**}$ and we are done. Let $F=K_N-(T_n^{*}-\{b\})-A-B+\{c\}$. Clearly $|V(F)|=(n-3)(m-2)+1$. By Chv\'atal's theorem, $F$ contains a red $T_{n-2}=T_n^{**}-\{b',c'\}$ or a blue $K_{m-1}$. Case $1$. $F$ contains a red $T_n^{**}-\{b',c'\}$, as in Figure \ref{T5}. Note that there exist $x_A'\in A$ and $x_B'\in B$ such that $x_A'v'$ and $x_B'a'$ are red. Otherwise, $\{v'\}\cup A$ and $\{a\}\cup B$, or $\{a'\}\cup B$ and $\{a\}\cup A$, form a blue copy of $2K_m$ and we are done. Hence $T_n^{**}-\{b',c'\}+\{x_A',x_B'\}$ contains a red copy of $T_n^{**}$ (see Figure \ref{T5}), and we are done. \begin{figure}[H] \centering \includegraphics[height=5.5cm]{T5} \caption{$F$ contains a red $T_n^{**}-\{b',c'\}$} \label{T5} \end{figure} Case $2$. $F$ contains a blue $K_{m-1}$, denoted by $C$ (see Figure \ref{T6}). To avoid a blue copy of $2K_m$, there exist $x_A\in A$ and $x_B\in B$ such that $ux_A$ and $vx_B$ are red.
Otherwise, $\{u\}\cup A$ and $\{a\}\cup B$, or $\{v\}\cup B$ and $\{a\}\cup A$, form a blue copy of $2K_m$, and we are done. Furthermore, there exists $x_C\in C$ such that $x_Ax_C$ is red. Otherwise, $\{x_A\}\cup C$ and $\{a\}\cup B$ form a blue copy of $2K_m$, and we are done. Hence $T_n^{*}-\{b\}-\{a,c\}+\{x_A,x_B,x_C\}\subseteq K_N$ contains a red copy of $T_n^{**}$ (see Figure \ref{T6}), and we are done. \begin{figure}[H] \centering \includegraphics[height=5.5cm]{T6} \caption{$F$ contains a blue $K_{m-1}$} \label{T6} \end{figure} \end{proof} \begin{lemma}\label{2} Let $n\geq 3$ and $m\geq2$ be integers. Assume that $R(T,2K_{m-1})\leq (n-1)(m-2)+2$ holds for any tree $T$ with order $n$. Let $T_n^*$ be a tree with $n$ vertices. Let $T_n^{**}$ be obtained by Expanding $T_n^{*}$ at a vertex $u$ (see Figures \ref{T1} and \ref{T2}). If $R(T_n^*,2K_m)\leq (n-1)(m-1)+2$, then $R(T_n^{**},2K_m)\leq (n-1)(m-1)+2$. \begin{figure}[H] \centering \begin{minipage}{5cm} \includegraphics[width=0.85\textwidth, height=0.8\textwidth]{T1} \caption{$T_n^*$} \label{T1} \end{minipage} \begin{minipage}{5cm} \includegraphics[width=0.87\textwidth, height=0.8\textwidth]{T2} \caption{$T_n^{**}$} \label{T2} \end{minipage} \end{figure} \end{lemma} \begin{proof} We only need to consider the case $n\geq4$. Since $T_n^*- N_1(u)+\{z_0\}$ is also a tree, there exists at least one leaf $b$ other than $z_0$ in $T_n^*- N_1(u)$. Note that $b$ is also a leaf of $T_n^*$. Recall that $d\leq n-2$ in view of the definition of Expanding. Let $N=(n-1)(m-1)+2$ and the edges of $K_N$ be colored by red or blue. We will show that $K_N$ contains a red $T_n^{**}$ or a blue $2K_m$. Since $R(T_n^*,2K_m)\leq (n-1)(m-1)+2$, $K_N$ contains a red $T_n^{*}$ or a blue $2K_m$. In the latter case we are done, so we may assume that $K_N$ contains a red $T_n^{*}$, and hence a red $T_n^{*}-\{b\}\subseteq T_n^{*}$, whose order is $n-1$. Since $N-(n-1)=(n-1)(m-2)+2$ and $R(T,2K_{m-1})\leq (n-1)(m-2)+2$ holds for any tree $T$ with order $n$, $K_N-(T_n^{*}-\{b\})$ contains a red $T_n^{**}$ or a blue $2K_{m-1}$. We only need to consider the case that $K_N-(T_n^{*}-\{b\})$ contains a blue $2K_{m-1}$, denoted by $A$ and $B$ (see Figure \ref{T7}). We note that the edges between $u$ and $A$, and between $u$ and $B$, are all blue; otherwise there is a red copy of $T_n^{**}$ and we are done. \begin{figure}[H] \centering \includegraphics[height=4cm]{T7} \caption{$K_N-(T_n^{*}-\{b\})$ contains a blue $2K_{m-1}$} \label{T7} \end{figure} Let $H_1=K_N-(T_n^{*}-\{b\})-A-B+\{z_1\}$. Then $|V(H_1)|=(n-3)(m-2)+1$. If $d\geq3$, we perform the following procedure. By Chv\'atal's theorem, $H_1$ contains a red $T_{n-2}=T_n^{**}-\{z_1',z_2'\}$ or a blue $K_{m-1}$. Let us first consider the case that $H_1$ contains a red $T_n^{**}-\{z_1',z_2'\}$, as in Figure \ref{T8}. Note that there exist $x_A'\in A$ and $x_B'\in B$ such that $x_A'u'$ and $x_B'u'$ are red. Otherwise, $\{u'\}\cup A$ and $\{u\}\cup B$, or $\{u'\}\cup B$ and $\{u\}\cup A$, form a blue copy of $2K_m$, and we are done. Hence $T_n^{**}-\{z_1',z_2'\}+\{x_A',x_B'\}$ contains a red copy of $T_n^{**}$ (see Figure \ref{T8}), and we are done. \begin{figure}[H] \centering \includegraphics[height=5cm]{T8} \caption{$H_1$ contains a red $T_n^{**}-\{z_1',z_2'\}$} \label{T8} \end{figure} \noindent Now we consider the case that $H_1$ contains a blue $K_{m-1}$, denoted by $C_1$ (see Figure \ref{T9}). Let $H_{2}=H_1-C_1+\{z_2\}$ (see Figure \ref{T9}). Then $|V(H_2)|=(n-4)(m-2)+1$. By Chv\'atal's theorem, $H_2$ contains a red $T_{n-3}=T_n^{**}-\{z_1',z_2',z_3'\}$ or a blue $K_{m-1}$. 
Let us first consider the case that $H_2$ contains a red $T_n^{**}-\{z_1',z_2',z_3'\}$, as in Figure \ref{T9}. Note that there exist $x_A'\in A$, $x_B'\in B$ and $x_{C_1}'\in C_1$ such that $x_A'u'$, $x_B'u'$ and $x_{C_1}'u'$ are red. Otherwise, $\{u'\}\cup A$ and $\{u\}\cup B$, or $\{u'\}\cup B$ and $\{u\}\cup A$, or $\{u'\}\cup C_1$ and $\{u\}\cup A$, form a blue copy of $2K_m$, and we are done. Hence $T_n^{**}-\{z_1',z_2',z_3'\}+\{x_A',x_B',x_{C_1}'\}$ contains a red copy of $T_n^{**}$ (see Figure \ref{T9}), and we are done. \begin{figure}[H] \centering \includegraphics[height=5cm]{T9} \caption{$H_2$ contains a red $T_n^{**}-\{z_1',z_2',z_3'\}$} \label{T9} \end{figure} \noindent We now consider the case that $H_2$ contains a blue $K_{m-1}$, denoted by $C_2$. We repeat the same procedure and continue the process at most $d-2$ times. Either we are done, or we obtain $d-2$ blue copies of $K_{m-1}$, denoted by $C_1,C_2,\dots,C_{d-2}$ (see Figure \ref{T10}), and $H_1,H_2,\dots,H_{d-2}$ with $H_{i+1}=H_i-C_i+\{z_{i+1}\}$ for $i=1,2,\dots,d-3$. Let $H_{d-1}=H_{d-2}-C_{d-2}+\{z_{d-1}\}$. (If $d=2$, then $H_{d-1}=H_1=K_N-(T_n^{*}-\{b\})-A-B+\{z_1\}$, as defined at the beginning of the previous paragraph.) Then $|V(H_{d-1})|=[n-2-(d-1)](m-2)+1=(n-d-1)(m-2)+1$. Recall that $d\leq n-2$, so $|V(H_{d-1})|\geq m-1$. By Chv\'atal's theorem, $H_{d-1}$ contains a red $T_{n-d}=T_n^{**}-\{z_1',z_2',\dots,z_d'\}$ or a blue $K_{m-1}$ (see Figure \ref{T10}). Let us first consider the case that $H_{d-1}$ contains a red $T_n^{**}-\{z_1',z_2',\dots,z_d'\}$. Note that there exist $x_A'\in A,x_B'\in B,x_{C_1}'\in C_1,\dots,x_{C_{d-3}}'\in C_{d-3}$ and $x_{C_{d-2}}'\in C_{d-2}$, such that $x_A'u',x_B'u',x_{C_1}'u',\dots,x_{C_{d-3}}'u'$ and $x_{C_{d-2}}'u'$ are red. Otherwise, $\{u'\}\cup A$ and $\{u\}\cup B$, or $\{u'\}\cup B$ and $\{u\}\cup A$, or $\{u'\}\cup C_i$ and $\{u\}\cup A$, form a blue copy of $2K_m$, where $i\in[d-2]$, and we are done. Hence $T_n^{**}-\{z_1',z_2',\dots,z_d'\}+\{x_A',x_B',x_{C_1}',\dots,x_{C_{d-3}}',x_{C_{d-2}}'\}$ contains a red copy of $T_n^{**}$ (see Figure \ref{T10}), and we are done. \begin{figure}[H] \centering \includegraphics[height=5cm]{T10} \caption{$H_{d-1}$ contains a red $T_n^{**}-\{z_1',z_2',\dots,z_d'\}$} \label{T10} \end{figure} We now consider the case that $H_{d-1}$ contains a blue $K_{m-1}$, denoted by $C_{d-1}$ (see Figure \ref{T11}). To avoid a blue copy of $2K_m$, there exists $x_{C_1}\in C_1$ such that $z_0x_{C_1}$ is red; otherwise, $\{z_0\}\cup C_1$ and $\{u\}\cup B$ form a blue copy of $2K_m$ and we are done. Furthermore, there exist $x_A\in A,x_B\in B,x_{C_2}\in C_2,\dots,x_{C_{d-1}}\in C_{d-1}$ such that $x_{C_1}x_A,x_{C_1}x_B,x_{C_1}x_{C_2},\dots,x_{C_1}x_{C_{d-1}}$ are red. Otherwise, $\{x_{C_1}\}\cup A$ and $\{u\}\cup B$, or $\{x_{C_1}\}\cup B$ and $\{u\}\cup A$, or $\{x_{C_1}\}\cup C_i$ and $\{u\}\cup A$, form a blue copy of $2K_m$, where $i\in[d-1]\setminus\{1\}$, and we are done. Then $T_n^{*}-\{b\}-\{u\}-\{z_1,\dots,z_{d-1}\}+\{x_{C_1}\}+\{x_A,x_B,x_{C_2},\dots,x_{C_{d-1}}\}$ contains a red copy of $T_n^{**}$ (see Figure \ref{T11}), and we are done. \begin{figure}[H] \centering \includegraphics[height=5cm]{T11} \caption{$T_n^{*}-\{b\}-\{u\}-\{z_1,\dots,z_{d-1}\}+\{x_{C_1},x_A,x_B,x_{C_2},\dots,x_{C_{d-1}}\}$ contains a red copy of $T_n^{**}$} \label{T11} \end{figure} \end{proof} Before giving the proof of Theorem \ref{Tn}, we prove the following result given by Sudarsana, Adiwijaya and Musdalifah \cite{SAM}. \begin{lemma}\label{Pn} Let $n\geq3$ and $m\geq2$ be integers. Then $R(P_n,2K_m)=(n-1)(m-1)+2$. 
\end{lemma} \begin{proof} We just need to prove the upper bound, since the lower bound follows directly from Theorem \ref{Burr}. Let $N=(n-1)(m-1)+2$ and the edges of $K_N$ be colored by red or blue. We will show that $K_N$ contains a red $P_n$ or a blue $2K_m$. We first claim that there is a red $P_{n-1}$ contained in $K_N$. By the result of Chv\'atal \cite{C} that $R(T_n,K_m)=(n-1)(m-1)+1$, $K_N$ contains a red $P_n$ or a blue $K_m$. We only need to consider the case that $K_N$ contains a blue $K_m$ (denote it by $K$); otherwise we are done. Since $N-m=(n-2)(m-1)+1$, applying the result of Chv\'atal \cite{C} again in the form $R(P_{n-1},K_m)=(n-2)(m-1)+1$, the graph induced on $V(K_N)-V(K)$ contains either a blue $K_m$, in which case we are done, or a red $P_{n-1}$. Hence, there is a red $P_{n-1}$ contained in $K_N$ and we denote it by $P$. Let us label the end vertices of $P$ as $v_1$ and $v_2$. Now we use induction on $m$ to complete the proof. By a result of Chv\'atal and Harary \cite{CH}, the conclusion holds for $m=2$. Assume that the conclusion holds for $m-1$; we will show that it holds for $m$. Let $H$ be the graph induced on $V(K_N)-V(P)$, so $v(H)=(n-1)(m-2)+2$. Since the conclusion holds for $m-1$, we obtain either a red $P_{n}$, in which case we are done, or a blue $2K_{m-1}$ in $H$. We denote the blue $2K_{m-1}$ by $A$. Now we consider the edges between $\{v_1,v_2\}$ and $V(A)$. To avoid a red $P_n$, all edges between $\{v_1,v_2\}$ and $V(A)$ are blue, which implies a blue $2K_m$, and we are done. \end{proof} {\bf Proof of Theorem \ref{Tn}} \ By Lemma \ref{Pn}, we just need to consider the case that $n\geq4$ and $T_n$ is not a path. The lower bound holds by Theorem \ref{Burr}. Now we use induction on $m$ to prove the upper bound. By a result of Chv\'atal and Harary \cite{CH}, the conclusion holds for the case $m=2$. Now we assume that the conclusion holds for $m-1$, i.e., $R(T,2K_{m-1})\leq (n-1)(m-2)+2$ holds for any tree $T$ with $n$ vertices. Under this assumption, Lemmas \ref{1} and \ref{2} guarantee that the property “$2K_m$-goodness” is Stretching-preserving and Expanding-preserving. Applying Lemma \ref{Pn}, which shows that $P_n$ is $2K_m$-good, and Corollary \ref{111}, we obtain that $R(T_n,2K_{m})\leq (n-1)(m-1)+2$. \hskip 1.3em\hfill\rule{6pt}{6pt} \subsection{Obtain $T_n$ from $P_n$ by Stretching and Expanding} In this subsection, we give the proof of Proposition \ref{remark}. \ {\bf Proof of Proposition \ref{remark}} \ For any fixed tree $T_n$ on $n$ vertices, let $P$ be a longest path in $T_n$. Let us label the end vertices of $P$ as $v$ and $w$, and label the vertices with degree greater than $2$ in the path as $u_1,u_2,\dots,u_l$. Assume that the distance between $v$ and $u_1$, denoted by $d(v,u_1)$, is $t_0+1$, and that $d(u_i,u_{i+1})=t_i+1$ for each $i\in[l-1]$ and $d(u_l,w)=t_l+1$. Let $d_i=d(u_i)-2$. We label the vertices not in $P$ and connecting to $u_i$ as $y_1^i,y_2^i,\dots,y_{d_i}^i$. For each $i\in[l]$, let $B^i$ be a subset of $V(T_n)$ such that the subgraph induced on $\{u_i\}\cup B^i$ is a subtree of $T_n$ and this subtree contains just one vertex of $P$, namely $u_i$. Clearly, $V(T_n)=V(P)\cup B^1\cup\dots\cup B^l$, as in Figure \ref{T12}. \begin{figure}[H] \centering \includegraphics[height=4.5cm]{T12} \caption{$T_n$} \label{T12} \end{figure} It is easy to see that $P_n$ can be obtained from $T_n$ by applying Stretching multiple times. 
Actually, for any $i\in[l]$, the subgraph induced on $\{u_i\}\cup B^i$ is a subtree of $T_n$, and thus there is at least one leaf in every such subtree. Firstly, we can Stretch $T_n$ at the leaf $v$ by deleting a leaf $v'\in B^1$; this lengthens the path $P$, whose end vertices we label $v'$ and $w$. We continue by Stretching this tree at the leaf $v'$, deleting a leaf in $B^1-\{v'\}$ (if $B^1-\{v'\}=\emptyset$, then we delete a leaf in $B^2$). Continuing this process, we can obtain $P_n$ from $T_n$ by applying Stretching multiple times (clearly the number of operations equals $|B^1|+\dots+|B^l|$). Now we prove that $T_n$ can be obtained from $P_n$ by applying Stretching and Expanding multiple times. We label the vertices of $P_n$ as in Figure \ref{T14} (if $t_1=0$, then $x_1^1$ is $u_2$). We will delete all vertices in $\{b_{n-t_0-3},\dots,b_1\}$ in turn and add all corresponding vertices to obtain $T_n$. \begin{figure}[H] \centering \includegraphics[height=1.7cm]{T14} \caption{$P_n$} \label{T14} \end{figure} Our general operation is divided into the following steps. Step $1$. By applying Stretching and Expanding multiple times, add all vertices in $B^1$. We first relabel all vertices of $B^1$ in $T_n$. We relabel $y_{i}^1=i$ for all $i\in[d_1]$ and relabel all vertices in $B^1-\{1,2,\dots,d_1\}$ as $d_1+1,d_1+2,\dots,|B^1|$ so that every $j$ is connected to exactly one of the vertices $1,2,\dots,j-1$, where $j\in[|B^1|]\setminus[d_1]$. We define $$N_k(u_1)=\{j\in B^1:d(j,u_1)=k\}.$$ Denote $|N_k(u_1)|=N_k$ for all $k\geq 1$. Clearly, $N_1(u_1)=\{1,2,\dots,d_1\}$, $N_2(u_1)=\{d_1+1,\dots,d_1+N_2\},\dots$. We first show how to delete $d_1$ vertices in $P_n$ and add all corresponding vertices in $N_1(u_1)$. We apply Expanding to $P_n$ at vertex $u_{1}$ $d_1$ times, deleting $b_{n-t_0-3},b_{n-t_0-4},\dots$ and $b_{n-t_0-(d_1+2)}$ and adding $1,2,\dots$ and $d_1$ connecting to $u_{1}$ in turn; we then obtain $T'_n$ as in Figure \ref{T15}. \begin{figure}[H] \centering \includegraphics[height=3.5cm]{T15} \caption{$T'_n$} \label{T15} \end{figure} \noindent If $i$ is a leaf of $T_n$ for each $i\in[d_1]$, then we are done and go to Step $2$. Now we assume that $i$ is not a leaf of $T_n$ for some $i\in[d_1]$. We assume that we have added all vertices in $N_k(u_1)$, and show how to add all vertices in $N_{k+1}(u_1)$, where $k\geq1$. For each $v\in N_k(u_1)$ whose degree in $T_n$ is more than $1$, we Stretch the tree at every such $v$ (in turn) by deleting an end vertex $b_\ell\in P_n$, where $\ell\in[n-t_0-d_1-3]$. On this foundation, for each $v\in N_k(u_1)$ whose degree in $T_n$ is more than $2$, we Expand the tree at every such $v$ $d_{T_n}(v)-2$ times (in turn), again deleting end vertices $b_\ell\in P_n$ in turn, where $\ell\in[n-t_0-d_1-3]$. Hence we add all vertices in $N_{k+1}(u_1)$, where $k\geq1$. Step $2$. By applying Stretching multiple times, add all vertices of $\{x^1_2,\dots,x^1_{t_1},u_2,x^2_1\}$. We first Stretch the tree obtained in Step $1$ at the end vertex $x^1_i\in P_n$ (in turn), deleting $b_{n-t_0-|B^1|-i-2}$ and adding $x^1_{i+1}$ connecting to $x^1_i$, in turn for $i\in[t_1-1]$. Next we Stretch this tree at the end vertex $x^1_{t_1}$, deleting $b_{n-t_0-|B^1|-t_1-2}$ and adding $u_2$ connecting to $x^1_{t_1}$. At last we Stretch this tree at the end vertex $u_2$, deleting $b_{n-t_0-|B^1|-t_1-3}$ and adding $x^2_1$ connecting to $u_2$. Hence, we obtain $T''_n$ as in Figure \ref{T16}. 
\begin{figure}[H] \centering \includegraphics[height=4cm]{T16} \caption{$T''_n$} \label{T16} \end{figure} On this foundation, we continue to attach $B^i$ to $u_i$ and extend the path towards $u_{i+1}$ by performing Stretchings and Expandings similar to Steps $1$ and $2$, for each $i\in\{2,\dots,l\}$ in turn, until we obtain $T_n$. \hskip 1.3em\hfill\rule{6pt}{6pt} \section{Tree is $K_m\cup K_l$-good} Firstly, we prove the following theorem, which implies that $T_n$ is $K_m\cup K_l$-good for $n\geq 3$ and $m>l\geq2$. \begin{theo}\label{Tnn} Let $G$ be a $2K_{m-1}$-good graph with $n\geq3$ vertices. If there exist two vertices $u$ and $v$ with degree one in $G$ such that $G-\{u,v\}$ is $K_m$-good, then $G$ is $K_m\cup K_l$-good, where $m>l\geq2$. \end{theo} \begin{proof} By Theorem \ref{Burr}, we just need to prove the upper bound $R(G,K_m\cup K_l)\leq (n-1)(m-1)+1$. Let $N=(n-1)(m-1)+1$ and the edges of $K_N$ be colored by red or blue. We will show that $K_N$ contains a red $G$ or a blue $K_m\cup K_{l}$. Since $R(G,2K_{m-1})=(n-1)(m-2)+2\leq N$, $K_N$ contains a red $G$ or a blue $2K_{m-1}$. We only need to consider the case that $K_N$ contains a blue $2K_{m-1}$. Denote the vertex sets of these two disjoint copies of blue $K_{m-1}$ by $A$ and $B$. Let $F=K_N-A-B$, so $|V(F)|=(n-3)(m-1)+1$. Since $R(G-\{u,v\},K_m)=(n-3)(m-1)+1$, we only need to consider the case that $F$ contains a red $G-\{u,v\}$. Let $u'$ and $v'$ be the vertices connecting with $u$ and $v$ in $G$, respectively (it is possible that $v'=u'$). If the edges between $u'$ and $A$, or between $v'$ and $B$, are all blue, then we have a blue $K_m\cup K_{m-1}$, which contains a blue $K_m\cup K_l$ since $l\leq m-1$. Otherwise, there is at least one red edge between $u'$ and $A$, and at least one red edge between $v'$ and $B$, so there is a red $G$ and we are done. \end{proof} Taking $G$ to be $T_n$ in Theorem \ref{Tnn}, and applying Theorems \ref{Tn} and \ref{Tnn}, we obtain the following result. \begin{coro}\label{coro} $T_n$ is $K_m\cup K_l$-good for $n\geq3$ and $m>l\geq2$. \end{coro} \section{The Ramsey number of a forest versus a disjoint union of complete graphs} In \cite{St}, Stahl applied Chv\'atal's theorem on the exact value of $R(T_n,K_m)$ to determine the Ramsey number of a forest versus $K_m$. In this section, we also obtain the Ramsey number $R(F,K_m\cup K_l)$ by applying Theorem \ref{Tn} and Corollary \ref{coro}, where $F$ is a forest and $m,l\geq2$. Indeed, we prove a general result under some conditions. Before that, we extend the Ramsey goodness of connected graphs to all graphs. Following the lower bound for a connected graph $G$ and a graph $H$ with $|V(G)|\geq s(H)$ given by Burr \cite{B}, namely $R(G,H)\geq (|V(G)|-1)(\chi(H)-1)+s(H)$, Gould and Jacobson \cite{GJ} observed a construction which yields a general lower bound on $R(\mathcal{F},H)$ for arbitrary graphs $\mathcal{F}$ and $H$. \begin{theo}{\rm (Gould-Jacobson \cite{GJ})}\label{GJ} Let $\mathcal{F}$ be a graph, and let $F_1,F_2,\cdots, F_{r-1}$ and $F_r$ be the components (maximal connected subgraphs) of $\mathcal{F}$. Let $I$ be the set of orders of $F_1,F_2,\cdots, F_r$, i.e., for every $i\in I$, there exists at least one graph $F_l$ with $|V(F_l)|=i$, $l\in[r]$. Let $n(\mathcal{F})$ be the maximum element in $I$ and $k_i(\mathcal{F})$ be the number of components with order $i$. 
Let $H$ be a graph, let $\beta_i=R(F_i,H)-(|V(F_i)|-1)(\chi(H)-1)-s(H)$ and $$p=\max_{ j\in I}\Bigg\{(j-1)(\chi(H)-2)+\sum_{i=j}^{n(\mathcal{F})}ik_i(\mathcal{F})\Bigg\}+s(H)-1.$$ Then $$p\leq R(\mathcal{F},H)\leq p+\max_{i}(\beta_i).$$ \end{theo} We say that a graph $\mathcal{F}$ is $H$-good if the equality $$R(\mathcal{F},H)= \max_{ j\in I}\Bigg\{(j-1)(\chi(H)-2)+\sum_{i=j}^{n(\mathcal{F})}ik_i(\mathcal{F})\Bigg\}+s(H)-1$$ holds under the conditions of Theorem \ref{GJ} and $s(H)\geq j_0$, where $j_0\in I$ is the maximum integer satisfying $(j_0-1)(\chi(H)-2)+\sum_{i=j_0}^{n(\mathcal{F})}ik_i(\mathcal{F})=\max_{ j\in I}\left\{(j-1)(\chi(H)-2)+\sum_{i=j}^{n(\mathcal{F})}ik_i(\mathcal{F})\right\}$. For a connected graph, this definition is consistent with the definition of Ramsey goodness given by Burr. We extend the upper bound given by Theorem \ref{GJ}. Before giving the upper bound on $R(\mathcal{F},H)$ (Theorem \ref{mathcal{F}}), we record the following consequence, which is implicit in Theorem \ref{GJ}. \begin{remark}\label{5.1} Let $H$ be a graph and let $F$ be the disjoint union of graphs $F_1, F_2,\dots, F_k$, where each of $F_1,F_2,\cdots, F_k$ has $n$ vertices. Then $$R(F,H)\leq \max_{i\in[k]}\left\{R(F_i,H)\right\}+n(k-1).$$ Moreover, if $F_i$ is a connected and $H$-good graph with $n\geq s(H)$ for each $i\in[k]$, then $F$ is $H$-good. \end{remark} \begin{theo}\label{mathcal{F}} Let $H$ be a graph, and let $\mathcal{F}$ be the disjoint union of graphs $F_1,F_2,\cdots, F_{r-1}$ and $F_r$. Let $I$ be the set of orders of $F_1,F_2,\cdots, F_{r}$, i.e., for every $i\in I$, there exists at least one graph $F_l$ with $|V(F_l)|=i$, $l\in[r]$. Let $n(\mathcal{F})$ be the maximum element in $I$ and $k_i(\mathcal{F})$ be the number of graphs with order $i$. Then $$R(\mathcal{F},H)\leq\max_{ j\in I}\Bigg\{\max_{|V(F_p)|=j,p\in[r]}\left\{R(F_p,H)\right\}+\sum_{i=j}^{n(\mathcal{F})}ik_i(\mathcal{F})-j\Bigg\}.$$ Moreover, if $F_i$ is a connected and $H$-good graph with $|V(F_i)|\geq s(H)$ for each $i\in[r]$, then $\mathcal{F}$ is $H$-good. \end{theo} \begin{proof} Let $$N=\max_{ j\in I}\Bigg\{\max_{|V(F_p)|=j,p\in[r]}\left\{R(F_p,H)\right\}+\sum_{i=j}^{n(\mathcal{F})}ik_i(\mathcal{F})-j\Bigg\}$$ and the edges of $K_N$ be colored by red or blue. Now we prove the upper bound by assuming that there is no blue $H$ and using descending induction on $j$ to show the existence of a red $\mathcal{F}$ in $K_N$. Let $G_j$ be the graph consisting of all $F_i$ with order at least $j$. Clearly, \begin{eqnarray*} N&\geq& \max_{|V(F_i)|=n(\mathcal{F})}\left\{R(F_i,H)\right\}+n(\mathcal{F})k_{n(\mathcal{F})}(\mathcal{F})-n(\mathcal{F})\\ &=& \max_{|V(F_i)|=n(\mathcal{F})}\left\{R(F_i,H)\right\}+n(\mathcal{F})(k_{n(\mathcal{F})}(\mathcal{F})-1)\\ & \overset{Remark\; \ref{5.1}}{\geq}& R(G_{n(\mathcal{F})},H). \end{eqnarray*} Applying the assumption that there is no blue $H$, there is a red $G_{n(\mathcal{F})}$ in $K_N$. So the base case holds. Now assume that there is a red $G_{j+1}$ in $K_N$; our goal is to show that there is a red $G_{j}$ in $K_N$. 
Since $|V(G_{j+1})|=\sum_{i=j+1}^{n(\mathcal{F})}ik_i(\mathcal{F})$, the order of $K_N-G_{j+1}$ is \begin{eqnarray*} N-\sum_{i=j+1}^{n(\mathcal{F})}ik_i(\mathcal{F})&\geq& \max_{|V(F_p)|=j,p\in[r]}\left\{R(F_p,H)\right\}+\sum_{i=j}^{n(\mathcal{F})}ik_i(\mathcal{F})-j-\sum_{i=j+1}^{n(\mathcal{F})}ik_i(\mathcal{F})\\ &=& \max_{|V(F_p)|=j,p\in[r]}\left\{R(F_p,H)\right\}+jk_j(\mathcal{F})-j\\ &=& \max_{|V(F_p)|=j,p\in[r]}\left\{R(F_p,H)\right\}+j(k_j(\mathcal{F})-1)\\ & \overset{Remark\; \ref{5.1}}{\geq}& R(G_j-G_{j+1},H). \end{eqnarray*} Hence there is a red $G_j-G_{j+1}$ in $K_N-G_{j+1}$, which yields a red $G_j$ in $K_N$. By induction, the proof of the first part of the theorem is completed. If $F_i$ is a connected and $H$-good graph with $|V(F_i)|\geq s(H)$ for each $i\in[r]$, then $R(\mathcal{F},H)\leq\max_{ j\in I}\left\{(j-1)(\chi(H)-1)+s(H)+\sum_{i=j}^{n(\mathcal{F})}ik_i(\mathcal{F})-j\right\}.$ Combining this with the lower bound of Theorem \ref{GJ}, we obtain that $\mathcal{F}$ is $H$-good. \end{proof} By Corollary \ref{coro} and Theorem \ref{mathcal{F}}, we obtain the following exact value of $R(F,K_m\cup K_l)$, where $F$ is a forest and $m,l\geq2$; that is, a forest is $K_m\cup K_l$-good. \begin{coro} Let $m\geq l \geq 2$ be integers and let $F$ be a forest. Then $F$ is $K_m\cup K_l$-good. \end{coro} \section{Remarks} Determining $R(G,H)$ in general is a very challenging problem. When we focus on problems related to Ramsey goodness, we are interested in exploring what kinds of conditions yield good Ramsey goodness properties of graphs. As mentioned in Section $1$, the nice works in \cite{BPS, C, CH, PS} concern this type of condition. Theorem \ref{mathcal{F}} shows that if every component of a disconnected graph $F$ is $H$-good, then $F$ is $H$-good. It is natural to study whether $F$ being $H$-good and $G$-good implies that $F$ is $H\cup G$-good. Clearly, this is not true for graphs such as $K_n$. An obvious result is that any connected graph $F$ is $K_2$-good, but $R(K_n,2K_2)=n+2$, which implies that $K_n$ is not $2K_2$-good. Certainly, some results provide evidence that the answer to this question is yes for some $F$. Sudarsana \cite{Su} showed that $P_n$ with $n\geq (t-2)((tm-2)(m-1)+1)+3$ is $tK_m$-good for integers $m,t\geq2$. Indeed, the condition on the number of vertices $n$ cannot be dropped completely. For example, we can prove that $T_n$ is not $tK_2$-good when $n\leq t$ (actually, we determined the exact Ramsey number $R(T_n,tK_2)$ in \cite{HP}). It is interesting to explore which graphs $F$, with sufficiently many vertices, satisfy that $F$ being $H$-good and $G$-good implies that $F$ is $H\cup G$-good.
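Finally, as an illustrative aside (our own check, not part of the proofs above), the smallest instance of Theorem \ref{Tn} can be verified by exhaustive search: for $n=3$ and $m=2$ we have $N=(n-1)(m-1)+2=4$, and the following Python sketch confirms that every red/blue coloring of $E(K_4)$ contains a red $P_3$ or a blue $2K_2$:
\begin{verbatim}
from itertools import combinations, product

# Smallest case of the theorem: n = 3 (T_n = P_3), m = 2,
# so N = (n-1)(m-1) + 2 = 4.
N = 4
edges = list(combinations(range(N), 2))  # the 6 edges of K_4

def has_red_P3(red):
    # A red P_3 exists iff some vertex meets two red edges.
    return any(sum(v in e for e in red) >= 2 for v in range(N))

def has_blue_2K2(blue):
    # A blue 2K_2 is a pair of vertex-disjoint blue edges.
    return any(not (set(e) & set(f)) for e, f in combinations(blue, 2))

ok = all(has_red_P3([e for e, c in zip(edges, cols) if c == 0])
         or has_blue_2K2([e for e, c in zip(edges, cols) if c == 1])
         for cols in product((0, 1), repeat=len(edges)))
print("every 2-coloring of K_4 works:", ok)  # True
\end{verbatim}
Since the all-blue coloring of $K_3$ contains neither a red $P_3$ nor a blue $2K_2$ (the latter needs four vertices), this gives $R(P_3,2K_2)=4$, matching the theorem.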
\section{Introduction} The Very Large Array (VLA) has played a key role in exploring the radio Universe for almost three decades. To continue the unparalleled success of this facility, the VLA is currently undergoing a drastic rejuvenation process to become the 'Expanded VLA' or EVLA. The conversion is very comprehensive and comprises additional and upgraded receivers, new broad-band fiber optics, new online control systems, new digital electronics, and a state-of-the-art correlator called WIDAR (Wideband Interferometric Digital Architecture). The upgrades will not only improve the continuum sensitivity by about an order of magnitude, largely due to a substantial increase of instantaneous receiver and correlator bandwidth, but they will also provide extreme spectral resolution and a wide coverage of the radio spectrum -- for the first time it will be possible to observe at any chosen frequency in the entire 1-50\,GHz radio window. The EVLA specifications and current status are described on the following webpage:\\ \verb=http://www.aoc.nrao.edu/evla=\\ (see also \cite{nap06}). With its unparalleled sensitivity the EVLA is a true pathfinder for the Square Kilometer Array (SKA). Even the EVLA's mapping speed matches that of special-purpose, ultra-wide-field but lower-sensitivity and narrow-band SKA pathfinders like ASKAP, ATA, or MEERKAT in their early incarnations. One of the fields of astronomy that the VLA pioneered is the observation of atomic, neutral hydrogen (H{\sc i}) in our Milky Way, in nearby galaxies and the intragroup medium, as well as at higher redshifts. Those observations revealed the complex properties of the neutral ISM, its importance for star formation, and the physics of the interface between a galaxy's disk, its halo, and the intergalactic medium. The abundance of H{\sc i}\ and its 21\,cm line properties also proved to be indispensable tools to derive the complex dynamics of galaxies in the form of, for example, density waves, rotation curves (and thus the dark matter distribution), tidal interactions, and mergers of galaxies. In this article we summarize what the EVLA offers for future H{\sc i}\ observations. \section{EVLA Receivers for H{\sc i}\ Observations} The H{\sc i}\ hyperfine line rest frequency of $\sim 1.420$\,GHz falls into the radio L--band. The VLA L--band receivers cover a frequency range of 1.25--1.8\,GHz. The EVLA receivers will widen this to 0.93--2.1\,GHz. At the lower end this will increase the H{\sc i}\ redshift coverage from a current upper limit of $z\sim 0.14$ to $z\sim 0.53$. This is an improvement of almost $\Delta z\sim 0.4$, or an additional $\sim 3.5$\,Gyr of look-back time (in a WMAP $\Lambda$CDM cosmology). The system temperature over telescope efficiency T$_{\rm sys}/\epsilon$ of the EVLA is expected to decrease from $\sim 75$\,K to about 60\,K or less (T$_{\rm sys}$ of the EVLA is designed to hover around $\sim 26$\,K), saving about 30\% of integration time to reach the same sensitivity. The old VLA L--band feedhorn design featured a microwave lens in the optical path. With the redesigned EVLA receivers, this lens is not required anymore. Thus, ground radiation scattered by the lens (and other structural dish elements) into the feedhorn is largely reduced and the EVLA L--band system temperature improves substantially over the VLA at lower elevations; at an elevation of $\sim 20^{\circ}$ the EVLA system temperature is about half that of the VLA, which corresponds to a four-fold increase in sensitivity. 
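The redshift figures quoted above follow directly from the 21\,cm rest frequency. As a quick illustration (a back-of-the-envelope sketch, not an observatory tool):
\begin{verbatim}
# HI redshift limit implied by a receiver's low-frequency edge:
# z_max = nu_rest / nu_min - 1, with nu_rest = 1420.406 MHz.
NU_HI = 1420.406  # MHz

def z_max(nu_min_mhz):
    return NU_HI / nu_min_mhz - 1.0

print("VLA  L-band, 1250 MHz edge: z < %.2f" % z_max(1250.0))  # ~0.14
print("EVLA L-band,  930 MHz edge: z < %.2f" % z_max(930.0))   # ~0.53
\end{verbatim}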
As part of the EVLA conversion, the L--band receivers will be equipped with new orthomode transducers. Until they become available for all antennas in 2012, an interim L--band system is currently being installed (with a slightly lower sensitivity and a minimum frequency of 1\,GHz, corresponding to $z\sim0.42$). For more information on the EVLA L--band system upgrade and its performance, we refer to the EVLA webpages and also to Emmanuel Momjian's contribution in this volume. At lower frequencies the EVLA offers a P--band receiver which covers frequencies of 300--340\,MHz, corresponding to an H{\sc i}\ redshift range of $z\sim 3.2-3.7$. This band remains unchanged in the VLA to EVLA conversion. The system is not sensitive enough to observe typical, gas-rich galaxies at the available redshifts in H{\sc i}\ emission. However, searches for H{\sc i}\ absorption in P--band have been conducted in the past (e.g.\cite{tar94}) and are still an option for the EVLA. \section{The WIDAR correlator} Since the installation of the VLA correlator, Moore's law has pushed processing speeds of computers by $\sim 5$ orders of magnitude. Taking advantage of this, a new correlator, WIDAR, will be commissioned in 2008/2009. However, one should keep in mind that the current VLA correlator was designed to be very suitable for H{\sc i}\ observations toward nearby galaxies. The bandwidth and resolution almost perfectly match what is needed for such observations (bandwidth of $\sim 200$\,km\,s$^{-1}$\ at a resolution of $\sim 5$\,km\,s$^{-1}$). But the VLA correlator design severely limits the amount of 'discovery space': e.g., very wide and shallow lines (for an example, see \cite{mor07}) would not be discovered with the VLA without prior knowledge of these features. The narrow bandwidth of $\sim 1.5$\,MHz typically used for galaxies has only a few, supposedly line--free channels at the band edges. Any wide lines extending across these channels would not be discovered because they would be removed in the process of continuum subtraction. Another case of limited discovery space is that other H{\sc i}\ sources in the field, e.g., companion galaxies, remain undetected with the VLA if they are at a slightly different velocity than the main target. At the other extreme, very narrow line features, e.g., those caused by H{\sc i}\ self--absorption, are smeared out and would be missed in a typical extragalactic H{\sc i}\ setup with its velocity resolution of a few km\,s$^{-1}$. To open up new discovery space, wide bands at high spectral resolution are desired. Such capabilities are provided by the new WIDAR correlator (for a description of the technology, see \cite{car00}). WIDAR will have a spectral resolution down to the Hz range and a bandwidth of up to 8\,GHz. The full bandwidth is split up into four 2\,GHz baseband pairs, and in each baseband pair up to 16 independent sub-band pairs can be selected with bandwidths between 31.25\,kHz and 128\,MHz. The full 8\,GHz baseband pairs have a {\it minimum} of 16384 channels which will always be available. Recirculation trades bandwidth for more channels, and the maximum number of channels is of order 4 million. Such a flexible design will cover virtually any need for setups to observe H{\sc i}\ in single targets with thousands of km\,s$^{-1}$\ bandwidth and sub-km\,s$^{-1}$\ velocity resolution. But the EVLA will be able to do more. 
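To make these channel counts concrete, the short sketch below (our own illustrative arithmetic; the 3.2\,km\,s$^{-1}$\ figure is the resolution quoted immediately below) estimates the channel widths, and the total number of channels, needed to cover the full H{\sc i}\ redshift window:
\begin{verbatim}
C_KMS = 2.998e5     # speed of light [km/s]
NU_HI = 1420.406e6  # HI rest frequency [Hz]

def channel_width_hz(nu_obs_hz, dv_kms):
    # Channel width giving velocity resolution dv at frequency nu_obs.
    return nu_obs_hz * dv_kms / C_KMS

# 3.2 km/s channels at z = 0 (1420 MHz) and z = 0.53 (930 MHz):
for nu in (NU_HI, 0.93e9):
    print("%.0f MHz: %.1f kHz channels"
          % (nu / 1e6, channel_width_hz(nu, 3.2) / 1e3))

# Channels needed to span 0.93-1.42 GHz at the finest (~10 kHz) width:
n_chan = (NU_HI - 0.93e9) / channel_width_hz(0.93e9, 3.2)
print("~%.0f channels" % n_chan)  # ~5e4, far below WIDAR's ~4 million
\end{verbatim}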
The L--band receivers and WIDAR will cover the entire H{\sc i}\ redshift range of $z=0-0.53$ at a resolution of 3.2\,km\,s$^{-1}$\ when observed with two polarization products, and at 6.4\,km\,s$^{-1}$\ when observed at full Stokes. Other configurations will be able to, e.g., stack multiple radio recombination lines in order to improve the signal--to--noise of Zeeman splitting experiments in a single observation. \section{Piggy-backing} The velocity resolution of a few km\,s$^{-1}$\ over the full 0.93-2.1\,GHz L--band range enables new, unique synergies with other observations. Every EVLA L--band continuum observation will also be an H{\sc i}\ redshift survey and vice versa. Also, virtually every targeted H{\sc i}\ observation leaves enough computing power in WIDAR to once more perform a simultaneous H{\sc i}\ survey over the entire $z=0-0.53$ range at good velocity resolution. As if this were not enough, the minimum data dumping time for 1\,GHz bandwidth L--band data is $\sim 100$\,ms, which allows monitoring of, and searches for, transient sources in the field while simultaneously observing H{\sc i}\ or radio continuum projects (but note that current data output limitations imposed by archiving are $\sim 25$\,MB\,s$^{-1}$, equaling dumping times of $\sim 20$\,s when the maximum number of channels and baselines is read out). These are exciting new opportunities that will pave the way toward SKA H{\sc i}\ surveys. For example, without spending any dedicated survey time, the $z=0-0.53$ redshift volume around a VLA standard calibrator will accumulate hundreds of hours of observations prior to the commissioning of the SKA. This will provide very deep H{\sc i}\ and radio continuum images essentially for free. The up to 8\,bit quantization of WIDAR also delivers improved high dynamic range imaging capabilities, which reduce current sensitivity limitations due to the inevitable presence of strong sources in any field. The new digital transmission system also removes the infamous ``3 MHz ripple'' and related spectral baseline instabilities. This reduces the systematic uncertainties of deep H{\sc i}\ observations dramatically. \section{Summary} Over the VLA, the EVLA will improve H{\sc i}\ observations in terms of a wider L--band frequency range (down to $\sim 930$\,MHz), a better $T_{\rm sys}/\epsilon$ sensitivity (in particular at off-zenith elevations), much improved spectral baseline stability, and, most importantly, spectral bandwidth and resolution. The L--band receiver improvements guarantee that the EVLA will remain the most sensitive interferometer for H{\sc i}\ in the world for at least a decade, until SKA pathfinders are expanded far beyond the currently planned prototypes. The new WIDAR correlator is flexible enough to allow observations of virtually any galaxy at sub--km\,s$^{-1}$\ resolution with thousands of km\,s$^{-1}$\ bandwidth. At the same time, every L--band continuum or targeted H{\sc i}\ observation will also yield a blind $z=0-0.53$ H{\sc i}\ redshift survey at a velocity resolution of a few km\,s$^{-1}$, and vice versa. It is clear that these new possibilities have their price in a very large data rate and that data reduction will challenge today's computing capabilities. The opportunities, however, are tremendous and it is up to the community to develop new strategies on how to take advantage of the wealth of EVLA data in order to answer the open questions of galaxy evolution and cosmology. 
\begin{theacknowledgments} The National Radio Astronomy Observatory is a facility of the National Science Foundation operated under cooperative agreement by Associated Universities, Inc. \end{theacknowledgments} \bibliographystyle{aipproc}
\section{Introduction} The tidal disruption of stars by supermassive black holes (SMBH) lights up dormant systems and can be used to probe accretion and outflow processes. Theoretical calculations indicate that most tidal disruption events (TDEs) lead to super-Eddington fallback, which in turn drives outflows \citep{rees88,ek89,sq09,gr13}. The discovery of luminous radio emission from the $\gamma$-ray TDE Sw\,J1644+57 revealed the formation of a relativistic jetted outflow \citep{zbs+11,bzp+12}, but such events represent at most a few percent of the TDE population \citep{zbs+11,bgm+11,bkg+11,mgm+15}. While the sample of well-studied TDE candidates has expanded greatly in recent years, direct evidence for outflows in the bulk of the TDE population, discovered through optical, ultraviolet (UV), and X-ray observations, has been lacking. Radio observations are an ideal way to search for outflows in TDEs, as radio emission is expected to persist for months or years after the event even if the jet's orientation is off-axis. Most TDEs detected within the past decade have been followed up in the radio, but no ``typical'' TDEs (i.e. those lacking $\gamma$-ray and hard X-ray emission) have been convincingly detected \citep{bmc+13,vfk+13}. (Weak radio emission has been seen in one or two TDE host galaxies, but the emission does not appear to be transient and these detections have been attributed to AGN activity; \citealt{vfk+13}.) Furthermore, due to the large distances of most TDEs discovered to date, the resulting upper limits are only able to rule out the presence of off-axis relativistic jets similar to those observed in gamma-ray bursts or in Sw\,J1644+57 \citep{vfk+13,cbg+14}. The existence of lower energy, non-relativistic outflows cannot be ruled out by these observations. On 2014 November 22, the All Sky Automated Survey for SuperNovae (ASAS-SN) reported the discovery of the new transient ASASSN-14li, coincident with the nucleus of the nearby galaxy PGC\,043234 (redshift $z = 0.0206$, luminosity distance $d_L\approx90$ Mpc). Extensive optical, UV, and X-ray follow-up has confirmed that ASASSN-14li can be consistently modeled as a TDE, and is atypical of an AGN flare or supernova \citep{hol15,mkm15}. In this paper, we report the discovery and follow-up of transient radio emission from ASASSN-14li. The transient nature of the radio emission was independently reported by \cite{vv15}, although most of their observations were taken at a single frequency, strongly limiting their ability to constrain the evolution of the spectral energy distribution (SED). \begin{figure*} \centerline{\includegraphics[width=\textwidth]{fig1}} \caption{Radio observations of the TDE ASASSN-14li spanning December 2014 to September 2015. Filled circles mark the observed radio flux densities (in many cases, the errorbars, which correspond to 1 standard deviation, are smaller than the points; Table 1), while solid lines are best-fit models for synchrotron emission from a power-law distribution of electrons \citep{gs02,dnp13}, $N(\gamma)\propto\gamma^{-3}$ for $\gamma\geq\gamma_m$ (Section \ref{sec:mod}). (a) The total flux observed at each frequency. The dashed black line indicates a $F_\nu\propto \nu^{-1}$ power law model for the underlying quiescent emission component, whose existence is implied by the archival radio detections. (b) Residual transient radio flux density obtained by subtracting the modeled quiescent emission component. 
These residual flux densities have a spectral shape characteristic of a synchrotron self-absorbed spectrum, with a spectral slope of $F_\nu\propto \nu^{5/2}$ below the peak and $F_\nu\propto \nu^{-1}$ above the peak. The evolution of the SED is typical of synchrotron emission from an expanding outflow. We note that our 2014 December 24 observations only weakly constrain the location of the spectral peak, so all parameters inferred for this epoch are considered to be lower limits.} \label{fig:sed} \end{figure*} The rest of this paper is structured as follows. In Section \ref{sec:obs}, we present our radio observations of ASASSN-14li. In Section \ref{sec:arc}, we discuss archival observations of ASASSN-14li's host galaxy PGC\,043234 to provide a context for our modeling. In Section \ref{sec:mod}, we outline our model for the radio emission and use it to infer physical properties of the outflow launched by the TDE and the pre-event circumnuclear density. In Section 5, we compare our results to independent modeling of the X-ray, UV, and optical observations of ASASSN-14li and address alternate explanations for the emission. We conclude in Section \ref{sec:conc}. \section{Radio Observations and Data Analysis}\label{sec:obs} Following the optical discovery of ASASSN-14li, we initiated radio follow-up observations with the Karl G. Jansky Very Large Array (VLA) on 2014 December 24 at a frequency of 21.8 GHz and detected a source with a flux density of $1.85\pm 0.03$ mJy. The position of the radio source, $\alpha_{\rm J2000}=$\ra{12}{48}{15.226}, $\delta_{\rm J2000}=$\dec{+17}{46}{26.47} ($\pm 0.01$ arcsec), is consistent with the optical position. We continued to monitor the source and obtained six epochs of observations spaced at $1-2$ month intervals between 2014 December 24 and 2015 September 11 UT. Our observations span frequencies between 1.45 GHz and 24.5 GHz and reveal significant fading at high frequencies, a steady decline in the peak of the radio SED as a function of time (to $\approx 2$ GHz by September 2015), and a spectral slope of $F_\nu\propto\nu^{-1}$ above the peak frequency (Figure \ref{fig:sed}). These properties are typical of synchrotron emission from an expanding outflow. All radio observations were obtained with the VLA in the A, B, C, and intermediate configurations (program codes 14B-493 and 15A-476). For all epochs and frequencies, we used 3C\,286 for bandpass and flux density calibration, and J1254+1141 for phase calibration. We processed and imaged the data using the Common Astronomy Software Applications (CASA) software package \citep{mws+07}. The flux densities and associated uncertainties were determined using the {\tt imtool} program within the {\tt pwkit} package\footnote{Available at {\tt https://github.com/pkgw/pwkit}} (version 0.6.99) and are summarized in Table~\ref{tab:obs}. The time evolution of the radio SED is also shown in Figure \ref{fig:sed}. \section{Archival Radio Observations and Arguments Against an AGN Flare Origin for the Radio Emission from ASASSN-14li}\label{sec:arc} The host galaxy of ASASSN-14li was previously detected in the NVSS (December 1993) and FIRST (November 1999) 1.4 GHz radio surveys \citep{bec95,con98}. The FIRST and NVSS flux densities are $2.96\pm 0.15$ mJy and $3.2\pm0.4$ mJy respectively, corresponding to a radio luminosity of $L_\nu(1.4\,{\rm GHz})\approx 3\times 10^{28}$ erg s$^{-1}$ Hz$^{-1}$. 
If this radio emission is due to star formation activity in the host galaxy, then the inferred star formation rate is ${\rm SFR}\approx 2$ M$_{\odot}$ yr$^{-1}$ \citep{yc02}. However, this is ruled out by archival optical, near-infrared, and far-infrared (FIR) observations of the host galaxy, which indicate that ${\rm SFR}\mathrel{\hbox{\rlap{\hbox{\lower4pt\hbox{$\sim$}}}\hbox{$<$}}} 0.1$ M$_{\odot}$ yr$^{-1}$, and that the observed emission violates the radio-FIR correlation of star forming galaxies \citep{hol15}. Thus, the radio emission is more likely due to a weak AGN, and indeed the archival radio luminosity places the host galaxy in the range of luminosities observed in low-luminosity Seyfert galaxies \citep{ho01}. Our brightest 1.45 GHz flux density measurement constrains the maximum brightness of the quiescent component to be $\mathrel{\hbox{\rlap{\hbox{\lower4pt\hbox{$\sim$}}}\hbox{$<$}}} 2$ mJy, indicating that the archival source has declined in brightness by about $30\%$ over the 16-year period between the FIRST measurement and our observations. This is typical of long-term AGN variability \citep{hov08}. It is clear, however, that the event ASASSN-14li has more in common with previously-studied TDEs than with typical AGN flares. Optical spectra and UV/optical imaging obtained during the outburst show strong blue continuum emission and broad hydrogen and helium emission lines, consistent with previously-observed TDEs and inconsistent with the evolution expected for an AGN or a supernova \citep{hol15}. Furthermore, the dramatic change in brightness we observe at our highest radio frequencies -- an order of magnitude decline over a 9 month period -- is much larger and more rapid than the radio variability observed in typical AGN flares, and is only comparable to the most extreme flares observed in BL Lacertae objects \citep{hov08,niep09}. Our radio spectral energy distributions of ASASSN-14li are also steeper in both the optically-thick ($F_{\nu} \propto \nu^{2.5}$) and optically-thin ($F_{\nu} \propto \nu^{-1}$) portions compared to typical AGN flares, which exhibit an average rising power law of $F_{\nu} \propto \nu^{0.4}$ and a declining power law of $F_{\nu} \propto \nu^{-0.2}$ \citep{hov08}. 
\begin{center} \setlength\LTcapwidth{2.5in} \begin{longtable}{lccc} \caption{Radio Observations} \label{tab:obs} \\ \hline \hline\noalign{\smallskip} UT Date & $\Delta t$ & $\nu$ & $F_\nu$ \\ & (days) & (GHz) & (mJy) \\ \hline\noalign{\smallskip} Dec 24.69 & 128.69 & 19.2 & 1.97 $\pm$ 0.03 \\ Dec 24.69 & 128.69 & 24.5 & 1.64 $\pm$ 0.03 \\ \hline\noalign{\smallskip} Jan 6.38 & 141.38 & 5.0 & 1.91 $\pm$ 0.03 \\ Jan 6.38 & 141.38 & 7.1 & 2.00 $\pm$ 0.02 \\ Jan 6.38 & 141.38 & 8.5 & 2.04 $\pm$ 0.04 \\ Jan 6.38 & 141.38 & 11.0 & 2.08 $\pm$ 0.04 \\ Jan 13.32 & 148.32 & 19.2 & 0.91 $\pm$ 0.08 \\ Jan 13.32 & 148.32 & 24.5 & 0.65 $\pm$ 0.15 \\ \hline\noalign{\smallskip} Mar 13.33 & 207.33 & 5.0 & 1.74 $\pm$ 0.02 \\ Mar 13.33 & 207.33 & 7.1 & 1.34 $\pm$ 0.02 \\ Mar 13.33 & 207.33 & 8.5 & 1.31 $\pm$ 0.06 \\ Mar 13.33 & 207.33 & 11.0 & 1.11 $\pm$ 0.05 \\ \hline\noalign{\smallskip} Apr 21.25 & 246.25 & 1.4 & 2.18 $\pm$ 0.08 \\ Apr 21.25 & 246.25 & 1.5 & 2.12 $\pm$ 0.10 \\ Apr 21.25 & 246.25 & 1.8 & 2.13 $\pm$ 0.09 \\ Apr 21.25 & 246.25 & 2.6 & 2.00 $\pm$ 0.05 \\ Apr 21.25 & 246.25 & 3.4 & 1.84 $\pm$ 0.03 \\ Apr 21.25 & 246.25 & 5.0 & 1.56 $\pm$ 0.03 \\ Apr 21.25 & 246.25 & 7.1 & 1.26 $\pm$ 0.03 \\ Apr 22.21 & 247.21 & 8.5 & 1.06 $\pm$ 0.02 \\ Apr 22.21 & 247.21 & 11.0 & 0.84 $\pm$ 0.04 \\ Apr 22.21 & 247.21 & 13.5 & 0.73 $\pm$ 0.02 \\ Apr 22.21 & 247.21 & 16.0 & 0.59 $\pm$ 0.02 \\ Apr 22.21 & 247.21 & 19.2 & 0.44 $\pm$ 0.09 \\ Apr 22.21 & 247.21 & 24.5 & 0.30 $\pm$ 0.04 \\ \hline\noalign{\smallskip} Jun 17.01 & 303.01 & 1.4 & 2.49 $\pm$ 0.09 \\ Jun 17.01 & 303.01 & 1.5 & 2.50 $\pm$ 0.10 \\ Jun 17.01 & 303.01 & 1.8 & 2.24 $\pm$ 0.06 \\ Jun 17.01 & 303.01 & 2.6 & 1.93 $\pm$ 0.04 \\ Jun 17.01 & 303.01 & 3.4 & 1.66 $\pm$ 0.04 \\ Jun 17.01 & 303.01 & 5.0 & 1.26 $\pm$ 0.04 \\ Jun 17.01 & 303.01 & 7.1 & 0.89 $\pm$ 0.04 \\ Jun 21.08 & 307.08 & 8.5 & 0.72 $\pm$ 0.04 \\ Jun 21.08 & 307.08 & 11.0 & 0.56 $\pm$ 0.03 \\ Jun 21.08 & 307.08 & 13.5 & 0.46 $\pm$ 0.02 \\ Jun 21.08 & 307.08 & 16.0 & 0.36 $\pm$ 0.02 \\ Jun 21.08 & 307.08 & 19.2 & 0.28 $\pm$ 0.03 \\ Jun 21.08 & 307.08 & 24.5 & 0.22 $\pm$ 0.03 \\ \hline\noalign{\smallskip} Aug 28.94 & 375.94 & 1.4 & 2.15 $\pm$ 0.07 \\ Aug 28.94 & 375.94 & 1.5 & 2.22 $\pm$ 0.08 \\ Aug 28.94 & 375.94 & 1.8 & 2.13 $\pm$ 0.07 \\ Aug 28.94 & 375.94 & 2.6 & 1.58 $\pm$ 0.05 \\ Aug 28.94 & 375.94 & 3.4 & 1.26 $\pm$ 0.04 \\ Aug 28.94 & 375.94 & 5.0 & 0.81 $\pm$ 0.06 \\ Aug 28.94 & 375.94 & 7.1 & 0.49 $\pm$ 0.07 \\ Sep 8.96 & 386.96 & 1.4 & 2.49 $\pm$ 0.08 \\ Sep 8.96 & 386.96 & 1.5 & 2.49 $\pm$ 0.11 \\ Sep 8.96 & 386.96 & 1.8 & 2.15 $\pm$ 0.09 \\ Sep 8.96 & 386.96 & 2.6 & 1.65 $\pm$ 0.04 \\ Sep 8.96 & 386.96 & 3.4 & 1.30 $\pm$ 0.04 \\ Sep 8.96 & 386.96 & 5.0 & 0.89 $\pm$ 0.03 \\ Sep 8.96 & 386.96 & 7.1 & 0.61 $\pm$ 0.03 \\ Sep 11.92 & 389.92 & 13.5 & 0.23 $\pm$ 0.02 \\ Sep 11.92 & 389.92 & 16.0 & 0.17 $\pm$ 0.02 \\ \hline\noalign{\smallskip} \caption[]{Radio observations of ASASSN-14li. All values of $\Delta t$ are relative to 2014 August 18.00 UT, the mean outflow launch date estimated from our modeling.} \end{longtable} \end{center} \begin{figure*} \centerline{\includegraphics[width=6.8in]{fig2}} \caption{The temporal and radial dependencies of several physical quantities of the outflow inferred from synchrotron equipartition model fits to our radio observations. In each panel the dotted and solid lines mark the fits to the total radio flux densities (Figure 1, panel (a)) and transient flux density only (Figure 1, panel (b)), respectively. 
The red circles mark the results for a spherical outflow while the blue squares mark the results for a conical outflow with a covering fraction of $10\%$. We determine the radius of the emitting region as a function of time (a), the outflow kinetic energy as a function of time (b), the outflow expansion velocity as a function of time (c), the outflow mass as a function of time (d), the circumnuclear radial density profile (e), and the magnetic field radial profile (f). The errorbars on the data points in each panel correspond to 1 standard deviation and are computed using a Markov Chain Monte Carlo approach that takes into account the uncertainties in the synchrotron model parameters. The inferred quantities are summarized in Table 2.} \label{fig:params} \end{figure*} Motivated by the archival radio detections, we assume that some portion of the radio emission we observe is due to a steady source not associated with the TDE. For simplicity, we assume that this component is constant in time for the period of our observations and follows a single power law shape, which we find to be $F_\nu\approx 1.8\,{\rm mJy}\,\,(\nu/1.4\,{\rm GHz})^{-1}$, accounting for about $80\%$ of our measured flux density at 1.4 GHz. This spectral index is typical of at least some AGN of comparable luminosity in quiescence \citep{ho01}. We subtract this model from our observed flux densities (Figure~\ref{fig:sed}(a)) and find that the remaining transient component exhibits a synchrotron self-absorbed spectral shape ($F_\nu\propto\nu^{5/2}$) below the peak frequency (Figure~\ref{fig:sed}(b)). We model the SED of the transient source at each epoch of observations using the standard synchrotron equipartition model outlined in Section \ref{sec:mod} \citep{sr77,dnp13}. For completeness, we also model the emission assuming that all of the flux we detect originates in a single component associated with the TDE, but find that this model provides a worse fit to the data, does not explain the archival radio detections, and leads to other inconsistencies (Section \ref{sec:1comp}); however, we note that the results of this model do not alter the basic conclusions of our analysis. \section{Synchrotron Emission Model}\label{sec:mod} We model our radio data with the standard synchrotron emission model, in which the blastwave generated by the outflow amplifies the magnetic field and accelerates the ambient electrons into a power law distribution, $N(\gamma) \propto \gamma^{-p}$ for $\gamma \geq \gamma_m$; here, $\gamma$ is the electron Lorentz factor, $\gamma_m$ is the minimum Lorentz factor of the distribution, and $p$ is the power law index. This is the same model used to fit the radio emission from the relativistic TDE Sw\,J1644+57 \citep{zbs+11,bzp+12,zbm+13}, as well as from core-collapse SNe and GRBs. We follow the procedures of \cite{dnp13} by assuming the outflow energy is minimized when the electron and magnetic field energy densities are in equipartition \citep{pac70,sr77,chev98}. Given the shape of the observed SEDs, we associate the peak frequency $\nu_p$ with the synchrotron self-absorption frequency $\nu_a$ and assume that the frequency corresponding to $\gamma_m$ is $\nu_m\mathrel{\hbox{\rlap{\hbox{\lower4pt\hbox{$\sim$}}}\hbox{$<$}}}\nu_a$; this is generally the case for non-relativistic outflows \citep{dnp13}. A comparison of the observed ($F_\nu\propto \nu^{-1}$) and model ($F_{\nu} \propto \nu^{-(p-1)/2}$) optically-thin power laws indicates that $p\approx 3$ \citep{gs02}. 
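To illustrate this decomposition, the following minimal sketch (using the assumed quiescent model above and the 2015 June 17 low-frequency measurements from Table~\ref{tab:obs}; it is not our full fitting code) isolates the transient component at one epoch:
\begin{verbatim}
import numpy as np

def quiescent_mjy(nu_ghz):
    # Assumed steady host/AGN component: 1.8 mJy * (nu / 1.4 GHz)^-1.
    return 1.8 * (nu_ghz / 1.4) ** -1.0

# 2015 June 17 low-frequency measurements from Table 1 (GHz, mJy):
nu = np.array([1.4, 1.5, 1.8, 2.6, 3.4, 5.0, 7.1])
total = np.array([2.49, 2.50, 2.24, 1.93, 1.66, 1.26, 0.89])
print(np.round(total - quiescent_mjy(nu), 2))
# -> [0.69 0.82 0.84 0.96 0.92 0.76 0.54], peaking near 2.6 GHz
\end{verbatim}
The residual spectrum indeed peaks near 2.6 GHz at this epoch, consistent with the $\nu_p=2.55$ GHz we derive for it in Table~\ref{tab:params}.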
We further build on the results from modeling of radio emission in other transients to assume that the fraction of energy in the relativistic electrons \citep{dnp13} is $\epsilon_e=0.1$, and that the kinetic energy is dominated by protons. The minimum energy analysis can also accommodate a non-spherical outflow, characterized by emitting area and volume fractions of $f_A \equiv A/\pi R^2$ and $f_V \equiv V/\pi R^3$, respectively; the spherical case corresponds to $f_A=1$ and $f_V=4/3$. We explore two models, with $f_A=1$ (spherical outflow) and $f_A=0.1$ (conical outflow) to assess the effects of mild collimation, and we further assume that the emission emanates from a shell with a thickness of $0.1$ of the blastwave radius. With this setup we can directly infer the equipartition radius $R_{\rm eq}$ and kinetic energy $E_{\rm eq}$ from the observed values of $\nu_p$ and $F_{\nu,p}$ at each epoch \citep{dnp13}: \begin{eqnarray*} R_{\rm eq}&=&(3.2\times10^{15}\,{\rm cm}) F_{\nu,p,mJy}^{\frac{9}{19}}d_{L,26}^{\frac{18}{19}} \nu_{p,10}^{-1} \\ &&\times \; (1+z)^{-\frac{10}{19}}f_A^{-\frac{8}{19}}f_V^{-\frac{1}{19}}\\ E_{\rm eq}&=&(1.9\times10^{46}\,{\rm erg}) F_{\nu,p,mJy}^{\frac{23}{19}}d_{L,26}^{\frac{46}{19}}\nu_{p,10}^{-1} \\ && \times \; (1+z)^{-\frac{42}{19}}f_A^{-\frac{12}{19}}f_V^{\frac{8}{19}} \end{eqnarray*} where we have scaled $\nu_p$ in units of 10 GHz, $F_{\nu,p}$ in units of mJy, and the luminosity distance ($d_L$) in units of $10^{26}$ cm. For the spherical nonrelativistic case, these equations should be multiplied by factors of $4^{1/19}$ and $4^{11/19}$ due to additional geometric effects. With the inferred values of $R_{\rm eq}$ and $E_{\rm eq}$ we can furthermore derive other physical properties of the system, notably the ambient density ($n$), the magnetic field strength ($B$), the outflow velocity ($v_{\rm ej}$, or $\beta_{\rm ej}$ when scaled to $c$), and the outflow mass ($M_{\rm ej}$), as well as their time and radial dependencies. We refer the reader to \cite{dnp13} for the exact formulae. The resulting parameters for our two models ($f_A=1$ and $0.1$) are listed in Table~\ref{tab:params} and the results are shown in Figure \ref{fig:params}. We derive the uncertainties on $\nu_p$ and $F_p$ for each epoch via a Markov Chain Monte Carlo fitting technique. The uncertainties on the derived parameters are then computed using standard propagation of error. Using our model fits to the individual epochs of observations, we robustly measure the source size and kinetic energy as functions of time. We find that for an assumed spherical geometry, the radio observations require a non-relativistic outflow with a steady velocity of $v_{\rm ej}\approx 12,000$ km s$^{-1}$, freely expanding ($R_{\rm ej}\propto t$) from a radius of $\approx 1.5\times 10^{16}$ cm (January 2015) to $\approx 3.8\times 10^{16}$ cm (August/September 2015). This velocity is larger than the width of the hydrogen and helium emission lines in the optical spectra of ASASSN-14li \citep{hol15}, indicating that these lines do not originate in the outflow. Using the observed radius and extrapolating the observed constant expansion rate backwards, we infer that the outflow was launched on 2014 August 11--25. 
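These relations are straightforward to evaluate; the sketch below (our own transcription, with $f_V$ set by the $0.1R_{\rm eq}$ shell assumption and $d_L\approx90$ Mpc) approximately reproduces the spherical-model entries in Table~\ref{tab:params}:
\begin{verbatim}
# Equipartition radius and energy (Duran, Nakar & Piran 2013),
# including the spherical non-relativistic factors 4^(1/19), 4^(11/19).
def equipartition(nu_p_ghz, F_p_mjy, d_l_cm=2.78e26, z=0.0206,
                  f_A=1.0, f_V=4.0 / 3.0 * (1.0 - 0.9 ** 3)):
    nu10 = nu_p_ghz / 10.0   # nu_p in units of 10 GHz
    dl26 = d_l_cm / 1e26     # d_L in units of 1e26 cm
    R = (3.2e15 * F_p_mjy ** (9 / 19) * dl26 ** (18 / 19) / nu10
         * (1 + z) ** (-10 / 19) * f_A ** (-8 / 19) * f_V ** (-1 / 19))
    E = (1.9e46 * F_p_mjy ** (23 / 19) * dl26 ** (46 / 19) / nu10
         * (1 + z) ** (-42 / 19) * f_A ** (-12 / 19) * f_V ** (8 / 19))
    if f_A == 1.0:           # spherical, non-relativistic corrections
        R *= 4 ** (1 / 19)
        E *= 4 ** (11 / 19)
    return R, E

# January 2015 epoch: nu_p = 8.2 GHz, F_p = 1.76 mJy.
R, E = equipartition(8.2, 1.76)
print("R_eq ~ %.2e cm, E_eq ~ %.2e erg" % (R, E))
# -> ~1.5e16 cm, ~7.6e47 erg; cf. 1.47e16 cm, 7.8e47 erg in Table 2
\end{verbatim}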
This date range is consistent with an independent estimate of the period of super-Eddington accretion derived from optical, UV, and X-ray observations of the TDE, which gives 2014 June 1--July 10 as the onset of super-Eddington accretion and 2014 September 1--September 15 as the time of peak accretion rate (with a level of about 2.5 times the Eddington rate); see Section \ref{sec:uvot}. We therefore conclude that the outflow is linked to the super-Eddington accretion phase, rather than to the unbound tidal debris, which was launched much earlier at the time of disruption. We note that assuming a conical outflow with $f_A=0.1$ instead of a spherical geometry increases the inferred radius and expansion velocity by about a factor of 3 (Figure~\ref{fig:params}), but the outflow launch date remains essentially unchanged. \clearpage \setlength\LTcapwidth{7in}
\begin{center}
\begin{longtable*}{cccccccccc}
\caption{Best-Fit Model Parameters} \label{tab:params} \\
\hline \hline\noalign{\smallskip}
Model & $\Delta t$ & $\nu_p$ & $F_p$ & $R_{\rm eq}$ & $E_{\rm eq}$ & $\beta_{\rm ej}$ & $n$ & $M_{\rm ej}$ & $B$ \\
 & (days) & (GHz) & (mJy) & ($10^{16}$ cm) & ($10^{47}$ erg) & & (cm$^{-3}$) & ($10^{-4}\,M_{\odot}$) & (G) \\
\hline\noalign{\smallskip}
&128 & $\lesssim 16.8$ & $\gtrsim 1.91$ & $\gtrsim 0.745$ & $\gtrsim 4.2$ & $\gtrsim 0.023$ & $\gtrsim 1430$ & $\lesssim 9.3$ & $\lesssim 2.82$ \\
&143 & 8.20 $\pm$ 0.10 & 1.76 $\pm$ 0.01 & 1.47 $\pm$ 0.02 & 7.8 $\pm$ 0.1 & 0.040 $\pm$ 0.001 & 350 $\pm$ 40 & 6.0 $\pm$ 0.7 & 1.39 $\pm$ 0.10 \\
Spherical &207 & 4.37 $\pm$ 0.20 & 1.23 $\pm$ 0.03 & 2.33 $\pm$ 0.10 & 9.5 $\pm$ 0.5 & 0.043 $\pm$ 0.002 & 110 $\pm$ 40 & 6.0 $\pm$ 1.0 & 0.77 $\pm$ 0.20 \\
($f_A=1$)&246 & 4.00 $\pm$ 0.06 & 1.14 $\pm$ 0.01 & 2.45 $\pm$ 0.04 & 9.4 $\pm$ 0.2 & 0.038 $\pm$ 0.001 & 90 $\pm$ 10 & 7.0 $\pm$ 0.6 & 0.71 $\pm$ 0.07 \\
&304 & 2.55 $\pm$ 0.06 & 0.94 $\pm$ 0.02 & 3.51 $\pm$ 0.08 & 11.7 $\pm$ 0.4 & 0.045 $\pm$ 0.001 & 38 $\pm$ 8 & 7.0 $\pm$ 0.7 & 0.46 $\pm$ 0.07 \\
&381 & 1.91 $\pm$ 0.07 & 0.62 $\pm$ 0.02 & 3.84 $\pm$ 0.10 & 9.4 $\pm$ 0.4 & 0.039 $\pm$ 0.001 & 24 $\pm$ 7 & 7.0 $\pm$ 1.0 & 0.36 $\pm$ 0.08 \\
\hline\noalign{\smallskip}
&128 & $\lesssim 16.80$ & $\gtrsim 1.91$ & $\gtrsim 2.22$ & $\gtrsim 1.7$ & $\gtrsim 0.067$ & $\gtrsim 874$ & $\lesssim 0.4$ & $\lesssim 2.2$ \\
&143 & 8.20 $\pm$ 0.10 & 1.76 $\pm$ 0.01 & 4.37 $\pm$ 0.06 & 3.19 $\pm$ 0.05 & 0.118 $\pm$ 0.004 & 210 $\pm$ 20 & 0.26 $\pm$ 0.03 & 1.08 $\pm$ 0.09\\
Conical &207 & 4.37 $\pm$ 0.20 & 1.23 $\pm$ 0.03 & 6.9 $\pm$ 0.3 & 3.9 $\pm$ 0.2 & 0.129 $\pm$ 0.006 & 60 $\pm$ 20 & 0.26 $\pm$ 0.05 & 0.6 $\pm$ 0.2 \\
($f_A=0.1$)&246 & 4.00 $\pm$ 0.06 & 1.14 $\pm$ 0.01 & 7.3 $\pm$ 0.1 & 3.85 $\pm$ 0.07 & 0.114 $\pm$ 0.003 & 55 $\pm$ 7 & 0.33 $\pm$ 0.03 & 0.55 $\pm$ 0.05 \\
&304 & 2.55 $\pm$ 0.06 & 0.94 $\pm$ 0.02 & 10.0 $\pm$ 0.2 & 4.8 $\pm$ 0.1 & 0.133 $\pm$ 0.004 & 23 $\pm$ 5 & 0.31 $\pm$ 0.03 & 0.36 $\pm$ 0.05 \\
&381 & 1.91 $\pm$ 0.07 & 0.62 $\pm$ 0.02 & 11.4 $\pm$ 0.4 & 3.8 $\pm$ 0.2 & 0.116 $\pm$ 0.004 & 14 $\pm$ 4 & 0.32 $\pm$ 0.05 & 0.28 $\pm$ 0.06 \\
\hline\noalign{\smallskip}
\caption[]{Physical parameters of the outflow and circumnuclear environment derived from the synchrotron equipartition model that provides the best fit to our radio observations of ASASSN-14li. We fit only the transient component of the radio fluxes. We show values for two possible geometries: a spherical outflow ($f_A=1$) and a conical outflow with a covering fraction of $10\%$ ($f_A=0.1$). In both cases, we assume that the emitting region is a shell of thickness $0.1R_{\rm eq}$. All values of $\Delta t$ are given relative to the mean outflow launch date of 2014 August 18.00 UT, inferred from the model. The uncertainties correspond to 1 standard deviation and are computed using a Markov Chain Monte Carlo approach.}
\end{longtable*}
\end{center}
\begin{figure} \centerline{\includegraphics[width=3.5in]{fig3}} \caption{The radial density profile in the circumnuclear region of ASASSN-14li in comparison to other SMBHs. We infer a density profile of $\rho(R)\propto R^{-2.5}$ on a scale of about 0.01 pc. For comparison, we show the density profiles for Sgr A$^*$ \citep{bmm+03}, the nucleus of M87 \citep{rfm+15}, and the circumnuclear region of the $\gamma$-ray TDE Sw\,J1644+57 \citep{bzp+12}, which span the range of $\rho(R)\propto R^{-3/2}$ to $R^{-1}$. To facilitate the comparison we scale the radii by the Schwarzschild radius of each SMBH ($R_s=2GM_{\rm BH}/c^2$, where $M_{\rm BH}$ is the black hole mass), using an estimate of $M_{\rm BH}\approx 10^6$ M$_\odot$ for ASASSN-14li \citep{hol15,mkm15}. We find that for the circumnuclear region of ASASSN-14li the density profile is steeper than previously seen in the other SMBH systems, but the density normalization is comparable.} \label{fig:n} \end{figure} We find that the kinetic energy of the outflow is $E_K\approx (4-10)\times10^{47}$ erg and is constant in time, in agreement with the inferred free expansion of the ejecta, but distinct from the increase of energy with time observed in core-collapse SNe (cf.~\citealt{bkc02}). Combining the outflow velocity and kinetic energy we infer an ejected mass of $M_{\rm ej}\approx 3\times10^{-5}-7\times10^{-4}$ M$_{\odot}$, dependent on the outflow geometry. This is $\sim 1-10\%$ of the mass accreted during the super-Eddington phase as inferred from modeling of the optical, UV, and X-ray emission (Figure \ref{fig:uvot}), consistent with theoretical estimates of the fraction of mass ejected in a wind during super-Eddington accretion \citep{sq09,lr11}. We also find that independent of the outflow geometry, the pre-existing density profile in the circumnuclear region follows $\rho(R)\propto R^{-2.5}$ on a scale of $\sim 0.01$ pc (Figure~\ref{fig:n}), much smaller than the scale that can be directly probed in any extragalactic SMBH and even around Sgr A$^*$ \citep{bmm+03,rfm+15}.
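The slope of this profile can be recovered directly from the tabulated values. The following minimal sketch fits a power law to the central values of $R_{\rm eq}$ and $n$ for the spherical model in Table~\ref{tab:params} (ignoring the 128-day limits and the tabulated uncertainties); it returns a slope of $\approx-2.7$, consistent within the uncertainties with the $R^{-2.5}$ profile quoted above:
\begin{verbatim}
import numpy as np

# Central values for the spherical model, 143--381 day epochs (Table 1)
R = np.array([1.47, 2.33, 2.45, 3.51, 3.84])   # R_eq in 1e16 cm
n = np.array([350., 110., 90., 38., 24.])      # ambient density in cm^-3

# Power-law fit n(R) ~ R^k in log-log space (central values only;
# the full analysis propagates the MCMC uncertainties)
k, _ = np.polyfit(np.log10(R), np.log10(n), 1)
print("n(R) ~ R^%.1f" % k)
\end{verbatim}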
The inferred profile is steeper than the $\rho(R)\propto R^{-3/2}$ profile expected for Bondi accretion in the circumnuclear regions of low accretion rate systems \citep{bon52}, and than the $\rho(R) \propto R^{-1}$ profile inferred within the Bondi radius of Sgr A$^*$ and the AGN in M87 \citep{bmm+03,rfm+15}. The circumnuclear density profile inferred from radio observations of the relativistic TDE Sw\,J1644+57 is consistent with $\rho(R)\propto R^{-3/2}$ but shows a hint of a steeper slope at $R\lesssim 0.05$ pc, the smallest radius probed \citep{bzp+12}. The normalization of our inferred density profile depends on the outflow geometry, with $n\approx 60-500$ cm$^{-3}$ at a radius of 0.01 pc. This is comparable to the density found for Sgr A$^*$ and Sw\,J1644+57 at similar radii \citep{bmm+03,bzp+12}. We note that the pre-TDE density inferred by our modeling is lower than the density required for spherical Bondi accretion at the rate implied by the archival observations \citep{bon52,vv15}. The calculated density increases somewhat if we assume that the system is not perfectly in equipartition (for example, if we use $\epsilon_B=0.01$ the overall density scale increases by about a factor of 5), but still falls short of the density required for Bondi accretion. However, this comparison relies on the assumption of spherical symmetry. In fact, simulations have shown that the density around an accreting black hole can be highly asymmetric, with densities in the plane of the accretion disk orders of magnitude higher than in the funnel carved out by a jet/outflow \citep{sn15}. It is likely that a jet existed prior to the onset of elevated accretion due to ASASSN-14li, as is typical of slowly accreting systems. If the outflow generated by the TDE was expelled along the same axis as the pre-existing jet, we could be probing this low-density funnel. Such an alignment is plausible if both outflows are aligned along the spin axis of the black hole. We therefore do not consider the inferred density to be problematic. In fact, it may be indicative of alignment of the mildly collimated outflows before and after the TDE. The model described above assumes that synchrotron and Compton cooling are unimportant. With the parameters inferred from our radio observations of ASASSN-14li we expect these cooling breaks to be located at $\nu_c\gtrsim 10-20$ GHz, which is greater than $\nu_a$ and hence self-consistent with the model results. The precision of this calculation is limited by uncertainties in the age of the outflow and by errors propagated from the uncertainties in the peak flux and peak frequency, but for any reasonable combination of parameters, the cooling breaks rapidly move to high frequencies during the span of our observations. Our January high-frequency flux deficit (see Figure \ref{fig:sed}) may be due to a cooling break, but may also be due to calibration errors arising from the fact that the VLA was in an intermediate configuration during that time, with larger uncertainties in the antenna positions that affect the high-frequency data. We also see a high-frequency flux deficit in our September observations, but this cannot be due to a cooling break because we see no evidence of such a break at lower frequencies in earlier epochs. There are no obvious calibration errors in the September high-frequency observations, so it is possible that the deficit arises from some other mechanism.
We note that this deficit does not affect our analysis, as the only quantities we need are the peak flux density and the frequency at which it occurs for each epoch. Additional effects that reduce the high-frequency flux, while interesting, will not affect the main results of our analysis. The synchrotron equipartition model readily generalizes to the case of relativistic expansion, with the bulk Lorentz factor of the outflow ($\Gamma$) as an additional parameter \citep{dnp13}. In this case, reaching a self-consistent result in which $\Gamma\gtrsim 2$ (i.e., the outflow is relativistic) requires an unreasonably small value of $f_A$, corresponding to a jet with an opening angle of $\lesssim 0.1^{\circ}$. This is two orders of magnitude narrower than the typical jets in GRBs \citep{fks+01}, and it would require fine-tuning of the jet orientation relative to our line of sight at the level of $\sim 1.5\times 10^{-6}$ in order to detect the radio emission. We therefore conclude that for any reasonable geometry the outflow from ASASSN-14li is non-relativistic. \subsection{Interstellar Scintillation} Using the inferred angular size of the outflow ($\theta_s\approx 8-80$ $\mu$as), we consider whether the observed radio emission might be affected by interstellar scintillation (ISS), which could lead to frequency- and time-dependent random variations in the radio flux density \citep{w98,gn06}. Using the NE2001 Galactic free electron density model \citep{cl02}, we find that for the line of sight to ASASSN-14li the transition frequency between strong and weak scintillation is about 7 GHz, in the middle of our observation band. At $\nu\gtrsim 7$ GHz we find that the fractional modulation level ($m_p$) due to ISS is at most a few percent (decreasing from $m_p\sim 10\%$ in our earliest 22.5 GHz observation to $m_p\sim 2\%$ in our final one). However, at $\nu\lesssim 7$ GHz we find an expected level of variation of up to $\sim 25\%$ at 1.45 GHz. The 2015 August/September 1.45 GHz flux density presented in Figure 1 is an average of two observations obtained about 10 days apart. Prior to averaging, the two epochs exhibit a $\sim 20\%$ flux density variation, consistent with the estimated effect of ISS. This provides an independent confirmation of the small source size inferred from the equipartition analysis. To verify that ISS-induced flux density variations do not bias our results, we repeated our equipartition analysis with larger error bars on each data point, computed by adding in quadrature the measurement uncertainties and the expected ISS-induced modulation. We find that while this increases the uncertainty on the derived physical properties of ASASSN-14li, the best-fit parameter values change by at most a few percent for the epochs with broad frequency coverage. \subsection{Inconsistencies of a Single-Component Model for the Radio Flux}\label{sec:1comp} In Figure~\ref{fig:params}, we show the radial and time evolution of the model parameters derived from fitting the total radio flux (dotted lines) and the transient component only (solid lines). The fits to the latter give a constant energy and velocity as a function of time, indicating that the outflow is in free expansion ($R_{\rm eq}\propto t$).
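The free-expansion behavior can be verified directly from the tabulated radii: a log-log fit of $R_{\rm eq}$ against time for the spherical model in Table~\ref{tab:params}, sketched below, returns $R_{\rm eq}\propto t^{\alpha}$ with $\alpha\approx1$ and recovers the mean expansion velocity of about 12,000 km s$^{-1}$ quoted earlier:
\begin{verbatim}
import numpy as np

# Transient-component radii vs. time for the spherical model (Table 1)
t = np.array([143., 207., 246., 304., 381.])          # days since launch
R = np.array([1.47, 2.33, 2.45, 3.51, 3.84]) * 1e16   # cm

# Power-law index alpha in R ~ t^alpha; alpha ~ 1 means free expansion
alpha, _ = np.polyfit(np.log10(t), np.log10(R), 1)

# Mean expansion velocity under free expansion
v = np.mean(R / (t * 86400.0))                        # cm/s
print("alpha = %.2f, v ~ %.0f km/s" % (alpha, v / 1e5))
\end{verbatim}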
The outflow should continue expanding freely until it has swept up an amount of mass equal to its own initial mass. We can compute the amount of mass swept up from our derived density profile, and we find that it is less than the inferred mass of the outflow, $M_{\rm swept} \sim (0.04$--$0.4)\,M_{\rm ej}$, depending on the assumed outflow geometry. (In fact, $M_{\rm swept}$ may be an even smaller fraction of the total outflow mass because we use the equipartition energy $E_{\rm eq}$ to estimate $M_{\rm ej}$, and $E_{\rm eq}$ is the minimum energy of the system.) This result provides a self-consistency check for our model, since the parameters are inferred from fitting the individual radio SEDs without an assumed temporal evolution. Given the inferred steep density profile, we expect that the outflow will continue to expand freely for years to decades. \begin{figure*} \begin{center} \includegraphics[width=6.4in]{fig4} \end{center} \caption{Accretion parameters for ASASSN-14li estimated from modeling of the optical, UV, and X-ray observations. (a) Histogram of the accretion milestone dates for the ensemble of model fits as compared to our determination of the outflow launch date (yellow band). The purple histogram shows the time when each realization in the ensemble of model fits first crosses the Eddington limit, and the brown histogram shows the time when each realization reaches its maximum accretion rate. We find good agreement between our inferred outflow launch date and the times of super-Eddington and peak accretion. (b) Histogram of the maximum accretion rate normalized to the Eddington accretion rate ($\dot{M}_{\rm Edd}$) for each realization in our ensemble of model fits to the optical/UV light curves. We find that ASASSN-14li exceeded the Eddington accretion rate by about a factor of 2.5. (c) Histogram of the total amount of mass accreted during the super-Eddington phase for each realization in our ensemble of model fits. The outflow mass that we infer from our radio observations is about $1-10\%$ of this total, in line with theoretical expectations.} \label{fig:uvot} \end{figure*} In contrast, modeling of the total radio flux with a single component leads to energy and velocity evolution that are less natural. The model fits imply that the outflow energy is increasing with time and that the outflow is accelerating, with $R_{\rm eq}\propto t^{1.6}$. In core-collapse supernovae the kinetic energy is observed to increase with time due to the existence of ejecta at progressively slower velocities, with a steep profile of $E_K\propto v_{\rm ej}^{-5.2}$ \citep{tmm01}, but in that case the velocity decreases with time. The same is true for the behavior inferred from radio observations of the relativistic $\gamma$-ray TDE Sw\,J1644+57, in which an episode of energy increase by an order of magnitude was accompanied by a declining velocity \citep{bzp+12}. Furthermore, an epoch-by-epoch comparison of the model fits to the total flux and to only the transient flux shows that the total flux is not as well fit by the synchrotron model, especially in our April 2015 observations (Figure \ref{fig:sed}). For these reasons, and because of the archival radio detections, we conclude that the two-component model is correct, but we note that the main conclusion of a non-relativistic outflow is robust to our choice of model. \section{Comparison with Other Modeling} In this section we compare our results to independent modeling of the X-ray, UV, and optical observations (Guillochon et al.
in prep) and consider alternate explanations for the radio emission. We find that our interpretation of the emission as a non-relativistic outflow launched during the period of super-Eddington accretion onto the SMBH is robust. \subsection{Independent Modeling of the Accretion Rate from X-ray/UV/Optical Observations}\label{sec:uvot} To determine the times at which the Eddington accretion limit is exceeded and when peak accretion is achieved, as well as the peak accretion rate and the total mass accreted in the super-Eddington phase, we fit the optical, UV, and X-ray data of ASASSN-14li using the code {\tt TDEFit}; the data we fit against are the same data presented in \cite{mkm15} (see their Figure 1). Because the fallback of matter onto a black hole following a disruption follows the canonical $t^{-5/3}$ law for only half of disruptions, and only several months after the peak fallback rate \citep{gr13}, fitting tidal disruption light curves using a Monte Carlo approach is a far more robust procedure for constraining important temporal milestones for a given flare, such as the time of disruption and the times at which the accretion rate crosses various thresholds such as the Eddington limit. {\tt TDEFit} utilizes a maximum-likelihood analysis to determine the most likely combination of disruption parameters, with one of the products being an ensemble of accretion rates onto the SMBH as functions of time. We find that the most likely black hole mass is $\approx 10^6 M_{\odot}$, and that the peak accretion rate is significantly in excess of the Eddington limit (Figure~\ref{fig:uvot}). Our modeling includes the effects of inefficient circularization, which simulations have found significantly reduces the accretion rate onto the black hole relative to the fallback rate \citep{gmr14, skc15}, and also limits the luminosity of the disk component to the Eddington limit. We find that the best-fitting circularization time is roughly three times longer than the timescale of peak accretion, resulting in a time of disruption that occurs much earlier than in models in which the viscous effects are neglected; this is the expected behavior for low-mass black holes ($M_{\rm BH} \sim 10^6 M_\odot$), where circularization takes place at large distances from the black hole \citep{gmc15}. This also reduces the peak accretion rate onto the black hole and introduces deviations from the canonical $t^{-5/3}$ decay law. We also find that the Eddington limit we impose significantly reduces the luminosity of the flare near the time of peak accretion onto the black hole, resulting in a reduced efficiency of conversion of accretion energy into observable optical/UV emission at these times. Our modeling is completely consistent with the early-time photometric limits for ASASSN-14li presented in \cite{hol15}. Because our radio observations indicate that the outflow is in free expansion, we can extrapolate the observed radius backwards to estimate $t_0$, the time at which the outflow was launched. The launch time depends only weakly on the outflow geometry; we obtain $t_0=2014$ August $21$ ($\pm4$ days) for the spherical outflow ($f_A=1$) and $t_0=2014$ August $15$ ($\pm4$ days) for a conical outflow ($f_A=0.1$). This time range is shown in comparison to the results from modeling of the optical, UV, and X-ray data in Figure~\ref{fig:uvot}. We find that the inferred launch window spans the interval between the onset of super-Eddington accretion and the time of peak accretion.
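The extrapolation itself is elementary under free expansion ($t_0 = t_{\rm obs} - R/v_{\rm ej}$). As a minimal numerical check, the sketch below uses the 143-day spherical-model radius from Table~\ref{tab:params} and adopts 2015 January 8 for that epoch (obtained from the mean launch date quoted in the caption of Table~\ref{tab:params}, so this is a consistency check rather than an independent measurement):
\begin{verbatim}
from datetime import date, timedelta

# Free expansion: extrapolate R(t) back to R = 0 to estimate the launch
# date t_0. Values from the spherical model at the 143-day epoch (Table 1).
R_cm = 1.47e16                 # equipartition radius
v_cms = 1.2e9                  # ~12,000 km/s
epoch = date(2015, 1, 8)       # 143 days after 2014 August 18 (assumed)

t0 = epoch - timedelta(seconds=R_cm / v_cms)
print("inferred launch date:", t0)   # mid-August 2014, within the
                                     # quoted 2014 August 21 +/- 4 d range
\end{verbatim}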
This timing supports our conclusion that the radio emission is due to an accretion-driven wind rather than being associated with the unbound debris, which would have been launched months earlier at the time of disruption. Figure~\ref{fig:uvot} also shows the total mass accreted during the super-Eddington phase as inferred from modeling of the optical, UV, and X-ray emission. Our estimate of the outflow mass is a few percent of this number, consistent with theoretical estimates of the fraction of mass ejected in a wind during super-Eddington accretion \citep{sq09,lr11}. We defer further description of the modeling work to a future paper (Guillochon et al. in prep). \subsection{Radio Emission from the Unbound Debris} After a TDE, approximately half of the stellar debris will be unbound from the black hole. The unbound debris around a non-spinning black hole will be very narrow in most cases, as the stream is self-gravitating for low-$\beta$ encounters \citep{k94,gmr14,cn15}. When it is self-gravitating, its cross-section actually shrinks as it leaves the vicinity of the black hole, and the stream likely only begins homologous expansion at a distance of $\sim10^{16}$ cm. At this distance, the stream covers a solid angle of $((r/r_t)^{1/4}r_{\rm star}q^{-1/6}r)/(4r^2) \sim 10^{-5}$ steradians \citep{gmc15}. When the stream is not self-gravitating (which only occurs for deep, rare encounters, $\beta \gtrsim 3$), the maximum spread is given by the spread in velocity, estimated to be 0.2 steradians for a $10^6M_{\odot}$ black hole \citep{sq09}. The addition of spin will not dramatically alter these numbers; as described by \cite{k12}, the maximum difference in the velocity spread will be about a factor of 2 (and the spread can often instead be reduced by a factor of 2). In our model, the physical size of the emitting region is well constrained by the equipartition argument. (The total energy of the system is a very strong function of radius, so this size estimate is robust even if the system is not perfectly in equipartition.) Therefore, if we assume that the radio emission covers only a small solid angle, we must conclude that the emission originates at a larger radius from the central black hole. This also naturally leads to a larger velocity of the emitting material, as the same fractional increase in the size of the emitting region requires covering a larger distance in the same amount of time. A self-gravitating debris stream covering a solid angle of $10^{-5}$ steradians at a radius of $10^{16}$ cm would produce a flux orders of magnitude too small to explain the observed radio emission. If we keep this solid angle and allow the emission to occur at a larger radius, the inferred velocity of the emitting material is $\Gamma\sim 2$--$3$, which is much too fast to correspond to the unbound debris. For a non-self-gravitating stream, the velocities are more reasonable; indeed, a solid angle of 0.2 steradians is not much more concentrated than the conical $f_A=0.1$ case we consider here. In this case, apart from the rarity of such high-$\beta$ encounters, an additional issue is matching the overall energies. The total energy we infer corresponds to a very small amount of material ($\sim2\times10^{-5}M_{\odot}$ for the 0.2-steradian case), while the total mass of the unbound material is orders of magnitude larger for the disruption of a solar-mass or even 0.1 solar-mass star.
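The mass estimates in this comparison follow from the non-relativistic relation $M_{\rm ej} = 2E_K/v_{\rm ej}^2$. A minimal check using the central 246-day values from Table~\ref{tab:params} reproduces the tabulated ejecta masses for both geometries:
\begin{verbatim}
# Ejecta mass from the non-relativistic relation M_ej = 2 E_K / v^2,
# using central values from Table 1 (246-day epoch).
M_SUN = 1.989e33        # g
C = 2.998e10            # cm/s

for label, E_erg, beta in [("spherical (f_A=1)  ", 9.4e47, 0.038),
                           ("conical   (f_A=0.1)", 3.85e47, 0.114)]:
    M_ej = 2.0 * E_erg / (beta * C)**2
    print("%s: M_ej ~ %.1e M_sun" % (label, M_ej / M_SUN))
\end{verbatim}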
Even if we assume that only the fastest-moving tail of the distribution of unbound debris produces the radio emission, as recently suggested by \cite{kro16}, the emission expected in this case would still require a density tens to hundreds of times higher than the density we compute in order to match our observed fluxes. While the density we derive by assuming perfect equipartition is, like the energy, a lower limit, it is difficult to explain such a large discrepancy. Furthermore, at such high densities the radio flux would be reduced by other effects, such as free-free absorption, and would not match the SEDs we observe. An additional issue is one of timing. As stated above, if we assume that the outflow has been moving at a constant velocity, then we obtain a launch date that corresponds to the onset of super-Eddington accretion -- several months after the time of disruption. (Given that the current estimated radius of the emitting region is $\sim10^5R_s$, assuming that the emission was launched at a few $R_s$ instead of $R=0$ does not change this calculation.) It therefore seems unlikely that the radio emission could be generated by the unbound debris for any plausible geometry of the initial star-SMBH encounter. \subsection{Comparison with a Decelerated Jet Model} Our multi-frequency data rule out the interpretation of the radio emission as due to a decelerated (initially relativistic) jet, as recently proposed by \cite{vv15}. While their model provides a good fit to their observations, they are unable to constrain the evolution of $F_p$ and $\nu_p$ directly because most of their data were collected at a single frequency. This also means that they are forced to fix the circumnuclear density and density profile (which they assume to be flat). The density that they require to decelerate the jet at a radius of $10^{17}$ cm is much higher than the density we compute at that radius directly from our observations. In Figure \ref{fig:vanvel}, we present a modified version of their Figure 2B, which shows that their model does not fit our additional observations. Notably, their model predicts a steady decline in L band after about March 2015, while we find that the total flux at 1.4 GHz remains roughly constant through September, with the exact level of variability difficult to quantify due to significant scintillation effects. The existence of a second steady-state component will not affect the quality of the model fit; subtracting the contribution of such a component would simply shift all points at each frequency vertically by the same amount. \begin{figure} \centerline{\includegraphics[width=4in]{fig5}} \caption{All currently available radio observations of ASASSN-14li at three representative frequency bands, as reported in \cite{vv15} (diamonds) and this work (circles). The solid lines show the expected flux evolution for the best-fit decelerated jet model presented in \cite{vv15}. (The time axis is chosen to match \citealt{vv15}'s Figure 2B.) We see that their model cannot reproduce our observed fluxes at 5.0 GHz and 1.4 GHz.} \label{fig:vanvel} \end{figure} \section{Conclusions}\label{sec:conc} \begin{figure} \centerline{\includegraphics[width=3.5in]{fig6}} \caption{Kinetic energy ($E_K$) as a function of outflow velocity ($\Gamma\beta$) from radio observations of TDEs.
We show the inferred values for ASASSN-14li (black square; the horizontal bar represents the range of velocity for a range of outflow geometries) in comparison to the two $\gamma$-ray TDEs with radio emission: Sw\,J1644+57 (red circles; \citealt{zbs+11} and \citealt{bzp+12}) and Sw\,J2058+05 (blue diamonds; \citealt{ckh+12}). The data for Sw\,J1644+57 are from detailed modeling of the radio emission as a function of time, including a correction for jet collimation with an opening angle of about 0.1 rad \citep{zbs+11,bzp+12}. The data point and velocity range for Sw\,J2058+05 are based on an analysis identical to the one carried out here. The vertical dashed line at $\Gamma\beta=1$ roughly separates the phase-space into events with non-relativistic and relativistic expansion. The $\gamma$-ray TDEs exhibit relativistic outflows with a large kinetic energy, but they represent $\lesssim$ a few percent of the overall TDE volumetric rate \citep{mgm+15}. On the other hand, ASASSN-14li exhibits a non-relativistic outflow with a lower kinetic energy but appears to represent the bulk of the TDE population. Also shown for comparison are the data for long-duration $\gamma$-ray bursts (LGRBs; magenta stars) and Type Ib/c core-collapse supernovae (Type Ib/c SNe; cyan stars) \citep{mms+14}. The LGRBs exhibit relativistic outflows with $E_K\gtrsim 10^{50}$ erg, while Type Ib/c SNe have non-relativistic outflows with $E_K\lesssim 10^{49}$ erg. In addition, LGRBs represent $\lesssim 1\%$ of the Type Ib/c SN rate \citep{wp10}. The TDE sample, although small, appears to trace the same relation seen in LGRBs and Type Ib/c SNe, with a small fraction of events (by volumetric rate) producing energetic relativistic outflows, and the bulk of the population producing lower-energy non-relativistic outflows.} \label{fig:context} \end{figure} We have detected transient radio emission associated with the nearby TDE ASASSN-14li, consistent with a non-relativistic outflow launched during the period of super-Eddington accretion. We conclude with several important implications of our results. First, the velocity and kinetic energy of the outflow in ASASSN-14li are significantly lower than those inferred for the two relativistic $\gamma$-ray TDEs previously detected in the radio (Figure \ref{fig:context}), which represent $\lesssim$ a few percent of the TDE population \citep{zbs+11,bgm+11,bkg+11,mgm+15}. Although the TDE sample with detected radio emission is small, this is reminiscent of the relation observed in Type Ib/c core-collapse supernovae (Type Ib/c SNe) and long-duration gamma-ray bursts (LGRBs), in which a small fraction of events (LGRBs: $\sim 1\%$ by volumetric rate) produce energetic relativistic outflows while the bulk of the population (Type Ib/c SNe) produces lower-energy non-relativistic outflows (Figure \ref{fig:context}; \citealt{mms+14}). Second, ASASSN-14li is the nearest TDE discovered to date and the first to reveal radio emission associated with a non-relativistic outflow; previous upper limits on the radio luminosity of optical/UV TDEs are all at least a factor of a few above the level of emission detected here, and could only rule out the presence of relativistic jets \citep{bmc+13,vfk+13,cbg+14}.
This suggests that non-relativistic outflows are likely ubiquitous in TDEs. This conclusion is further supported by observations of the optical TDE PS1-11af at $z=0.405$, which revealed a broad rest-frame UV absorption feature with $v\sim 13,000$ km s$^{-1}$ suggestive of a similar outflow \citep{cbg+14}; such absorption was not detectable in other TDEs due to their lower redshifts and hence lack of rest-frame UV spectral coverage. Finally, given the likely ubiquity of outflows in TDEs, we expect such events to be detected in future sensitive wide-field radio surveys of the local universe; for example, the Square Kilometer Array will be able to probe a volume $\sim$100 times larger than that accessible to current facilities for a radio luminosity comparable to that of ASASSN-14li \citep{cr+04}. Time-series rest-frame UV spectroscopy of more distant TDEs may also serve to infer the presence of outflows and the timing of their ejection. \begin{acknowledgements} K.D.A., E.B., and P.K.G.W.~are supported in part by NSF and NASA grants. J.~G.~acknowledges support from Einstein grant PF3-140108. A.~Z.~acknowledges support from NSF grant AST-1302954. The VLA is operated by the National Radio Astronomy Observatory, a facility of the National Science Foundation operated under cooperative agreement by Associated Universities, Inc. \end{acknowledgements}